
Hard and Soft EM in Bayesian Network Learning from Incomplete Data

Authors :
Ruggieri, A.
Stranieri, F.
Stella, F.
Scutari, M.
Publication Year :
2020

Abstract

Incomplete data are a common feature in many domains, from clinical trials to industrial applications. Bayesian networks (BNs) are often used in these domains because of their graphical and causal interpretations. BN parameter learning from incomplete data is usually implemented with the Expectation-Maximisation algorithm (EM), which computes the relevant sufficient statistics (“soft EM”) using belief propagation. Similarly, the Structural Expectation-Maximisation algorithm (Structural EM) learns the network structure of the BN from those sufficient statistics using algorithms designed for complete data. However, practical implementations of parameter and structure learning often impute missing data (“hard EM”) to compute sufficient statistics instead of using belief propagation, for both ease of implementation and computational speed. In this paper, we investigate the question: what is the impact of using imputation instead of belief propagation on the quality of the resulting BNs? From a simulation study using synthetic data and reference BNs, we find that it is possible to recommend one approach over the other in several scenarios based on the characteristics of the data. We then use this information to build a simple decision tree to guide practitioners in choosing the EM algorithm best suited to their problem.
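The distinction the abstract draws between soft EM (expected sufficient statistics via belief propagation) and hard EM (single-value imputation) can be illustrated on the simplest possible case: estimating p = P(X = 1) for one binary variable with missing entries. This is a minimal sketch under that assumption, not the paper's implementation, which operates on full Bayesian networks:

```python
# Contrast "soft EM" (fractional expected counts) with "hard EM"
# (impute each missing cell with its single most likely value)
# for estimating p = P(X = 1) from partially observed binary data.

def soft_em(n1, n0, n_missing, p=0.5, iters=100):
    """Soft EM: each missing cell contributes an expected count of p ones."""
    for _ in range(iters):
        expected_ones = n1 + n_missing * p          # E-step: expected sufficient statistic
        p = expected_ones / (n1 + n0 + n_missing)   # M-step: maximum-likelihood update
    return p

def hard_em(n1, n0, n_missing, p=0.5, iters=100):
    """Hard EM: each missing cell is imputed as 1 iff p >= 0.5."""
    for _ in range(iters):
        imputed_ones = n1 + n_missing * (1 if p >= 0.5 else 0)  # E-step: hard imputation
        p = imputed_ones / (n1 + n0 + n_missing)                # M-step
    return p

# With 6 ones, 2 zeros and 4 missing values, soft EM converges to the
# observed-data MLE 6/8 = 0.75, while hard EM settles at 10/12 because
# every missing cell is counted as a full one.
print(soft_em(6, 2, 4))
print(hard_em(6, 2, 4))
```

The toy example makes the abstract's trade-off concrete: hard EM is cheaper (integer counts, no fractional bookkeeping) but can bias the estimate, which is why the paper studies when each variant is preferable.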

Details

Database :
OAIster
Notes :
Electronic, English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1308937678
Document Type :
Electronic Resource