Learning Bayesian network parameters from imperfect data: Enhancements to the EM algorithm

Rohitha Hewawasam, Kamal Premaratne

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Recent years have seen many developments in uncertainty reasoning centered around Bayesian networks (BNs). BNs allow fast and efficient probabilistic reasoning. One of the key issues researchers face in using a BN is determining its parameters and structure for a given problem. Many techniques have been developed for learning BN parameters from a dataset pertaining to a particular problem. Most methods for learning BN parameters from partially observed data have evolved around the Expectation-Maximization (EM) algorithm. In its original form, the EM algorithm is a deterministic, iterative, two-step procedure that converges toward the maximum-likelihood (ML) estimates. The EM algorithm mainly addresses imperfect data in which some values are missing. In many practical applications, however, partial observability produces a wider range of imperfections, e.g., uncertainties arising from incomplete, ambiguous, probabilistic, and belief-theoretic data. Moreover, while the EM algorithm converges to the ML estimates, it does not guarantee convergence to the underlying true parameters. In this paper, we propose an approach that enables one to learn BN parameters from a dataset containing a wider variety of imperfections. In addition, by introducing an early stopping criterion together with a new initialization method for the EM algorithm, we show how the BN parameters can be learned so that they are closer to the underlying true parameters than the converged ML estimates.
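
For orientation, the sketch below illustrates the baseline the paper builds on: plain EM on a toy two-node network X -> Y with binary variables and some values of X missing, stopped early when the observed-data log-likelihood plateaus. This is a generic illustration under stated assumptions, not the paper's method; the paper's specific early-stopping criterion, initialization scheme, and handling of belief-theoretic evidence are not reproduced here, and the names (em_learn, the plateau tolerance tol) are hypothetical.

import math
import random

def em_learn(data, tol=1e-6, max_iter=200, seed=0):
    # EM for a toy two-node BN X -> Y (both binary), with X missing in some rows.
    # data: list of (x, y); x is 0, 1, or None (unobserved); y is 0 or 1.
    # Returns pX = P(X=1) and pY, where pY[x] = P(Y=1 | X=x).
    rng = random.Random(seed)
    pX = rng.uniform(0.2, 0.8)                           # random init; the paper
    pY = [rng.uniform(0.2, 0.8), rng.uniform(0.2, 0.8)]  # proposes a smarter scheme

    def loglik():
        # Observed-data log-likelihood, marginalizing over missing X.
        ll = 0.0
        for x, y in data:
            py = [pY[0] if y == 1 else 1.0 - pY[0],
                  pY[1] if y == 1 else 1.0 - pY[1]]
            if x is None:
                ll += math.log((1.0 - pX) * py[0] + pX * py[1])
            else:
                ll += math.log((pX if x == 1 else 1.0 - pX) * py[x])
        return ll

    prev = loglik()
    for _ in range(max_iter):
        # E-step: responsibility r = P(X=1 | y) under the current parameters.
        cX1 = 0.0                           # expected count of X = 1
        cXY = [[0.0, 0.0], [0.0, 0.0]]      # cXY[x][y]: expected joint counts
        for x, y in data:
            if x is None:
                p1 = pX * (pY[1] if y == 1 else 1.0 - pY[1])
                p0 = (1.0 - pX) * (pY[0] if y == 1 else 1.0 - pY[0])
                r = p1 / (p1 + p0)
            else:
                r = float(x)
            cX1 += r
            cXY[1][y] += r
            cXY[0][y] += 1.0 - r
        # M-step: re-estimate parameters from the expected counts (no smoothing).
        pX = cX1 / len(data)
        pY = [cXY[0][1] / (cXY[0][0] + cXY[0][1]),
              cXY[1][1] / (cXY[1][0] + cXY[1][1])]
        # Early stop when the likelihood plateaus (a simple stand-in criterion).
        cur = loglik()
        if abs(cur - prev) < tol:
            break
        prev = cur
    return pX, pY

# Toy usage: a few fully observed rows plus rows with X missing.
data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1),
        (None, 1), (None, 0), (None, 1)]
print(em_learn(data))

On this toy model the E-step reduces to a single posterior responsibility per row; in a general BN it would be computed by probabilistic inference over all unobserved variables given each record's evidence.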

Original language: English
Title of host publication: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 6560
DOIs: https://doi.org/10.1117/12.719290
State: Published - Nov 15 2007
Event: Intelligent Computing: Theory and Applications V - Orlando, FL, United States
Duration: Apr 9 2007 → Apr 10 2007

Other

Other: Intelligent Computing: Theory and Applications V
Country: United States
City: Orlando, FL
Period: 4/9/07 → 4/10/07

Fingerprint

  • Bayesian networks
  • learning
  • augmentation
  • Maximum likelihood
  • maximum likelihood estimates
  • Defects
  • Observability
  • stopping

Keywords

  • Bayesian networks
  • Data uncertainties
  • EM algorithm
  • Imperfect data
  • Learning parameters

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Condensed Matter Physics

Cite this

Hewawasam, R., & Premaratne, K. (2007). Learning Bayesian network parameters from imperfect data: Enhancements to the EM algorithm. In Proceedings of SPIE - The International Society for Optical Engineering (Vol. 6560). [65600E] https://doi.org/10.1117/12.719290

Hewawasam, R & Premaratne, K 2007, Learning Bayesian network parameters from imperfect data: Enhancements to the EM algorithm. in Proceedings of SPIE - The International Society for Optical Engineering. vol. 6560, 65600E, Intelligent Computing: Theory and Applications V, Orlando, FL, United States, 4/9/07. https://doi.org/10.1117/12.719290
Hewawasam R, Premaratne K. Learning Bayesian network parameters from imperfect data: Enhancements to the EM algorithm. In Proceedings of SPIE - The International Society for Optical Engineering. Vol. 6560. 2007. 65600E https://doi.org/10.1117/12.719290
@inproceedings{48be22bc1f0b4dd8a38f599742e01c96,
title = "Learning Bayesian network parameters from imperfect data: Enhancements to the EM algorithm",
abstract = "Recent years have seen many developments in uncertainty reasoning centered around Bayesian networks (BNs). BNs allow fast and efficient probabilistic reasoning. One of the key issues researchers face in using a BN is determining its parameters and structure for a given problem. Many techniques have been developed for learning BN parameters from a dataset pertaining to a particular problem. Most methods for learning BN parameters from partially observed data have evolved around the Expectation-Maximization (EM) algorithm. In its original form, the EM algorithm is a deterministic, iterative, two-step procedure that converges toward the maximum-likelihood (ML) estimates. The EM algorithm mainly addresses imperfect data in which some values are missing. In many practical applications, however, partial observability produces a wider range of imperfections, e.g., uncertainties arising from incomplete, ambiguous, probabilistic, and belief-theoretic data. Moreover, while the EM algorithm converges to the ML estimates, it does not guarantee convergence to the underlying true parameters. In this paper, we propose an approach that enables one to learn BN parameters from a dataset containing a wider variety of imperfections. In addition, by introducing an early stopping criterion together with a new initialization method for the EM algorithm, we show how the BN parameters can be learned so that they are closer to the underlying true parameters than the converged ML estimates.",
keywords = "Bayesian networks, Data uncertainties, EM algorithm, Imperfect data, Learning parameters",
author = "Rohitha Hewawasam and Kamal Premaratne",
year = "2007",
month = "11",
day = "15",
doi = "10.1117/12.719290",
language = "English",
isbn = "0819466824",
volume = "6560",
booktitle = "Proceedings of SPIE - The International Society for Optical Engineering",

}

TY - GEN

T1 - Learning Bayesian network parameters from imperfect data

T2 - Enhancements to the EM algorithm

AU - Hewawasam, Rohitha

AU - Premaratne, Kamal

PY - 2007/11/15

Y1 - 2007/11/15

AB - Recent years have seen many developments in uncertainty reasoning centered around Bayesian networks (BNs). BNs allow fast and efficient probabilistic reasoning. One of the key issues researchers face in using a BN is determining its parameters and structure for a given problem. Many techniques have been developed for learning BN parameters from a dataset pertaining to a particular problem. Most methods for learning BN parameters from partially observed data have evolved around the Expectation-Maximization (EM) algorithm. In its original form, the EM algorithm is a deterministic, iterative, two-step procedure that converges toward the maximum-likelihood (ML) estimates. The EM algorithm mainly addresses imperfect data in which some values are missing. In many practical applications, however, partial observability produces a wider range of imperfections, e.g., uncertainties arising from incomplete, ambiguous, probabilistic, and belief-theoretic data. Moreover, while the EM algorithm converges to the ML estimates, it does not guarantee convergence to the underlying true parameters. In this paper, we propose an approach that enables one to learn BN parameters from a dataset containing a wider variety of imperfections. In addition, by introducing an early stopping criterion together with a new initialization method for the EM algorithm, we show how the BN parameters can be learned so that they are closer to the underlying true parameters than the converged ML estimates.

KW - Bayesian networks

KW - Data uncertainties

KW - EM algorithm

KW - Imperfect data

KW - Learning parameters

UR - http://www.scopus.com/inward/record.url?scp=35948945052&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=35948945052&partnerID=8YFLogxK

U2 - 10.1117/12.719290

DO - 10.1117/12.719290

M3 - Conference contribution

AN - SCOPUS:35948945052

SN - 0819466824

SN - 9780819466822

VL - 6560

BT - Proceedings of SPIE - The International Society for Optical Engineering

ER -