
Learning Topic Models by Belief Propagation

Authors :
Zeng, Jia
Cheung, William K.
Liu, Jiming
Source :
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35, Number 5, Pages 1121-1134, 2013
Publication Year :
2011

Abstract

Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which has attracted worldwide interest and touches on many important applications in text mining, computer vision, and computational biology. This paper represents LDA as a factor graph within the Markov random field (MRF) framework, which enables the classic loopy belief propagation (BP) algorithm to be used for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document data sets. Furthermore, the BP algorithm has the potential to become a generic learning scheme for variants of LDA-based topic models. To this end, we show how to learn two typical variants of LDA-based topic models, author-topic models (ATM) and relational topic models (RTM), using BP based on the factor graph representation.

Comment: 14 pages, 17 figures
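The abstract describes passing messages on a factor graph so that each word-document pair maintains an approximate topic posterior updated from its neighbors' counts. The sketch below illustrates that idea as synchronous loopy BP over a dense word-document count matrix; the function name `lda_bp`, the smoothed-count form of the update, and all parameter defaults are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lda_bp(X, K, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Hedged sketch of loopy-BP-style inference for LDA.

    X : (W, D) word-document count matrix; K : number of topics.
    The update form (neighbor counts minus the pair's own
    contribution, plus Dirichlet smoothing) is an illustrative
    reconstruction, not the paper's exact message equations.
    """
    rng = np.random.default_rng(seed)
    W, D = X.shape
    # mu[w, d, k]: approximate topic posterior (message) per
    # word-document pair, initialized randomly, normalized over k.
    mu = rng.random((W, D, K))
    mu /= mu.sum(axis=2, keepdims=True)
    for _ in range(iters):
        xmu = X[:, :, None] * mu           # count-weighted messages
        doc = xmu.sum(axis=0)              # (D, K) per-document topic counts
        word = xmu.sum(axis=1)             # (W, K) per-word topic counts
        total = word.sum(axis=0)           # (K,)  per-topic totals
        # Exclude each pair's own contribution from its neighbors'
        # counts, then smooth with the Dirichlet hyperparameters.
        theta = doc[None, :, :] - xmu + alpha
        phi = (word[:, None, :] - xmu + beta) / (
            total[None, None, :] - xmu + W * beta)
        mu = theta * phi
        mu /= mu.sum(axis=2, keepdims=True)
    # Point estimates of the document-topic and topic-word multinomials.
    theta_hat = (X[:, :, None] * mu).sum(axis=0) + alpha
    theta_hat /= theta_hat.sum(axis=1, keepdims=True)
    phi_hat = (X[:, :, None] * mu).sum(axis=1) + beta
    phi_hat /= phi_hat.sum(axis=0, keepdims=True)
    return mu, theta_hat, phi_hat
```

Because every quantity in the update is a sum of current messages, the whole sweep vectorizes into a few array operations, which is one reason a BP-style scheme can be competitive in speed with VB and GS.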

Subjects

Subjects :
Computer Science - Learning

Details

Database :
arXiv
Journal :
IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35, Number 5, Pages 1121-1134, 2013
Publication Type :
Report
Accession number :
edsarx.1109.3437
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TPAMI.2012.185