
SaberLDA: Sparsity-Aware Learning of Topic Models on GPUs.

Authors :
Li, Kaiwei
Chen, Jianfei
Chen, Wenguang
Zhu, Jun
Source :
IEEE Transactions on Parallel & Distributed Systems. Sep 2020, Vol. 31 Issue 9, p2112-2124. 13p.
Publication Year :
2020

Abstract

Latent Dirichlet Allocation (LDA) is a popular tool for analyzing discrete count data such as text and images, where large datasets must often be modeled with a large number of topics, e.g., tens of thousands of topics for industry-scale applications. Although distributed CPU systems have been used to address this problem, they are slow and resource-inefficient. GPU-based systems have emerged as a promising alternative because of their high computational power and memory bandwidth. However, existing GPU-based LDA systems can learn only thousands of topics, because they use dense data structures and their time complexity is linear in the number of topics. In this article, we propose SaberLDA, a GPU-based LDA system that implements a sparsity-aware algorithm whose time complexity is sublinear in the number of topics, enabling it to learn a large number of topics. To address the challenges introduced by sparsity, we propose a novel data layout, a warp-based sampling kernel, an efficient sparse matrix counting method, and a fine-grained load balancing strategy. SaberLDA achieves linear speedup on 4 GPUs and is 6–10 times faster than existing GPU systems when learning thousands of topics. It can learn 40,000 topics from a dataset of billions of tokens in two hours, which was previously achievable only with clusters of tens of CPU servers. [ABSTRACT FROM AUTHOR]
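The sublinear behavior described in the abstract comes from iterating over only the nonzero topic counts of each document rather than all K topics, with one GPU warp cooperating on each document. The following is a minimal CUDA sketch of that idea, not SaberLDA's actual kernel: the kernel name `warp_sample`, the CSR-style arrays `doc_offsets`/`doc_topics`/`doc_counts`, and the one-draw-per-document setup are all illustrative assumptions.

```cuda
// Minimal sketch of warp-based sampling over sparse per-document topic
// counts (hypothetical; not SaberLDA's actual kernel or data layout).
// Each warp owns one document; lanes cooperate over the document's
// nonzero (topic, count) pairs, so the work per draw is O(nnz), not O(K).
#include <cuda_runtime.h>
#include <curand_kernel.h>

constexpr int WARP = 32;

// CSR-style layout: document d's nonzero topics are
// doc_topics[doc_offsets[d] .. doc_offsets[d+1]) with weights doc_counts.
// Launch with blockDim.x a multiple of 32 so warps map cleanly to docs.
__global__ void warp_sample(const int *doc_offsets, const int *doc_topics,
                            const float *doc_counts, int *sampled,
                            int n_docs, unsigned long long seed)
{
    int warp_id = (blockIdx.x * blockDim.x + threadIdx.x) / WARP;
    int lane    = threadIdx.x % WARP;
    if (warp_id >= n_docs) return;  // whole warp exits together

    int beg = doc_offsets[warp_id], end = doc_offsets[warp_id + 1];

    // 1) Warp-parallel reduction over the sparse counts (the normalizer).
    float local = 0.0f;
    for (int i = beg + lane; i < end; i += WARP) local += doc_counts[i];
    for (int off = WARP / 2; off > 0; off >>= 1)
        local += __shfl_down_sync(0xffffffff, local, off);
    float total = __shfl_sync(0xffffffff, local, 0);

    // 2) Draw u ~ Uniform(0, total) and scan the nonzeros for its bucket.
    if (lane == 0) {
        curandState st;
        curand_init(seed, warp_id, 0, &st);
        float u = curand_uniform(&st) * total;
        float acc = 0.0f;
        int pick = doc_topics[end - 1];  // fallback for rounding error
        for (int i = beg; i < end; ++i) {
            acc += doc_counts[i];
            if (u <= acc) { pick = doc_topics[i]; break; }
        }
        sampled[warp_id] = pick;
    }
}
```

Because every loop runs over the nonzero entries only, the cost per sample tracks document sparsity rather than the total topic count K, which is the property that lets a system of this kind scale to tens of thousands of topics. The actual SaberLDA system additionally covers the data layout, sparse counting, and load-balancing concerns the abstract mentions, all of which this sketch omits.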

Details

Language :
English
ISSN :
1045-9219
Volume :
31
Issue :
9
Database :
Academic Search Index
Journal :
IEEE Transactions on Parallel & Distributed Systems
Publication Type :
Academic Journal
Accession number :
143316167
Full Text :
https://doi.org/10.1109/TPDS.2020.2979702