
Adapting Hidden Naive Bayes for Text Classification

Authors:
Shengfeng Gan
Shiqi Shao
Long Chen
Liangjun Yu
Liangxiao Jiang
Source:
Mathematics, Vol 9, Iss 19, p 2378 (2021)
Publication Year:
2021
Publisher:
MDPI AG, 2021.

Abstract

Due to its simplicity, efficiency, and effectiveness, multinomial naive Bayes (MNB) has been widely used for text classification. As in naive Bayes (NB), its conditional independence assumption among features is often violated in practice, which reduces its classification performance. Among the numerous approaches to relaxing this assumption, structure extension has attracted relatively little attention from researchers. To the best of our knowledge, only structure-extended MNB (SEMNB) has been proposed so far. SEMNB averages all weighted super-parent one-dependence multinomial estimators and is therefore an ensemble learning model. In this paper, we propose a single model called hidden MNB (HMNB) by adapting the well-known hidden NB (HNB). HMNB creates a hidden parent for each feature that synthesizes the influences of all the other qualified features. To train HMNB, we propose a simple but effective learning algorithm that avoids a computationally expensive structure-learning process. The same idea can also be used to improve complement NB (CNB) and the one-versus-all-but-one model (OVA); the resulting models are denoted HCNB and HOVA, respectively. Extensive experiments on eleven benchmark text classification datasets validate the effectiveness of HMNB, HCNB, and HOVA.
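To make the hidden-parent construction concrete, the following is a minimal sketch of the idea as described in the abstract, not the paper's actual estimator or learning algorithm: each word is given a hidden parent whose conditional probability is a weighted mixture of one-dependence estimates contributed by the other words. All names are hypothetical; for brevity the pairwise estimates use binarized word occurrences and crude co-occurrence weights, whereas the paper works with multinomial term counts and proposes its own weighting scheme.

```python
import numpy as np

class HiddenMNBSketch:
    """Illustrative hidden-parent classifier over binarized text features.
    A sketch of the abstract's idea only; assumes integer labels 0..C-1."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing

    def fit(self, X, y):
        Xb = (np.asarray(X) > 0).astype(float)  # binarize term counts
        y = np.asarray(y)
        n, d = Xb.shape
        C = int(y.max()) + 1
        self.log_prior_ = np.log(np.bincount(y, minlength=C) + 1.0) - np.log(n + C)
        # One-dependence estimates P(w_i present | w_j present, c), smoothed.
        self.cond_ = np.empty((C, d, d))
        for c in range(C):
            Xc = Xb[y == c]
            co = Xc.T @ Xc                     # co-occurrence counts
            nj = Xc.sum(axis=0)                # parent occurrence counts
            self.cond_[c] = (co + self.alpha) / (nj[None, :] + 2.0 * self.alpha)
        # Hidden-parent weights: row-normalized co-occurrence strength, a crude
        # stand-in for the conditional mutual information weights used in HNB.
        co_all = Xb.T @ Xb
        np.fill_diagonal(co_all, 0.0)
        self.W_ = co_all / np.maximum(co_all.sum(axis=1, keepdims=True), 1e-12)
        return self

    def predict(self, X):
        Xb = (np.asarray(X) > 0).astype(float)
        C = self.cond_.shape[0]
        scores = np.empty((Xb.shape[0], C))
        for c in range(C):
            # Hidden parent of w_i: P(w_i | hp_i, c) = sum_j W[i, j] * P(w_i | w_j, c)
            p1 = np.clip((self.W_ * self.cond_[c]).sum(axis=1), 1e-12, 1.0 - 1e-12)
            scores[:, c] = (self.log_prior_[c]
                            + Xb @ np.log(p1)
                            + (1.0 - Xb) @ np.log(1.0 - p1))
        return scores.argmax(axis=1)

# Toy usage on random count data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.poisson(0.3, size=(200, 50))
y = rng.integers(0, 2, size=200)
print(HiddenMNBSketch().fit(X, y).predict(X[:5]))
```

Averaging over all candidate parents is what lets this family of models sidestep explicit structure search: the mixture weights play the role of the learned structure, which is why the abstract can promise a learning algorithm without an expensive structure-learning step.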

Details

Language:
English
ISSN:
2227-7390
Volume:
9
Issue:
19
Database:
Directory of Open Access Journals
Journal:
Mathematics
Publication Type:
Academic Journal
Accession number:
edsdoj.2e97697dbd76449999976e055cf9edb2
Document Type:
article
Full Text:
https://doi.org/10.3390/math9192378