
Arabic Sentiment Analysis with Noisy Deep Explainable Model

Authors :
Atabuzzaman, Md.
Shajalal, Md
Baby, Maksuda Bilkis
Boden, Alexander
Source :
NLPIR 2023: 2023 7th International Conference on Natural Language Processing and Information Retrieval, Seoul, Republic of Korea, December 2023
Publication Year :
2023

Abstract

Sentiment Analysis (SA) is an indispensable task for many real-world applications. Compared to low-resource languages (e.g., Arabic, Bengali), most SA research has been conducted for high-resource languages (e.g., English, Chinese). Moreover, the reasons behind the predictions of Arabic sentiment analysis methods that exploit advanced artificial intelligence (AI)-based approaches are black-box in nature and quite difficult to understand. This paper proposes an explainable sentiment classification framework for the Arabic language that introduces a noise layer into Bi-Directional Long Short-Term Memory (BiLSTM) and Convolutional Neural Network (CNN)-BiLSTM models to overcome the over-fitting problem. The proposed framework can explain specific predictions by training a local surrogate explainable model to understand why a particular sentiment (positive or negative) is predicted. We carried out experiments on public benchmark Arabic SA datasets. The results show that adding noise layers improves sentiment analysis performance for Arabic by reducing overfitting, and our method outperforms some known state-of-the-art methods. In addition, the introduced explainability combined with the noise layer makes the model more transparent and accountable, and hence helps in adopting AI-enabled systems in practice.

Comment: This is the pre-print version of our accepted paper at the 7th International Conference on Natural Language Processing and Information Retrieval (ACM NLPIR'2023)
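The abstract describes two ingredients: a noise layer added to BiLSTM/CNN-BiLSTM classifiers to reduce over-fitting, and a local surrogate model used to explain individual predictions. The sketch below is not the authors' code; it is a minimal Keras illustration of a CNN-BiLSTM with a Gaussian noise layer, plus a LIME-style local surrogate explanation. The vocabulary size, layer widths, noise level, and the `encode_texts` preprocessing helper are illustrative assumptions.

```python
# Minimal sketch (assumptions noted inline): CNN-BiLSTM sentiment classifier
# with a noise layer, explained locally via a surrogate model (LIME).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from lime.lime_text import LimeTextExplainer

VOCAB_SIZE = 20000  # assumed vocabulary size
NOISE_STDDEV = 0.1  # assumed noise level; active only during training

def build_cnn_bilstm():
    """CNN-BiLSTM binary sentiment classifier with an added Gaussian noise layer."""
    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 128),
        layers.GaussianNoise(NOISE_STDDEV),      # noise layer to curb over-fitting
        layers.Conv1D(64, 5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(64)),   # BiLSTM over convolutional features
        layers.Dense(1, activation="sigmoid"),   # positive vs. negative
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_cnn_bilstm()

def predict_proba(texts):
    """Map raw Arabic strings to class probabilities for the surrogate explainer."""
    x = encode_texts(texts)          # hypothetical tokenization/padding helper
    p = model.predict(x)
    return np.hstack([1 - p, p])     # [P(negative), P(positive)]

explainer = LimeTextExplainer(class_names=["negative", "positive"])
# explanation = explainer.explain_instance(some_review, predict_proba, num_features=10)
# explanation.as_list() then shows which tokens pushed the prediction toward
# positive or negative sentiment for that single review.
```

The Gaussian noise layer acts as a data-dependent regularizer (it perturbs embeddings only at training time), which matches the abstract's claim that noise reduces over-fitting; the LIME explainer fits a simple local model around one prediction, which is one common way to realize the "local surrogate explainable model" the abstract mentions.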

Details

Database :
arXiv
Journal :
NLPIR 2023: 2023 7th International Conference on Natural Language Processing and Information Retrieval, Seoul, Republic of Korea, December 2023
Publication Type :
Report
Accession number :
edsarx.2309.13731
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3639233.3639241