Exploring transformer models for sentiment classification: A comparison of BERT, RoBERTa, ALBERT, DistilBERT, and XLNet.
- Source :
- Expert Systems. Nov 2024, Vol. 41, Issue 11, p1-27. 27p.
- Publication Year :
- 2024
Abstract
- Transfer learning models have proven superior to classical machine learning approaches in various text classification tasks, such as sentiment analysis, question answering, news categorization, and natural language inference. Recently, these models have shown exceptional results in natural language understanding (NLU). Advanced attention‐based language models like BERT and XLNet excel at handling complex tasks across diverse contexts. However, they encounter difficulties when applied to specific domains. Platforms like Facebook, characterized by continually evolving casual and sophisticated language, demand meticulous context analysis even from human users. The literature has proposed numerous solutions using statistical and machine learning techniques to predict the sentiment (positive or negative) of online customer reviews, but most of them rely on various business, review, and reviewer features, which leads to generalizability issues. Furthermore, there have been very few studies investigating the effectiveness of state‐of‐the‐art pre‐trained language models for sentiment classification in reviews. Therefore, this study aims to assess the effectiveness of BERT, RoBERTa, ALBERT, DistilBERT, and XLNet in sentiment classification using the Yelp reviews dataset. The models were fine‐tuned, and the results obtained with the same hyperparameters are as follows: 98.30 for RoBERTa, 98.20 for XLNet, 97.40 for BERT, 97.20 for ALBERT, and 96.00 for DistilBERT. [ABSTRACT FROM AUTHOR]
Details
- Language :
- English
- ISSN :
- 0266-4720
- Volume :
- 41
- Issue :
- 11
- Database :
- Academic Search Index
- Journal :
- Expert Systems
- Publication Type :
- Academic Journal
- Accession number :
- 180109754
- Full Text :
- https://doi.org/10.1111/exsy.13701