
Data Augmentation Methods for Enhancing Robustness in Text Classification Tasks.

Authors :
Tang, Huidong
Kamei, Sayaka
Morimoto, Yasuhiko
Source :
Algorithms; Jan2023, Vol. 16 Issue 1, p59, 21p
Publication Year :
2023

Abstract

Text classification is widely studied in natural language processing (NLP). Deep learning models, including large pre-trained models such as BERT and DistilBERT, have achieved impressive results on text classification tasks. However, the robustness of these models against adversarial attacks remains an area of concern. To address this concern, we propose three data augmentation methods to improve the robustness of such pre-trained models. We evaluated our methods on four text classification datasets by fine-tuning DistilBERT on the augmented datasets and assessing the robustness of the resulting models under adversarial attacks. In addition to enhancing robustness, our proposed methods improve the accuracy and F1-score on three of the datasets. We also conducted comparison experiments with two existing data augmentation methods and found that one of our proposed methods achieves a similar performance improvement, while all three achieve a superior improvement in robustness. [ABSTRACT FROM AUTHOR]
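The abstract outlines a pipeline of augmenting a text classification dataset, fine-tuning DistilBERT on the augmented data, and then probing the fine-tuned model with adversarial attacks. The sketch below illustrates that general workflow only; the `random_deletion` augmentation, the toy dataset, and the attack step are placeholders and assumptions for illustration, not the paper's three proposed methods or its experimental setup.

```python
# Sketch of the abstract's workflow: augment -> fine-tune DistilBERT -> attack.
# The augmentation here (random word deletion) is a generic placeholder and is
# NOT one of the paper's three proposed methods, which this record does not detail.
import random
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def random_deletion(text: str, p: float = 0.1) -> str:
    """Placeholder augmentation: drop each word with probability p."""
    words = text.split()
    kept = [w for w in words if random.random() > p]
    return " ".join(kept) if kept else text

# Hypothetical toy data; the paper uses four text classification datasets.
train_texts = ["the movie was great", "terrible plot and acting"]
train_labels = [1, 0]

# Augment: keep the originals and add perturbed copies with the same labels.
aug_texts = train_texts + [random_deletion(t) for t in train_texts]
aug_labels = train_labels + train_labels

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
encodings = tokenizer(aug_texts, truncation=True, padding=True)

class TextDataset(torch.utils.data.Dataset):
    """Wrap tokenizer output and labels for the Trainer API."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=TextDataset(encodings, aug_labels),
)
trainer.train()

# Robustness would then be measured by running adversarial attacks against the
# fine-tuned model, e.g. with a library such as TextAttack (not shown here).
```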

Details

Language :
English
ISSN :
19994893
Volume :
16
Issue :
1
Database :
Complementary Index
Journal :
Algorithms
Publication Type :
Academic Journal
Accession number :
161421000
Full Text :
https://doi.org/10.3390/a16010059