
Contrastive learning with text augmentation for text classification.

Authors:
Jia, Ouyang
Huang, Huimin
Ren, Jiaxin
Xie, Luodi
Xiao, Yinyin
Source:
Applied Intelligence; Aug2023, Vol. 53 Issue 16, p19522-19531, 10p
Publication Year:
2023

Abstract

Various contrastive learning models have been successfully applied to representation learning for downstream tasks. The positive samples used in contrastive learning are often derived from augmented data, which improves the performance of many computer vision tasks but remains underused in natural language processing tasks such as text classification. Existing data augmentation methods have rarely been applied to contrastive learning in the field of NLP. In this paper, we propose a Text Augmentation Contrastive Learning Representation model, TACLR, which combines the easy text augmentation techniques (i.e., synonym replacement, random insertion, random swap and random deletion) and the text Mixup augmentation method with contrastive learning for the text classification task. Furthermore, we propose a unified method that flexibly adapts to supervised, semi-supervised and unsupervised learning. Experimental results on five text classification datasets show that TACLR significantly improves text classification accuracy. We also provide extensive ablation studies exploring the contribution of each component of our model. The source code of our work is publicly available from https://gitlab.com/models-for-paper/taclr. [ABSTRACT FROM AUTHOR]
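The four "easy" augmentations the abstract names (synonym replacement, random insertion, random swap, random deletion) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the `SYNONYMS` table below is a hypothetical stand-in for whatever synonym source the authors use (the record does not specify one), and the operation signatures are assumptions.

```python
import random

# Hypothetical toy synonym table; a real system would query a resource
# such as WordNet instead (the paper's actual source is not stated here).
SYNONYMS = {"quick": ["fast", "speedy"], "happy": ["glad", "joyful"]}

def synonym_replacement(words, n=1, rng=random):
    """Replace up to n words that have an entry in SYNONYMS."""
    words = list(words)
    candidates = [i for i, w in enumerate(words) if w in SYNONYMS]
    for i in rng.sample(candidates, min(n, len(candidates))):
        words[i] = rng.choice(SYNONYMS[words[i]])
    return words

def random_insertion(words, n=1, rng=random):
    """Insert a synonym of a random in-vocabulary word at a random position, n times."""
    words = list(words)
    for _ in range(n):
        candidates = [w for w in words if w in SYNONYMS]
        if not candidates:
            break
        syn = rng.choice(SYNONYMS[rng.choice(candidates)])
        words.insert(rng.randrange(len(words) + 1), syn)
    return words

def random_swap(words, n=1, rng=random):
    """Swap two randomly chosen positions, n times."""
    words = list(words)
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1, rng=random):
    """Drop each word independently with probability p; always keep at least one."""
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]
```

In a contrastive setup, two independently augmented views of the same sentence would serve as a positive pair, with other sentences in the batch acting as negatives.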

Details

Language:
English
ISSN:
0924-669X
Volume:
53
Issue:
16
Database:
Complementary Index
Journal:
Applied Intelligence
Publication Type:
Academic Journal
Accession Number:
170748553
Full Text:
https://doi.org/10.1007/s10489-023-04453-3