
Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification

Authors:
Dong, Xin
Zhu, Yaxin
Zhang, Yupeng
Fu, Zuohui
Xu, Dongkuan
Yang, Sen
de Melo, Gerard
Publication Year: 2020

Abstract

In cross-lingual text classification, one seeks to exploit labeled data from one language to train a text classification model that can then be applied to a completely different language. Recent multilingual representation models have made it much easier to achieve this. However, there may still be subtle differences between languages that are neglected in this process. To address this, we present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations. The resulting model then serves as a teacher to induce labels for unlabeled target-language samples that can be used during further adversarial training, allowing us to gradually adapt our model to the target language. Compared with a number of strong baselines, we observe significant gains in effectiveness on document and intent classification for a diverse set of languages.

Comment: SIGIR 2020 (Short Paper)
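The abstract describes two interacting components: adversarial training that minimizes the loss under worst-case label-preserving input perturbations, and a self-learning loop in which the model acts as a teacher to pseudo-label unlabeled target-language samples. The following is a minimal PyTorch sketch of that loop, not the paper's implementation: it assumes a classifier over fixed multilingual sentence embeddings, approximates the inner maximization with a single FGM-style gradient step, and filters pseudo-labels by confidence; the function names, epsilon, and threshold are illustrative assumptions.

import torch
import torch.nn.functional as F

def adversarial_loss(model, x, y, epsilon=1.0):
    # Approximate the max over label-preserving perturbations with a single
    # gradient step on the inputs (FGM-style); epsilon is an assumed scale.
    x = x.detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    return F.cross_entropy(model(x + delta), y)

@torch.no_grad()
def pseudo_label(model, x, threshold=0.9):
    # Teacher step: keep target-language samples the current model labels
    # with confidence above an assumed threshold.
    conf, y_hat = F.softmax(model(x), dim=-1).max(dim=-1)
    keep = conf >= threshold
    return x[keep], y_hat[keep]

# Toy self-learning loop on random "embeddings" (source labeled, target not).
model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 3))
src_x, src_y = torch.randn(64, 16), torch.randint(0, 3, (64,))
tgt_x = torch.randn(64, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
    px, py = pseudo_label(model, tgt_x)  # induce target-language labels
    x, y = torch.cat([src_x, px]), torch.cat([src_y, py])
    opt.zero_grad()
    # Train on the clean loss plus the adversarially perturbed loss.
    loss = F.cross_entropy(model(x), y) + adversarial_loss(model, x, y)
    loss.backward()
    opt.step()

Thresholding the pseudo-labels keeps the teacher's lowest-confidence guesses out of the next round of adversarial training, which is one plausible reading of how the model "gradually" adapts to the target language rather than drifting.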

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2007.15072
Document Type: Working Paper