
An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data

Authors :
Wang, Lili
Gao, Chongyang
Wei, Jason
Ma, Weicheng
Liu, Ruibo
Vosoughi, Soroush
Publication Year :
2020

Abstract

The field of NLP has seen unprecedented achievements in recent years. Most notably, with the advent of large-scale pre-trained Transformer-based language models, such as BERT, there has been a noticeable improvement in text representation. It is, however, unclear whether these improvements translate to noisy user-generated text, such as tweets. In this paper, we present an experimental survey of a wide range of well-known text representation techniques for the task of text clustering on noisy Twitter data. Our results indicate that the more advanced models do not necessarily work best on tweets and that more exploration in this area is needed.

Comment :
In proceedings of the 6th Workshop on Noisy User-generated Text (W-NUT) at EMNLP 2020
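The task surveyed here, clustering short texts by first mapping them to vector representations, can be illustrated with a toy sketch. The following is not the paper's pipeline (which evaluates stronger representations such as BERT embeddings): it uses simple bag-of-words vectors and a minimal k-means with deterministic farthest-point initialization, and the example tweets are invented for illustration.

```python
# Toy sketch of "represent, then cluster" for short texts.
# Bag-of-words vectors + minimal Lloyd's k-means; the paper instead
# compares many representation methods (e.g. BERT) on real tweets.

def bow_vectors(texts):
    """Map each text to a bag-of-words count vector over a shared vocabulary."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for t in texts:
        v = [0.0] * len(vocab)
        for w in t.lower().split():
            v[index[w]] += 1.0
        vecs.append(v)
    return vecs

def kmeans(vecs, k, iters=20):
    """Minimal k-means; returns one cluster id per input vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Deterministic init: start from the first vector, then greedily add
    # the point farthest from all centroids chosen so far.
    centroids = [list(vecs[0])]
    while len(centroids) < k:
        far = max(vecs, key=lambda v: min(dist2(v, c) for c in centroids))
        centroids.append(list(far))

    labels = [0] * len(vecs)
    for _ in range(iters):
        # Assignment step: each vector joins its nearest centroid.
        for i, v in enumerate(vecs):
            labels[i] = min(range(k), key=lambda c: dist2(v, centroids[c]))
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vecs, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

tweets = [
    "great game tonight go team",
    "what a game the team won",
    "new phone battery drains fast",
    "my phone battery died again",
]
labels = kmeans(bow_vectors(tweets), k=2)
```

On this tiny example the two sports tweets land in one cluster and the two phone tweets in the other; the paper's point is that with real, noisy Twitter data the choice of representation (TF-IDF, word vectors, BERT, etc.) matters far more than the clustering step itself.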

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2012.03468
Document Type :
Working Paper
Full Text :
https://doi.org/10.18653/v1/2020.wnut-1.27