1. Source Prompt: Coordinated Pre-training of Language Models on Diverse Corpora from Multiple Sources
- Authors
Xu, Yipei, Lu, Dakuan, Liang, Jiaqing, Wang, Xintao, Geng, Yipeng, Xin, Yingsi, Wu, Hengkui, Chen, Ken, Zhang, Ruiji, and Xiao, Yanghua
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
Pre-trained language models (PLMs) have established a new paradigm in the field of NLP. One of the most popular and successful ways to obtain more powerful PLMs is to continually scale up the sizes of the models and the pre-training corpora. These large corpora are generally obtained by merging smaller ones from multiple sources, and they are thus growing increasingly diverse. However, the side effects of these colossal merged corpora remain understudied. In this paper, we identify the disadvantages of heterogeneous corpora from multiple sources for pre-training PLMs. Toward coordinated pre-training on diverse corpora, we further propose source prompts (SP), which explicitly prompt the model with the data source at both the pre-training and fine-tuning stages. Results of extensive experiments demonstrate that PLMs pre-trained with SP on diverse corpora gain significant improvements on various downstream tasks.
- Published
2023
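
A minimal sketch of the source-prompt (SP) idea described in the abstract: each example is prefixed with a tag naming the corpus it came from, at both the pre-training and fine-tuning stages. The tag format, the `SOURCE_TAGS` mapping, and the `add_source_prompt` helper below are illustrative assumptions, not the paper's actual prompt design.

```python
# Illustrative sketch only: prepend a source tag so the model is told which
# corpus each training example comes from. Tag strings are assumed, not taken
# from the paper.

SOURCE_TAGS = {
    "wikipedia": "[SOURCE: Wikipedia]",
    "web_crawl": "[SOURCE: Web]",
    "books": "[SOURCE: Books]",
}

def add_source_prompt(text: str, source: str) -> str:
    """Prefix the text with its source tag before tokenization."""
    return f"{SOURCE_TAGS[source]} {text}"

if __name__ == "__main__":
    example = add_source_prompt("Photosynthesis converts light into chemical energy.", "wikipedia")
    print(example)  # [SOURCE: Wikipedia] Photosynthesis converts light into chemical energy.
```

The same tagging would be applied consistently when fine-tuning on downstream data, so the model can condition on the declared source rather than having to infer it.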