
On the Impact of Cross-Domain Data on German Language Models

Authors:
Dada, Amin
Chen, Aokun
Peng, Cheng
Smith, Kaleb E.
Idrissi-Yaghir, Ahmad
Seibold, Constantin Marc
Li, Jianning
Heiliger, Lars
Yang, Xi
Friedrich, Christoph M.
Truhn, Daniel
Egger, Jan
Bian, Jiang
Kleesiek, Jens
Wu, Yonghui
Publication Year:
2023

Abstract

Traditionally, large language models have been trained either on general web crawls or on domain-specific data. However, recent successes of generative large language models have shed light on the benefits of cross-domain datasets. To examine the significance of prioritizing data diversity over quality, we present a German dataset comprising texts from five domains, along with another dataset aimed at containing high-quality data. By training a series of models ranging between 122M and 750M parameters on both datasets, we conduct a comprehensive benchmark on multiple downstream tasks. Our findings demonstrate that the models trained on the cross-domain dataset outperform those trained on quality data alone, with improvements of up to 4.45% over the previous state of the art. The models are available at https://huggingface.co/ikim-uk-essen

Comment: 13 pages, 1 figure, accepted at Findings of the Association for Computational Linguistics: EMNLP 2023
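Since the checkpoints are published on the Hugging Face Hub, they can presumably be loaded with the standard transformers API. The snippet below is a minimal sketch of that workflow; the model identifier is a hypothetical placeholder, as the abstract only gives the organization page, and the actual checkpoint names must be taken from https://huggingface.co/ikim-uk-essen.

```python
# Minimal sketch: loading one of the released German checkpoints with the
# Hugging Face transformers library. The identifier below is a hypothetical
# placeholder; replace it with an actual checkpoint name from
# https://huggingface.co/ikim-uk-essen before running.
from transformers import AutoModel, AutoTokenizer

model_id = "ikim-uk-essen/<checkpoint-name>"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a German example sentence and run a forward pass.
inputs = tokenizer("Ein Beispielsatz auf Deutsch.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```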

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2310.07321
Document Type:
Working Paper