Task Diversity Shortens the ICL Plateau

Authors:
Kim, Jaeyeon
Kwon, Sehyun
Choi, Joo Young
Park, Jongho
Cho, Jaewoong
Lee, Jason D.
Ryu, Ernest K.
Publication Year: 2024

Abstract

In-context learning (ICL) describes a language model's ability to generate outputs based on a set of input demonstrations and a subsequent query. To understand this remarkable capability, researchers have studied simplified, stylized models. These studies have consistently observed long loss plateaus, during which models exhibit minimal improvement, followed by a sudden, rapid surge of learning. In this work, we reveal that training on multiple diverse ICL tasks simultaneously shortens the loss plateaus, making each task easier to learn. This finding is surprising as it contradicts the natural intuition that the combined complexity of multiple ICL tasks would lengthen the learning process, not shorten it. Our result suggests that the recent success in large-scale training of language models may be attributed not only to the richness of the data at scale but also to the easier optimization (training) induced by the diversity of natural language training data.
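
The abstract does not spell out the stylized setup, but work in this line typically trains a small model to predict f(x_query) from in-context (x, f(x)) demonstration pairs, with "task diversity" meaning each prompt's task is drawn from a mixture of function families. Below is a minimal sketch of such a mixed-task batch generator; the two task families, the query-masking convention, and all dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Hypothetical task families standing in for the paper's "diverse ICL tasks";
# the actual families and dimensions used in the paper may differ.
def sample_linear_task(dim, rng):
    w = rng.standard_normal(dim)          # fresh task parameters per prompt
    return lambda X: X @ w

def sample_quadratic_task(dim, rng):
    w = rng.standard_normal(dim)
    return lambda X: (X @ w) ** 2

TASK_FAMILIES = [sample_linear_task, sample_quadratic_task]

def make_icl_batch(batch_size, n_demos, dim, rng, families=TASK_FAMILIES):
    """Each prompt draws its task uniformly from the mixture of families."""
    prompts, targets = [], []
    for _ in range(batch_size):
        family = families[rng.integers(len(families))]
        f = family(dim, rng)
        X = rng.standard_normal((n_demos + 1, dim))  # demos plus one query
        y = f(X)
        y_ctx = y.copy()
        y_ctx[-1] = 0.0                   # hide the query's answer in the prompt
        # Prompt rows are (x_i, y_i) pairs; the model must infer the task from
        # context alone and predict y for the final (query) row.
        prompts.append(np.concatenate([X, y_ctx[:, None]], axis=1))
        targets.append(y[-1])
    return np.stack(prompts), np.array(targets)

rng = np.random.default_rng(0)
P, t = make_icl_batch(batch_size=64, n_demos=16, dim=8, rng=rng)
print(P.shape, t.shape)   # (64, 17, 9) (64,)
```

A model trained on batches like these sees every task family simultaneously; the paper's finding is that, in this mixed regime, the loss plateau for each individual family is shorter than when training on that family alone.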

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2410.05448
Document Type: Working Paper