
Preference Curriculum: LLMs Should Always Be Pretrained on Their Preferred Data

Authors:
Zhang, Xuemiao
Xu, Liangyu
Duan, Feiyu
Zhou, Yongwei
Wang, Sirui
Weng, Rongxiang
Wang, Jingang
Cai, Xunliang
Publication Year:
2025

Abstract

Large language models (LLMs) generally utilize a consistent data distribution throughout the pretraining process. However, as the model's capability improves, it is intuitive that its data preferences change dynamically, indicating the need for pretraining with different data at various training stages. To achieve this, we propose the Perplexity Difference (PD) based Preference Curriculum learning (PDPC) framework, which continuously perceives and uses the data preferred by the LLM to train and boost it. First, we introduce the PD metric to quantify the difference in how challenging a sample is for weak versus strong models. Samples with high PD are more challenging for weak models to learn and are better suited to the later stages of pretraining. Second, we propose a preference function to approximate and predict the data preference of the LLM at any training step, so that the dataset can be arranged offline and training can proceed continuously without interruption. Experimental results on 1.3B and 3B models demonstrate that PDPC significantly surpasses baselines. Notably, the 3B model trained on 1T tokens achieves an average accuracy gain of over 8.1% across MMLU and CMMLU.
Comment: 18 pages, 13 figures
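
The abstract describes scoring each pretraining sample by the gap between a weak and a strong model's perplexity, then scheduling high-PD samples later in training. The sketch below illustrates one plausible reading of that metric; the exact PD formula, the normalization by the weak model's perplexity, and the checkpoint names are assumptions not stated in this record.

```python
# Hedged sketch of a perplexity-difference (PD) score for curriculum ordering.
# Assumption: PD is the weak-vs-strong perplexity gap, normalized by the weak
# model's perplexity; the paper may define it differently.

import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text: str, device: str = "cpu") -> float:
    """Token-level perplexity of `text` under `model`."""
    enc = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def pd_score(weak, strong, tokenizer, text: str) -> float:
    """Positive when the sample is harder for the weak model than the strong one."""
    ppl_weak = perplexity(weak, tokenizer, text)
    ppl_strong = perplexity(strong, tokenizer, text)
    return (ppl_weak - ppl_strong) / ppl_weak  # assumed normalized form

# Hypothetical usage: order a corpus so that high-PD samples appear later,
# matching the curriculum idea described in the abstract.
# weak = AutoModelForCausalLM.from_pretrained("path/to/weak-checkpoint")
# strong = AutoModelForCausalLM.from_pretrained("path/to/strong-checkpoint")
# tok = AutoTokenizer.from_pretrained("path/to/tokenizer")
# ordered = sorted(corpus, key=lambda s: pd_score(weak, strong, tok, s))
```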

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.13126
Document Type:
Working Paper