
AdaGC: Improving Training Stability for Large Language Model Pretraining

Authors:
Wang, Guoxia
Li, Shuai
Chen, Congliang
Zeng, Jinle
Yang, Jiabin
Sun, Tao
Ma, Yanjun
Yu, Dianhai
Shen, Li
Publication Year:
2025

Abstract

Large Language Models (LLMs) increasingly suffer from loss spikes as they scale, undermining training stability and final performance. While gradient clipping mitigates this issue, traditional global approaches handle parameter-specific gradient variations and decaying gradient norms poorly. We propose **AdaGC**, an adaptive gradient clipping framework that automatically adjusts a local threshold for each parameter via an exponential moving average of its gradient norms. Theoretical analysis proves AdaGC's convergence under non-convex conditions. Extensive experiments demonstrate significant improvements: on Llama-2 7B/13B, AdaGC completely eliminates loss spikes while reducing WikiText perplexity by 3.5% (+0.14pp LAMBADA accuracy) for 7B and achieving 0.65% lower training loss with 1.47% reduced validation perplexity for 13B compared to global clipping. For CLIP ViT-Base, AdaGC converges 25% faster than StableAdamW while fully eliminating spikes. The method is effective across architectures (Llama-2 7B/13B) and modalities (CLIP), and integrates readily with diverse optimizers such as AdamW and Lion. Source code will be released on GitHub.
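The abstract describes AdaGC only at the level of its core idea: each parameter keeps an exponential moving average (EMA) of its own gradient norm and is clipped against that local threshold rather than a single global one. The sketch below illustrates this idea in PyTorch; the EMA coefficient `beta`, the clipping multiplier `lam`, and the choice to update the EMA with the post-clipping norm are illustrative assumptions, not details taken from the paper.

```python
import torch


def adagc_clip_(params, ema_norms, beta=0.99, lam=1.05, eps=1e-8):
    """Per-parameter adaptive gradient clipping (illustrative sketch).

    Each parameter's gradient is rescaled in place whenever its L2 norm
    exceeds ``lam`` times an exponential moving average (EMA) of its past
    gradient norms; the EMA is then updated with the post-clipping norm.
    ``beta``, ``lam``, and ``eps`` are assumed defaults, not values from
    the paper.
    """
    for i, p in enumerate(params):
        if p.grad is None:
            continue
        g_norm = p.grad.norm()
        if ema_norms[i] is None:
            # First step for this parameter: initialize the EMA, skip clipping.
            ema_norms[i] = g_norm.detach().clone()
            continue
        threshold = lam * ema_norms[i]
        if g_norm > threshold:
            # Clip only this parameter's gradient, leaving the others untouched.
            p.grad.mul_(threshold / (g_norm + eps))
            g_norm = threshold
        # Update the per-parameter EMA with the (possibly clipped) norm.
        ema_norms[i] = beta * ema_norms[i] + (1.0 - beta) * g_norm.detach()
```

In a training loop, such a hook would be called after `loss.backward()` and before `optimizer.step()`, with `ema_norms` initialized to `[None] * len(params)` and carried across steps alongside the optimizer state.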

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.11034
Document Type:
Working Paper