
Enhancing Stability for Large Models Training in Constrained Bandwidth Networks

Authors:
Dai, Yun
Dharamsi, Tejas
Hsu, Byron
Song, Tao
Firooz, Hamed

Publication Year:
2024

Abstract

Training extremely large language models with billions of parameters is a computationally intensive task that pushes the limits of current data-parallel training systems. While techniques like ZeRO++ have enabled efficient distributed training of such giant models on inexpensive low-bandwidth clusters, they can suffer from convergence issues due to potential race conditions in the hierarchical partitioning (hpZ) scheme employed to reduce cross-machine communication. In this work, we first show how these race conditions cause instability when training models with billions of parameters. We then propose a modification to the partitioning algorithm that addresses these convergence challenges while maintaining competitive training efficiency. Empirical evaluation on training multi-billion-parameter Falcon and Llama-2 models demonstrates the updated algorithm's ability to achieve reliable convergence on these massive models, where stock ZeRO++ hpZ fails to converge. The updated algorithm enables robust training of larger models with a 98% improvement in throughput and model training speed, without sacrificing convergence quality.
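
To make the kind of hazard concrete, the sketch below assumes (this is an illustration, not the paper's actual patch or DeepSpeed's real hpZ code) that the race occurs when the asynchronous copy of freshly all-gathered parameters into the intra-node "secondary" partition is not ordered against later writes to the gathered buffer. The names `copy_to_secondary_partition`, `gathered`, `secondary`, and `copy_stream` are placeholders chosen for this sketch; explicit CUDA stream ordering is one generic way to rule out such a race.

```python
# Hypothetical illustration of avoiding a race between an all-gathered
# parameter buffer and its asynchronous copy into a secondary (intra-node)
# partition. Not the authors' modification; names are placeholders.
import torch

def copy_to_secondary_partition(gathered, secondary, copy_stream):
    """Copy all-gathered parameters into the secondary shard on a side
    stream, with explicit ordering so the copy cannot race with work
    enqueued before or after it on the current stream."""
    current = torch.cuda.current_stream()
    # 1) The side stream waits until the work that produced `gathered`
    #    (e.g. the all-gather on the current stream) has finished.
    copy_stream.wait_stream(current)
    with torch.cuda.stream(copy_stream):
        secondary.copy_(gathered, non_blocking=True)
    # 2) The current stream must not overwrite or free `gathered`
    #    until the copy has completed.
    current.wait_stream(copy_stream)
    # Tell the caching allocator that `gathered` is also used on the
    # side stream, so its memory is not reused prematurely.
    gathered.record_stream(copy_stream)

if torch.cuda.is_available():
    device = torch.device("cuda")
    gathered = torch.randn(1 << 20, device=device)  # stand-in for all-gathered params
    secondary = torch.empty_like(gathered)          # stand-in for the intra-node shard
    side_stream = torch.cuda.Stream()
    copy_to_secondary_partition(gathered, secondary, side_stream)
    # Without the two wait_stream() calls above, an in-place update like
    # this could overlap with the still-running asynchronous copy:
    gathered.add_(1.0)
    torch.cuda.synchronize()
    # The secondary shard holds the pre-update values, as intended.
    assert not torch.allclose(secondary, gathered)
else:
    print("CUDA not available; this sketch only demonstrates GPU stream ordering.")
```

The two `wait_stream()` calls serialize the producer of the gathered buffer, the copy into the secondary partition, and any later consumers, trading a small amount of overlap for deterministic ordering; any real fix inside ZeRO++ hpZ would need to make an equivalent ordering guarantee in its own code paths.
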

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.01614
Document Type:
Working Paper