1. GRIN: GRadient-INformed MoE
- Authors
Liyuan Liu, Young Jin Kim, Shuohang Wang, Chen Liang, Yelong Shen, Hao Cheng, Xiaodong Liu, Masahiro Tanaka, Xiaoxia Wu, Wenxiang Hu, Vishrav Chaudhary, Zeqi Lin, Chenruidong Zhang, Jilong Xue, Hany Awadalla, Jianfeng Gao, and Weizhu Chen
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
Mixture-of-Experts (MoE) models scale more effectively than dense models due to sparse computation through expert routing, selectively activating only a small subset of expert modules. However, sparse computation challenges traditional training practices, as discrete expert routing hinders standard backpropagation and thus gradient-based optimization, which are the cornerstones of deep learning. To better pursue the scaling power of MoE, we introduce GRIN (GRadient-INformed MoE training), which incorporates sparse gradient estimation for expert routing and configures model parallelism to avoid token dropping. Applying GRIN to autoregressive language modeling, we develop a top-2 16$\times$3.8B MoE model. Our model, with only 6.6B activated parameters, outperforms a 7B dense model and matches the performance of a 14B dense model trained on the same data. Extensive evaluations across diverse tasks demonstrate the potential of GRIN to significantly enhance MoE efficacy, achieving 79.4 on MMLU, 83.7 on HellaSwag, 74.4 on HumanEval, and 58.9 on MATH.
- Comment
58 pages
- Published
2024
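
For context on the expert routing the abstract refers to, below is a minimal sketch of a top-2 MoE layer with conventional softmax gating, written in PyTorch. The expert count, hidden sizes, and class names are illustrative assumptions, not the paper's 16$\times$3.8B configuration, and GRIN's sparse gradient estimator is not reproduced; the sketch only shows why the discrete top-2 selection is opaque to backpropagation.

```python
# Illustrative top-2 MoE layer (PyTorch). Sizes, names, and the expert
# architecture are assumptions for this sketch, not the GRIN model config.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top2MoE(nn.Module):
    def __init__(self, d_model: int = 64, d_ff: int = 256, n_experts: int = 16):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        probs = F.softmax(self.gate(x), dim=-1)           # router distribution over experts
        top_p, top_i = probs.topk(2, dim=-1)              # discrete top-2 routing
        top_p = top_p / top_p.sum(dim=-1, keepdim=True)   # renormalize the two gate weights
        out = torch.zeros_like(x)
        for k in range(2):                                # each of the two routing slots
            for e, expert in enumerate(self.experts):
                sel = top_i[:, k] == e                    # tokens routed to expert e
                if sel.any():
                    out[sel] += top_p[sel, k].unsqueeze(-1) * expert(x[sel])
        # Gradients reach self.gate only through top_p; the hard topk selection
        # itself is non-differentiable, which is the gap that GRIN's sparse
        # gradient estimation for expert routing targets.
        return out


if __name__ == "__main__":
    layer = Top2MoE()
    tokens = torch.randn(8, 64)
    print(layer(tokens).shape)  # torch.Size([8, 64])
```

Each token here executes only 2 of the 16 experts, which is the source of the gap between activated and total parameters described in the abstract.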