1. DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
- Author
Dai, Damai, Deng, Chengqi, Zhao, Chenggang, Xu, R. X., Gao, Huazuo, Chen, Deli, Li, Jiashi, Zeng, Wangding, Yu, Xingkai, Wu, Y., Xie, Zhenda, Li, Y. K., Huang, Panpan, Luo, Fuli, Ruan, Chong, Sui, Zhifang, and Liang, Wenfeng
- Abstract
In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-$K$ out of $N$ experts, face challenges in ensuring expert specialization, i.e., ensuring that each expert acquires non-overlapping and focused knowledge. In response, we propose the DeepSeekMoE architecture towards ultimate expert specialization. It involves two principal strategies: (1) finely segmenting the experts into $mN$ ones and activating $mK$ of them, allowing for a more flexible combination of activated experts; (2) isolating $K_s$ experts as shared ones, aiming at capturing common knowledge and mitigating redundancy among routed experts. Starting from a modest scale with 2B parameters, we demonstrate that DeepSeekMoE 2B achieves performance comparable with GShard 2.9B, which has 1.5 times the expert parameters and computation. In addition, DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with the same number of total parameters, which sets the upper bound for MoE models. Subsequently, we scale up DeepSeekMoE to 16B parameters and show that it achieves performance comparable with LLaMA2 7B using only about 40% of the computations. Further, our preliminary efforts to scale up DeepSeekMoE to 145B parameters consistently validate its substantial advantages over the GShard architecture and show performance comparable with DeepSeek 67B, using only 28.5% (potentially as little as 18.2%) of the computations. (A minimal code sketch of the two strategies follows this entry.)
- Published
- 2024
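
The abstract describes two architectural ideas: splitting each expert into many finer-grained routed experts (activating several per token) and keeping a few always-active shared experts. Below is a minimal, illustrative PyTorch sketch of these two ideas, not the authors' implementation; the class name, layer sizes, and the plain softmax gate are assumptions, and the paper's auxiliary load-balancing losses are omitted.

```python
# Minimal sketch (not the authors' code) of the two DeepSeekMoE ideas:
# fine-grained expert segmentation and shared expert isolation.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeepSeekMoESketch(nn.Module):
    def __init__(self, d_model, d_expert, n_routed_experts, top_k, n_shared_experts):
        super().__init__()
        # Fine-grained segmentation: many small routed experts (mN in the paper),
        # of which top_k (mK in the paper) are activated per token.
        self.routed_experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_expert), nn.GELU(), nn.Linear(d_expert, d_model))
            for _ in range(n_routed_experts)
        )
        # Shared expert isolation: K_s experts that every token always passes through.
        self.shared_experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_expert), nn.GELU(), nn.Linear(d_expert, d_model))
            for _ in range(n_shared_experts)
        )
        self.router = nn.Linear(d_model, n_routed_experts, bias=False)
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        out = torch.zeros_like(x)
        # Shared experts are applied to every token, with no routing.
        for expert in self.shared_experts:
            out = out + expert(x)
        # Token-to-expert affinities; keep only the top-k routed experts per token.
        scores = F.softmax(self.router(x), dim=-1)
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1)
        for slot in range(self.top_k):
            idx, gate = topk_idx[:, slot], topk_scores[:, slot:slot + 1]
            for e, expert in enumerate(self.routed_experts):
                mask = idx == e
                if mask.any():
                    out[mask] += gate[mask] * expert(x[mask])
        return out + x  # residual connection around the MoE layer
```

A toy instantiation might be `DeepSeekMoESketch(d_model=1024, d_expert=256, n_routed_experts=64, top_k=6, n_shared_experts=2)`; these numbers are only for illustration. The key design point the sketch shows is that smaller `d_expert` with larger `n_routed_experts` and `top_k` keeps the activated parameter count fixed while allowing many more combinations of activated experts, and the shared experts absorb common knowledge so routed experts can specialize.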