1. PLaMo-100B: A Ground-Up Language Model Designed for Japanese Proficiency
- Author
Preferred Elements: Kenshin Abe, Kaizaburo Chubachi, Yasuhiro Fujita, Yuta Hirokawa, Kentaro Imajo, Toshiki Kataoka, Hiroyoshi Komatsu, Hiroaki Mikami, Tsuguo Mogami, Shogo Murai, Kosuke Nakago, Daisuke Nishino, Toru Ogawa, Daisuke Okanohara, Yoshihiko Ozaki, Shotaro Sano, Shuji Suzuki, Tianqi Xu, and Toshihiko Yanase
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
We introduce PLaMo-100B, a large-scale language model designed for Japanese proficiency. The model was trained from scratch on 2 trillion tokens, using techniques such as QK Normalization and Z-Loss to ensure stability during training. Post-training techniques, including Supervised Fine-Tuning and Direct Preference Optimization, were applied to refine the model's performance. Benchmark evaluations suggest that PLaMo-100B performs well, particularly on Japanese-specific tasks, achieving results competitive with frontier models such as GPT-4. The base model is available at https://huggingface.co/pfnet/plamo-100b.
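The abstract names QK Normalization and Z-Loss as the stabilizers used during pretraining, and Direct Preference Optimization (DPO) as a post-training step. The sketch below illustrates the common form of these techniques; the RMSNorm variant of QK Normalization, the function names, and the coefficient values are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only; not PLaMo-100B's actual implementation.

def rms_norm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Normalize over the last (per-head) dimension.
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)

def qk_norm_attention(q, k, v):
    # QK Normalization: normalize queries and keys before the dot
    # product so attention logits stay bounded as training progresses.
    # (RMSNorm is an assumed choice here; LayerNorm is another common one.)
    return F.scaled_dot_product_attention(rms_norm(q), rms_norm(k), v)

def z_loss(logits: torch.Tensor, coef: float = 1e-4) -> torch.Tensor:
    # Z-Loss: penalize the squared log-partition log Z = logsumexp(logits)
    # of the output softmax, discouraging logits from drifting to large
    # magnitudes. The coefficient is an assumed, typical value.
    log_z = torch.logsumexp(logits, dim=-1)
    return coef * (log_z ** 2).mean()

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Direct Preference Optimization: push the policy's log-ratio on the
    # preferred response above its log-ratio on the rejected response.
    # Inputs are summed log-probabilities of each response under the
    # trained policy (pi_*) and a frozen reference model (ref_*).
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()
```

In a setup like this, `z_loss` would be added as a small auxiliary term to the cross-entropy objective during pretraining, while `dpo_loss` would drive the preference-optimization stage after supervised fine-tuning.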
- Published
2024