
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

Authors:
Ma, Shuming
Wang, Hongyu
Ma, Lingxiao
Wang, Lei
Wang, Wenhui
Huang, Shaohan
Dong, Li
Wang, Ruiping
Xue, Jilong
Wei, Furu
Publication Year:
2024

Abstract

Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.

Comment: Work in progress
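The "1.58" in the name follows from information content: each ternary weight takes one of three values, so it carries at most log2(3) ≈ 1.58 bits. The abstract does not spell out the quantization procedure, so as a rough illustration only, below is a minimal PyTorch sketch of an absmean-style ternary quantizer; the function name ternary_quantize, the eps floor, and the per-tensor scale are our assumptions, not details taken from the record.

```python
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Map weights to {-1, 0, 1} with a per-tensor absmean scale (illustrative sketch)."""
    gamma = w.abs().mean().clamp(min=eps)         # average magnitude used as the scale
    w_ternary = (w / gamma).round().clamp(-1, 1)  # round, then clip to ternary values
    return w_ternary, gamma

# Usage: w_ternary * gamma approximates the original weights.
w = torch.randn(4, 4)
w_q, gamma = ternary_quantize(w)
assert set(w_q.unique().tolist()) <= {-1.0, 0.0, 1.0}
```

Because the quantized weights are ternary, matrix multiplication reduces to additions, subtractions, and skips (for zeros), which is one plausible source of the latency and energy savings the abstract claims.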

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1438531010
Document Type:
Electronic Resource