1. Mixed-Precision Graph Neural Quantization for Low Bit Large Language Models
- Author
Liu, Wanlong, Xiao, Yichen, Zeng, Dingyi, Zhao, Hongyang, Chen, Wenyu, and Zhang, Malu
- Subjects
Computer Science - Computation and Language
- Abstract
Post-Training Quantization (PTQ) is pivotal for deploying large language models (LLMs) in resource-limited settings by significantly reducing resource demands. However, existing PTQ strategies underperform at low bit widths (< 3 bits) due to the significant difference between the quantized and the original weights. To enhance quantization performance at low bit widths, we introduce a Mixed-precision Graph Neural PTQ (MG-PTQ) approach, which employs a graph neural network (GNN) module to capture dependencies among weights and adaptively assign quantization bit-widths. Through the information propagation of the GNN module, our method more effectively captures dependencies among target weights, leading to a more accurate assessment of weight importance and an optimized allocation of quantization strategies. Extensive experiments on the WikiText2 and C4 datasets demonstrate that our MG-PTQ method outperforms the previous state-of-the-art PTQ method GPTQ, setting new benchmarks for quantization performance under low-bit conditions.
- Comment
ICASSP 2025
- Published
2025
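
A minimal sketch of the idea the abstract describes: weight columns act as graph nodes, one round of message passing over a nearest-neighbor similarity graph refines per-column importance, and bit-widths are then assigned by ranked importance before uniform quantization. The graph construction, importance heuristic, bit-budget split, and function names (`assign_bitwidths`, `quantize_columns`) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of mixed-precision bit-width assignment via message
# passing over a weight-column similarity graph (not the MG-PTQ code).
import torch
import torch.nn.functional as F


def assign_bitwidths(W: torch.Tensor, avg_bits: float = 2.5) -> torch.Tensor:
    """Assign per-column bit-widths from message-passing-refined importance."""
    # Node features: per-column importance (squared L2 norm of each column).
    importance = W.pow(2).sum(dim=0)                       # shape (cols,)

    # Graph: connect each column to its k most similar columns (cosine similarity).
    k = min(8, W.shape[1] - 1)
    Wn = F.normalize(W, dim=0)
    sim = Wn.T @ Wn                                        # (cols, cols)
    neighbors = sim.topk(k + 1, dim=1).indices[:, 1:]      # drop self-loops

    # One round of mean-aggregation message passing over the k-NN graph.
    refined = 0.5 * importance + 0.5 * importance[neighbors].mean(dim=1)

    # Rank columns and split the bit budget so the mean bit-width is ~avg_bits:
    # the most important columns get 4 bits, the next group 3 bits, the rest 2.
    order = refined.argsort(descending=True)
    n = W.shape[1]
    n_high = int(n * (avg_bits - 2) / 3)                   # fraction at 4 and at 3 bits
    bits = torch.full((n,), 2, dtype=torch.long)
    bits[order[:n_high]] = 4
    bits[order[n_high:2 * n_high]] = 3
    return bits


def quantize_columns(W: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
    """Uniform per-column quantization at the assigned bit-widths."""
    Wq = torch.empty_like(W)
    for j in range(W.shape[1]):
        levels = 2 ** bits[j].item() - 1
        w = W[:, j]
        scale = (w.max() - w.min()).clamp(min=1e-8) / levels
        Wq[:, j] = torch.round((w - w.min()) / scale) * scale + w.min()
    return Wq


# Toy usage: quantize a random weight matrix at an average of ~2.5 bits.
W = torch.randn(512, 512)
bits = assign_bitwidths(W, avg_bits=2.5)
print("mean bits:", bits.float().mean().item())
print("quantization error:", (quantize_columns(W, bits) - W).norm().item())
```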