Foundations of Large Language Model Compression -- Part 1: Weight Quantization
- Publication Year :
- 2024
Abstract
- In recent years, compression of large language models (LLMs) has emerged as an important problem to enable language model deployment on resource-constrained devices, reduce computational costs, and mitigate the environmental footprint of large-scale AI infrastructure. In this paper, we lay down the foundation for LLM quantization from a convex optimization perspective and propose a quantization technique that builds on this foundation for optimal quantization outcomes. Our quantization framework, CVXQ, scales to models containing hundreds of billions of weight parameters and provides users with the flexibility to compress models to any specified size, post-training. A reference implementation of CVXQ can be obtained from github.com/seannz/cvxq.
- Comment: Preprint. 17 pages, 4 figures, 5 appendices
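- The abstract describes CVXQ only at a high level. For orientation, below is a minimal, hypothetical sketch of plain post-training uniform weight quantization in Python; it is not the CVXQ algorithm (which additionally allocates bit depths and step sizes via convex optimization), and the function name and per-row scheme are illustrative assumptions. See the linked repository for the reference implementation.

```python
# Illustrative sketch of post-training uniform weight quantization.
# NOT the CVXQ algorithm: CVXQ chooses bit depths and step sizes via
# convex optimization; here the bit depth is fixed and given by the caller.
import numpy as np

def quantize_weights(W: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize each row (output channel) of W to `bits` bits."""
    levels = 2 ** bits - 1
    Wq = np.empty_like(W)
    for i, row in enumerate(W):
        lo, hi = row.min(), row.max()
        step = (hi - lo) / levels if hi > lo else 1.0  # per-row step size
        q = np.round((row - lo) / step)                # integer codes in 0..levels
        Wq[i] = q * step + lo                          # dequantized weights
    return Wq

# Usage: quantize a random weight matrix to 4 bits and measure the error.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)).astype(np.float32)
W4 = quantize_weights(W, bits=4)
print("mean squared error:", float(np.mean((W - W4) ** 2)))
```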
- Subjects :
- Computer Science - Machine Learning
- Computer Science - Computation and Language
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2409.02026
- Document Type :
- Working Paper