
Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models

Authors:
Ping, Bowen
Wang, Shuo
Wang, Hanqing
Han, Xu
Xu, Yuzhuang
Yan, Yukun
Chen, Yun
Chang, Baobao
Liu, Zhiyuan
Sun, Maosong
Publication Year: 2024

Abstract

Fine-tuning is a crucial process for adapting large language models (LLMs) to diverse applications. In certain scenarios, such as multi-tenant serving, deploying multiple LLMs becomes necessary to meet complex demands. Recent studies suggest decomposing a fine-tuned LLM into a base model and corresponding delta weights, which are then compressed using low-rank or low-bit approaches to reduce costs. In this work, we observe that existing low-rank and low-bit compression methods can significantly harm the model performance for task-specific fine-tuned LLMs (e.g., WizardMath for math problems). Motivated by the long-tail distribution of singular values in the delta weights, we propose a delta quantization approach using mixed precision. This method employs higher-bit representation for singular vectors corresponding to larger singular values. We evaluate our approach on various fine-tuned LLMs, including math LLMs, code LLMs, chat LLMs, and even VLMs. Experimental results demonstrate that our approach performs comparably to fully fine-tuned LLMs, surpassing both low-rank and low-bit baselines by a considerable margin. Additionally, we show that our method is compatible with various backbone LLMs, such as Llama-2, Llama-3, and Mistral, highlighting its generalizability.

Comment: 12 pages
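The abstract's core idea is that singular vectors of the delta weights associated with large singular values are kept at higher precision, while the long tail is quantized more aggressively. Below is a minimal PyTorch sketch of that mixed-precision-by-singular-value scheme; the `fake_quantize` helper, the group sizes, and the bit-widths (8/3/2) are illustrative assumptions, not the authors' released implementation.

```python
import torch

def fake_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric fake-quantization to `bits` bits (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    if scale == 0:
        return x
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

def mixed_precision_delta(delta: torch.Tensor,
                          groups=((64, 8), (256, 3), (1024, 2))) -> torch.Tensor:
    """Approximate a delta weight matrix via SVD, keeping the leading singular
    vectors at higher precision and the long tail at lower precision.
    `groups` lists (number_of_singular_vectors, bits) pairs; the values here
    are hypothetical, chosen only to illustrate the mixed-precision idea."""
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    approx = torch.zeros_like(delta)
    start = 0
    for count, bits in groups:
        end = min(start + count, S.numel())
        if start >= end:
            break
        # Quantize the left/right singular vectors of this group at its bit-width.
        Uq = fake_quantize(U[:, start:end], bits)
        Vq = fake_quantize(Vh[start:end, :], bits)
        approx += Uq @ torch.diag(S[start:end]) @ Vq
        start = end
    return approx

# Usage: compress the difference between a fine-tuned and a base weight matrix.
base = torch.randn(1024, 1024)
finetuned = base + 0.01 * torch.randn(1024, 1024)
delta_hat = mixed_precision_delta(finetuned - base)
reconstructed = base + delta_hat
```

The design choice mirrors the long-tail observation: the few leading singular directions carry most of the delta's energy and so tolerate little quantization error, while the many tail directions can be stored cheaply at 2-3 bits with limited impact on the reconstruction.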

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.08903
Document Type: Working Paper