
DoTA: Weight-Decomposed Tensor Adaptation for Large Language Models

Authors :
Hu, Xiaolin
Cheng, Xiang
Liu, Peiyu
Liu, Wei
Luan, Jian
Wang, Bin
Liu, Yong
Publication Year :
2024

Abstract

Low-rank adaptation (LoRA) reduces the computational and memory demands of fine-tuning large language models (LLMs) by approximating updates with low-rank matrices. However, low-rank approximation in two-dimensional space fails to capture high-dimensional structures within the target matrix. Recently, tensor decomposition methods have been explored for fine-tuning LLMs, leveraging their ability to extract structured information. Yet, these approaches primarily rely on random initialization, and the impact of initialization on tensor adaptation remains underexplored. In this paper, we reveal that random initialization yields a validation loss that diverges significantly from the one achieved by full fine-tuning. To address this, we propose Weight-Decomposed Tensor Adaptation (DoTA), which leverages the Matrix Product Operator (MPO) decomposition of pre-trained weights for effective initialization when fine-tuning LLMs. Additionally, we introduce QDoTA, a version of DoTA designed for 4-bit quantization. Experiments on commonsense and arithmetic reasoning tasks show that DoTA outperforms random initialization methods with fewer parameters. QDoTA further reduces memory consumption and achieves performance comparable to DoTA on commonsense reasoning tasks. We will release our code to support future research.

Comment: 12 pages, 6 figures
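To make the MPO-based initialization idea concrete, the sketch below factorizes a pre-trained weight matrix into tensor-train-style cores via successive truncated SVDs, the standard way such decompositions are computed. The abstract does not specify DoTA's exact MPO layout or truncation strategy, so the function name `tt_decompose`, the reshaping `dims`, and the `max_rank` cutoff are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def tt_decompose(weight, dims, max_rank):
    """Factor a 2-D weight matrix into tensor-train (MPO-like) cores
    using successive truncated SVDs.

    `dims` must be a factorization of the flattened weight size,
    e.g. (8, 8, 8, 8) for a 64x64 matrix. This is a generic sketch,
    not DoTA's published procedure.
    """
    tensor = weight.reshape(dims)
    cores = []
    rank = 1
    remaining = tensor.reshape(rank * dims[0], -1)
    for k, d in enumerate(dims[:-1]):
        u, s, vt = np.linalg.svd(remaining, full_matrices=False)
        r = min(max_rank, len(s))                       # truncate to the target rank
        cores.append(u[:, :r].reshape(rank, d, r))      # current core: (rank_in, dim, rank_out)
        remaining = (np.diag(s[:r]) @ vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(remaining.reshape(rank, dims[-1], 1))  # final core closes the train
    return cores

# Usage: decompose a (hypothetical) pre-trained weight and check the
# reconstruction error introduced by rank truncation.
W = np.random.randn(64, 64).astype(np.float32)
cores = tt_decompose(W, dims=(8, 8, 8, 8), max_rank=16)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
approx = approx.reshape(64, 64)
print("relative error:", np.linalg.norm(W - approx) / np.linalg.norm(W))
```

Under this reading, the resulting cores would serve as the adapter's starting point, so fine-tuning begins near the pre-trained weights rather than from a randomly initialized tensor, which is the gap the abstract attributes to random initialization.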

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.20891
Document Type :
Working Paper