
A Semantic-based Layer Freezing Approach to Efficient Fine-Tuning of Language Models

Authors :
Gu, Jian
Aleti, Aldeida
Chen, Chunyang
Zhang, Hongyu
Publication Year :
2024

Abstract

Finetuning language models (LMs) is crucial for adapting the models to downstream data and tasks. However, full finetuning is usually costly. Existing work, such as parameter-efficient finetuning (PEFT), often focuses on "how to finetune" but neglects the question of "where to finetune". As a pioneering work on answering where to finetune (at the layer level), we conduct a semantic analysis of the LM inference process. We first propose a virtual transition of the latent representation and then trace its factual transition. Based on the deviation between these transitions, we estimate the gain of finetuning each model layer and thereby narrow down the scope for finetuning. We perform extensive experiments across well-known LMs and datasets. The results show that our approach is effective and efficient, and outperforms the existing baselines. Our approach is orthogonal to existing efficient techniques, such as PEFT methods, offering practical value for LM finetuning.

Comment: 13 pages, 5 figures, under peer review
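The abstract only sketches the idea of scoring layers by the deviation between a virtual and a factual transition and then freezing the low-gain layers. The snippet below is a minimal illustrative sketch of that general pattern, not the paper's actual method: the toy model (`TinyLM`), the interpolation-based virtual transition, the cosine deviation score, and the top-k freezing rule are all assumptions made for illustration.

```python
# Illustrative sketch only: the paper's exact definitions of the virtual and
# factual transitions are not given in the abstract, so the "virtual" step
# below (interpolation toward the final representation) is a hypothetical
# stand-in, as are the toy model and the cosine-based deviation score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLM(nn.Module):
    """A toy stack of layers standing in for a real language model."""
    def __init__(self, dim=64, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))

    def hidden_states(self, x):
        # Collect the latent representation after every layer.
        states = [x]
        for layer in self.layers:
            x = torch.relu(layer(x))
            states.append(x)
        return states  # length n_layers + 1

def layer_finetuning_gains(model, x):
    """Score each layer by how far its factual transition deviates from an
    assumed virtual transition; a larger deviation is read as a larger
    estimated gain from finetuning that layer."""
    states = model.hidden_states(x)
    final = states[-1]
    gains = []
    for i in range(len(model.layers)):
        factual = states[i + 1]
        # Hypothetical virtual transition: an even step from the current
        # state toward the final representation (an assumption, not the
        # paper's definition).
        alpha = (i + 1) / len(model.layers)
        virtual = (1 - alpha) * states[i] + alpha * final
        deviation = 1 - F.cosine_similarity(factual, virtual, dim=-1).mean()
        gains.append(deviation.item())
    return gains

def freeze_all_but_top_k(model, gains, k=2):
    """Freeze every layer except the k layers with the largest estimated gain."""
    keep = set(sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)[:k])
    for i, layer in enumerate(model.layers):
        for p in layer.parameters():
            p.requires_grad = i in keep

model = TinyLM()
x = torch.randn(8, 64)  # a batch of toy input representations
gains = layer_finetuning_gains(model, x)
freeze_all_but_top_k(model, gains, k=2)
print([round(g, 3) for g in gains])
```

In this sketch, only the two highest-scoring layers remain trainable; a PEFT method could still be applied on top of those layers, which is consistent with the abstract's claim that the approach is orthogonal to existing efficient finetuning techniques.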

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2406.11753
Document Type :
Working Paper