Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning.
- Source :
- Mathematics (2227-7390). Oct 2023, Vol. 11, Issue 20, p4317. 16p.
- Publication Year :
- 2023
Abstract
- With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible. Therefore, parameter-efficient adaptation techniques for PLMs have been proposed to learn through incremental updates of pre-trained weights, such as in low-rank adaptation (LoRA). However, LoRA relies on heuristics to select the modules and layers to which it is applied, and assigns them the same rank. As a consequence, any fine-tuning that ignores the structural information between modules and layers is suboptimal. In this work, we propose structure-aware low-rank adaptation (SaLoRA), which adaptively learns the intrinsic rank of each incremental matrix by removing rank-0 components during training. We conduct comprehensive experiments using pre-trained models of different scales in both task-oriented (GLUE) and task-agnostic (Yelp and GYAFC) settings. The experimental results show that SaLoRA effectively captures the structure-aware intrinsic rank. Moreover, our method consistently outperforms LoRA without significantly compromising training efficiency. [ABSTRACT FROM AUTHOR]
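The abstract describes learning the intrinsic rank of each incremental matrix by removing rank-0 components during training. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: it assumes a PyTorch LoRA-style update B·diag(g)·A with a learnable per-rank gate g, a simple sparsity penalty on the gates, and hard thresholding at inference. All names here (GatedLoRALinear, max_rank, sparsity_loss) are hypothetical.

```python
import torch
import torch.nn as nn


class GatedLoRALinear(nn.Module):
    """A frozen linear layer plus a low-rank update whose effective rank is learned."""

    def __init__(self, base: nn.Linear, max_rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weight stays frozen

        self.A = nn.Parameter(torch.randn(max_rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, max_rank))
        # One gate logit per rank component; a gate near 0 removes that component.
        self.gate_logits = nn.Parameter(torch.zeros(max_rank))
        self.scaling = alpha / max_rank

    def gates(self, hard: bool = False) -> torch.Tensor:
        g = torch.sigmoid(self.gate_logits)
        return (g > 0.5).float() if hard else g  # prune near-zero ranks at inference

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gates(hard=not self.training)
        delta = ((x @ self.A.t()) * g) @ self.B.t()  # equivalent to B diag(g) A x
        return self.base(x) + self.scaling * delta

    def sparsity_loss(self) -> torch.Tensor:
        # Regularizer pushing gates toward zero, i.e. toward a lower intrinsic rank.
        return torch.sigmoid(self.gate_logits).sum()


# Hypothetical usage: wrap one projection and train only A, B, and the gates.
layer = GatedLoRALinear(nn.Linear(768, 768), max_rank=8)
x = torch.randn(4, 768)
out = layer(x)
loss = out.pow(2).mean() + 1e-3 * layer.sparsity_loss()
loss.backward()
```

Under these assumptions, components whose gates collapse to zero contribute nothing to the update and can be dropped, so different modules and layers can end up with different ranks, which is the structure-aware behaviour the abstract describes.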
- Subjects :
- *LANGUAGE models
- *MACHINE learning
- *MODELS & modelmaking
Details
- Language :
- English
- ISSN :
- 22277390
- Volume :
- 11
- Issue :
- 20
- Database :
- Academic Search Index
- Journal :
- Mathematics (2227-7390)
- Publication Type :
- Academic Journal
- Accession number :
- 173316844
- Full Text :
- https://doi.org/10.3390/math11204317