
MLAE: Masked LoRA Experts for Parameter-Efficient Fine-Tuning

Authors :
Wang, Junjie
Yang, Guangjing
Chen, Wentao
Yi, Huahui
Wu, Xiaohu
Lao, Qicheng
Publication Year :
2024

Abstract

In response to the challenges posed by the extensive parameter updates required for full fine-tuning of large-scale pre-trained models, parameter-efficient fine-tuning (PEFT) methods, exemplified by Low-Rank Adaptation (LoRA), have emerged. LoRA simplifies the fine-tuning process but may still struggle with a certain level of redundancy in low-rank matrices and limited effectiveness from merely increasing their rank. To address these issues, a natural idea is to enhance the independence and diversity of the learning process for the low-rank matrices. Therefore, we propose Masked LoRA Experts (MLAE), an innovative approach that applies the concept of masking to PEFT. Our method incorporates a cellular decomposition strategy that transforms a low-rank matrix into independent rank-1 submatrices, or "experts", thus enhancing independence. Additionally, we introduce a binary mask matrix, based on expert-level dropout strategies, that selectively activates these experts during training to promote more diverse and anisotropic learning. Our investigations reveal that this selective activation not only enhances performance but also fosters a more diverse acquisition of knowledge, with a marked decrease in parameter similarity among the experts in MLAE, significantly boosting the quality of the model while barely increasing the parameter count. Remarkably, MLAE achieves new SOTA performance with an average accuracy score of 78.8% on the VTAB-1k benchmark and 90.9% on the FGVC benchmark. Our code is available at https://github.com/jie040109/MLAE.
Comment: Tech report
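To make the described mechanism concrete, the following is a minimal PyTorch sketch of the idea in the abstract: a LoRA update decomposed into rank-1 "experts" that are stochastically masked as whole units during training. This is not the authors' implementation (see the linked repository); the class name `MaskedLoRAExpertsLinear`, the dropout probability `p_drop`, and the scaling convention are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MaskedLoRAExpertsLinear(nn.Module):
    """Sketch: frozen linear layer plus a masked rank-1-expert LoRA update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0, p_drop: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weight stays frozen

        in_f, out_f = base.in_features, base.out_features
        # Cellular decomposition: r rank-1 submatrices (experts);
        # expert i is the outer product B[:, i] A[i, :].
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        self.scaling = alpha / rank
        self.p_drop = p_drop
        self.rank = rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Expert-level binary mask: each rank-1 expert is kept or
            # dropped as a whole (per-expert, not per-element, dropout).
            keep = (torch.rand(self.rank, device=x.device) > self.p_drop).float()
            keep = keep / (1.0 - self.p_drop)  # rescale to preserve expectation
        else:
            keep = torch.ones(self.rank, device=x.device)

        # Low-rank update with masked experts: (x A^T) * mask, then B^T.
        delta = (x @ self.A.t()) * keep @ self.B.t()
        return self.base(x) + self.scaling * delta


# Usage: wrap a frozen linear projection from a pre-trained backbone.
layer = MaskedLoRAExpertsLinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
```

Because each expert is masked independently per step, the rank-1 components are encouraged to learn less redundant, more diverse directions than a jointly trained low-rank pair, which is the effect the abstract attributes to the masking strategy.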

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438561942
Document Type :
Electronic Resource