
Self-Specialization: Uncovering Latent Expertise within Large Language Models

Authors:
Kang, Junmo
Luo, Hongyin
Zhu, Yada
Hansen, Jacob
Glass, James
Cox, David
Ritter, Alan
Feris, Rogerio
Karlinsky, Leonid
Publication Year: 2023

Abstract

Recent works have demonstrated the effectiveness of self-alignment, in which a large language model is aligned to follow general instructions using instructional data generated from the model itself, starting from a handful of human-written seeds. Instead of general alignment, in this work we focus on self-alignment for expert domain specialization (e.g., biomedicine, finance). As a preliminary, we quantitatively show the marginal effect that generic instruction-following training has on downstream expert domains' performance. To remedy this, we propose self-specialization - allowing for effective model specialization while achieving cross-task generalization by leveraging only a few labeled seeds. Self-specialization offers a data- and parameter-efficient way of "carving out" an expert model from a generalist pre-trained LLM. Exploring a variety of popular open large models as a base for specialization, our experimental results in both biomedical and financial domains show that our self-specialized models outperform their base models by a large margin, and even outperform larger models that are generally instruction-tuned or that have been adapted to the target domain by other means.

Comment: ACL 2024 (Findings; Long Paper)
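The abstract describes a two-part recipe: self-generate domain-specific instruction data from a few labeled seeds, then specialize the base model in a parameter-efficient way. The sketch below illustrates that pipeline under assumptions not taken from the paper: the prompt template, the base model name, the biomedical seed examples, and the use of LoRA adapters for the parameter-efficient step are illustrative placeholders, not the authors' exact setup.

```python
# Minimal sketch of a self-specialization loop: seeds -> self-generated
# in-domain instruction data -> parameter-efficient fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# 1) A handful of human-written domain seeds (hypothetical biomedical examples).
seed_tasks = [
    {"instruction": "List common drug interactions of warfarin.",
     "response": "Warfarin interacts with NSAIDs, certain antibiotics, ..."},
    # ... a few more labeled seeds ...
]

base_name = "meta-llama/Llama-2-7b-hf"  # assumed base model; any open LLM works
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

def generate_domain_instructions(model, tokenizer, seeds, n_new=8):
    """Prompt the base model with the seeds to propose new in-domain instructions."""
    prompt = "Write new biomedical instructions similar to these examples:\n"
    for s in seeds:
        prompt += f"- {s['instruction']}\n"
    prompt += "-"
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128,
                         do_sample=True, num_return_sequences=n_new)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in out]

def generate_responses(model, tokenizer, instructions):
    """Have the same base model answer its own generated instructions."""
    pairs = []
    for ins in instructions:
        inputs = tokenizer(ins, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=256)
        pairs.append({"instruction": ins,
                      "response": tokenizer.decode(out[0], skip_special_tokens=True)})
    return pairs

# 2) Self-generate a synthetic specialization corpus from the seeds.
new_instructions = generate_domain_instructions(model, tokenizer, seed_tasks)
synthetic_corpus = generate_responses(model, tokenizer, new_instructions)

# 3) Parameter-efficient specialization: attach LoRA adapters and fine-tune
#    only the adapter weights on the synthetic corpus (training loop omitted;
#    any standard causal-LM fine-tuning recipe applies).
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
specialized = get_peft_model(model, lora)
specialized.print_trainable_parameters()
```

Note that in this sketch the same frozen base model both proposes new instructions and answers them, mirroring the self-alignment idea in the abstract; only the small adapter matrices are trained, which is what makes the specialization data- and parameter-efficient.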

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2310.00160
Document Type: Working Paper