
Simple, Efficient and Scalable Structure-aware Adapter Boosts Protein Language Models

Authors :
Tan, Yang
Li, Mingchen
Zhou, Bingxin
Zhong, Bozitao
Zheng, Lirong
Tan, Pan
Zhou, Ziyi
Yu, Huiqun
Fan, Guisheng
Hong, Liang
Publication Year :
2024

Abstract

Fine-tuning pre-trained protein language models (PLMs) has emerged as a prominent strategy for enhancing downstream prediction tasks, often outperforming traditional supervised learning approaches. Parameter-Efficient Fine-Tuning (PEFT), a powerful and widely applied technique in natural language processing, could potentially enhance the performance of PLMs as well. However, its direct transfer to life science tasks is non-trivial due to differences in training strategies and data forms. To address this gap, we introduce SES-Adapter, a simple, efficient, and scalable adapter method for enhancing the representation learning of PLMs. SES-Adapter incorporates PLM embeddings with structural sequence embeddings to create structure-aware representations. We show that the proposed method is compatible with different PLM architectures and across diverse tasks. Extensive evaluations are conducted on 2 types of folding structures with notable quality differences, 9 state-of-the-art baselines, and 9 benchmark datasets across distinct downstream tasks. Results show that, compared to vanilla PLMs, SES-Adapter improves downstream task performance by a maximum of 11% and an average of 3%, accelerates training speed by a maximum of 1034% and an average of 362%, and improves the convergence rate by approximately 2 times. Moreover, positive optimization is observed even with low-quality predicted structures. The source code for SES-Adapter is available at https://github.com/tyang816/SES-Adapter.
Comment: 30 pages, 4 figures, 8 tables
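
The abstract does not specify the adapter's internals, but the core idea of fusing frozen PLM residue embeddings with embeddings of a structural token sequence can be illustrated with a minimal sketch. The module below is a hypothetical PyTorch implementation, not the authors' code: the class name, cross-attention fusion, vocabulary size, and dimensions are all assumptions chosen for illustration.

```python
# Minimal sketch (NOT the SES-Adapter implementation): a hypothetical adapter
# that fuses precomputed, frozen PLM residue embeddings with embeddings of a
# discretized structural token sequence via cross-attention, yielding
# structure-aware representations for a downstream classification head.
import torch
import torch.nn as nn

class StructureAwareAdapter(nn.Module):
    def __init__(self, plm_dim=1280, struct_vocab_size=26, struct_dim=128,
                 n_heads=8, n_classes=2):
        super().__init__()
        # Embed the structural token sequence (vocabulary size is an assumption).
        self.struct_embed = nn.Embedding(struct_vocab_size, struct_dim)
        self.struct_proj = nn.Linear(struct_dim, plm_dim)
        # Cross-attention: PLM embeddings attend to structural embeddings.
        self.cross_attn = nn.MultiheadAttention(plm_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(plm_dim)
        self.head = nn.Linear(plm_dim, n_classes)

    def forward(self, plm_emb, struct_tokens):
        # plm_emb: (batch, seq_len, plm_dim), precomputed by a frozen PLM
        # struct_tokens: (batch, seq_len) integer codes from a folded structure
        s = self.struct_proj(self.struct_embed(struct_tokens))
        fused, _ = self.cross_attn(query=plm_emb, key=s, value=s)
        fused = self.norm(plm_emb + fused)   # residual fusion
        return self.head(fused.mean(dim=1))  # mean-pool over residues, classify

# Usage with random tensors standing in for real PLM outputs and structure codes:
if __name__ == "__main__":
    model = StructureAwareAdapter()
    plm_emb = torch.randn(2, 100, 1280)
    struct_tokens = torch.randint(0, 26, (2, 100))
    logits = model(plm_emb, struct_tokens)
    print(logits.shape)  # torch.Size([2, 2])
```

Because only the small adapter and head are trained while the PLM stays frozen, a design along these lines would account for the reported gains in training speed and convergence relative to full fine-tuning; the exact fusion mechanism used by SES-Adapter is described in the paper itself.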

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.14850
Document Type :
Working Paper