1. Enhancing efficiency of protein language models with minimal wet-lab data through few-shot learning.
- Authors
- Zhou Z, Zhang L, Yu Y, Wu B, Li M, Hong L, Tan P
- Subjects
- Deep Learning; Proteins/genetics; Proteins/metabolism; Mutation; DNA-Directed DNA Polymerase/metabolism; Computer Simulation; Models, Molecular; Algorithms; Protein Engineering/methods
- Abstract
Accurately modeling protein fitness landscapes holds great importance for protein engineering. Pre-trained protein language models have achieved state-of-the-art performance in predicting protein fitness without wet-lab experimental data, but their accuracy and interpretability remain limited. On the other hand, traditional supervised deep learning models require abundant labeled training examples for performance improvements, posing a practical barrier. In this work, we introduce FSFP, a training strategy that can effectively optimize protein language models for fitness prediction under extreme data scarcity. By combining meta-transfer learning, learning to rank, and parameter-efficient fine-tuning, FSFP can significantly boost the performance of various protein language models using merely tens of labeled single-site mutants of the target protein. In silico benchmarks across 87 deep mutational scanning datasets demonstrate FSFP's superiority over both unsupervised and supervised baselines. Furthermore, we successfully apply FSFP to engineer the Phi29 DNA polymerase through wet-lab experiments, achieving a 25% increase in the positive rate. These results underscore the potential of our approach in aiding AI-guided protein engineering. (© 2024. The Author(s).)
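The abstract names two of FSFP's ingredients that are easy to illustrate in code: parameter-efficient fine-tuning and a learning-to-rank objective trained on only tens of labeled mutants. The sketch below is not the authors' FSFP implementation; it is a minimal PyTorch stand-in in which a hypothetical `LoRALinear` adapter supplies the parameter-efficient update, a pairwise margin ranking loss stands in for whatever ranking objective the paper uses, the "protein language model" is a toy embedding-plus-projection network, the labeled mutants are random tensors, and the meta-transfer learning component is omitted entirely.

```python
# Hedged sketch (not the authors' FSFP code): adapt a frozen toy protein
# language model to fitness ranking using a LoRA-style low-rank update and
# a pairwise ranking loss over ~40 labeled single-site mutants.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA-style) update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pre-trained weights fixed
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # low-rank update starts at zero

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x))

# Toy "protein language model": embedding -> LoRA-adapted projection -> score.
vocab, dim, seq_len = 21, 64, 50             # 20 amino acids + 1 extra token
embed = nn.Embedding(vocab, dim)
embed.weight.requires_grad = False            # backbone stays frozen
proj = LoRALinear(nn.Linear(dim, dim))
head = nn.Linear(dim, 1)

def score(tokens: torch.Tensor) -> torch.Tensor:
    """Map (batch, seq_len) integer sequences to scalar fitness scores."""
    h = proj(embed(tokens)).mean(dim=1)       # mean-pool residue features
    return head(h).squeeze(-1)

# A few dozen labeled single-site mutants (random stand-ins for real assays).
n = 40
mutants = torch.randint(0, vocab, (n, seq_len))
fitness = torch.randn(n)

trainable = [p for p in list(proj.parameters()) + list(head.parameters())
             if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-3)
rank_loss = nn.MarginRankingLoss(margin=0.1)  # pairwise ranking objective

for step in range(200):
    i = torch.randperm(n)[:16]                # sample random mutant pairs
    j = torch.randperm(n)[:16]
    target = torch.sign(fitness[i] - fitness[j])  # +1 if i is fitter, else -1
    loss = rank_loss(score(mutants[i]), score(mutants[j]), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point the sketch tries to capture is that only the low-rank adapter and the scoring head are updated while the backbone stays frozen, which is what makes training on merely tens of labels plausible, and that the supervision is relative (which mutant is fitter) rather than a regression onto absolute fitness values.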
- Published
- 2024