
ProteinGLUE multi-task benchmark suite for self-supervised protein modeling.

Authors :
Capel, Henriette
Weiler, Robin
Dijkstra, Maurits
Vleugels, Reinier
Bloem, Peter
Feenstra, K. Anton
Source :
Scientific Reports; 9/26/2022, Vol. 12 Issue 1, p1-14, 14p
Publication Year :
2022

Abstract

Self-supervised language modeling is a rapidly developing approach for the analysis of protein sequence data. However, work in this area is heterogeneous and diverse, making comparison of models and methods difficult. Moreover, models are often evaluated only on one or two downstream tasks, making it unclear whether the models capture generally useful properties. We introduce the ProteinGLUE benchmark for the evaluation of protein representations: a set of seven per-amino-acid tasks for evaluating learned protein representations. We also offer reference code, and we provide two baseline models with hyperparameters specifically trained for these benchmarks. Pre-training was done on two tasks, masked symbol prediction and next sentence prediction. We show that pre-training yields higher performance on a variety of downstream tasks such as secondary structure and protein interaction interface prediction, compared to no pre-training. However, the larger base model does not outperform the smaller medium model. We expect the ProteinGLUE benchmark dataset introduced here, together with the two baseline pre-trained models and their performance evaluations, to be of great value to the field of protein sequence-based property prediction. Availability: code and datasets from https://github.com/ibivu/protein-glue.
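
The abstract states that pre-training used masked symbol prediction (alongside next sentence prediction). As a rough illustration of what that objective involves, the sketch below applies BERT-style masking to a protein sequence. The vocabulary, function names, and 80/10/10 replacement ratios are standard BERT conventions assumed here for illustration; they are not taken from the ProteinGLUE reference code, which should be consulted for the authors' actual setup.

import random

# Illustrative vocabulary: the 20 standard amino acids plus special symbols.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
MASK = "<mask>"

def mask_sequence(seq, mask_prob=0.15, rng=random):
    """Build BERT-style masked symbol prediction inputs/targets for one protein.

    Each residue is selected with probability mask_prob. Of the selected
    positions, 80% are replaced by <mask>, 10% by a random amino acid, and
    10% are left unchanged. Unselected positions get a target of None so
    that the loss would only be computed on masked symbols.
    """
    inputs, targets = [], []
    for aa in seq:
        if rng.random() < mask_prob:
            targets.append(aa)          # this position is scored
            roll = rng.random()
            if roll < 0.8:
                inputs.append(MASK)                     # replace with mask symbol
            elif roll < 0.9:
                inputs.append(rng.choice(AMINO_ACIDS))  # replace with random residue
            else:
                inputs.append(aa)                       # keep original symbol
        else:
            inputs.append(aa)
            targets.append(None)        # not scored
    return inputs, targets

if __name__ == "__main__":
    random.seed(0)
    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    x, y = mask_sequence(seq)
    for inp, tgt in zip(x, y):
        print(f"{inp:>6}  ->  {tgt if tgt else '-'}")

In a pre-training pipeline of the kind described, such masked inputs would be fed to a transformer encoder whose per-position outputs are trained to recover the original amino-acid symbols.
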

Details

Language :
English
ISSN :
2045-2322
Volume :
12
Issue :
1
Database :
Complementary Index
Journal :
Scientific Reports
Publication Type :
Academic Journal
Accession number :
159323287
Full Text :
https://doi.org/10.1038/s41598-022-19608-4