
ETLD: an encoder-transformation layer-decoder architecture for protein contact and mutation effects prediction.

Authors :
Wang, He
Zang, Yongjian
Kang, Ying
Zhang, Jianwen
Zhang, Lei
Zhang, Shengli
Source :
Briefings in Bioinformatics. Sep 2023, Vol. 24 Issue 5, p1-9. 9p.
Publication Year :
2023

Abstract

The latent features extracted from the multiple sequence alignments (MSAs) of homologous protein families are useful for identifying residue–residue contacts, predicting mutation effects, shaping protein evolution, etc. Over the past three decades, a growing body of supervised and unsupervised machine learning methods has been applied to this field, yielding fruitful results. Here, we propose a novel self-supervised model, called the encoder-transformation layer-decoder (ETLD) architecture, capable of capturing protein sequence latent features directly from MSAs. Compared to the typical autoencoder model, ETLD introduces a transformation layer with the ability to learn inter-site couplings, which can be used to parse out the two-dimensional residue–residue contact map after a simple mathematical derivation or an additional supervised neural network. ETLD retains the process of encoding and decoding sequences, and the predicted probabilities of amino acids at each site can be further used to construct the mutation landscapes for mutation effects prediction, outperforming advanced models such as GEMME, DeepSequence and EVmutation in general. Overall, ETLD is a highly interpretable unsupervised model with great potential for improvement and can be further combined with supervised methods for more extensive and accurate predictions. [ABSTRACT FROM AUTHOR]
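The abstract describes an encoder, a transformation layer that learns inter-site couplings, and a decoder that outputs per-site amino-acid probabilities, from which mutation effects are scored. The following is a minimal NumPy sketch of that pipeline under stated assumptions: all layer shapes, the tanh/softmax choices, and the randomly initialised weights are illustrative stand-ins, not the authors' trained model, and the log-ratio mutation score is one common scoring convention, not necessarily the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: sequence length, amino-acid alphabet, latent size.
L, A, H = 10, 20, 16

def one_hot(seq):
    """One-hot encode an integer-coded sequence, flattened to (L*A,)."""
    x = np.zeros((L, A))
    x[np.arange(L), seq] = 1.0
    return x.ravel()

# Random weights stand in for trained parameters (hypothetical values).
W_enc = rng.normal(0, 0.1, (H, L * A))   # encoder
T = rng.normal(0, 0.1, (H, H))           # transformation layer: couplings
W_dec = rng.normal(0, 0.1, (L * A, H))   # decoder

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def site_probs(seq):
    """Encode -> transform -> decode; per-site amino-acid probabilities."""
    h = np.tanh(W_enc @ one_hot(seq))    # encoder
    h = T @ h                            # transformation layer
    logits = (W_dec @ h).reshape(L, A)   # decoder
    return softmax(logits)

def mutation_score(seq, site, mut):
    """Log-ratio of mutant vs. wild-type probability at one site."""
    p = site_probs(seq)
    return np.log(p[site, mut]) - np.log(p[site, seq[site]])

wt = rng.integers(0, A, L)               # a toy wild-type sequence
score = mutation_score(wt, site=3, mut=5)
```

In the paper, the couplings held in the transformation layer (here the matrix `T`) are additionally parsed into a residue–residue contact map, either analytically or via a supervised network; that step is omitted from this sketch.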

Details

Language :
English
ISSN :
1467-5463
Volume :
24
Issue :
5
Database :
Academic Search Index
Journal :
Briefings in Bioinformatics
Publication Type :
Academic Journal
Accession number :
172331655
Full Text :
https://doi.org/10.1093/bib/bbad290