
MolE: a foundation model for molecular graphs using disentangled attention

Authors :
Oscar Méndez-Lucio
Christos A. Nicolaou
Berton Earnshaw
Source :
Nature Communications, Vol 15, Iss 1, Pp 1-9 (2024)
Publication Year :
2024
Publisher :
Nature Portfolio, 2024.

Abstract

Models that accurately predict properties based on chemical structure are valuable tools in the chemical sciences. However, for many properties, public and private training sets are typically small, making it difficult for models to generalize well outside of the training data. Recently, this lack of generalization has been mitigated by using self-supervised pretraining on large unlabeled datasets, followed by finetuning on smaller, labeled datasets. Inspired by these advances, we report MolE, a Transformer architecture adapted for molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures trained on ~842 million molecular graphs, and the second step is a massive multi-task approach to learn biological information. We show that finetuning models that were pretrained in this way perform better than the best published results on 10 of the 22 ADMET (absorption, distribution, metabolism, excretion and toxicity) tasks included in the Therapeutic Data Commons leaderboard (c. September 2023).
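The "disentangled attention" in the title refers to a DeBERTa-style decomposition in which attention scores combine content and relative-position (here, graph-derived) terms. A minimal NumPy sketch of that decomposition follows; all function and variable names are illustrative, not taken from the paper's code, and the pairwise embeddings `Hr` stand in for whatever relative encoding (e.g. graph distance) the model actually uses:

```python
import numpy as np

def disentangled_attention_scores(Hc, Hr, Wqc, Wkc, Wqr, Wkr):
    """Toy sketch of disentangled attention scores (single head).

    Hc: (n, d) content embeddings, one per atom/node.
    Hr: (n, n, d) pairwise relative embeddings (hypothetical stand-in
        for a graph-based relative encoding).
    W*: (d, d) projection matrices for content/relative queries and keys.
    """
    Qc, Kc = Hc @ Wqc, Hc @ Wkc      # content queries and keys, (n, d)
    Qr = Hr @ Wqr                    # relative queries, (n, n, d)
    Kr = Hr @ Wkr                    # relative keys, (n, n, d)

    c2c = Qc @ Kc.T                              # content-to-content
    c2p = np.einsum('id,ijd->ij', Qc, Kr)        # content-to-position
    p2c = np.einsum('jd,ijd->ij', Kc, Qr)        # position-to-content

    d = Hc.shape[1]
    # DeBERTa scales by sqrt(3d) because three score terms are summed.
    return (c2c + c2p + p2c) / np.sqrt(3 * d)
```

The key idea is that pairwise structural information enters the score directly through the `c2p` and `p2c` terms rather than being folded into the node embeddings.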

Subjects

Subjects :
Science

Details

Language :
English
ISSN :
2041-1723
Volume :
15
Issue :
1
Database :
Directory of Open Access Journals
Journal :
Nature Communications
Publication Type :
Academic Journal
Accession number :
edsdoj.4dbea24f0b6c43f6adf4c455943fb659
Document Type :
article
Full Text :
https://doi.org/10.1038/s41467-024-53751-y