A Multi-view Molecular Pre-training with Generative Contrastive Learning.
- Source :
- Interdisciplinary Sciences: Computational Life Sciences; Sep 2024, Vol. 16, Issue 3, p741-754, 14p
- Publication Year :
- 2024
Abstract
- Molecular representation learning preserves meaningful molecular structures as embedding vectors, a necessary prerequisite for molecular property prediction. Yet learning to represent molecules accurately remains challenging. Previous approaches that learn molecular representations in an end-to-end manner risk information loss and neglect molecular generative representations. To obtain richer molecular feature information, a pre-training model can draw on several molecular representations, reducing the information loss caused by relying on a single one. We therefore propose MVGC, a multi-view generative contrastive learning pre-training model. Our pre-training framework learns three fundamental feature representations of molecules and integrates them effectively to predict molecular properties on benchmark datasets. Comprehensive experiments on seven classification tasks and three regression tasks demonstrate that the proposed MVGC model surpasses the majority of state-of-the-art approaches. We also explore the potential of MVGC to learn chemically meaningful molecular representations.
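- The abstract describes a multi-view contrastive pre-training objective but gives no implementation details here. The sketch below is a minimal, generic illustration of how such an objective can be set up, not the authors' MVGC method: the encoder names, the choice of three views, and the NT-Xent-style loss are assumptions made for demonstration only.

```python
# Illustrative sketch only: a generic multi-view contrastive objective
# (NT-Xent style), NOT the MVGC implementation described in the paper.
# The three views and encoder names are hypothetical.
import torch
import torch.nn.functional as F

def multi_view_nt_xent(views, temperature=0.1):
    """views: list of [batch, dim] embedding tensors, one per molecular view.
    Embeddings of the same molecule under different views are treated as
    positives; all other molecules in the batch act as negatives."""
    losses = []
    n = views[0].size(0)
    for i in range(len(views)):
        for j in range(len(views)):
            if i == j:
                continue
            zi = F.normalize(views[i], dim=1)           # anchor view
            zj = F.normalize(views[j], dim=1)           # positive view
            logits = zi @ zj.t() / temperature          # [n, n] similarity matrix
            labels = torch.arange(n, device=zi.device)  # diagonal entries are positives
            losses.append(F.cross_entropy(logits, labels))
    return torch.stack(losses).mean()

# Hypothetical usage with three molecular views (e.g. string, graph, geometry):
# z_a = smiles_encoder(batch_smiles)    # [n, d]
# z_b = graph_encoder(batch_graphs)     # [n, d]
# z_c = geometry_encoder(batch_confs)   # [n, d]
# loss = multi_view_nt_xent([z_a, z_b, z_c])
```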
- Subjects :
- MOLECULAR structure
- PRIOR learning
- GRAMMAR
- MOLECULES
- CLASSIFICATION
Details
- Language :
- English
- ISSN :
- 1913-2751
- Volume :
- 16
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- Interdisciplinary Sciences: Computational Life Sciences
- Publication Type :
- Academic Journal
- Accession number :
- 179711045
- Full Text :
- https://doi.org/10.1007/s12539-024-00632-z