NatureLM: Deciphering the Language of Nature for Scientific Discovery

Authors:
Xia, Yingce
Jin, Peiran
Xie, Shufang
He, Liang
Cao, Chuan
Luo, Renqian
Liu, Guoqing
Wang, Yue
Liu, Zequn
Chen, Yuan-Jyue
Guo, Zekun
Bai, Yeqi
Deng, Pan
Min, Yaosen
Lu, Ziheng
Hao, Hongxia
Yang, Han
Li, Jielan
Liu, Chang
Zhang, Jia
Zhu, Jianwei
Wu, Kehan
Zhang, Wei
Gao, Kaiyuan
Pei, Qizhi
Wang, Qian
Liu, Xixian
Li, Yanting
Zhu, Houtian
Lu, Yeqing
Ma, Mingqian
Wang, Zun
Xie, Tian
Maziarz, Krzysztof
Segler, Marwin
Yang, Zhao
Chen, Zilong
Shi, Yu
Zheng, Shuxin
Wu, Lijun
Hu, Chen
Dai, Peggy
Liu, Tie-Yan
Liu, Haiguang
Qin, Tao
Publication Year:
2025

Abstract

Foundation models have revolutionized natural language processing and artificial intelligence, significantly enhancing how machines comprehend and generate human languages. Inspired by the success of these foundation models, researchers have developed foundation models for individual scientific domains, including small molecules, materials, proteins, DNA, and RNA. However, these models are typically trained in isolation, lacking the ability to integrate across different scientific domains. Recognizing that entities within these domains can all be represented as sequences, which together form the "language of nature", we introduce Nature Language Model (briefly, NatureLM), a sequence-based science foundation model designed for scientific discovery. Pre-trained with data from multiple scientific domains, NatureLM offers a unified, versatile model that enables various applications including: (i) generating and optimizing small molecules, proteins, RNA, and materials using text instructions; (ii) cross-domain generation/design, such as protein-to-molecule and protein-to-RNA generation; and (iii) achieving state-of-the-art performance in tasks like SMILES-to-IUPAC translation and retrosynthesis on USPTO-50k. NatureLM offers a promising generalist approach for various scientific tasks, including drug discovery (hit generation/optimization, ADMET optimization, synthesis), novel material design, and the development of therapeutic proteins or nucleotides. We have developed NatureLM models in different sizes (1 billion, 8 billion, and 46.7 billion parameters) and observed a clear improvement in performance as the model size increases.

Comment: 81 pages
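The text-instruction interface described in application (i) can be pictured with a short sketch. The snippet below is a minimal, hypothetical illustration using the Hugging Face transformers API; the checkpoint name and the prompt wording are assumptions for illustration only, not artifacts released with the paper.

```python
# Hypothetical sketch: prompting a sequence-based science LM with a text
# instruction, in the spirit of the molecule-generation use case the abstract
# describes. The checkpoint id and prompt are placeholders, not real releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NatureLM-8B"  # placeholder checkpoint name (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A natural-language instruction asking for a small molecule with a desired
# property, expressed as a SMILES string.
prompt = "Generate a SMILES string for a molecule with high aqueous solubility:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```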

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2502.07527
Document Type:
Working Paper