
Advancing bioinformatics with large language models: components, applications and perspectives

Authors :
Liu, Jiajia
Yang, Mengyuan
Yu, Yankai
Xu, Haixia
Wang, Tiangang
Li, Kang
Zhou, Xiaobo
Publication Year :
2024

Abstract

Large language models (LLMs) are a class of deep-learning-based artificial intelligence models that achieve strong performance across a wide range of tasks, especially in natural language processing (NLP). LLMs typically consist of artificial neural networks with very large numbers of parameters, trained on large amounts of unlabeled data using self-supervised or semi-supervised learning. However, their potential for solving bioinformatics problems may even exceed their proficiency in modeling human language. In this review, we provide a comprehensive overview of the essential components of LLMs in bioinformatics, spanning genomics, transcriptomics, proteomics, drug discovery, and single-cell analysis. Key aspects covered include tokenization methods for diverse data types, the architecture of transformer models, the core attention mechanism, and the pre-training processes underlying these models. Additionally, we introduce currently available foundation models and highlight their downstream applications across various bioinformatics domains. Finally, drawing from our experience, we offer practical guidance for both LLM users and developers, emphasizing strategies to optimize their use and foster further innovation in the field.

Comment: 5 main figures
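The core attention mechanism the abstract refers to is, in most transformer-based LLMs, scaled dot-product attention. A minimal NumPy sketch is shown below; the toy matrix shapes and random inputs are illustrative assumptions, not drawn from the review itself:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # weighted sum of value vectors

# Toy example: a sequence of 3 tokens with embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the value vectors, with mixing weights determined by query-key similarity; in bioinformatics LLMs the "tokens" may represent nucleotides, k-mers, amino acids, or genes rather than words.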

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.04155
Document Type :
Working Paper