Pretraining model for biological sequence data

Authors:
Xiangzheng Fu
Zimeng Li
Jianmin Wang
Tian Wang
Bosheng Song
Xuan Lin
Source:
Briefings in Functional Genomics
Publication Year:
2021
Publisher:
Oxford University Press (OUP)

Abstract

With the development of high-throughput sequencing technology, biological sequence data reflecting life information have become increasingly accessible. Particularly against the background of the COVID-19 pandemic, biological sequence data play an important role in detecting diseases, analyzing disease mechanisms and discovering specific drugs. In recent years, pretraining models that emerged in natural language processing have attracted widespread attention in many research fields, not only because they decrease training cost but also because they improve performance on downstream tasks. Pretraining models are used to embed biological sequences and extract features from large biological sequence corpora so as to comprehensively understand the biological sequence data. In this survey, we provide a broad review of pretraining models for biological sequence data. We first introduce biological sequences and the corresponding datasets, including brief descriptions and access links. Subsequently, we systematically summarize popular pretraining models for biological sequences in four categories: CNN, word2vec, LSTM and Transformer. Then, we present applications of pretraining models on downstream tasks to explain their role. Next, we provide a novel pretraining scheme for protein sequences and a multitask benchmark for protein pretraining models. Finally, we discuss the challenges and future directions of pretraining models for biological sequences.
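As a minimal illustration of the sequence-embedding workflow the abstract describes (word2vec being one of the four model categories surveyed), the sketch below tokenizes DNA sequences into overlapping k-mers and trains a small skip-gram word2vec model on them. The k-mer size, hyperparameters and the toy corpus are assumptions for demonstration only, not settings taken from the survey.

```python
# Hedged sketch of word2vec-style pretraining on biological sequences,
# assuming gensim >= 4.0; all hyperparameters here are illustrative.
import numpy as np
from gensim.models import Word2Vec

def kmerize(seq, k=3):
    """Split a sequence into overlapping k-mers (the 'words' of the corpus)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy corpus; in practice this would be a large collection of DNA/protein sequences.
sequences = [
    "ATGGCGTACGTTAGC",
    "ATGGCTTACGTAAGC",
    "TTGGCGTACGGTAGC",
]
corpus = [kmerize(s, k=3) for s in sequences]

# Skip-gram (sg=1) word2vec over k-mer "sentences".
model = Word2Vec(corpus, vector_size=32, window=5, min_count=1, sg=1, epochs=50)

def embed(seq, model, k=3):
    """Embed a whole sequence by mean-pooling its k-mer vectors."""
    vecs = [model.wv[kmer] for kmer in kmerize(seq, k) if kmer in model.wv]
    return np.mean(vecs, axis=0)

print(embed("ATGGCGTACGTTAGC", model).shape)  # (32,)
```

The resulting fixed-length vector can then serve as input features for downstream tasks; the Transformer-based models surveyed replace the static k-mer vectors with context-dependent representations.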

Details

ISSN:
2041-2657 and 2041-2649
Volume:
20
Database:
OpenAIRE
Journal:
Briefings in Functional Genomics
Accession number:
edsair.doi.dedup.....7fc879784087bfbd86072b42cc1a93ce