
TencentPretrain: A Scalable and Flexible Toolkit for Pre-training Models of Different Modalities

Authors :
Zhao, Zhe
Li, Yudong
Hou, Cheng
Zhao, Jing
Tian, Rong
Liu, Weijie
Chen, Yiren
Sun, Ningyuan
Liu, Haoyan
Mao, Weiquan
Guo, Han
Guo, Weigang
Wu, Taiqiang
Zhu, Tao
Shi, Wenhang
Chen, Chen
Huang, Shan
Chen, Sihong
Liu, Liqun
Li, Feifei
Chen, Xiaoshuai
Sun, Xingwu
Kang, Zhanhui
Du, Xiaoyong
Shen, Linlin
Yan, Kimmo
Publication Year :
2022

Abstract

Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities show a rising trend toward homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
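The five-component composition described in the abstract can be sketched as follows. This is a minimal illustrative sketch only; the registry names, module names, and `build_model` function are assumptions for exposition and do not reflect TencentPretrain's actual API.

```python
# Hypothetical sketch of the five-component modular design described in
# the abstract: embedding, encoder, target embedding, decoder, target.
# All names below are illustrative assumptions, not the toolkit's API.

EMBEDDINGS = {"word_pos": lambda: "word + position embedding"}
ENCODERS = {"transformer": lambda: "transformer encoder"}
TARGET_EMBEDDINGS = {"word_pos": lambda: "word + position embedding",
                     "none": lambda: None}
DECODERS = {"transformer": lambda: "transformer decoder",
            "none": lambda: None}
TARGETS = {"mlm": lambda: "masked language modeling loss",
           "lm": lambda: "causal language modeling loss"}

def build_model(embedding, encoder, target,
                target_embedding="none", decoder="none"):
    """Assemble a pre-training model by picking one module per component.

    Encoder-only models (e.g. BERT-style) leave the decoder-side
    components set to "none"; encoder-decoder models select a decoder
    and a target embedding as well.
    """
    return {
        "embedding": EMBEDDINGS[embedding](),
        "encoder": ENCODERS[encoder](),
        "target_embedding": TARGET_EMBEDDINGS[target_embedding](),
        "decoder": DECODERS[decoder](),
        "target": TARGETS[target](),
    }

# A BERT-like model: transformer encoder, MLM target, no decoder side.
bert_like = build_model(embedding="word_pos",
                        encoder="transformer",
                        target="mlm")
```

Swapping a single entry (for example, the target from `"mlm"` to `"lm"`, or adding a decoder) yields a different pre-training model from the same shared components, which is the reuse the modular design aims at.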

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2212.06385
Document Type :
Working Paper