Trinity: Neural Network Adaptive Distributed Parallel Training Method Based on Reinforcement Learning.

Authors :
Zeng, Yan
Wu, Jiyang
Zhang, Jilin
Ren, Yongjian
Zhang, Yunquan
Source :
Algorithms, Apr 2022, Vol. 15, Issue 4, 18 pp.
Publication Year :
2022

Abstract

Deep learning, with increasingly large datasets and complex neural networks, is widely used in computer vision and natural language processing. A resulting trend is to split large-scale neural network models and train them across multiple devices in parallel, known as parallel model training. Existing parallel methods rely mainly on expert design, which is inefficient and requires specialized knowledge. Although automatic parallelization methods have been proposed to address these problems, they optimize for only a single objective: runtime. In this paper, we present Trinity, an adaptive distributed parallel training method based on reinforcement learning, to automate the search for and tuning of parallel strategies. We build a multidimensional performance evaluation model and use proximal policy optimization to co-optimize multiple objectives. Our experiments used the CIFAR10 and PTB datasets with the InceptionV3, NMT, NASNet, and PNASNet models. Compared with Google's Hierarchical method, Trinity achieves up to 5% reductions in runtime, communication, and memory overhead, and up to a 40% increase in parallel strategy search speed. [ABSTRACT FROM AUTHOR]
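The abstract states that Trinity replaces the single runtime objective with a multidimensional evaluation model co-optimized by proximal policy optimization. One natural way to realize that, sketched below purely as an illustration (the weights, names, and normalization scheme are assumptions, not the authors' code), is a reward that blends normalized runtime, communication, and memory costs into one scalar for the PPO policy to maximize:

```python
# Hypothetical sketch of a multidimensional reward for a PPO-based
# parallel-strategy search, in the spirit of Trinity's evaluation model.
# Weights and the baseline-normalization scheme are illustrative assumptions.

def strategy_reward(runtime_s, comm_bytes, mem_bytes,
                    base_runtime_s, base_comm_bytes, base_mem_bytes,
                    w_time=0.6, w_comm=0.2, w_mem=0.2):
    """Score a candidate parallel strategy.

    Each cost is normalized against a baseline strategy (e.g. an expert
    placement) so the three terms are comparable; the PPO agent then
    maximizes this reward, i.e. minimizes the blended overhead.
    """
    t = runtime_s / base_runtime_s      # relative step time
    c = comm_bytes / base_comm_bytes    # relative cross-device traffic
    m = mem_bytes / base_mem_bytes      # relative peak memory
    return -(w_time * t + w_comm * c + w_mem * m)

# A strategy matching the baseline on all three axes scores exactly -1.0;
# a strategy that improves any axis scores closer to 0.
baseline_score = strategy_reward(10.0, 1e9, 4e9, 10.0, 1e9, 4e9)
better_score = strategy_reward(9.0, 0.8e9, 4e9, 10.0, 1e9, 4e9)
```

Because the reward is a single scalar, any policy-gradient method (PPO here) can trade the three overheads off against each other, rather than optimizing runtime alone as prior automatic methods did.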

Details

Language :
English
ISSN :
19994893
Volume :
15
Issue :
4
Database :
Academic Search Index
Journal :
Algorithms
Publication Type :
Academic Journal
Accession number :
156478652
Full Text :
https://doi.org/10.3390/a15040108