
Improving scalability of parallel CNN training by adaptively adjusting parameter update frequency.

Authors :
Lee, Sunwoo
Kang, Qiao
Al-Bahrani, Reda
Agrawal, Ankit
Choudhary, Alok
Liao, Wei-keng
Source :
Journal of Parallel & Distributed Computing. Jan 2022, Vol. 159, p10-23. 14p.
Publication Year :
2022

Abstract

Synchronous SGD with data parallelism, the most popular parallelization strategy for CNN training, suffers from the expensive communication cost of averaging gradients among all workers. The iterative parameter updates of SGD cause frequent communications, which become the performance bottleneck. In this paper, we propose a lazy parameter update algorithm that adaptively adjusts the parameter update frequency to address this communication cost. Our algorithm accumulates gradients as long as the difference between the accumulated gradients and the latest gradients is sufficiently small. The less frequent parameter updates reduce the per-iteration communication cost while maintaining the model accuracy. Our experimental results demonstrate that the lazy update method remarkably improves scalability while maintaining model accuracy. For ResNet50 training on ImageNet, the proposed algorithm achieves a significantly higher speedup (739.6 on 2048 Cori KNL nodes) than vanilla synchronous SGD (276.6), while the model accuracy is almost unaffected (<0.2% difference).

• Adaptive parameter update frequency control for scalable parallel deep learning.
• Allowing gradient accumulation considering the direction of the gradients.
• Reducing per-iteration communication cost at the input-side model layers.
• Overlapping gradient communications with backpropagation computations.

[ABSTRACT FROM AUTHOR]
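To illustrate the idea described in the abstract, the following is a minimal single-process sketch of a lazy parameter update step. It accumulates gradients locally and defers the expensive allreduce until the newest gradient diverges in direction from the running accumulation. The cosine-similarity test, the `sim_threshold` value, and the `allreduce` placeholder are illustrative assumptions for this sketch, not the authors' exact criterion or implementation.

```python
import numpy as np

def lazy_update_step(params, grad, acc_grad, lr=0.01, sim_threshold=0.95,
                     allreduce=lambda g: g):
    """One lazy parameter update step (illustrative sketch).

    Gradients are accumulated locally while the newest gradient still points
    in roughly the same direction as the accumulated one; only when the
    directions diverge is the accumulated gradient averaged across workers
    (allreduce) and applied to the parameters.
    """
    if acc_grad is None:
        # Nothing accumulated yet: start a new accumulation, skip communication.
        return params, grad.copy()

    # Cosine similarity between the accumulated gradient and the new gradient
    # (hypothetical stand-in for the paper's "sufficiently small difference" test).
    sim = np.dot(acc_grad, grad) / (
        np.linalg.norm(acc_grad) * np.linalg.norm(grad) + 1e-12)

    if sim >= sim_threshold:
        # Directions agree: keep accumulating and skip the allreduce this iteration.
        return params, acc_grad + grad

    # Directions diverge: communicate the accumulated gradient and update now.
    global_grad = allreduce(acc_grad + grad)
    return params - lr * global_grad, None
```

In a real data-parallel setting, `allreduce` would be backed by a collective such as MPI_Allreduce, so every skipped update directly removes one global communication from the critical path.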

Details

Language :
English
ISSN :
0743-7315
Volume :
159
Database :
Academic Search Index
Journal :
Journal of Parallel & Distributed Computing
Publication Type :
Academic Journal
Accession number :
153203806
Full Text :
https://doi.org/10.1016/j.jpdc.2021.09.005