1. Straggler-Aware Gradient Aggregation for Large-Scale Distributed Deep Learning System
- Authors
Yijun Li, Jiawei Huang, Zhaoyi Li, Jingling Liu, Shengwen Zhou, Tao Zhang, Wanchun Jiang, and Jianxin Wang
- Abstract
Deep Neural Networks (DNNs) are a critical component of a wide range of applications. However, with the rapid growth of training datasets and model sizes, communication becomes the bottleneck, resulting in low utilization of computing resources. To accelerate communication, recent works propose aggregating gradients from multiple workers in a programmable switch to reduce the volume of exchanged data. Unfortunately, because they rely on synchronous transmission to aggregate data, current in-network aggregation designs suffer from the straggler problem, which often occurs in shared clusters due to resource contention. To address this issue, we propose a straggler-aware aggregation transport protocol (SA-ATP), which enables leading workers to leverage their spare computing and storage resources to help straggling workers. We implement SA-ATP atop clusters using P4-programmable switches. The evaluation results show that SA-ATP reduces iteration time by up to 57% and accelerates training by up to $1.8\times$.
- Published
2024
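
As a rough illustration of the straggler effect described in the abstract, the sketch below contrasts plain synchronous aggregation, whose iteration time is set by the slowest worker, with an idealized rebalancing in which leading workers absorb part of the straggler's load. This is not the paper's SA-ATP protocol; the function names, the per-worker speed model, and the perfect-rebalancing assumption are all illustrative.

```python
# Minimal sketch (not the paper's SA-ATP implementation): shows why
# synchronous gradient aggregation is bounded by the slowest worker, and how
# offloading part of a straggler's work to leading workers shortens an
# iteration. The rebalancing rule below is an illustrative assumption.

def sync_iteration_time(compute_times):
    """Synchronous aggregation waits for every worker, so the iteration
    time equals the slowest worker's compute time."""
    return max(compute_times)

def rebalanced_iteration_time(workloads, speeds):
    """Idealized straggler-aware case: work is shifted from the straggler
    to leading workers so everyone finishes at roughly the same time.
    With total work W and per-worker speeds s_i, that time is W / sum(s_i)."""
    return sum(workloads) / sum(speeds)

if __name__ == "__main__":
    # Four workers with equal workloads; worker 3 runs at half speed
    # (a straggler caused by, e.g., resource contention in a shared cluster).
    workloads = [100.0, 100.0, 100.0, 100.0]
    speeds = [1.0, 1.0, 1.0, 0.5]

    baseline = sync_iteration_time([w / s for w, s in zip(workloads, speeds)])
    balanced = rebalanced_iteration_time(workloads, speeds)

    print(f"synchronous iteration time:          {baseline:.1f}")  # 200.0
    print(f"idealized straggler-aware iteration: {balanced:.1f}")  # ~114.3
```

Even in this toy model, shifting load away from a worker running at half speed cuts the iteration time from 200 to about 114 time units; SA-ATP pursues the same kind of gain with an in-network aggregation protocol rather than the ideal rebalancing assumed here.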