Decentralized Deep Learning with Arbitrary Communication Compression
- Publication Year :
- 2019
- Publisher :
- arXiv, 2019.
Abstract
- Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters. As current approaches suffer from the limited bandwidth of the network, we propose the use of communication compression in the decentralized training context. We show that Choco-SGD, recently introduced and analyzed for strongly convex objectives only, converges under arbitrarily high compression ratios on general non-convex functions at the rate $O\bigl(1/\sqrt{nT}\bigr)$, where $T$ denotes the number of iterations and $n$ the number of workers. The algorithm achieves linear speedup in the number of workers and supports higher compression than previous state-of-the-art methods. We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models (i) over distributed user devices connected by a social network, and (ii) in a datacenter (outperforming all-reduce in wall-clock time).
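- The abstract describes Choco-SGD only at a high level. As an illustration of the kind of compressed-gossip update it refers to, below is a minimal Python sketch, assuming a toy top-k compressor, a doubly stochastic mixing matrix `W`, and a user-supplied stochastic gradient oracle `grad_fn`; the function names and the step sizes `eta` and `gamma` are hypothetical and are not taken from the paper.

```python
import numpy as np

def top_k(vec, k):
    """Keep the k largest-magnitude entries, zero the rest (a simple compressor)."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def compressed_gossip_sgd_step(x, x_hat, W, grad_fn, eta=0.1, gamma=0.5, k=10):
    """One illustrative round of decentralized SGD with compressed gossip.

    x      : (n, d) array of local models, one row per worker
    x_hat  : (n, d) array of publicly shared (compressed) model estimates
    W      : (n, n) doubly stochastic mixing matrix of the communication graph
    grad_fn: grad_fn(i, x_i) -> stochastic gradient for worker i
    """
    n, _ = x.shape
    # 1) local stochastic gradient step on each worker
    for i in range(n):
        x[i] -= eta * grad_fn(i, x[i])
    # 2) each worker compresses the difference to its public copy; only q is sent
    q = np.stack([top_k(x[i] - x_hat[i], k) for i in range(n)])
    x_hat = x_hat + q
    # 3) gossip correction toward neighbors' shared estimates
    #    (rows of W sum to 1, so W @ x_hat - x_hat is a weighted neighbor difference)
    x = x + gamma * (W @ x_hat - x_hat)
    return x, x_hat
```

- In such a scheme only the compressed differences `q` are exchanged over the network, so the per-round communication volume is controlled by the compressor (here, by `k`) rather than by the full model dimension.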
- Subjects :
- FOS: Computer and information sciences
- Computer Science - Machine Learning
- G.1.6
- E.4
- Machine Learning (stat.ML)
- F.2.1
- Machine Learning (cs.LG)
- ml-ai
- Computer Science - Distributed, Parallel, and Cluster Computing
- Statistics - Machine Learning
- Optimization and Control (math.OC)
- Computer Science - Data Structures and Algorithms
- FOS: Mathematics
- 68W10, 68W15, 68W40, 90C06, 90C25, 90C35
- Data Structures and Algorithms (cs.DS)
- Distributed, Parallel, and Cluster Computing (cs.DC)
- Mathematics - Optimization and Control
Details
- Database :
- OpenAIRE
- Accession number :
- edsair.doi.dedup.....5c66b4653a93d5efaf52aca69ee02d76
- Full Text :
- https://doi.org/10.48550/arxiv.1907.09356