
Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning.

Authors :
Sun, Jun
Chen, Tianyi
Giannakis, Georgios B.
Yang, Qinmin
Yang, Zaiyue
Source :
IEEE Transactions on Pattern Analysis & Machine Intelligence. Apr 2022, Vol. 44 Issue 4, p2031-2044. 14p.
Publication Year :
2022

Abstract

This paper focuses on the communication-efficient federated learning problem and develops a novel distributed quantized gradient approach characterized by adaptive communication of quantized gradients. Specifically, federated learning builds upon a server-worker infrastructure, where the workers compute local gradients and upload them to the server; the server then obtains the global gradient by aggregating all the local gradients and uses it to update the model parameters. The key idea for saving worker-to-server communication is to quantize gradients and to skip less informative quantized gradient communications by reusing previous gradients. Quantizing and skipping result in 'lazy' worker-server communication, which justifies the term Lazily Aggregated Quantized (LAQ) gradient. Theoretically, the LAQ algorithm achieves the same linear convergence as gradient descent in the strongly convex case, while yielding major communication savings in terms of both transmitted bits and communication rounds. Empirically, extensive experiments on realistic data corroborate a significant communication reduction compared with state-of-the-art gradient- and stochastic gradient-based algorithms. [ABSTRACT FROM AUTHOR]
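The abstract's central mechanism is that a worker quantizes its gradient around the value it last transmitted and uploads only when the quantized innovation is informative enough; otherwise the server reuses the stale gradient. The sketch below is a minimal illustration of that idea, not the authors' exact algorithm: the uniform quantizer, the fixed skipping `threshold`, and all function names are assumptions introduced here for illustration (the paper's skipping rule is adaptive and tied to recent model changes).

```python
import numpy as np

def quantize(grad, prev_q, num_bits=4):
    """Uniformly quantize the innovation (grad - prev_q) around the last sent value.

    The bit budget and radius choice are illustrative; quantizing around the
    previously transmitted gradient means only the innovation must be encoded.
    """
    radius = np.max(np.abs(grad - prev_q)) + 1e-12
    levels = 2 ** num_bits - 1
    step = 2 * radius / levels
    # Snap each coordinate of the innovation to the nearest quantization level.
    return prev_q + step * np.round((grad - prev_q) / step)

def worker_step(grad, prev_q, threshold):
    """Decide whether to upload a fresh quantized gradient or reuse the old one.

    `threshold` is a fixed scalar standing in for the paper's adaptive
    skipping criterion, purely for illustration.
    """
    q = quantize(grad, prev_q)
    innovation = np.linalg.norm(q - prev_q) ** 2
    if innovation <= threshold:
        return prev_q, False   # skip: server keeps using the stale gradient
    return q, True             # communicate the new quantized gradient

# Toy round with 3 workers: the server aggregates whatever each worker last sent.
rng = np.random.default_rng(0)
dim, workers = 10, 3
prev_qs = [np.zeros(dim) for _ in range(workers)]
grads = [rng.normal(size=dim) for _ in range(workers)]

uploads, aggregate = 0, np.zeros(dim)
for m in range(workers):
    prev_qs[m], sent = worker_step(grads[m], prev_qs[m], threshold=0.5)
    uploads += sent
    aggregate += prev_qs[m]
print(f"uploads this round: {uploads}/{workers}")
```

In this toy round, workers whose quantized innovation falls below the threshold transmit nothing, so the server's aggregate mixes fresh and reused quantized gradients, which is what the abstract means by 'lazy' aggregation.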

Subjects

Subjects :
*TELECOMMUNICATION employees

Details

Language :
English
ISSN :
0162-8828
Volume :
44
Issue :
4
Database :
Academic Search Index
Journal :
IEEE Transactions on Pattern Analysis & Machine Intelligence
Publication Type :
Academic Journal
Accession number :
155735846
Full Text :
https://doi.org/10.1109/TPAMI.2020.3033286