
Fast Federated Learning by Balancing Communication Trade-Offs.

Authors :
Nori, Milad Khademi
Yun, Sangseok
Kim, Il-Min
Source :
IEEE Transactions on Communications. Aug2021, Vol. 69 Issue 8, p5168-5182. 15p.
Publication Year :
2021

Abstract

Federated Learning (FL) has recently received a lot of attention for large-scale privacy-preserving machine learning. However, high communication overheads due to frequent gradient transmissions decelerate FL. To mitigate the communication overheads, two main techniques have been studied: (i) local update of weights, which characterizes the trade-off between communication and computation, and (ii) gradient compression, which characterizes the trade-off between communication and precision. To the best of our knowledge, studying and balancing these two trade-offs jointly and dynamically while considering their impact on convergence has remained unresolved, even though doing so promises significantly faster FL. In this paper, we first formulate the problem of minimizing the learning error with respect to two variables: the local update coefficients and the sparsity budgets of gradient compression, which characterize the trade-offs between communication and computation and between communication and precision, respectively. We then derive an upper bound on the learning error within a given wall-clock time, accounting for the interdependency between the two variables. Based on this theoretical analysis, we propose an enhanced FL scheme, namely Fast FL (FFL), that jointly and dynamically adjusts the two variables to minimize the learning error. We demonstrate that FFL consistently achieves higher accuracy faster than comparable schemes in the literature. [ABSTRACT FROM AUTHOR]
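Editor's note: the following is a minimal, illustrative Python sketch of the two knobs the abstract describes, namely a local-update coefficient (number of local SGD steps per round) and a sparsity budget for compressing the transmitted update. The names local_steps and sparsity_budget, the top-k compressor, the toy linear-regression task, and the fixed values used here are assumptions for illustration only; the paper's FFL scheme adjusts these quantities jointly and dynamically from its derived error bound, which is not reproduced here.

# Illustrative sketch (assumptions noted above): a toy FedAvg-style loop where
# each client runs several local steps and transmits a top-k-sparsified update.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_samples = 50, 10, 200
w_true = rng.normal(size=d)

# Synthetic per-client linear-regression data.
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(n_samples, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_samples)
    clients.append((X, y))

def top_k(vec, k):
    # Keep only the k largest-magnitude entries (a common gradient sparsifier).
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def local_update(w, X, y, steps, lr=0.01):
    # Run `steps` local SGD steps and return the resulting weight delta.
    w_local = w.copy()
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w_local - y[i]) * X[i]
        w_local -= lr * grad
    return w_local - w

w = np.zeros(d)
local_steps = 5          # communication/computation knob (local update coefficient)
sparsity_budget = 10     # communication/precision knob (k of d entries sent)

for rnd in range(100):
    deltas = [top_k(local_update(w, X, y, local_steps), sparsity_budget)
              for X, y in clients]
    w += np.mean(deltas, axis=0)   # server aggregates the compressed updates

print("final error:", np.linalg.norm(w - w_true))

In this sketch both knobs are held fixed; the paper's contribution is choosing them adaptively over wall-clock time to minimize the learning error bound.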

Details

Language :
English
ISSN :
00906778
Volume :
69
Issue :
8
Database :
Academic Search Index
Journal :
IEEE Transactions on Communications
Publication Type :
Academic Journal
Accession number :
153154570
Full Text :
https://doi.org/10.1109/TCOMM.2021.3083316