Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design.

Authors :
Zhu, Zheqi
Shi, Yuchen
Xin, Gangtao
Peng, Chenghui
Fan, Pingyi
Letaief, Khaled B.
Source :
Entropy. Aug 2023, Vol. 25, Issue 8, p1205. 15p.
Publication Year :
2023

Abstract

As a promising distributed learning paradigm, federated learning (FL) faces the challenge of communication–computation bottlenecks in practical deployments. In this work, we focus on pruning, quantization, and coding for FL. By adopting a layer-wise operation, we propose an explicit and universal scheme: FedLP-Q (federated learning with layer-wise pruning-quantization). We develop pruning strategies for both homogeneous and heterogeneous scenarios, a stochastic quantization rule, and the corresponding coding scheme. Both theoretical and experimental evaluations suggest that FedLP-Q improves communication and computation efficiency with controllable performance degradation. The key novelty of FedLP-Q is that it serves as a joint pruning-quantization FL framework with layer-wise processing and can easily be applied in practical FL systems.
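The abstract describes the FedLP-Q pipeline only at a high level: each client prunes its update layer by layer, stochastically quantizes the surviving layers, and encodes the result for upload. As a rough illustration of what such a per-layer compression step can look like, the Python sketch below applies a generic b-bit unbiased stochastic quantizer (a standard QSGD-style construction) after probabilistic layer dropping. The function names, the keep probability, and the uniform quantization grid are assumptions made for illustration, not the paper's exact FedLP-Q rules, and the coding stage is omitted.

    import numpy as np

    def stochastic_quantize(w, bits=4):
        """Unbiased stochastic quantization of one layer onto a 2**bits-level grid.
        Illustrative stand-in for the paper's quantization rule, not its exact form."""
        levels = 2 ** bits - 1
        w_min, w_max = float(w.min()), float(w.max())
        if w_max == w_min:                       # constant layer: nothing to quantize
            return w.copy()
        scale = (w_max - w_min) / levels
        x = (w - w_min) / scale                  # position on the quantization grid
        lower = np.floor(x)
        # Round up with probability equal to the fractional part, so the
        # quantized value equals w in expectation.
        q = lower + (np.random.rand(*w.shape) < (x - lower))
        return w_min + q * scale

    def prune_layers(update, keep_prob=0.7):
        """Layer-wise pruning sketch: keep each layer's update independently with
        probability keep_prob; dropped layers are simply not transmitted."""
        return {name: w for name, w in update.items()
                if np.random.rand() < keep_prob}

    # Example: compress one client's model update before uploading to the server.
    update = {"conv1": np.random.randn(3, 3), "fc": np.random.randn(8)}
    compressed = {name: stochastic_quantize(w, bits=4)
                  for name, w in prune_layers(update).items()}

Because the quantizer is unbiased and whole layers are either kept or dropped, the server can form an unbiased aggregate from whatever layers arrive; this is the general property a layer-wise pruning-quantization scheme of this kind relies on.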

Details

Language :
English
ISSN :
1099-4300
Volume :
25
Issue :
8
Database :
Academic Search Index
Journal :
Entropy
Publication Type :
Academic Journal
Accession Number :
170746333
Full Text :
https://doi.org/10.3390/e25081205