1. Representative Kernels-Based CNN for Faster Transmission in Federated Learning
- Author
- Li, Wei, Shen, Zichen, Liu, Xiulong, Wang, Mingfeng, Ma, Chao, Ding, Chuntao, and Cao, Jiannong
- Abstract
Owing to the tension between limited bandwidth and the huge number of parameters to be transmitted, reducing the model parameters that clients must send to the server for fast transmission remains an ongoing challenge in federated learning (FL). Existing works that attempt to reduce the number of transmitted parameters have two limitations: 1) the reduction in parameters is not significant, and 2) the performance of the global model is limited. In this paper, we propose a novel method called Fed-KGF that significantly reduces the number of model parameters while improving global model performance. Our goal is to reduce the transmitted parameters by reducing the number of convolution kernels. Specifically, we construct an incomplete model with a few representative convolution kernels and propose a Kernel Generation Function (KGF) to generate the remaining convolution kernels, rendering the incomplete model complete. We discard the generated kernels after training the local models and transmit only the representative kernels during training, thereby significantly reducing the transmitted parameters. Furthermore, traditional FL suffers from client drift caused by the averaging method, which hurts global model performance. We innovatively select one or a few modules from all client models in a permuted manner and aggregate only the uploaded modules, rather than averaging all modules, to reduce client drift, thus improving global model performance and further reducing the transmitted parameters. Experimental results in both non-Independent and Identically Distributed (non-IID) and IID scenarios, on image classification and object detection tasks, demonstrate that our Fed-KGF outperforms state-of-the-art (SOTA) FL models.
- Published
- 2024
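To make the transmission-saving idea in the abstract concrete, below is a minimal sketch in PyTorch. The layer name `KGFConv2d`, the linear-mixing generation function, and all hyperparameters are illustrative assumptions, not the paper's actual implementation; the sketch only shows how storing a few representative kernels and regenerating the rest locally shrinks what a client would need to transmit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KGFConv2d(nn.Module):
    """Sketch: a conv layer holding only a few representative kernels.

    The full kernel set is generated on the fly; in a Fed-KGF-style
    setup, only `rep_kernels` (and here the small mixing matrix) would
    be transmitted, while generated kernels are discarded after local
    training. The linear mixing below is an assumed stand-in for the
    paper's Kernel Generation Function (KGF).
    """

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_rep: int = 4):
        super().__init__()
        # Representative kernels: the small transmitted parameter set.
        self.rep_kernels = nn.Parameter(torch.randn(num_rep, in_ch, k, k) * 0.02)
        # Mixing matrix mapping representatives to the full kernel set.
        self.mix = nn.Parameter(torch.randn(out_ch, num_rep) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, kh, kw = self.rep_kernels.shape
        # Generate the complete kernel set from the representatives;
        # these generated kernels are recomputed locally, never sent.
        kernels = (self.mix @ self.rep_kernels.view(n, -1)).view(-1, c, kh, kw)
        return F.conv2d(x, kernels, padding=kh // 2)


# Usage: a standard 3x3 conv with 16 input and 64 output channels holds
# 64 * 16 * 9 = 9216 weights; this sketch keeps only
# 4 * 16 * 9 + 64 * 4 = 832 parameters for transmission.
layer = KGFConv2d(in_ch=16, out_ch=64)
out = layer(torch.randn(1, 16, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```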