Federated learning by employing knowledge distillation on edge devices with limited hardware resources.
- Author
- Tanghatari, Ehsan; Kamal, Mehdi; Afzali-Kusha, Ali; Pedram, Massoud
- Subjects
- ARTIFICIAL neural networks; KNOWLEDGE transfer; POWER resources
- Abstract
This paper presents a federated learning approach that utilizes the computational resources of IoT edge devices for training deep neural networks. In this approach, the edge devices and the cloud server collaborate in the training phase while preserving the privacy of the edge device data. Owing to the limited computational power and resources available to the edge devices, instead of the original neural network (NN), we suggest using a smaller NN generated by a proposed heuristic method. In the proposed approach, the smaller model, which is trained on the edge device, is generated from the main NN model. By exploiting the Knowledge Distillation (KD) approach, the learned knowledge in the server and the edge devices can be exchanged, lowering the required computation on the server and preserving the data privacy of the edge devices. Also, to reduce the knowledge transfer overhead on the communication links between the server and the edge devices, a method for selecting the most valuable data for transferring the knowledge is introduced. The effectiveness of this method is assessed by comparing it to state-of-the-art methods. The results show that the proposed method lowers the communication traffic by up to 250× and increases the learning accuracy in the cloud by an average of 8.9% compared to prior KD-based distributed training approaches on the CIFAR-10 dataset.
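For readers unfamiliar with the mechanics the abstract alludes to, below is a minimal sketch, assuming a PyTorch setting, of two generic building blocks such an approach relies on: a standard distillation loss that blends soft teacher targets with hard labels, and an entropy-based heuristic for ranking which samples are most "valuable" to transmit. The temperature `T`, mixing weight `alpha`, and the entropy criterion are illustrative assumptions, not details from the paper; the authors' actual model-generation heuristic and data-selection method are not specified in this abstract.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic KD objective: weighted sum of a soft-target term and a
    hard-label term. T and alpha are hypothetical hyperparameters."""
    # Soft-target term: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 as is conventional.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def select_informative(logits, k):
    """One common proxy for 'valuable' samples: predictive entropy.
    Higher entropy = the model is less certain, so (under this heuristic)
    the sample carries more knowledge worth transferring."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(k).indices  # indices of the k most uncertain samples
```

In a federated KD loop of the kind the abstract describes, an edge device would train its smaller student model with a loss like `distillation_loss`, and a filter like `select_informative` would cap how many samples' soft predictions cross the communication link, which is where the reported traffic reduction would come from.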
- Published
- 2023