
Federated learning by employing knowledge distillation on edge devices with limited hardware resources.

Authors :
Tanghatari, Ehsan
Kamal, Mehdi
Afzali-Kusha, Ali
Pedram, Massoud
Source :
Neurocomputing. Apr 2023, Vol. 531, p87-99. 13p.
Publication Year :
2023

Abstract

This paper presents a federated learning approach that utilizes the computational resources of IoT edge devices for training deep neural networks. In this approach, the edge devices and the cloud server collaborate in the training phase while preserving the privacy of the edge device data. Owing to the limited computational power and resources available to the edge devices, instead of the original neural network (NN), we suggest using a smaller NN generated by a proposed heuristic method. In the proposed approach, the smaller model, which is trained on the edge device, is generated from the main NN model. By exploiting the Knowledge Distillation (KD) approach, the learned knowledge in the server and the edge devices can be exchanged, reducing the required computation on the server and preserving the data privacy of the edge devices. Also, to reduce the knowledge transfer overhead on the communication links between the server and the edge devices, a method for selecting the most valuable data to transfer the knowledge is introduced. The effectiveness of this method is assessed by comparing it to state-of-the-art methods. The results show that the proposed method lowers the communication traffic by up to 250× and increases the learning accuracy by an average of 8.9% in the cloud compared to prior KD-based distributed training approaches on the CIFAR-10 dataset. [ABSTRACT FROM AUTHOR]
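
The abstract describes two mechanisms: a KD-based exchange of learned knowledge between the server's main model and a smaller edge model, and a selection of the "most valuable" data to cut communication traffic. Below is a minimal sketch of what such a setup could look like, assuming PyTorch. All names (distill_step, select_valuable, T, alpha) are hypothetical; the paper's actual student-generation heuristic and its selection criterion are not specified in the abstract, so the entropy-based selection shown here is only one plausible stand-in.

```python
# Hedged sketch of KD-based server/edge training, assuming PyTorch.
# Not the paper's method; an illustration of the general technique.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, y, optimizer, T=4.0, alpha=0.5):
    """One training step blending hard-label loss with a soft-target
    (distillation) loss from the teacher's logits."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)          # teacher provides soft targets
    s_logits = student(x)
    # Standard cross-entropy on ground-truth labels.
    ce = F.cross_entropy(s_logits, y)
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 as in the classic KD formulation.
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    loss = alpha * ce + (1.0 - alpha) * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def select_valuable(model, x, k):
    """Pick the k samples with the highest predictive entropy, i.e. the
    ones the model is least certain about -- one hypothetical criterion
    for the 'most valuable data' selection the abstract mentions."""
    model.eval()
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    idx = entropy.topk(min(k, x.size(0))).indices
    return x[idx]
```

In a federated round under these assumptions, each edge device would train its small student locally via distill_step, then send only the outputs (or selected samples' soft labels) for the subset chosen by select_valuable, rather than raw data or full gradients, which is consistent with the privacy and traffic-reduction goals stated in the abstract.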

Details

Language :
English
ISSN :
0925-2312
Volume :
531
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
162438582
Full Text :
https://doi.org/10.1016/j.neucom.2023.02.011