
Communication and Computation Reduction for Split Learning using Asynchronous Training

Authors:
Chen, Xing
Li, Jingtao
Chakrabarti, Chaitali
Publication Year:
2021

Abstract

Split learning is a promising privacy-preserving distributed learning scheme that requires little computation at the edge device but suffers from high communication overhead between the edge device and the server. To reduce this communication overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and sends/receives activations/gradients only in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized to an 8-bit floating-point format prior to transmission. An added benefit of the proposed communication reduction method is that the client-side computation is also reduced, because the client model is updated less often. Furthermore, the privacy of the proposed communication-reduced split learning method is almost the same as that of traditional split learning. Simulation results on VGG11, VGG13, and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the client-side computation is reduced by 2.86x-32.1x when the accuracy degradation is less than 0.5% for the single-client case. For the 5- and 10-client cases, the communication cost reduction on VGG11 is 11.9x and 11.3x, respectively, for a 0.5% loss in accuracy.

Comment: Accepted by SIPS '21
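
Below is a minimal, illustrative sketch (not the authors' code) of the two ideas described in the abstract: simulating 8-bit floating-point quantization of the transmitted activations/gradients, and a loss-based rule that decides in which epochs the client and server exchange tensors and the client-side model is updated. The function names (quantize_fp8, should_sync), the 1-4-3 sign/exponent/mantissa split, and the relative loss threshold are assumptions made for illustration, not details taken from the paper.

import numpy as np

# Rough simulation of 8-bit floating-point quantization (assumed here to use
# 1 sign bit, 4 exponent bits, 3 mantissa bits); the paper's exact format may differ.
def quantize_fp8(x, exp_bits=4, man_bits=3):
    x = np.asarray(x, dtype=np.float32)
    sign = np.sign(x)
    mag = np.abs(x)
    exp = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    exp = np.clip(exp, -2**(exp_bits - 1) + 1, 2**(exp_bits - 1))
    scale = 2.0 ** (exp - man_bits)          # spacing of representable values
    return sign * np.round(mag / scale) * scale

# Hypothetical loss-based selection rule: trigger a "synchronized" epoch when the
# training loss has changed by more than a relative threshold since the last
# epoch in which the client model was updated.
def should_sync(curr_loss, loss_at_last_sync, threshold=0.05):
    return abs(loss_at_last_sync - curr_loss) / loss_at_last_sync > threshold

rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 8)).astype(np.float32)
print("max FP8 quantization error:", np.abs(activations - quantize_fp8(activations)).max())

losses = [2.30, 1.90, 1.70, 1.65, 1.62, 1.40, 1.38]   # toy per-epoch training losses
loss_at_last_sync = losses[0]
for epoch, loss in enumerate(losses):
    if epoch == 0 or should_sync(loss, loss_at_last_sync):
        # Synchronized epoch: quantized activations go up to the server,
        # quantized gradients come back, and the client model is updated.
        loss_at_last_sync = loss
        print(f"epoch {epoch}: communicate + update client (loss={loss:.2f})")
    else:
        # Skipped epoch: no activation/gradient exchange; client model stays frozen.
        print(f"epoch {epoch}: skip communication (loss={loss:.2f})")

In a real split-learning setup the synchronized branch would wrap the client-side forward pass, the server-side forward/backward pass, and the client update; here it is reduced to a print statement so the sketch stays self-contained.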

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2107.09786
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/SiPS52927.2021.00022