
ResKD: Residual-Guided Knowledge Distillation

Authors :
Li, Xuewei
Li, Songyuan
Omar, Bourahla
Wu, Fei
Li, Xi
Publication Year :
2020

Abstract

Knowledge distillation, which aims to transfer knowledge from a heavy teacher network to a lightweight student network, has emerged as a promising technique for compressing neural networks. However, due to the capacity gap between the heavy teacher and the lightweight student, a significant performance gap remains between them. In this paper, we view knowledge distillation in a fresh light, using the knowledge gap, or the residual, between a teacher and a student as guidance to train a much more lightweight student, called a res-student. We combine the student and the res-student into a new student, where the res-student rectifies the errors of the former student. This residual-guided process can be repeated until the user strikes a balance between accuracy and cost. At inference time, we propose a sample-adaptive strategy to decide which res-students are unnecessary for each sample, saving computational cost. Experimental results show that we achieve competitive performance with 18.04%, 23.14%, 53.59%, and 56.86% of the teachers' computational costs on the CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet datasets, respectively. Finally, we conduct a thorough theoretical and empirical analysis of our method.

Comment: The first two authors (Xuewei Li and Songyuan Li) contribute equally. Accepted to IEEE TRANSACTIONS ON IMAGE PROCESSING (TIP).
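
Below is a minimal sketch of the residual-guided idea described in the abstract, assuming PyTorch. The model variables (teacher, student, res_student) and the per-sample gate_fn are hypothetical placeholders, not the authors' actual architectures or gating criterion; the sketch only illustrates training a res-student on the teacher-student residual and combining the outputs at inference.

import torch
import torch.nn as nn

def train_res_student_step(teacher, student, res_student, x, optimizer):
    # One step of fitting the res-student to the residual (knowledge gap)
    # between the frozen teacher and the frozen student on a batch x.
    teacher.eval()
    student.eval()
    with torch.no_grad():
        residual_target = teacher(x) - student(x)  # the knowledge gap
    loss = nn.functional.mse_loss(res_student(x), residual_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def combined_inference(student, res_students, x, gate_fn=None):
    # Combine the student with its res-students; gate_fn (hypothetical)
    # decides per sample whether the next res-student is worth evaluating.
    logits = student(x)
    for res in res_students:
        if gate_fn is None or gate_fn(logits):
            logits = logits + res(x)  # the res-student rectifies the errors
    return logits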

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2006.04719
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TIP.2021.3066051