
Customizing a teacher for feature distillation.

Authors:
Tan, Chao
Liu, Jie
Source:
Information Sciences, Sep 2023, Vol. 640.
Publication Year:
2023

Abstract

Knowledge distillation trains a lightweight student network by transferring class-probability knowledge from a cumbersome teacher network. However, transferring only class probabilities limits distillation performance, so several approaches instead transfer the teacher's knowledge at the feature-map level. In this paper, we revisit feature distillation and find that the larger the teacher's architecture/capacity, the harder its features are for the student to imitate, so feature distillation cannot reach its full potential. To address this, a novel end-to-end distillation framework, termed Customizing a Teacher for Feature Distillation (CTFD), is proposed to train a teacher that is more compatible with its student. We apply the customized teacher to three feature distillation methods, and additionally use data augmentation when training the student to improve its generalization performance. Extensive experiments and analyses on three computer vision tasks, including image classification, transfer learning, and object detection, substantiate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
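The abstract distinguishes the two kinds of transferred knowledge: class probabilities (classic logit distillation) and intermediate feature maps (feature distillation). The following is a minimal sketch of such a combined training loss in a PyTorch-style setup; the temperature T, the weights alpha and beta, and the plain L2 feature term are illustrative assumptions for a generic baseline, not the paper's CTFD objective, which additionally customizes the teacher itself.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_feat, teacher_feat,
                      labels, T=4.0, alpha=0.5, beta=1.0):
    # Hard-label cross-entropy on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-softened
    # class probabilities (classic logit distillation); the T*T
    # factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Feature-map imitation term: plain L2 distance between
    # intermediate features (assumes the shapes already match,
    # e.g. via a 1x1 projection applied elsewhere).
    fd = F.mse_loss(student_feat, teacher_feat)
    return ce + alpha * kd + beta * fd

# Example with random tensors (batch of 8, 100 classes, 64-channel features):
s_logits, t_logits = torch.randn(8, 100), torch.randn(8, 100)
s_feat, t_feat = torch.randn(8, 64, 8, 8), torch.randn(8, 64, 8, 8)
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(s_logits, t_logits, s_feat, t_feat, labels)

The paper's observation that a larger teacher is harder to imitate concerns the fd term above: when the teacher's features are too far from anything the student can represent, that penalty dominates without helping, which motivates training a teacher compatible with its student.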

Details

Language:
English
ISSN:
0020-0255
Volume:
640
Database:
Academic Search Index
Journal:
Information Sciences
Publication Type:
Periodical
Accession Number:
163851842
Full Text:
https://doi.org/10.1016/j.ins.2023.119024