
Cascaded channel pruning using hierarchical self-distillation

Authors:
Miles, Roy
Mikolajczyk, Krystian
Publication Year:
2020

Abstract

In this paper, we propose an approach for filter-level pruning with hierarchical knowledge distillation based on the teacher, teaching-assistant, and student framework. Our method makes use of teaching assistants at intermediate pruning levels that share the same architecture and weights as the target student. We propose to prune each model independently using the gradient information from its corresponding teacher. By considering the relative sizes of each student-teacher pair, this formulation provides a natural trade-off between the capacity gap for knowledge distillation and the bias of the filter saliency updates. Our results show improvements in the attainable accuracy and model compression across the CIFAR10 and ImageNet classification tasks using the VGG16 and ResNet50 architectures. We provide an extensive evaluation that demonstrates the benefits of using a varying number of teaching assistant models at different sizes.

Comment: BMVC 2020
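The cascaded procedure described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch interpretation, not the authors' code: it assumes a weight-times-gradient filter saliency criterion, a standard soft-label distillation loss with an assumed temperature, and mask-based (zeroing) pruning rather than structural filter removal; the helper names distillation_loss, filter_saliency, prune_lowest_filters, and cascade_prune are invented for illustration.

```python
# Hypothetical sketch of cascaded channel pruning with hierarchical distillation.
# Each intermediate "teaching assistant" starts as a copy of its teacher and is
# pruned using gradient-based saliency computed from a KD loss against that teacher.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soft-label distillation loss; the temperature T=4.0 is an assumed value.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def filter_saliency(model, teacher, data_loader, device="cpu"):
    # Accumulate |weight * grad| per output filter of each Conv2d, where the
    # gradient comes from distilling against the given teacher (an assumed
    # saliency criterion, not necessarily the one used in the paper).
    saliency = {}
    model.train()
    teacher.eval()
    for inputs, _ in data_loader:
        inputs = inputs.to(device)
        model.zero_grad()
        with torch.no_grad():
            t_logits = teacher(inputs)
        loss = distillation_loss(model(inputs), t_logits)
        loss.backward()
        for name, m in model.named_modules():
            if isinstance(m, nn.Conv2d) and m.weight.grad is not None:
                s = (m.weight * m.weight.grad).abs().sum(dim=(1, 2, 3))
                saliency[name] = saliency.get(name, 0) + s.detach()
    return saliency

def prune_lowest_filters(model, saliency, fraction=0.1):
    # Zero out the lowest-saliency fraction of filters in each conv layer
    # (mask-based pruning; the paper removes filters structurally).
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d) and name in saliency:
            k = int(fraction * m.out_channels)
            if k == 0:
                continue
            idx = torch.topk(saliency[name], k, largest=False).indices
            with torch.no_grad():
                m.weight[idx] = 0
                if m.bias is not None:
                    m.bias[idx] = 0

def cascade_prune(teacher, data_loader, num_assistants=2, fraction=0.1):
    # Build the cascade teacher -> TA_1 -> ... -> student, pruning each model
    # with saliency derived from its immediate (larger) teacher.
    cascade = [teacher]
    for _ in range(num_assistants + 1):       # assistants plus the final student
        student = copy.deepcopy(cascade[-1])  # shares architecture and weights with its teacher
        s = filter_saliency(student, cascade[-1], data_loader)
        prune_lowest_filters(student, s, fraction)
        cascade.append(student)
    return cascade[-1]
```

In this sketch, each stage is a deep copy of the model above it, mirroring the abstract's statement that the teaching assistants share the same architecture and weights as the target student, and the pruning fraction per stage controls the trade-off between capacity gap and saliency bias.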

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2008.06814
Document Type:
Working Paper