
Model Compression Algorithm via Reinforcement Learning and Knowledge Distillation.

Authors :
Liu, Botao
Hu, Bing-Bing
Zhao, Ming
Peng, Sheng-Lung
Chang, Jou-Ming
Source :
Mathematics (2227-7390). Nov 2023, Vol. 11, Issue 22, p4589. 12 p.
Publication Year :
2023

Abstract

Traditional model compression techniques depend on handcrafted features and require domain experts, with a tradeoff between model size, speed, and accuracy. This study proposes a new approach to the model compression problem. Our approach combines reinforcement-learning-based automated pruning with knowledge distillation to improve the pruning of unimportant network layers and the efficiency of the compression process. We introduce a new state quantity that controls the size of the reward, as well as an attention mechanism that reinforces useful features and attenuates useless ones to enhance the effects of the remaining features. The experimental results show that the proposed model is superior to other advanced pruning methods in terms of computation time and accuracy on the CIFAR-100 and ImageNet datasets, achieving approximately 3% higher accuracy than similar methods with shorter computation times. [ABSTRACT FROM AUTHOR]
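The record does not give the paper's exact training objective, but the knowledge-distillation component it describes is conventionally built on a temperature-softened teacher/student loss in the style of Hinton et al. The sketch below is a minimal, framework-free illustration of that standard formulation, not the authors' implementation; the function names and the temperature value are assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces softer
    # probability distributions over classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between the teacher's and student's softened
    # outputs, scaled by T^2 so gradient magnitudes stay comparable
    # across temperatures (the usual convention in distillation).
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student soft predictions
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In a pruning pipeline such as the one the abstract describes, a loss of this shape would let the pruned (student) network recover accuracy by matching the soft outputs of the original (teacher) network, typically mixed with the ordinary cross-entropy on ground-truth labels.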

Details

Language :
English
ISSN :
22277390
Volume :
11
Issue :
22
Database :
Academic Search Index
Journal :
Mathematics (2227-7390)
Publication Type :
Academic Journal
Accession number :
173862770
Full Text :
https://doi.org/10.3390/math11224589