
LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation.

Authors :
Xu, Ting-Bing
Yang, Peipei
Zhang, Xu-Yao
Liu, Cheng-Lin
Source :
Pattern Recognition. Apr 2019, Vol. 88, p272-284. 13p.
Publication Year :
2019

Abstract

Highlights
• We present a new framework of deep convolutional neural network architecture distillation, namely LightweightNet, for acceleration and compression.
• We exploit the prior knowledge of the pre-defined network architecture to guide the efficient design of acceleration/compression strategies, without using a pre-trained model.
• The proposed framework consists of network parameter compression, network structure acceleration, and non-tensor layer improvement.
• The proposed framework demonstrates a higher acceleration/compression rate than previous methods in experiments, including a large-category handwritten Chinese character recognition task with state-of-the-art performance.

Abstract
In recent years, deep neural networks have achieved remarkable successes in many pattern recognition tasks. However, their high computational cost and large memory overhead hinder their application on resource-limited devices. To address this problem, many deep network acceleration and compression methods have been proposed. One group of methods adopts decomposition and pruning techniques to accelerate and compress a pre-trained model. Another group designs a single compact unit and stacks it to build networks. These methods suffer from complicated training processes or from a lack of generality and extensibility. In this paper, we propose a general framework of architecture distillation, namely LightweightNet, to accelerate and compress convolutional neural networks. Rather than compressing a pre-trained model, we directly construct the lightweight network based on a baseline network architecture. The LightweightNet, designed through a comprehensive analysis of the network architecture, consists of network parameter compression, network structure acceleration, and non-tensor layer improvement. Specifically, we propose the strategy of low-dimensional features for fully-connected layers to achieve substantial memory savings, and design multiple efficient compact blocks to distill the convolutional layers of the baseline network under an accuracy-sensitive distillation rule for notable time savings. Overall, the framework reduces the computational cost and the model size by more than 4× with negligible accuracy loss. Benchmarks on the MNIST, CIFAR-10, ImageNet and HCCR (handwritten Chinese character recognition) datasets demonstrate the advantages of the proposed framework in terms of speed, performance, storage and training process. On HCCR, our method even outperforms traditional handcrafted-feature-based classifiers in terms of speed and storage while maintaining state-of-the-art recognition performance. [ABSTRACT FROM AUTHOR]
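The abstract names two of the framework's ingredients: low-dimensional features for fully-connected layers (parameter compression) and compact blocks that stand in for the baseline's convolutional layers (structure acceleration). The sketch below is a minimal, hypothetical illustration of those two ideas in PyTorch; the framework, layer sizes, bottleneck dimension, and block design are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch of the two ideas mentioned in the abstract (PyTorch assumed).
# All dimensions below are illustrative, not the paper's settings.
import torch
import torch.nn as nn

class LowDimFC(nn.Module):
    """Replace one wide fully-connected layer (in_dim -> out_dim) with a
    low-dimensional bottleneck (in_dim -> k -> out_dim), cutting parameters
    from roughly in_dim*out_dim down to k*(in_dim + out_dim)."""
    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.reduce = nn.Linear(in_dim, k, bias=False)   # project to low-dimensional features
        self.expand = nn.Linear(k, out_dim)              # map back to the output size

    def forward(self, x):
        return self.expand(self.reduce(x))

class CompactBlock(nn.Module):
    """One possible compact substitute for a plain 3x3 convolution: a 1x1
    channel reduction followed by a 3x3 convolution on fewer channels.
    The paper's own compact blocks may differ."""
    def __init__(self, in_ch, out_ch, mid_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

if __name__ == "__main__":
    fc = LowDimFC(in_dim=4096, out_dim=3755, k=128)       # e.g. a large HCCR output layer
    conv = CompactBlock(in_ch=256, out_ch=256, mid_ch=64)
    print(sum(p.numel() for p in fc.parameters()))          # far fewer than 4096 * 3755
    print(conv(torch.randn(1, 256, 32, 32)).shape)          # torch.Size([1, 256, 32, 32])
```

Both substitutions preserve the baseline layer's input/output shapes, which is what lets the lightweight network be constructed directly from the baseline architecture rather than from a pre-trained model.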

Details

Language :
English
ISSN :
00313203
Volume :
88
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
134049024
Full Text :
https://doi.org/10.1016/j.patcog.2018.10.029