
Learning Sparse Convolutional Neural Network via Quantization With Low Rank Regularization

Authors :
Liu Yan
Zongcheng Ben
Xin Long
Maojun Zhang
Xiangrong Zeng
Zhou Dianle
Source :
IEEE Access, Vol 7, Pp 51866-51876 (2019)
Publication Year :
2019
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2019.

Abstract

As tasks in artificial intelligence grow more refined, computation cost and storage requirements increase exponentially. The computational resources demanded by complicated neural networks therefore severely hinder their deployment on power-limited devices, making it urgent to compress and accelerate deep networks. Considering the distinct characteristics of weight quantization and sparse regularization, in this paper we propose a low rank sparse quantization (LRSQ) method that quantizes network weights and regularizes the corresponding structures at the same time. Our LRSQ can: 1) obtain low-bit quantized networks to reduce memory and computation cost and 2) learn a compact structure from complex convolutional networks for subsequent channel pruning, which significantly reduces FLOPs. In the experimental sections, we evaluate the proposed method on several popular models such as VGG-7/16/19 and ResNet-18/34/50; the results show that it dramatically reduces the parameters and channels of the network with only a slight loss in inference accuracy. Furthermore, we visualize and analyze the four-dimensional weight tensors, which exhibit a low-rank and group-sparse structure. Finally, we prune the unimportant channels, i.e., the zero channels in our quantized model, and find that the pruned network achieves even slightly better accuracy than the standard full-precision network.
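The abstract describes combining low-bit weight quantization with a structure-inducing penalty so that whole channels become zero and can be pruned. The following is a minimal, hypothetical PyTorch sketch of that general idea (not the authors' LRSQ implementation): a k-bit uniform quantizer with a straight-through estimator, a channel-wise group-sparsity (L2,1) penalty on a conv layer's 4-D weight tensor, and a final check for zero channels that would be candidates for pruning. All function names, bit widths, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's code: low-bit quantization plus a
# group-sparsity penalty over output channels, followed by zero-channel detection.
import torch
import torch.nn as nn


def quantize_weights(w: torch.Tensor, bits: int = 2) -> torch.Tensor:
    """Uniform symmetric quantization to 2**(bits-1) - 1 levels per sign,
    with a straight-through estimator so gradients reach the full-precision weights."""
    levels = 2 ** (bits - 1) - 1                      # bits=2 -> ternary {-1, 0, 1}
    scale = w.abs().max().clamp(min=1e-8) / levels
    w_q = torch.round(w / scale).clamp(-levels, levels) * scale
    return w + (w_q - w).detach()                     # forward: w_q, backward: identity


def group_sparsity_penalty(conv_weight: torch.Tensor) -> torch.Tensor:
    """L2,1 norm over output channels of a 4-D conv weight (out, in, kH, kW):
    the sum of per-channel L2 norms, pushing entire channels toward zero."""
    return conv_weight.flatten(start_dim=1).norm(dim=1).sum()


# Toy training loop: a dummy loss on quantized weights plus the channel penalty.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
opt = torch.optim.SGD(conv.parameters(), lr=0.1)
x = torch.randn(8, 16, 8, 8)

for _ in range(10):
    w_q = quantize_weights(conv.weight, bits=2)
    out = nn.functional.conv2d(x, w_q, conv.bias, padding=1)
    loss = out.pow(2).mean() + 1e-3 * group_sparsity_penalty(conv.weight)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Output channels whose quantized weights are entirely zero could be pruned.
with torch.no_grad():
    w_q = quantize_weights(conv.weight, bits=2)
    zero_channels = (w_q.flatten(start_dim=1).abs().sum(dim=1) == 0).nonzero().flatten()
    print("prunable output channels:", zero_channels.tolist())
```

The sketch only illustrates the interplay of quantization and group sparsity; the paper's actual low-rank regularizer, quantization scheme, and pruning criterion are defined in the full text at the DOI below.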

Details

ISSN :
2169-3536
Volume :
7
Database :
OpenAIRE
Journal :
IEEE Access
Accession number :
edsair.doi.dedup.....75d9e36c697ae9d972a0ab9cbdc4de52
Full Text :
https://doi.org/10.1109/access.2019.2911536