
Transform Quantization for CNN (Convolutional Neural Network) Compression

Authors :
Young, Sean I.
Wang, Zhe
Taubman, David
Girod, Bernd
Publication Year :
2020

Abstract

In this paper, we compress convolutional neural network (CNN) weights post-training via transform quantization. Previous CNN quantization techniques either ignore the joint statistics of weights and activations, producing sub-optimal CNN performance at a given quantization bit-rate, or consider these joint statistics only during training and so do not facilitate efficient compression of already trained CNN models. We optimally transform (decorrelate) and quantize the weights post-training using a rate-distortion framework to improve compression at any given quantization bit-rate. Transform quantization unifies quantization and dimensionality reduction (decorrelation) techniques in a single framework to facilitate low bit-rate compression of CNNs and efficient inference in the transform domain. We first introduce a theory of rate and distortion for CNN quantization and pose optimum quantization as a rate-distortion optimization problem. We then show that this problem can be solved using optimal bit-depth allocation following decorrelation by the optimal End-to-end Learned Transform (ELT) we derive in this paper. Experiments demonstrate that transform quantization advances the state of the art in CNN compression in both retrained and non-retrained quantization scenarios. In particular, we find that transform quantization with retraining is able to compress CNN models such as AlexNet, ResNet and DenseNet to very low bit-rates (1-2 bits).

Comment: To appear in IEEE Trans Pattern Anal Mach Intell
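As a rough illustration of the pipeline the abstract describes (decorrelate a trained layer's weights, allocate bit-depths across transform coefficients, then quantize), here is a minimal NumPy sketch. It is not the authors' implementation: the KLT computed from the weight covariance and the reverse water-filling bit allocation are assumed stand-ins for the paper's End-to-end Learned Transform (ELT) and its rate-distortion-optimal bit-depth allocation, and the function name transform_quantize and its parameters are hypothetical.

```python
# Minimal sketch of post-training transform quantization (NumPy only).
# Assumptions: the decorrelating transform is a KLT computed from the weight
# covariance, standing in for the paper's End-to-end Learned Transform (ELT),
# and bit-depths are allocated with the classic reverse water-filling rule
# rather than the paper's exact rate-distortion optimization.
import numpy as np

def transform_quantize(W, avg_bits=2.0):
    """Decorrelate a (filters x fan-in) weight matrix, allocate per-band
    bit-depths, and uniformly quantize in the transform domain."""
    # 1. Decorrelate: eigenvectors of the fan-in covariance give an
    #    orthogonal transform U (hypothetical stand-in for the learned ELT).
    C = np.cov(W, rowvar=False)
    _, U = np.linalg.eigh(C)
    Y = W @ U                                    # transform-domain coefficients

    # 2. Bit allocation: each band gets bits proportional to the log of its
    #    variance relative to the geometric-mean variance (water-filling).
    var = Y.var(axis=0) + 1e-12
    bits = avg_bits + 0.5 * np.log2(var / np.exp(np.log(var).mean()))
    bits = np.clip(np.round(bits), 0, 16).astype(int)

    # 3. Uniform quantization of each band at its allocated bit-depth.
    Y_hat = np.zeros_like(Y)
    for j, b in enumerate(bits):
        if b == 0:
            continue                             # band dropped entirely
        lo, hi = Y[:, j].min(), Y[:, j].max()
        if hi == lo:
            Y_hat[:, j] = Y[:, j]
            continue
        step = (hi - lo) / (2 ** b - 1)
        Y_hat[:, j] = np.round((Y[:, j] - lo) / step) * step + lo

    # Inference could run directly on (Y_hat, U) in the transform domain;
    # here the weights are reconstructed for a drop-in replacement.
    W_hat = Y_hat @ U.T
    return W_hat, bits

# Example on a random stand-in for a trained layer's weight matrix.
W = np.random.randn(256, 64).astype(np.float32)
W_hat, bits = transform_quantize(W, avg_bits=2.0)
print("mean bit-depth:", bits.mean(), "MSE:", float(np.mean((W - W_hat) ** 2)))
```

The sketch only conveys the overall structure (decorrelation, bit-depth allocation, uniform quantization); the paper's contribution lies in deriving the optimal transform and allocation within a rate-distortion framework, which this stand-in does not reproduce.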

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2009.01174
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TPAMI.2021.3084839