
Deep Network Quantization via Error Compensation.

Authors :
Peng, Hanyu
Wu, Jiaxiang
Zhang, Zhiwei
Chen, Shifeng
Zhang, Hai-Tao
Source :
IEEE Transactions on Neural Networks & Learning Systems. Sep2022, Vol. 33 Issue 9, p4960-4970. 11p.
Publication Year :
2022

Abstract

For portable devices with limited resources, it is often difficult to deploy deep networks due to the prohibitive computational overhead. Numerous approaches have been proposed to quantize weights and/or activations to speed up inference. Loss-aware quantization has been proposed to directly formulate the impact of weight quantization on the model's final loss. However, we discover that, under certain circumstances, such a method may fail to converge and instead oscillate. To tackle this issue, we introduce a novel loss-aware quantization algorithm to efficiently compress deep networks with low bit-width model weights. We provide a more accurate estimation of gradients by leveraging the Taylor expansion to compensate for the quantization error, which leads to better convergence behavior. Our theoretical analysis indicates that the gradient mismatch issue can be fixed by the newly introduced quantization error compensation term. Experimental results for both linear models and convolutional networks verify the effectiveness of our proposed method.
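The core idea of the abstract can be illustrated with a minimal sketch: when the gradient is evaluated at the quantized weights rather than the full-precision ones (the "gradient mismatch"), a first-order Taylor term involving the quantization error can compensate for the difference. The toy quadratic loss, the `quantize` helper, and all variable names below are hypothetical illustrations, not the authors' actual algorithm; on a quadratic loss the Hessian is constant, so the first-order compensation is exact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss L(w) = 0.5 * w^T A w - b^T w, so grad(w) = A w - b
# and the Hessian is the constant matrix A. (Illustrative setup only;
# the paper addresses general deep-network losses.)
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])

def grad(w):
    return A @ w - b

def quantize(w, bits=2):
    """Uniform symmetric quantization to the given bit-width (a common
    simple scheme; the paper's quantizer may differ)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w = rng.normal(size=2)   # full-precision weights
q = quantize(w)          # low bit-width weights used for inference
e = w - q                # quantization error

g_true = grad(w)         # gradient at the full-precision point
g_quant = grad(q)        # naive gradient at the quantized point (mismatched)

# First-order Taylor compensation: grad(w) ~= grad(q) + H @ (w - q).
# Here H = A exactly; in a deep network it would be approximated.
g_comp = g_quant + A @ e

print(np.linalg.norm(g_quant - g_true))  # nonzero mismatch
print(np.linalg.norm(g_comp - g_true))   # ~0 for a quadratic loss
```

Because the loss here is exactly quadratic, the compensated gradient recovers the full-precision gradient to machine precision; for non-quadratic deep-network losses the compensation reduces, rather than eliminates, the mismatch.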

Details

Language :
English
ISSN :
2162-237X
Volume :
33
Issue :
9
Database :
Academic Search Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
158869847
Full Text :
https://doi.org/10.1109/TNNLS.2021.3064293