
Convolutional Neural Networks Quantization with Attention

Authors :
Wu, Binyi
Waschneck, Bernd
Mayr, Christian Georg
Publication Year :
2022

Abstract

It has been shown that, in contrast to the 32-bit floating-point numbers used during training, Deep Convolutional Neural Networks (DCNNs) can operate at low precision during inference, saving memory and power. However, quantizing a network typically reduces its accuracy. Here, we propose a method, double-stage Squeeze-and-Threshold (double-stage ST), which uses the attention mechanism to quantize networks and achieves state-of-the-art results. With our method, a 3-bit model can reach accuracy exceeding that of the full-precision baseline model. The proposed double-stage ST activation quantization is easy to apply: it is simply inserted before the convolution.

Comment: Preprint of an article published in the International Journal of Neural Systems, doi:10.1142/S0129065722500514, © World Scientific Publishing Company, https://www.worldscientific.com/doi/10.1142/S0129065722500514
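To make the placement described in the abstract concrete, the sketch below shows one plausible way an attention-driven activation quantizer could be inserted before a convolution. It follows the Squeeze-and-Excitation pattern that the method's name alludes to; the class name, layer sizes, per-channel thresholding, and straight-through rounding are illustrative assumptions, not the authors' exact double-stage ST design.

import torch
import torch.nn as nn

class AttentionActivationQuantizer(nn.Module):
    """Hypothetical attention-based activation quantizer (illustrative sketch)."""
    def __init__(self, channels, bits=3, reduction=4):
        super().__init__()
        self.levels = 2 ** bits - 1
        # "Squeeze": global average pooling gives per-channel context;
        # a small MLP predicts a per-channel clipping fraction in (0, 1].
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.threshold_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        ctx = self.squeeze(x).view(b, c)
        # Scale the predicted fraction by the per-sample channel maximum so the
        # clipping threshold lands inside the observed activation range.
        max_per_channel = x.detach().amax(dim=(2, 3), keepdim=True).clamp(min=1e-6)
        t = self.threshold_mlp(ctx).view(b, c, 1, 1) * max_per_channel
        # Clip to [0, t], map to integer levels, and round with a
        # straight-through estimator so gradients pass the rounding step.
        x_clipped = torch.minimum(torch.clamp(x, min=0.0), t)
        scaled = x_clipped / t * self.levels
        q = (scaled.round() - scaled).detach() + scaled
        return q / self.levels * t  # de-quantized, quantization-aware output

# Example placement: quantize activations right before a convolution layer.
quant = AttentionActivationQuantizer(channels=64, bits=3)
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
y = conv(quant(torch.randn(8, 64, 32, 32)))

The design choice illustrated here is only that the quantizer is a drop-in module placed in front of each convolution, as the abstract states; the specific threshold-prediction network is an assumption for the sake of a runnable example.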

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2209.15317
Document Type :
Working Paper
Full Text :
https://doi.org/10.1142/S0129065722500514