
Additive Noise Annealing and Approximation Properties of Quantized Neural Networks

Authors :
Spallanzani, Matteo
Cavigelli, Lukas
Leonardi, Gian Paolo
Bertogna, Marko
Benini, Luca
Publication Year :
2019

Abstract

We present a theoretical and experimental investigation of the quantization problem for artificial neural networks. We provide a mathematical definition of quantized neural networks and analyze their approximation capabilities, showing in particular that any Lipschitz-continuous map defined on a hypercube can be uniformly approximated by a quantized neural network. We then focus on the regularization effect of additive noise on the arguments of multi-step functions inherent to the quantization of continuous variables. In particular, when the expectation operator is applied to a non-differentiable multi-step random function whose underlying probability density is differentiable (in either the classical or the weak sense), a differentiable function is retrieved, with explicit bounds on its Lipschitz constant. Based on these results, we propose a novel gradient-based training algorithm for quantized neural networks that generalizes the straight-through estimator, acting on noise applied to the network's parameters. We evaluate our algorithm on the CIFAR-10 and ImageNet image classification benchmarks, showing state-of-the-art performance on AlexNet and MobileNetV2 for ternary networks.
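To make the smoothing result concrete, the following minimal Python sketch (not the paper's actual algorithm) illustrates the idea on a hard ternary quantizer with assumed thresholds at ±0.5 and levels {-1, 0, +1}: adding zero-mean Gaussian noise to the argument and taking the expectation turns the piecewise-constant map into a differentiable function of the input, which is what enables gradient-based training of quantized parameters. The noise distribution, thresholds, and levels here are illustrative assumptions, not those prescribed in the paper.

```python
import numpy as np
from math import erf, sqrt

def ternary_quantize(x):
    """Hard ternary quantizer mapping x to {-1, 0, +1} (assumed thresholds at +/-0.5).
    Piecewise constant, hence zero gradient almost everywhere."""
    return np.where(x > 0.5, 1.0, np.where(x < -0.5, -1.0, 0.0))

def smoothed_quantize_mc(x, sigma=0.3, n_samples=10_000, seed=0):
    """Monte-Carlo estimate of E[q(x + eps)] with eps ~ N(0, sigma^2).
    Because the Gaussian density is differentiable, this expectation is a
    differentiable function of x, unlike the hard quantizer itself."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    return ternary_quantize(x + eps).mean(axis=0)

def smoothed_quantize_exact(x, sigma=0.3):
    """Closed form for Gaussian noise:
    E[q(x + eps)] = Phi((x - 0.5)/sigma) - Phi((-x - 0.5)/sigma),
    where Phi is the standard normal CDF."""
    Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    return Phi((x - 0.5) / sigma) - Phi((-x - 0.5) / sigma)

xs = np.linspace(-2.0, 2.0, 9)
print(ternary_quantize(xs))                                    # hard steps
print(np.round(smoothed_quantize_mc(xs), 3))                   # smooth Monte-Carlo estimate
print(np.round([smoothed_quantize_exact(v) for v in xs], 3))   # matches the closed form
```

As sigma shrinks toward zero, the smoothed curve sharpens back into the hard quantizer; annealing the noise in this way lets training interpolate between a differentiable surrogate and the final discrete network.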

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1905.10452
Document Type :
Working Paper