
SBNN: Slimming binarized neural network.

Authors :
Wu, Qing
Lu, Xiaojin
Xue, Shan
Wang, Chao
Wu, Xundong
Fan, Jin
Source :
Neurocomputing. Aug 2020, Vol. 401, p. 113-122. 10p.
Publication Year :
2020

Abstract

With the rapid development of applications based on deep neural networks, approaches for accelerating computationally intensive convolutional neural networks, such as network quantization, pruning, and knowledge distillation, have attracted ever-increasing attention. Network binarization is an extreme form of network quantization that binarizes the network weights and/or activation values to save computational resources. However, it often introduces noise into the network and requires a larger model size (more parameters) to compensate for the loss of representation capacity. To address the challenge of reducing model complexity and to further improve network performance, this paper proposes slimming binarized neural networks (SBNN), an approach that reduces the complexity of binarized networks with acceptable accuracy loss. SBNN prunes the convolutional layers and the fully connected layer of a binarized network; the pruned network is then refined with the proposed SoftSign function, knowledge distillation, and full-precision computation to enhance accuracy. The proposed SBNN can also be conveniently applied to a pre-trained binarized network. We demonstrate the effectiveness of our approach on several state-of-the-art binarized models. For AlexNet and ResNet-18 on the ILSVRC-2012 dataset, SBNN incurs negligible accuracy loss, and even achieves better accuracy than the pre-pruning model, while using only 75% of the original filters. [ABSTRACT FROM AUTHOR]
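
The record above contains only the abstract, but as a rough illustration of the ingredients it names, the following minimal PyTorch sketch shows weight binarization with a SoftSign-based surrogate gradient and simple L1-norm filter pruning at the 75% keep ratio quoted above. The class and function names, the use of SoftSign as a backward-pass surrogate for sign(), and the L1-norm pruning criterion are assumptions made for illustration; this is not the authors' implementation.

    # Hedged sketch, not the SBNN authors' code: one plausible reading of the
    # abstract's "binarization + SoftSign refinement + filter pruning" recipe.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SoftSignBinarize(torch.autograd.Function):
        """Forward: sign(w) in {-1, +1}. Backward: gradient of softsign(w) = w / (1 + |w|),
        assumed here as a smooth surrogate for the zero-almost-everywhere derivative of sign()."""
        @staticmethod
        def forward(ctx, w):
            ctx.save_for_backward(w)
            return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

        @staticmethod
        def backward(ctx, grad_out):
            (w,) = ctx.saved_tensors
            # d/dw softsign(w) = 1 / (1 + |w|)^2
            return grad_out / (1.0 + w.abs()) ** 2

    class BinaryConv2d(nn.Conv2d):
        """Convolution whose weights are binarized on every forward pass."""
        def forward(self, x):
            wb = SoftSignBinarize.apply(self.weight)
            return F.conv2d(x, wb, self.bias, self.stride,
                            self.padding, self.dilation, self.groups)

    def prune_filters(conv: nn.Conv2d, keep_ratio: float = 0.75) -> torch.Tensor:
        """Rank output filters by L1 norm and return a boolean keep-mask.
        Keeping 75% of filters mirrors the ratio quoted in the abstract;
        the L1-norm criterion itself is an illustrative assumption."""
        norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
        n_keep = max(1, int(round(keep_ratio * norms.numel())))
        keep = torch.zeros_like(norms, dtype=torch.bool)
        keep[norms.topk(n_keep).indices] = True
        return keep

    if __name__ == "__main__":
        conv = BinaryConv2d(3, 16, kernel_size=3, padding=1)
        y = conv(torch.randn(1, 3, 32, 32))
        mask = prune_filters(conv, keep_ratio=0.75)
        print(y.shape, int(mask.sum()), "of", mask.numel(), "filters kept")

In a full pruning pipeline, the keep-mask would be used to rebuild each layer with fewer output channels (and to slice the next layer's input channels accordingly) before the refinement stage with knowledge distillation and full-precision computation described in the abstract.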

Details

Language :
English
ISSN :
0925-2312
Volume :
401
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
143416822
Full Text :
https://doi.org/10.1016/j.neucom.2020.03.030