Low-Complexity Approximate Convolutional Neural Networks.
- Source :
- IEEE Transactions on Neural Networks & Learning Systems. Dec 2018, Vol. 29, Issue 12, p5981-5992. 12p.
- Publication Year :
- 2018
Abstract
- In this paper, we present an approach for minimizing the computational complexity of trained convolutional neural networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and activation functions) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero and one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexities: 1) a "light" but efficient ConvNet for face detection (with around 1000 parameters); 2) another one for hand-written digit classification (with more than 180 000 parameters); and 3) a significantly larger ConvNet: AlexNet, with ≈1.2 million matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, very low-complexity approximations have been derived while maintaining almost equal classification performance. [ABSTRACT FROM AUTHOR]
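- The core idea of dyadic-rational approximation can be sketched in a few lines. This is an illustrative simplification, not the paper's method: the authors solve a binary linear program minimizing a Frobenius-norm error, whereas the sketch below simply rounds each weight to the nearest multiple of 1/2^k, which already yields weights whose products reduce to integer additions and bit shifts. The function name `nearest_dyadic` and the choice of `max_shift` are assumptions for illustration.

```python
import numpy as np

def nearest_dyadic(x, max_shift=4):
    """Round each weight to the nearest dyadic rational k / 2**max_shift.

    Illustrative sketch only: the paper obtains its filters via a binary
    linear programming scheme over sets of dyadic rationals; here we
    merely round element-wise to the nearest representable value.
    """
    scale = 2 ** max_shift
    return np.round(x * scale) / scale

# Example filter weights and their dyadic approximations (multiples of 1/16).
w = np.array([0.31, -0.12, 0.055])
w_dyadic = nearest_dyadic(w, max_shift=4)
# Multiplying by a dyadic weight k / 2**s amounts to an integer multiply
# by k (itself expressible as adds/shifts) followed by a right shift by s,
# so no floating-point multiplications are needed in hardware.
```

A smaller `max_shift` gives coarser approximations and cheaper hardware, mirroring the paper's trade-off between approximation level and classification performance.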
- Subjects :
- *ARTIFICIAL neural networks
- *ARTIFICIAL intelligence
- *MACHINE learning
Details
- Language :
- English
- ISSN :
- 2162-237X
- Volume :
- 29
- Issue :
- 12
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Neural Networks & Learning Systems
- Publication Type :
- Periodical
- Accession number :
- 133211369
- Full Text :
- https://doi.org/10.1109/TNNLS.2018.2815435