
LTNN: A Layerwise Tensorized Compression of Multilayer Neural Network.

Authors :
Huang, Hantao
Yu, Hao
Source :
IEEE Transactions on Neural Networks & Learning Systems. May 2019, Vol. 30, Issue 5, p1497-1511. 15p.
Publication Year :
2019

Abstract

Efficient deep learning requires a memory-efficient construction of a neural network. This paper introduces a layerwise tensorized formulation of a multilayer neural network, called LTNN, such that the weight matrix can be significantly compressed during training. By reshaping the multilayer neural network weight matrix into a high-dimensional tensor with a low-rank approximation, significant network compression can be achieved while maintaining accuracy. A corresponding layerwise training scheme is developed using a modified alternating least-squares method, with backward propagation used only for fine-tuning. LTNN provides state-of-the-art results on various benchmarks with significant compression. On the MNIST benchmark, LTNN achieves a 64× compression rate without an accuracy drop. On the ImageNet12 benchmark, the proposed LTNN achieves 35.84× compression of the neural network with around a 2% accuracy drop. We also show 1.615× faster inference than existing works owing to the smaller tensor-core ranks. [ABSTRACT FROM AUTHOR]
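To make the "reshape the weight matrix into a high-dimensional tensor with a low-rank approximation" idea concrete, the sketch below shows a generic tensor-train (TT) matrix compression of a dense layer via truncated SVDs. This is a minimal illustration of the underlying compression principle, not the authors' LTNN implementation or their modified alternating least-squares training; the function name, mode factorizations, and rank choice here are assumptions made purely for the example.

```python
import numpy as np

def tt_compress_matrix(W, in_modes, out_modes, rank):
    """Illustrative tensor-train (TT) compression of a dense weight matrix.

    W         : 2-D weight matrix of shape (prod(out_modes), prod(in_modes))
    in_modes  : factorization of the input dimension,  e.g. (28, 28) for 784
    out_modes : factorization of the output dimension, e.g. (16, 16) for 256
    rank      : maximum TT-rank kept at each SVD truncation

    Returns a list of TT-cores whose total parameter count is far smaller
    than W.size when the rank is small.
    """
    d = len(in_modes)
    assert len(out_modes) == d
    # Reshape the matrix into a 2d-way tensor and interleave the
    # (output, input) mode pairs, as in the TT-matrix format.
    T = W.reshape(*out_modes, *in_modes)
    perm = [i for pair in zip(range(d), range(d, 2 * d)) for i in pair]
    T = T.transpose(perm).reshape(*[out_modes[k] * in_modes[k] for k in range(d)])

    cores, r_prev = [], 1
    for k in range(d - 1):
        # Unfold, take a truncated SVD, keep the left factor as the k-th core,
        # and push the remainder to the right for the next step.
        T = T.reshape(r_prev * out_modes[k] * in_modes[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, out_modes[k] * in_modes[k], r))
        T = S[:r, None] * Vt[:r]
        r_prev = r
    cores.append(T.reshape(r_prev, out_modes[-1] * in_modes[-1], 1))
    return cores

# Example: compress a 256 x 784 layer (MNIST-sized input) with TT-rank 8.
W = np.random.randn(256, 784).astype(np.float32)
cores = tt_compress_matrix(W, in_modes=(28, 28), out_modes=(16, 16), rank=8)
dense_params = W.size
tt_params = sum(c.size for c in cores)
print(f"dense: {dense_params}, TT: {tt_params}, ratio: {dense_params / tt_params:.1f}x")
```

With a small TT-rank, the stored cores replace the full weight matrix, which is the source of the compression rates reported in the abstract; the actual ratios and accuracy figures depend on the layerwise training procedure described in the paper.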

Details

Language :
English
ISSN :
2162-237X
Volume :
30
Issue :
5
Database :
Academic Search Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
136117589
Full Text :
https://doi.org/10.1109/TNNLS.2018.2869974