
No Fine-Tuning, No Cry: Robust SVD for Compressing Deep Networks.

Authors:
Tukan, Murad
Maalouf, Alaa
Weksler, Matan
Feldman, Dan
Source:
Sensors (14248220). Aug 2021, Vol. 21, Issue 16, p5599. 1p.
Publication Year:
2021

Abstract

A common technique for compressing a neural network is to compute, via SVD, the k-rank ℓ2 approximation A_k of the matrix A ∈ R^(n×d) that corresponds to a fully connected layer (or embedding layer). Here, d is the number of input neurons in the layer, n is the number of neurons in the next layer, and A_k is stored in O((n+d)k) memory instead of O(nd). A fine-tuning step is then used to improve this initial compression. However, end users may not have the computational resources, time, or budget to run this fine-tuning stage, and the original training set may not be available. In this paper, we provide an algorithm for compressing neural networks whose initial compression time is similar to that of common techniques, but which requires no fine-tuning step. The main idea is to replace the k-rank ℓ2 approximation with an ℓp approximation for p ∈ [1, 2], which is known to be less sensitive to outliers but much harder to compute. Our main technical result is a practical and provable approximation algorithm that computes it for any p ≥ 1, based on modern techniques in computational geometry. Extensive experimental results on the GLUE benchmark, compressing the networks BERT, DistilBERT, XLNet, and RoBERTa, confirm this theoretical advantage. [ABSTRACT FROM AUTHOR]
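For concreteness, below is a minimal NumPy sketch of the baseline k-rank ℓ2 compression the abstract starts from, i.e., the standard SVD step that the paper's ℓp algorithm replaces (the ℓp algorithm itself is not shown here). The layer shapes, the rank k = 64, and the function name rank_k_compress are illustrative assumptions, not taken from the paper.

import numpy as np

def rank_k_compress(A: np.ndarray, k: int):
    """k-rank l2 approximation of A via truncated SVD.

    Returns two factors whose product is A_k, stored in
    (n + d) * k entries instead of the n * d entries of A.
    """
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    left = U[:, :k] * S[:k]   # shape (n, k): singular values folded into U
    right = Vt[:k, :]         # shape (k, d)
    return left, right

# Illustrative shapes: a 1024 x 768 layer compressed to rank 64.
rng = np.random.default_rng(0)
A = rng.standard_normal((1024, 768))
left, right = rank_k_compress(A, k=64)
A_k = left @ right                      # best rank-64 l2 approximation of A
print(A.size, left.size + right.size)   # 786432 vs. 114688 stored entries

In a deployed network, the single dense layer with weight A would be replaced by two consecutive smaller layers with weights right and left, realizing the O((n+d)k) memory bound quoted in the abstract.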

Details

Language:
English
ISSN:
14248220
Volume:
21
Issue:
16
Database:
Academic Search Index
Journal:
Sensors (14248220)
Publication Type:
Academic Journal
Accession Number:
152146153
Full Text:
https://doi.org/10.3390/s21165599