
On the Computational Intractability of Exact and Approximate Dictionary Learning.

Authors :
Tillmann, Andreas M.
Source :
IEEE Signal Processing Letters; Jan 2015, Vol. 22, Issue 1, p45-49, 5p
Publication Year :
2015

Abstract

The efficient sparse coding and reconstruction of signal vectors via linear observations has received a tremendous amount of attention over the last decade. In this context, the automated learning of a suitable basis or overcomplete dictionary from training data sets of certain signal classes for use in sparse representations has turned out to be of particular importance regarding practical signal processing applications. Most popular dictionary learning algorithms involve NP-hard sparse recovery problems in each iteration, which may give some indication about the complexity of dictionary learning but does not constitute an actual proof of computational intractability. In this technical note, we show that learning a dictionary with which a given set of training signals can be represented as sparsely as possible is indeed NP-hard. Moreover, we also establish hardness of approximating the solution to within large factors of the optimal sparsity level. Furthermore, we give NP-hardness and non-approximability results for a recent dictionary learning variation called the sensor permutation problem. Along the way, we also obtain a new non-approximability result for the classical sparse recovery problem from compressed sensing. [ABSTRACT FROM AUTHOR]
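To make the setting concrete, the sketch below illustrates the NP-hard sparse recovery subproblem that the abstract says popular dictionary learning algorithms solve in each iteration. It is not the paper's construction; it is a minimal NumPy demonstration, assuming a synthetic unit-norm random dictionary and exactly k-sparse training signals, with greedy Orthogonal Matching Pursuit (OMP) standing in as the usual heuristic for the intractable sparse coding step.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, y, k):
    """Greedy Orthogonal Matching Pursuit: a heuristic for the NP-hard
    sparse recovery problem  min ||x||_0  s.t.  D x = y,  stopped after
    at most k atom selections."""
    support, residual = [], y.copy()
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Synthetic instance (all sizes are illustrative assumptions):
# signals Y that are exactly k-sparse in a random overcomplete dictionary D.
m, n, N, k = 20, 50, 30, 3
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
X_true = np.zeros((n, N))
for i in range(N):
    idx = rng.choice(n, k, replace=False)
    X_true[idx, i] = rng.standard_normal(k)
Y = D @ X_true

# The sparse coding stage of a typical alternating dictionary learner:
# code every training signal against the current dictionary.
X_hat = np.column_stack([omp(D, Y[:, i], k) for i in range(N)])
print("residual norm:", np.linalg.norm(D @ X_hat - Y))
```

An alternating dictionary learner would now hold `X_hat` fixed and update `D` (e.g., by least squares or K-SVD-style atom updates), then repeat; the paper's point is that even the overall problem of finding the sparsest-representing dictionary is NP-hard, not merely this inner coding step.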

Details

Language :
English
ISSN :
1070-9908
Volume :
22
Issue :
1
Database :
Complementary Index
Journal :
IEEE Signal Processing Letters
Publication Type :
Academic Journal
Accession number :
101289994
Full Text :
https://doi.org/10.1109/LSP.2014.2345761