Dictionary Learning for Sparse Approximations With the Majorization Method.

Authors :
Yaghoobi, Mehrdad
Blumensath, Thomas
Davies, Mike E.
Source :
IEEE Transactions on Signal Processing. Jun2009, Vol. 57 Issue 6, p2178-2191. 14p. 3 Black and White Photographs, 10 Graphs.
Publication Year :
2009

Abstract

In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems. This paper shows that the majorization method can also be applied to the dictionary learning problem. The proposed method is evaluated against other methods on both synthetic and real data, and the different dictionary constraints are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time. [ABSTRACT FROM AUTHOR]
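To make the majorization idea in the abstract concrete, the following is a minimal sketch of a majorize-minimize iteration for the sparse approximation subproblem min over a of ||x - Da||^2 + lam*||a||_1 (the classic ISTA-style scheme from the sparse-approximation literature the abstract alludes to, not the paper's full dictionary-update algorithm). The dictionary D, signal x, and parameters below are illustrative assumptions.

```python
# Majorization-minimization sketch for l1 sparse approximation.
# The quadratic data term is majorized at the current iterate a_t by adding
# c*||a - a_t||^2 - ||D(a - a_t)||^2 with c > sigma_max(D)^2, which decouples
# the coordinates so the surrogate is minimized exactly by soft-thresholding.
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding: closed-form minimizer of the surrogate."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mm_sparse_code(D, x, lam=0.1, n_iter=200):
    """Run the majorize-minimize iteration; return coefficients and objectives."""
    c = 1.01 * np.linalg.norm(D, 2) ** 2      # majorization constant c > ||D||_2^2
    a = np.zeros(D.shape[1])
    obj = []
    for _ in range(n_iter):
        b = a + D.T @ (x - D @ a) / c         # gradient step on the smooth part
        a = soft_threshold(b, lam / (2 * c))  # exact minimizer of the surrogate
        obj.append(np.sum((x - D @ a) ** 2) + lam * np.sum(np.abs(a)))
    return a, obj

# Illustrative synthetic problem: a sparse vector observed through a
# random dictionary (hypothetical setup, not the paper's experiments).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
a_true = np.zeros(50)
a_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
x = D @ a_true
a_hat, obj = mm_sparse_code(D, x, lam=0.1)
```

Because each step minimizes a surrogate that upper-bounds the objective and touches it at the current iterate, the objective sequence is monotonically non-increasing, which is the mechanism behind the fixed-point convergence guarantee stated in the abstract.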

Details

Language :
English
ISSN :
1053-587X
Volume :
57
Issue :
6
Database :
Academic Search Index
Journal :
IEEE Transactions on Signal Processing
Publication Type :
Academic Journal
Accession number :
40838612
Full Text :
https://doi.org/10.1109/TSP.2009.2016257