Learning sparse classifiers with difference of convex functions algorithms.

Authors :
Ong, Cheng Soon
Le Thi, Hoai An
Source :
Optimization Methods & Software. Aug 2013, Vol. 28 Issue 4, p830-854. 25p. 4 Charts, 15 Graphs.
Publication Year :
2013

Abstract

Sparsity of a classifier is a desirable condition for high-dimensional data and large sample sizes. This paper investigates the two complementary notions of sparsity for binary classification: sparsity in the number of features and sparsity in the number of examples. Several different losses and regularizers are considered: the hinge loss and ramp loss, and ℓ2, ℓ1, approximate ℓ0, and capped ℓ1 regularization. We propose three new objective functions that further promote sparsity: the capped ℓ1 regularization with hinge loss, and the ramp loss versions of approximate ℓ0 and capped ℓ1 regularization. We derive difference of convex functions algorithms (DCA) for solving these novel non-convex objective functions. The proposed algorithms are shown to converge in a finite number of iterations to a local minimum. Using simulated data and several data sets from the University of California Irvine (UCI) machine learning repository, we empirically investigate the fraction of features and examples required by the different classifiers. [ABSTRACT FROM PUBLISHER]
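To make the DCA idea concrete, the sketch below is a minimal, hypothetical illustration of a DCA loop for a capped ℓ1 regularized hinge-loss classifier; it is not the authors' implementation. It assumes the standard DC split min(|w_j|, a) = |w_j| − max(|w_j| − a, 0), uses cvxpy to solve each convex subproblem, and omits a bias term; the names `dca_capped_l1_hinge`, the trade-off weight `C`, and the cap `a` are illustrative choices, not notation from the paper.

```python
# Minimal DCA sketch for capped-l1 regularized hinge loss (illustrative only).
# Objective: F(w) = C * sum_i max(0, 1 - y_i x_i^T w) + sum_j min(|w_j|, a)
# DC split:  F = g - h with
#   g(w) = C * hinge(w) + ||w||_1          (convex)
#   h(w) = sum_j max(|w_j| - a, 0)         (convex)
import numpy as np
import cvxpy as cp

def dca_capped_l1_hinge(X, y, C=1.0, a=1.0, max_iter=50, tol=1e-6):
    """DCA: linearize the concave part -h, solve the convex remainder."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_iter):
        # Subgradient of h at w: sign(w_j) where the cap is active, else 0.
        v = np.sign(w) * (np.abs(w) > a)
        # Convex subproblem: minimize g(w) - <v, w>
        wv = cp.Variable(d)
        hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ wv)))
        cp.Problem(cp.Minimize(C * hinge + cp.norm1(wv) - v @ wv)).solve()
        w_new = wv.value
        if np.linalg.norm(w_new - w) < tol * max(1.0, np.linalg.norm(w)):
            return w_new
        w = w_new
    return w

# Usage on synthetic data with 3 informative features out of 20:
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(100))
w_hat = dca_capped_l1_hinge(X, y, C=1.0, a=0.5)
print(np.sum(np.abs(w_hat) > 1e-4), "nonzero weights")
```

The same pattern is commonly applied to the ramp loss via the DC split R_s(z) = max(0, 1 − z) − max(0, s − z) with s < 1: the second hinge term is linearized at each iteration and the remaining ℓ1-regularized hinge problem is solved exactly.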

Details

Language :
English
ISSN :
1055-6788
Volume :
28
Issue :
4
Database :
Academic Search Index
Journal :
Optimization Methods & Software
Publication Type :
Academic Journal
Accession number :
88408569
Full Text :
https://doi.org/10.1080/10556788.2011.652630