
Efficient Feature Selection via $\ell_{2,0}$-Norm Constrained Sparse Regression.

Authors :
Pang, Tianji
Nie, Feiping
Han, Junwei
Li, Xuelong
Source :
IEEE Transactions on Knowledge & Data Engineering. 5/1/2019, Vol. 31 Issue 5, p880-893. 14p.
Publication Year :
2019

Abstract

Sparse regression based feature selection has been extensively investigated in recent years. However, because the problem carries a non-convex constraint, the $\ell_{2,0}$-norm constraint, it is very hard to solve. In this paper, unlike most other methods, which solve only a slack version by forcibly introducing a sparsity regularization into the objective function, we propose a novel framework for solving the original $\ell_{2,0}$-norm constrained sparse regression based feature selection problem. We transform the objective function into Linear Discriminant Analysis (LDA) by a new label coding method, enabling our model to compute each feature's ratio of inter-class scatter to intra-class scatter, the most widely used metric for evaluating feature discrimination. Based on that ratio, features can be selected by simple sorting. A projected gradient descent algorithm is then introduced to further improve performance, using the sorting-based solution as its initial point; this ensures the stability of the iterative algorithm. We prove that the proposed method attains the global optimum of this non-convex problem when all features are statistically independent. For the general case of statistically dependent features, extensive experiments on six small-sample-size datasets and one large-scale dataset show that, with an SVM classifier, our algorithm achieves classification performance comparable to or better than eight other state-of-the-art feature selection methods. We also show that our algorithm attains a low loss value, meaning its solution comes very close to the true solution of this NP-hard problem. Moreover, because we solve the original $\ell_{2,0}$-norm constrained problem, we avoid the heavy work of tuning a regularization parameter: in our method its meaning is explicit, namely the number of selected features. Finally, we experimentally evaluate the stability of our algorithm from two perspectives, the objective function values and the selected features, and from both perspectives it shows satisfactory stability. [ABSTRACT FROM AUTHOR]
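The two steps the abstract describes, selecting features by sorting a scatter-ratio score and refining with projected gradient descent under the $\ell_{2,0}$ constraint, can be illustrated with a minimal sketch. This is not the authors' published implementation: the function names (`scatter_ratio_scores`, `l20_project`, `projected_gradient_descent`), the Fisher-style per-feature score, the least-squares loss $\|XW - Y\|_F^2$, and the hard-thresholding projection are all assumptions chosen to mirror the approach described above.

```python
import numpy as np

def scatter_ratio_scores(X, y):
    """Per-feature ratio of between-class to within-class scatter
    (a Fisher-style score). X: (n_samples, n_features), y: labels."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        between += len(Xc) * (mc - overall_mean) ** 2
        within += ((Xc - mc) ** 2).sum(axis=0)
    return between / (within + 1e-12)  # guard against zero variance

def select_top_k(X, y, k):
    """Sorting-based selection: the k features with the largest ratio."""
    return np.argsort(scatter_ratio_scores(X, y))[::-1][:k]

def l20_project(W, k):
    """Projection onto the l2,0 constraint: keep the k rows of W with
    the largest l2 norms, zero out the rest (hard thresholding)."""
    keep = np.argsort(np.linalg.norm(W, axis=1))[::-1][:k]
    P = np.zeros_like(W)
    P[keep] = W[keep]
    return P

def projected_gradient_descent(X, Y, k, W0, lr=1e-3, n_iter=200):
    """Refine W for the loss ||XW - Y||_F^2 subject to the l2,0
    constraint, starting from W0 (e.g., built from select_top_k)."""
    W = W0.copy()
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ W - Y)   # gradient of the Frobenius loss
        W = l20_project(W - lr * grad, k)
    return W
```

A natural initialization, consistent with the abstract's use of the sorting result as the starting point, is a `W0` whose nonzero rows are exactly those indexed by `select_top_k`; the projection then keeps every iterate feasible, which is what makes the iteration stable. The label matrix `Y` here stands in for the paper's coded labels; the specific coding scheme is not reproduced.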

Details

Language :
English
ISSN :
1041-4347
Volume :
31
Issue :
5
Database :
Academic Search Index
Journal :
IEEE Transactions on Knowledge & Data Engineering
Publication Type :
Academic Journal
Accession number :
135773700
Full Text :
https://doi.org/10.1109/TKDE.2018.2847685