
Quasi-Newton method for Lp multiple kernel learning.

Authors:
Hu, Qinghui
Wei, Shiwei
Li, Zhiyuan
Liu, Xiaogang
Source:
Neurocomputing. Jun 2016, Vol. 194, p218-226. 9p.
Publication Year:
2016

Abstract

Multiple kernel learning has advantages over single-kernel learning in model interpretability and generalization performance. Existing multiple kernel learning methods usually solve the SVM in the dual, which is equivalent to the primal optimization. Research shows that solving in the primal achieves a faster convergence rate than solving in the dual. This paper proposes a novel Lp-norm (p > 1) constrained, non-sparse multiple kernel learning method that optimizes the objective function in the primal. A subgradient and quasi-Newton approach is used to solve the standard SVM; the quasi-Newton method possesses superlinear convergence and approximates the inverse Hessian without computing second derivatives, leading to a preferable convergence speed. An alternating optimization scheme is used to solve the SVM and learn the base kernel weights. Experiments show that the proposed algorithm converges rapidly and that its efficiency compares favorably with other multiple kernel learning algorithms. [ABSTRACT FROM AUTHOR]
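The abstract's two ingredients can be sketched in code: a primal kernel-SVM solve via a quasi-Newton optimizer, alternated with a closed-form Lp-norm kernel-weight update. This is a minimal illustration, not the paper's algorithm: it substitutes SciPy's L-BFGS-B for the authors' subgradient quasi-Newton scheme, uses a squared hinge loss for smoothness instead of the hinge, and borrows the standard closed-form Lp weight update from the general Lp-MKL literature. All function names and parameter choices here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize


def rbf_kernel(X, gamma):
    """Gaussian (RBF) base kernel on the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)


def primal_svm_lbfgs(K, y, lam=1e-2):
    """Solve a kernel SVM in the primal with a quasi-Newton method (L-BFGS).

    The decision function is f = K @ alpha. A squared hinge loss is used so
    the objective is differentiable (simplification: the paper works with the
    hinge loss and a subgradient).
    """
    n = K.shape[0]

    def obj(alpha):
        f = K @ alpha
        viol = np.maximum(1.0 - y * f, 0.0)        # margin violations
        loss = np.sum(viol**2)                     # squared hinge
        reg = 0.5 * lam * alpha @ f                # (lam/2) alpha' K alpha
        grad = lam * f - 2.0 * K @ (y * viol)      # exact gradient
        return reg + loss, grad

    res = minimize(obj, np.zeros(n), jac=True, method="L-BFGS-B")
    return res.x


def lp_mkl(kernels, y, p=2.0, lam=1e-2, iters=10):
    """Alternating optimization sketch for Lp-norm (p > 1) MKL.

    Step 1: fix kernel weights d, solve the primal SVM on K = sum_m d_m K_m.
    Step 2: fix alpha, update d with the standard closed-form Lp update
            d_m ∝ ||w_m||^{2/(p+1)}, then renormalize so ||d||_p = 1.
    """
    M = len(kernels)
    d = np.full(M, M ** (-1.0 / p))  # feasible start: ||d||_p = 1
    for _ in range(iters):
        K = sum(dm * Km for dm, Km in zip(d, kernels))
        alpha = primal_svm_lbfgs(K, y, lam)
        # per-kernel norms ||w_m||^2 = d_m^2 * alpha' K_m alpha
        nrm = np.array([dm**2 * (alpha @ Km @ alpha)
                        for dm, Km in zip(d, kernels)])
        nrm = np.maximum(nrm, 1e-12)               # numerical floor
        d = nrm ** (1.0 / (p + 1))
        d /= np.sum(d**p) ** (1.0 / p)             # project onto ||d||_p = 1
    return d, alpha
```

A quasi-Newton solver fits the primal formulation naturally because L-BFGS maintains an inverse-Hessian approximation from gradient differences alone, which is the superlinear-convergence property the abstract refers to.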

Details

Language:
English
ISSN:
0925-2312
Volume:
194
Database:
Academic Search Index
Journal:
Neurocomputing
Publication Type:
Academic Journal
Accession number:
114874416
Full Text:
https://doi.org/10.1016/j.neucom.2016.01.079