
Robust support vector machine with generalized quantile loss for classification and regression.

Authors :
Yang, Liming
Dong, Hongwei
Source :
Applied Soft Computing; Aug2019, Vol. 81, p105483-105483, 1p
Publication Year :
2019

Abstract

A new robust loss function (called the Lq-loss) is proposed based on the concepts of quantile loss and correntropy, and can be seen as an improved version of the quantile loss function. The proposed Lq-loss has several important properties, namely asymmetry, non-convexity and boundedness, which have recently attracted considerable attention. The Lq-loss includes and extends traditional loss functions such as the pinball loss, the rescaled hinge loss, the L1-norm loss and the zero-norm loss. Additionally, we demonstrate that the Lq-loss is a kernel-induced loss arising from a piecewise reproducing kernel function. Further, two robust SVM frameworks are presented by applying the Lq-loss to the support vector machine, handling robust classification and regression problems, respectively. Last but not least, we demonstrate that the proposed classification framework satisfies Bayes' optimal decision rule. However, the non-convexity of the proposed Lq-loss makes it difficult to optimize. A non-convex optimization method, the concave–convex procedure (CCCP), is used to solve the proposed models, and the convergence of the algorithms is proved theoretically. For both classification and regression tasks, experiments are carried out on three kinds of data: UCI benchmark datasets, artificial datasets and a practical application dataset. Compared to several classical and state-of-the-art methods, numerical simulations under different noise settings and different evaluation criteria show that the proposed methods are robust to feature noise and outliers in both classification and regression applications.
• Propose a generalized quantile loss (Lq-loss) to handle robust learning.
• Demonstrate important properties: asymmetry, non-convexity, approximability and boundedness.
• Two robust models are proposed with the Lq-loss to enhance robustness.
• The proposed classification framework satisfies Bayes' optimal decision rule.
• The concave–convex procedure (CCCP) is used to handle non-convexity.
[ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
15684946
Volume :
81
Database :
Supplemental Index
Journal :
Applied Soft Computing
Publication Type :
Academic Journal
Accession number :
137595006
Full Text :
https://doi.org/10.1016/j.asoc.2019.105483