23 results for "Yang, Liming"
Search Results
2. Asymmetric kernel-based robust classification by ADMM
- Author
- Ding, Guangsheng and Yang, Liming
- Published
- 2023
3. A robust projection twin support vector machine with a generalized correntropy-based loss
- Author
- Ren, Qiangqiang and Yang, Liming
- Published
- 2022
4. Robust semi-supervised support vector machines with Laplace kernel-induced correntropy loss functions
- Author
- Dong, Hongwei, Yang, Liming, and Wang, Xue
- Published
- 2021
5. Selective ensemble of uncertain extreme learning machine for pattern classification with missing features
- Author
- Jing, Shibo, Wang, Yidan, and Yang, Liming
- Published
- 2020
6. Robust Extreme Learning Machines with Different Loss Functions
- Author
- Ren, Zhuo and Yang, Liming
- Published
- 2019
7. An efficient multi-metric learning method by partitioning the metric space.
- Author
- Yuan, Chao and Yang, Liming
- Subjects
- SUPERVISED learning, MACHINE learning, LOGARITHMIC functions, PATTERN recognition systems, LEARNING
- Abstract
Metric learning has attracted significant attention due to its effectiveness and efficiency in pattern recognition tasks. Traditional supervised metric learning algorithms seek a global distance metric from labeled samples. When data are multimodal and only limited supervision information is available, these approaches are insufficient to obtain satisfactory results. In this paper, we develop a robust semi-supervised multi-metric learning method (RSMM) to improve classification performance. The proposed RSMM learns multiple local metrics and a background metric instead of a single global metric. Specifically, we divide the metric space into influential regions and a background region, and then restrict the effectiveness of each local metric to its related regions. Simultaneously, a geometrically interpretable, symmetric distance is defined with the local metrics and the background metric. Based on the resulting learning bounds, we obtain a regularization term that improves the classifier's generalization ability. Moreover, a manifold regularization term is introduced to preserve the supervision information as well as the geometric structure. Since the substantial number of unlabeled samples may introduce potential threats and large uncertainties, a logarithmic loss function is utilized to enhance robustness. An efficient gradient descent algorithm is exploited to solve the challenging non-convex problem. To further understand the proposed algorithm, we theoretically derive its robustness and generalization error bounds. Finally, numerical experiments on UCI datasets and image datasets demonstrate the feasibility and validity of the RSMM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
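Result 7 above attributes its robustness to substantial unlabeled samples to a logarithmic loss function. The abstract does not give the exact formula, so the log-type penalty below is an illustrative assumption; it shows why logarithmic growth bounds an outlier's influence:

```python
import numpy as np

def log_loss_robust(r, sigma=1.0):
    # Logarithmic penalty: grows like 2*log(|r|) for large residuals,
    # so a gross outlier has bounded influence on the objective.
    # (Illustrative form; the paper's exact loss may differ.)
    return np.log1p((r / sigma) ** 2)

# A residual 100x larger adds only a roughly constant amount of loss,
# unlike the squared loss, which would add 10,000x as much.
small, large = log_loss_robust(1.0), log_loss_robust(100.0)
```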
8. Capped [formula omitted]-norm metric based robust least squares twin support vector machine for pattern classification.
- Author
- Yuan, Chao and Yang, Liming
- Subjects
- SUPPORT vector machines, ALGORITHMS, CLASSIFICATION algorithms, LINEAR equations
- Abstract
Least squares twin support vector machine (LSTSVM) is an effective and efficient learning algorithm for pattern classification. However, the distance in LSTSVM is measured by the squared L2-norm metric, which may magnify the influence of outliers. In this paper, a novel robust least squares twin support vector machine framework, termed CL2,p-LSTSVM, is proposed for binary classification; it utilizes a capped L2,p-norm distance metric to reduce the influence of noise and outliers. The goal of CL2,p-LSTSVM is to minimize the capped L2,p-norm intra-class distance dispersion and eliminate the influence of outliers during the training process, where the value of the metric is controlled by the capped parameter, ensuring better robustness. The proposed metric includes and extends the traditional metrics through appropriate choices of p and the capped parameter. This strategy not only retains the advantages of LSTSVM but also improves robustness when solving binary classification problems with outliers. However, the nonconvexity of the metric makes it difficult to optimize. We design an effective iterative algorithm to solve the CL2,p-LSTSVM, in which two systems of linear equations are solved per iteration. We also present insightful analyses of the computational complexity and convergence of the algorithm. Moreover, we extend the CL2,p-LSTSVM to nonlinear and semi-supervised classification. Experiments are conducted on artificial datasets, UCI benchmark datasets, and image datasets to evaluate our method. Under different noise settings and evaluation criteria, the results show that the CL2,p-LSTSVM is more robust than state-of-the-art approaches in most cases, demonstrating the feasibility and effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
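The capped L2,p-norm metric of result 8 can be sketched in a few lines: take the L2-norm of a residual vector, raise it to the power p, and clip at a cap so any single outlier contributes at most a bounded amount. Parameter names below are illustrative, not the paper's notation:

```python
import numpy as np

def capped_l2p(u, p=1.0, cap=2.0):
    # ||u||_2**p, clipped at `cap`: below the cap it behaves like an
    # Lp-type distance; above it, outlier contributions are truncated
    # to a constant, which is the source of the robustness.
    return min(np.linalg.norm(u) ** p, cap)

clean   = capped_l2p(np.array([0.3, 0.4]))    # inside the cap
outlier = capped_l2p(np.array([30.0, 40.0]))  # clipped at the cap
```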
9. Capped [formula omitted]-norm distance metric-based fast robust twin bounded support vector machine.
- Author
- Ma, Jun, Yang, Liming, and Sun, Qun
- Subjects
- SUPPORT vector machines, STATISTICAL learning, ALGORITHMS, LEAST squares
- Abstract
• A robust twin bounded SVM (RTBSVM) is proposed.
• The least squares version (FRTBSVM) of RTBSVM is proposed.
• Two effective algorithms are derived.
• The convergence of the algorithms is proved and the computational complexity is analyzed.
• Experiments show that RTBSVM and FRTBSVM are feasible and effective.
In this paper, to improve the performance of the capped L1-norm twin support vector machine (CTSVM), we first propose a new robust twin bounded support vector machine (RTBSVM) by introducing a regularization term. The significant advantage of our RTBSVM over CTSVM is that the structural risk minimization principle is implemented. This embodies the essence of statistical learning theory, so the modification can improve classification performance. Furthermore, to accelerate the computation of RTBSVM while inheriting its robustness, we construct a least squares version of RTBSVM (called FRTBSVM). This formulation leads to a simple and fast algorithm for binary classification that solves just two systems of linear equations. Finally, we derive two simple and effective iterative optimization algorithms for solving RTBSVM and FRTBSVM, respectively, and rigorously analyze and prove their computational complexity, local optimality, and convergence. Experimental results on one synthetic dataset and nine UCI datasets demonstrate that our methods are competitive with other methods. Additionally, the FRTBSVM is directly applied to recognize the purity of hybrid maize seeds using near-infrared spectral data; experiments show that our method achieves better performance than traditional methods in most spectral regions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
10. Robust sparse principal component analysis by DC programming algorithm.
- Author
- Li, Jieya and Yang, Liming
- Subjects
- PRINCIPAL components analysis, ALGORITHMS, CONVEX functions, MATHEMATICAL regularization, L-functions
- Abstract
Classical principal component analysis (PCA) is not sparse, and since it is based on the L2-norm it is also prone to being adversely affected by outliers and noise. To address this problem, a sparse robust PCA framework is proposed that combines minimization of a zero-norm regularization term with maximization of the Lp-norm (0 < p ≤ 2) PCA objective. Furthermore, we develop a continuous optimization method, the DC (difference of convex functions) programming algorithm (DCA), to solve the proposed problem. The resulting algorithm (called DC-LpZSPCA) converges linearly. In addition, by choosing different p values, the model remains robust and is applicable to different data types. Numerical experiments are carried out on artificial data sets and the Yale face data set. Results show that the proposed method maintains good sparsity and resistance to outliers. [ABSTRACT FROM AUTHOR]
- Published
- 2020
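Result 10 solves its non-convex problem with a DC programming algorithm. The generic DCA loop is short: split the objective as g(x) - h(x) with g and h convex, linearize h at the current point, and solve the remaining convex subproblem. The toy objective below (x² - |x|, minimized at ±1/2) is my own example, not the paper's PCA model:

```python
import numpy as np

def dca(grad_h, argmin_g_linear, x0, iters=50):
    # Generic DCA loop for minimizing g(x) - h(x), g and h convex:
    # linearize h at x_k, then solve the convex subproblem exactly.
    x = x0
    for _ in range(iters):
        y = grad_h(x)              # (sub)gradient of h at x_k
        x = argmin_g_linear(y)     # argmin_x g(x) - y*x
    return x

# Toy DC program: f(x) = x**2 - |x|, with g(x) = x**2 and h(x) = |x|.
# The minimizers are x = +/- 0.5.
x_star = dca(grad_h=np.sign,
             argmin_g_linear=lambda y: y / 2.0,  # argmin x**2 - y*x
             x0=0.3)
print(x_star)  # -> 0.5
```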
11. Training robust support vector regression machines for more general noise.
- Author
- Dong, Hongwei and Yang, Liming
- Subjects
- QUANTILE regression, SMOOTHNESS of functions, LOSS functions (Statistics), NOISE, SYMMETRIC functions
- Abstract
Symmetric loss functions are widely used in regression algorithms that focus on estimating means. Huber loss, a symmetric smooth loss function, has been shown to admit efficient optimization with a degree of robustness. However, mean estimators may perform poorly when the noise distribution is asymmetric (even heavy-tailed due to outliers), and estimators beyond the mean become necessary. Under such circumstances, quantile regression is a natural choice: it estimates quantiles instead of means through asymmetric loss functions. In this paper, an asymmetric Huber loss function is proposed that penalizes overestimation and underestimation differently so as to handle more general noise. Moreover, a smooth truncated version of the proposed loss is introduced for stronger robustness to outliers. A concave-convex procedure is developed in the primal space, with a proof of convergence, to handle the non-convexity of the truncated objective. Experiments are carried out on both artificial and benchmark datasets, and the robustness of the proposed methods is verified. [ABSTRACT FROM AUTHOR]
- Published
- 2020
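The asymmetric Huber loss in result 11 can be sketched by weighting the two sides of the ordinary Huber loss differently, in the spirit of quantile regression. The weighting scheme below is an assumption for illustration; the paper's exact definition may differ:

```python
import numpy as np

def asym_huber(r, tau=0.7, delta=1.0):
    # Quadratic inside [-delta, delta], linear outside (ordinary Huber),
    # with weight tau on positive residuals (overestimation) and
    # 1 - tau on negative ones, so the two sides are penalized unequally.
    w = np.where(r > 0, tau, 1.0 - tau)
    huber = np.where(np.abs(r) <= delta,
                     0.5 * r ** 2,
                     delta * (np.abs(r) - 0.5 * delta))
    return w * huber
```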
12. Correntropy-based metric for robust twin support vector machine.
- Author
- Yuan, Chao, Yang, Liming, and Sun, Ping
- Subjects
- SUPPORT vector machines, ALGORITHMS
- Abstract
• Propose a robust distance metric based on correntropy.
• A robust twin SVM is built with the proposed metric.
• The metric satisfies the conditions of a distance metric.
• Demonstrate important properties of the metric.
• Experiments show the robustness of the proposed method.
This work proposes a robust distance metric induced by correntropy based on the Laplacian kernel. The proposed metric satisfies the properties a distance metric must have. Moreover, we demonstrate important properties of the proposed metric such as robustness, boundedness, non-convexity, and approximation behavior. The proposed metric includes and extends traditional metrics such as the L0-norm and L1-norm metrics. We then apply the proposed metric to twin support vector machine classification (TSVM), building a new robust TSVM algorithm (called RCTSVM) that reduces the influence of noise and outliers. The proposed RCTSVM inherits the advantages of TSVM and improves robustness. However, the non-convexity of the model makes it difficult to optimize. A continuous optimization method is developed to solve the RCTSVM: the problem is converted into a difference of convex (DC) program, and the corresponding DC algorithm (DCA) converges linearly. Compared with traditional algorithms, numerical experiments under different noise settings and evaluation criteria show that the proposed RCTSVM is robust to noise and outliers in most cases, demonstrating the feasibility and effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2021
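The correntropy-induced metric of result 12 has a compact standard construction: with a Laplacian kernel k(t) = exp(-|t|/σ), the induced distance is d(x, y) = sqrt(k(0) - k(x - y)). It is bounded by 1, so faraway outliers saturate instead of dominating. This is a sketch of that standard form; details may differ from the paper:

```python
import numpy as np

def cim_laplacian(x, y, sigma=1.0):
    # d(x, y) = sqrt(k(0) - k(x - y)) with the Laplacian kernel
    # k(t) = exp(-||t||_1 / sigma). Symmetric, zero iff x == y,
    # and bounded above by 1.
    diff = np.sum(np.abs(np.asarray(x) - np.asarray(y)))
    return np.sqrt(1.0 - np.exp(-diff / sigma))
```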
13. A robust loss function for classification with imbalanced datasets.
- Author
- Wang, Yidan and Yang, Liming
- Subjects
- SUPPORT vector machines, ROBUST control, COMPUTER algorithms, INFORMATION science, NOISE control
- Abstract
• A new robust loss function is designed for imbalanced data sets.
• The proposed model combines this loss with SVM.
• The robustness of the model is analyzed in theory.
• The Bayes optimal solution is derived.
• An alternating iterative algorithm is designed to reduce algorithmic complexity.
Based on minimizing misclassification cost, a new robust loss function is designed in this paper to deal with imbalanced classification under noisy environments. It is nonconvex but maintains Fisher consistency. Applying the proposed loss function to the support vector machine (SVM), a robust SVM framework is presented that yields a Bayes optimal classifier. However, the nonconvexity makes the model difficult to optimize, so we develop an alternating iterative algorithm to solve it. Moreover, we analyze the robustness of the proposed model theoretically from a re-weighted SVM viewpoint, and the obtained optimal solution is consistent with the Bayes optimal decision rule. Furthermore, numerical experiments are carried out on datasets drawn from the UCI Machine Learning Repository and on a practical application. Under two types of noise environments, one with label noise and one with feature noise, the results show that the proposed method achieves better generalization than other SVM methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
14. Correntropy-based robust extreme learning machine for classification.
- Author
- Ren, Zhuo and Yang, Liming
- Subjects
- MACHINE learning, ROBUST control, ALGORITHMS, ARTIFICIAL neural networks, SUPPORT vector machines
- Abstract
Correntropy is a local similarity measure between two arbitrary variables, and it has been applied in a variety of learning algorithms to improve insensitivity to noise. In this paper, based on correntropy, a non-convex and bounded loss function is obtained that contains second- and higher-order moments of the classification margin. The novel loss function is robust to noise and close to the 0–1 loss function. We then introduce it into the extreme learning machine (ELM) and propose a correntropy-based robust ELM framework for classification, trained by half-quadratic optimization to cope with the non-convexity of the objective. To evaluate robustness, feature noise and label noise are simulated to provide noisy environments. Experimental results on benchmark datasets demonstrate that the proposed algorithm outperforms both the original and other robust algorithms. Moreover, its superiority is more evident in noisy environments, further confirming its robustness to noise. [ABSTRACT FROM AUTHOR]
- Published
- 2018
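The half-quadratic optimization mentioned in result 14 reduces a correntropy objective to a sequence of weighted least-squares solves: auxiliary weights exp(-r²/2σ²) shrink toward zero for large residuals, so outliers are effectively ignored. A minimal sketch on a plain linear model, assuming the standard Gaussian-kernel weights (the paper applies this idea to ELM output weights; the small ridge term below is only for numerical stability):

```python
import numpy as np

def hq_irls(X, y, sigma=1.0, iters=20):
    # Half-quadratic / iteratively-reweighted least squares:
    # (1) compute residuals, (2) Gaussian-kernel weights that
    # downweight large residuals, (3) weighted least-squares solve.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w
        a = np.exp(-r ** 2 / (2 * sigma ** 2))   # HQ auxiliary weights
        W = np.diag(a)
        w = np.linalg.solve(X.T @ W @ X + 1e-8 * np.eye(X.shape[1]),
                            X.T @ W @ y)
    return w

# Line fit with one gross outlier: the outlier gets weight ~0
# and the true coefficients [1.0, 2.0] are recovered.
X = np.column_stack([np.ones(20), np.linspace(0, 1, 20)])
y = X @ np.array([1.0, 2.0])
y[5] += 50.0                                     # inject an outlier
w = hq_irls(X, y)
```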
15. Support vector machine with truncated pinball loss and its application in pattern recognition.
- Author
- Yang, Liming and Dong, Hongwei
- Subjects
- SUPPORT vector machines, PATTERN recognition systems, LOSS functions (Statistics), MATHEMATICAL optimization, NONCONVEX programming
- Abstract
The support vector machine (SVM) with pinball loss (PINSVM) has recently been proposed and has shown advantages in pattern recognition. In this paper, we present a robust bounded loss function (called the Lt-loss) that truncates the pinball loss function. A novel robust SVM formulation with the Lt-loss (called TPINSVM) is then proposed to enhance robustness to noise. Moreover, we demonstrate that the proposed TPINSVM satisfies the Bayes rule and has a certain sparseness. However, the non-convexity of the TPINSVM makes it difficult to optimize. We develop a continuous optimization method, a DC (difference of convex functions) programming method, to solve the TPINSVM; the resulting DC optimization algorithm converges in a finite number of iterations. Furthermore, the proposed TPINSVM is directly applied to recognize the purity of hybrid maize seeds using near-infrared spectral data. Experiments show that the proposed method achieves better performance than traditional methods in most spectral regions. Meanwhile, we evaluate the TPINSVM on benchmark datasets in different situations: in the noiseless setting, it either improves generalization or shows no significant difference compared to traditional approaches, while in noisy situations it improves generalization in most cases. [ABSTRACT FROM AUTHOR]
- Published
- 2018
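Result 15's Lt-loss is described as a truncation of the pinball loss. A sketch of that idea, where the clipping threshold `t` and the exact form are illustrative assumptions:

```python
import numpy as np

def pinball(u, tau=0.5):
    # Pinball (quantile) loss: tau*u for u >= 0, -(1-tau)*u otherwise.
    return np.where(u >= 0, tau * u, -(1.0 - tau) * u)

def truncated_pinball(u, tau=0.5, t=2.0):
    # Pinball loss clipped at a ceiling t: bounds the influence of
    # outliers, at the price of losing convexity.
    return np.minimum(pinball(u, tau), t)
```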
16. A robust classification framework with mixture correntropy.
- Author
- Wang, Yidan, Yang, Liming, and Ren, Qiangqiang
- Subjects
- ROBUST control, ENTROPY (Information theory), ITERATIVE methods (Mathematics), SUPPORT vector machines, COMPUTER algorithms
- Abstract
• A heterogeneous mixture correntropy criterion, together with the induced loss, is defined.
• Good properties and robustness are analyzed in theory.
• A robust SVM framework is constructed with the induced loss.
• An iterative algorithm with a fast convergence rate is designed.
• Comparison experiments with different levels of noise show its advantages.
In this paper, we define a mixture correntropy criterion in which two different kernel functions are combined, and induce a more general nonconvex robust loss function from this heterogeneous mixture correntropy. The proposed mixture correntropy is also a local similarity measure that not only overcomes the limitations of correntropy under a single kernel, but also handles heterogeneous data more flexibly and stably. The induced loss combines the strengths of state-of-the-art robust loss functions and is more effective. Moreover, we verify the Fisher consistency of the induced loss and analyze its robustness from the viewpoint of robust estimation. With this induced loss, we propose a robust support vector machine (SVM) framework and adopt a half-quadratic optimization algorithm to handle the nonconvexity and improve the convergence rate. Furthermore, we generate heterogeneously structured artificial datasets and impose different levels of label noise on benchmark datasets. Experiments on these two types of datasets show the superior flexibility and effectiveness of the proposed framework. [ABSTRACT FROM AUTHOR]
- Published
- 2019
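The mixture correntropy of result 16 combines two different kernels; the abstract does not say which, so the Gaussian + Laplacian pairing below is an assumption. The induced loss, 1 minus the kernel mixture, is bounded in [0, 1), which is where the robustness comes from:

```python
import numpy as np

def mixture_correntropy_loss(e, alpha=0.5, s1=1.0, s2=1.0):
    # Loss induced by a two-kernel mixture correntropy:
    # 1 - [alpha * Gaussian(e) + (1 - alpha) * Laplacian(e)].
    # Zero at e = 0, nonconvex, and bounded, so any one error
    # contributes at most ~1 to the objective.
    g = np.exp(-e ** 2 / (2.0 * s1 ** 2))   # Gaussian kernel
    l = np.exp(-np.abs(e) / s2)             # Laplacian kernel
    return 1.0 - (alpha * g + (1.0 - alpha) * l)
```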
17. Local Geometric Consensus: A General Purpose Point Pattern-Based Tracking Algorithm.
- Author
- Yang, Liming, Normand, Jean-Marie, and Moreau, Guillaume
- Subjects
- TRACKING algorithms, THREE-dimensional imaging, ROBUST control, TEXTURE analysis (Image processing), MOBILE apps
- Abstract
We present a method that can quickly and robustly match 2D and 3D point patterns based solely on their spatial distribution, while also handling other cues if available. The method can be easily adapted to many transformations, such as similarity transformations in 2D/3D and affine and perspective transformations in 2D. It is based on local geometric consensus among several local matchings together with a refinement scheme. We provide two implementations of this general scheme: one for the 2D homography case (usable for marker or image tracking) and one for the 3D similarity case. We demonstrate the robustness and speed of our proposal on both synthetic and real images, and show that our method can be used to augment any planar object (textured or textureless) as well as 3D objects. [ABSTRACT FROM AUTHOR]
- Published
- 2015
18. Robust supervised and semi-supervised twin extreme learning machines for pattern classification.
- Author
- Ma, Jun and Yang, Liming
- Subjects
- MACHINE learning, CLASSIFICATION, SUPERVISED learning, COST functions, COMPUTATIONAL complexity
- Abstract
• The robust adaptive capped Lθε-loss function is presented.
• The Lθε-loss has non-negativity, symmetry, non-convexity, and boundedness.
• Two robust learning frameworks, RTELM and Lap-RTELM, are proposed for supervised and semi-supervised classification, respectively.
• Two iterative algorithms with convergence and complexity analysis are designed.
• Experiments show that RTELM and Lap-RTELM are competitive with existing methods.
In this paper, we first propose a novel robust loss function called the adaptive capped Lθε-loss. The Lθε-loss has several interesting properties, such as robustness, non-convexity, and boundedness, and during learning different loss functions can be chosen for different problems through the adaptive parameter θ. Then, a new robust twin extreme learning machine (RTELM) framework is presented by applying the Lθε-loss and a capped L1-norm distance metric. Compared with the twin extreme learning machine (TELM), RTELM overcomes the disadvantages of the L2-norm distance metric and the hinge loss, especially for problems with outliers, while inheriting the advantages of TELM. Further, we present a Laplacian RTELM (Lap-RTELM for short) by introducing manifold regularization terms into RTELM. Intuitively, Lap-RTELM can effectively exploit geometric information embedded in unlabeled samples, merged as manifold regularization terms, to learn a more reasonable classifier for semi-supervised classification (SSC) problems. Finally, two effective iterative algorithms are designed to solve the non-convex optimization problems of RTELM and Lap-RTELM, with theoretical guarantees of convergence, local optimality, and computational complexity. Experiments on multiple datasets show that the proposed RTELM and Lap-RTELM are competitive with existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
19. Robust twin extreme learning machines with correntropy-based metric.
- Author
- Yuan, Chao and Yang, Liming
- Subjects
- MACHINE learning, KERNEL operating systems, COST functions, ALGORITHMS
- Abstract
In this paper, we propose a novel distance metric based on correntropy and kernel learning. Properties of the proposed metric are demonstrated, such as nonnegativity, non-convexity, boundedness, and approximation behavior. The proposed metric includes and extends classical metrics such as the L0-norm and L1-norm metrics. Moreover, we develop a fraction loss function satisfying the Bayes rule and demonstrate its robustness from the perspective of M-estimation. With the proposed robust metric and loss function, a new robust twin extreme learning machine framework (called LCFTELM) is presented to reduce the negative effect of noise and outliers. The proposed LCFTELM retains the advantages of twin extreme learning machines (TELM) and promotes robustness. However, the non-convexity of the model makes it difficult to optimize. An effective iterative algorithm for LCFTELM is therefore designed, and we present a theoretical analysis of its convergence. We then evaluate the proposed algorithm on real-world datasets and an artificial dataset under different noise settings. Experimental results show that the proposed method achieves better generalization than state-of-the-art methods in most cases, demonstrating the feasibility and robustness of LCFTELM.
• Based on correntropy and the Laplacian kernel, a robust distance metric is proposed. A new non-convex fraction loss function is developed. Applied to TELM, a robust classification framework is proposed.
• The proposed metric includes and extends traditional metrics, and the fraction loss function is a powerful adaptive cost in the presence of noise.
• An efficient optimization method is proposed to solve the model.
• Numerical experiments show that the proposed LCFTELM is effective and more robust to outliers. [ABSTRACT FROM AUTHOR]
- Published
- 2021
20. Adaptive robust learning framework for twin support vector machine classification.
- Author
- Ma, Jun, Yang, Liming, and Sun, Qun
- Subjects
- SUPPORT vector machines, CLASSIFICATION, ALGORITHMS, COST functions
- Abstract
In general, introducing robust distance metrics and loss functions into the learning process can improve the robustness of algorithms. In this work, we first propose a new robust loss function called the adaptive capped Lθε-loss; for different problems, different loss functions can be chosen through the adaptive parameter θ during learning. Secondly, we propose a new robust distance metric induced by correntropy (CIM), based on the Laplacian kernel. The CIM contains first- and higher-order moments of the samples. Further, we demonstrate some important and interesting properties of the Lθε-loss and the CIM, such as robustness, boundedness, and nonconvexity. Finally, we apply the Lθε-loss and the CIM to the twin support vector machine (TWSVM) and develop an adaptive robust learning framework, namely the adaptive robust twin support vector machine (ARTSVM). The proposed ARTSVM not only inherits the advantages of TWSVM but also improves robustness in classification problems. A non-convex optimization method, the DC (difference of convex functions) programming algorithm (DCA), is used to solve the proposed ARTSVM, and the convergence of the algorithm is proved theoretically. Experiments on multiple datasets show that the proposed ARTSVM is competitive with existing methods.
• A correntropy-based generalized robust distance metric is proposed, called the correntropy-induced metric (CIM).
• A robust loss function is proposed, namely the adaptive capped Lθε-loss.
• Some important and interesting properties of the Lθε-loss and CIM are demonstrated.
• An adaptive robust twin support vector machine (ARTSVM) is proposed based on the CIM and the Lθε-loss.
• The DC programming technique is used to solve the ARTSVM.
• Numerical experiments under different noises show that the proposed ARTSVM is effective and more robust to outliers. [ABSTRACT FROM AUTHOR]
- Published
- 2021
21. Robust regression framework with asymmetrically analogous to correntropy-induced loss.
- Author
- Yang, Liming, Ding, Guangsheng, Yuan, Chao, and Zhang, Min
- Subjects
- COST functions, PROBLEM solving, DATABASES, GENERALIZATION, LEAST squares
- Abstract
This work proposes a robust loss function based on an expectile penalty (named the rescaled expectile loss, RE-loss), which includes and generalizes existing loss functions. Several important properties of the RE-loss are demonstrated, such as asymmetry, nonconvexity, smoothness, boundedness, and asymptotic approximation behavior. From the viewpoint of correntropy, we show that the proposed RE-loss can be viewed as a correntropy-induced loss via a reproducing piecewise kernel. Furthermore, a sparse version of the RE-loss (called the SRE-loss) is developed to improve sparsity by introducing an ε-insensitive zone. Two robust regression frameworks are then proposed with the new loss functions. However, the non-convexity of the proposed losses makes the problems difficult to optimize; we apply the concave-convex procedure (CCCP) and dual theory to solve them effectively, and the resulting algorithms converge linearly. To validate the proposed methods, we carry out numerical experiments on datasets of different scales with different levels of noise and outliers. On three types of data, an artificial database, benchmark databases, and a practical application database, experimental results demonstrate that the proposed methods achieve better generalization than traditional regression methods in most cases, especially when the noise and outlier distributions are imbalanced.
• Propose a loss analogous to the correntropy-induced loss (RE-loss).
• The RE-loss is an exponential expectile penalty.
• A sparse version of the RE-loss is built with an ε-insensitive zone.
• Demonstrate important properties of the RE-loss function.
• The RE-loss includes and extends existing loss functions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
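Result 21 describes the RE-loss as an exponential expectile penalty, but the abstract gives no formula. The sketch below therefore makes two labeled assumptions: the expectile penalty is the usual asymmetric squared loss, and the rescaling is the 1 - exp(-·) map familiar from rescaled hinge and correntropy-induced losses:

```python
import numpy as np

def expectile_penalty(u, tau=0.7):
    # Asymmetric squared loss: tau*u**2 for u >= 0, (1-tau)*u**2 below.
    return np.where(u >= 0, tau, 1.0 - tau) * u ** 2

def re_loss(u, tau=0.7, sigma=1.0):
    # Hypothetical "rescaled" version: the exponential map bounds the
    # loss by 1, which is what gives robustness to outliers.
    return 1.0 - np.exp(-expectile_penalty(u, tau) / sigma)
```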
22. Robust support vector machine with generalized quantile loss for classification and regression.
- Author
- Yang, Liming and Dong, Hongwei
- Subjects
- SUPPORT vector machines, KERNEL functions, COST functions, CLASSIFICATION, REGRESSION trees
- Abstract
A new robust loss function (called the Lq-loss) is proposed based on the concepts of quantile and correntropy, and can be seen as an improved version of the quantile loss function. The proposed Lq-loss has important properties such as asymmetry, non-convexity, and boundedness, which have received much attention recently. The Lq-loss includes and extends traditional loss functions such as the pinball loss, rescaled hinge loss, L1-norm loss, and zero-norm loss. Additionally, we demonstrate that the Lq-loss is a kernel-induced loss via a reproducing piecewise kernel function. Further, two robust SVM frameworks are presented to handle robust classification and regression problems by applying the Lq-loss to the support vector machine, and we demonstrate that the proposed classification framework satisfies the Bayes optimal decision rule. However, the non-convexity of the Lq-loss makes it difficult to optimize. A non-convex optimization method, the concave-convex procedure (CCCP), is used to solve the proposed models, and the convergence of the algorithms is proved theoretically. For classification and regression tasks, experiments are carried out on three databases: UCI benchmark datasets, artificial datasets, and a practical application dataset. Compared to classical and advanced methods, numerical simulations under different noise settings and evaluation criteria show that the proposed methods are robust to feature noise and outliers in both classification and regression applications.
• Propose a generalized quantile loss (Lq-loss) for robust learning.
• Demonstrate important properties: asymmetry, non-convexity, approximability, and boundedness.
• Two robust models are proposed with the Lq-loss to enhance robustness.
• The proposed classification framework satisfies the Bayes optimal decision rule.
• The concave-convex procedure (CCCP) is used to handle the nonconvexity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
23. Robust metric learning based on subspace learning with [formula omitted].
- Author
- Wang, Yidan, Yuan, Chao, and Yang, Liming
- Subjects
- ALGORITHMS, MACHINE learning, DISTANCE education
- Abstract
• Perform subspace learning and metric learning simultaneously (psub).
• Enhance robustness based on the Lp-norm (0 < p ≤ 2).
• Analyze the robustness and the advantage of psub theoretically.
• Design a modified gradient ascent algorithm for psub.
• Psub achieves better performance than other algorithms in most cases.
Distance metric learning has recently become an important technique in machine learning due to its high effectiveness in improving the performance of distance-related methods. To take advantage of both subspace learning and metric learning and overcome the limitations of the latter, in this work we learn a robust discriminative subspace and a distance metric simultaneously by maximizing the ratio of inter-class covariance to intra-class covariance under the lp-norm (0 < p ≤ 2), where the lp-norm is used to enhance robustness. The proposed model is a more general framework than the state-of-the-art algorithms. Moreover, a modified gradient ascent algorithm is designed to optimize the problem, and the convergence and complexity of the algorithm are analyzed. To verify the proposed method, we carry out numerical experiments on artificial data sets and benchmark data sets. Under different evaluation criteria, results show that the proposed method achieves better performance than the state-of-the-art algorithms in most cases. [ABSTRACT FROM AUTHOR]
- Published
- 2022
Discovery Service for Jio Institute Digital Library