
Convex formulation for multi-task L1-, L2-, and LS-SVMs

Authors :
Ruiz Pastor, Carlos
Alaiz Gudín, Carlos María
Dorronsoro Ibero, José Ramón
UAM. Departamento de Ingeniería Informática
Source :
Biblos-e Archivo. Repositorio Institucional de la UAM
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

Quite often a machine learning problem lends itself to being split into several well-defined subproblems, or tasks. The goal of Multi-Task Learning (MTL) is to leverage the joint learning of the problem from two different perspectives: on the one hand, a single, overall model, and on the other, task-specific models. In this way, the solution found by MTL may be better than those of either the common or the task-specific models. Starting with the work of Evgeniou et al., support vector machines (SVMs) have lent themselves naturally to this approach. This paper proposes a convex formulation of MTL for the L1-, L2- and LS-SVM models that results in dual problems quite similar to the single-task ones, but with multi-task kernels; in turn, this makes it possible to train the convex MTL models using standard solvers. As an alternative approach, the direct optimal combination of the already trained common and task-specific models can also be considered. In this paper, a procedure to compute the optimal combining parameter with respect to four different error functions is derived. As shown experimentally, the proposed convex MTL approach generally performs better than the alternative optimal convex combination, and both of them are better than the straight use of either the common or the task-specific models.

With partial support from Spain's grant TIN2016-76406-P. Work also supported by the UAM–ADIC Chair for Data Science and Machine Learning.
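The alternative approach mentioned in the abstract blends an already trained common model with a task-specific one through a single combining parameter. The following is a minimal, hypothetical sketch of that idea, assuming a blend of the form f(x) = λ·f_common(x) + (1−λ)·f_task(x); the paper derives the optimal λ in closed form for four error functions, whereas here a plain grid search over squared validation error stands in as an illustration. All function names are invented for this example.

```python
import numpy as np

def convex_combination(pred_common, pred_task, lam):
    """Blend common and task-specific predictions with weight lam in [0, 1]."""
    return lam * pred_common + (1.0 - lam) * pred_task

def best_lambda(pred_common, pred_task, y, grid=None):
    """Pick the lam in `grid` minimizing squared validation error.

    A brute-force stand-in for the paper's closed-form optimum."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    errors = [np.mean((convex_combination(pred_common, pred_task, lam) - y) ** 2)
              for lam in grid]
    return grid[int(np.argmin(errors))]

# Toy check: if the task-specific model is exact on the validation set,
# the search should put all the weight on it (lam = 0).
y = np.array([1.0, 2.0, 3.0])
pred_common = np.array([1.5, 2.5, 2.5])  # imperfect common model
pred_task = y.copy()                     # perfect task-specific model
lam = best_lambda(pred_common, pred_task, y)
```

In this toy case the search returns λ = 0, i.e. the combination degenerates to the task-specific model; with noisier task models, intermediate λ values are what make the combination outperform either endpoint.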

Details

ISSN :
0925-2312
Volume :
456
Database :
OpenAIRE
Journal :
Neurocomputing
Accession number :
edsair.doi.dedup.....65a0c85f70721ef5cb00294511fce575