
Escaping the curse of dimensionality in similarity learning: Efficient Frank-Wolfe algorithm and generalization bounds

Authors :
Kuan Liu
Aurélien Bellet
Google Inc.
Machine Learning in Information Networks (MAGNET)
Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)
Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL), UMR 9189
Centrale Lille, Université de Lille, Centre National de la Recherche Scientifique (CNRS)
This work was partially supported by a grant from CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020. It was also supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense U.S. Army Research Laboratory (DoD/ARL) contract number W911NF-12-C-0012, NSF grant IIS-1065243, an Alfred P. Sloan Research Fellowship, DARPA award D11AP00278, and an ARO YIP Award (W911NF-12-1-0241). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.
Source :
Neurocomputing, Elsevier, 2019, 333, pp. 185-199. ⟨10.1016/j.neucom.2018.12.060⟩
Publication Year :
2019
Publisher :
Elsevier BV, 2019.

Abstract

Similarity and metric learning provides a principled approach to construct a task-specific similarity from weakly supervised data. However, these methods are subject to the curse of dimensionality: as the number of features grows large, poor generalization is to be expected and training becomes intractable due to high computational and memory costs. In this paper, we propose a similarity learning method that can efficiently deal with high-dimensional sparse data. This is achieved through a parameterization of similarity functions by convex combinations of sparse rank-one matrices, together with the use of a greedy approximate Frank-Wolfe algorithm which provides an efficient way to control the number of active features. We show that the convergence rate of the algorithm, as well as its time and memory complexity, are independent of the data dimension. We further provide a theoretical justification of our modeling choices through an analysis of the generalization error, which depends logarithmically on the sparsity of the solution rather than on the number of features. Our experiments on datasets with up to one million features demonstrate the ability of our approach to generalize well despite the high dimensionality as well as its superiority compared to several competing methods.

Comment: Long version of arXiv:1411.2374 (AISTATS 2015), to appear in Neurocomputing. Matlab code: https://github.com/bellet/HDSL
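To make the abstract's main computational idea concrete, the sketch below shows a minimal Python version of a Frank-Wolfe loop over convex combinations of sparse rank-one atoms of the form lam * u u^T with u = e_i ± e_j, trained with a simple triplet hinge loss. This is an illustration only, not the authors' implementation (their Matlab code is linked above): the function names (triplet_hinge_grad, frank_wolfe_rank_one), the loss, and the exhaustive atom scan are simplifying assumptions, and the paper's greedy approximate oracle exists precisely to avoid this O(d^2) scan in high dimension.

```python
import numpy as np

def triplet_hinge_grad(M, X, triplets):
    """Gradient (in M) of the average hinge loss over triplets (i, p, n):
    we want S_M(x_i, x_p) >= S_M(x_i, x_n) + 1, with S_M(a, b) = a^T M b."""
    d = X.shape[1]
    G = np.zeros((d, d))
    for i, p, n in triplets:
        if 1.0 + X[i] @ M @ X[n] - X[i] @ M @ X[p] > 0:  # margin violated
            G += np.outer(X[i], X[n] - X[p])
    G /= len(triplets)
    return (G + G.T) / 2.0  # symmetrize, since M stays symmetric

def frank_wolfe_rank_one(X, triplets, lam=10.0, n_iters=50):
    """Learn M as a convex combination of atoms lam * u u^T, u = e_i + s*e_j."""
    d = X.shape[1]
    M = np.zeros((d, d))
    for k in range(n_iters):
        G = triplet_hinge_grad(M, X, triplets)
        # Linear minimization oracle: the atom minimizing <G, lam * u u^T>
        # = lam * (G_ii + 2*s*G_ij + G_jj); exhaustive scan for this sketch.
        best, best_val = None, np.inf
        for i in range(d):
            for j in range(i + 1, d):
                for s in (1.0, -1.0):
                    val = lam * (G[i, i] + 2 * s * G[i, j] + G[j, j])
                    if val < best_val:
                        best_val, best = val, (i, j, s)
        i, j, s = best
        u = np.zeros(d)
        u[i], u[j] = 1.0, s
        B = lam * np.outer(u, u)         # sparse rank-one atom: 4 nonzeros
        gamma = 2.0 / (k + 2.0)          # standard Frank-Wolfe step size
        M = (1 - gamma) * M + gamma * B  # convex update keeps M in the hull
    return M

# Toy usage (illustrative only): random data and random triplets.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 8))
triplets = rng.integers(0, 30, size=(50, 3)).tolist()
M = frank_wolfe_rank_one(X, triplets)
```

Note how each iteration adds a single atom with only four nonzero entries, so after k iterations M has at most O(k) active features regardless of d; this is the intuition behind the dimension-independent complexity and the sparsity-dependent generalization bound described in the abstract.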

Details

ISSN :
0925-2312
Volume :
333
Database :
OpenAIRE
Journal :
Neurocomputing
Accession number :
edsair.doi.dedup.....a759532b9b59571c0f74428f1c5fcf5a
Full Text :
https://doi.org/10.1016/j.neucom.2018.12.060