
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability.

Authors:
Rangamani, Akshay
Rosasco, Lorenzo
Poggio, Tomaso
Source:
Analysis & Applications. Jan 2023, Vol. 21, Issue 1, p193-215. 23p.
Publication Year:
2023

Abstract

In this paper, we study kernel ridge-less regression, including the case of interpolating solutions. We prove that maximizing the leave-one-out (CV_loo) stability minimizes the expected error. Further, we also prove that the minimum norm solution, to which gradient algorithms are known to converge, is the most stable solution. More precisely, we show that the minimum norm interpolating solution minimizes a bound on CV_loo stability, which in turn is controlled by the smallest singular value, hence the condition number, of the empirical kernel matrix. These quantities can be characterized in the asymptotic regime where both the dimension (d) and cardinality (n) of the data go to infinity (with n/d → γ as d, n → ∞). Our results suggest that the property of CV_loo stability of the learning algorithm with respect to perturbations of the training set may provide a more general framework than the classical theory of Empirical Risk Minimization (ERM). While ERM was developed for the classical regime, in which the architecture of the learning network is fixed and n → ∞, the modern regime focuses on interpolating regressors and overparameterized models, where both d and n go to infinity. Since the stability framework is known to be equivalent to the classical theory in the classical regime, our results suggest that it may be interesting to extend it beyond kernel regression to other overparameterized algorithms such as deep networks. [ABSTRACT FROM AUTHOR]
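
The following NumPy sketch (not from the paper; the Gaussian kernel, bandwidth, and synthetic data are illustrative assumptions) computes the minimum norm interpolating kernel regressor via the pseudoinverse of the empirical kernel matrix, and reports the smallest singular value and condition number that, per the abstract, control the CV_loo stability bound. It also refits with one point held out as a crude empirical proxy for leave-one-out stability.

    # Minimal sketch, assuming a Gaussian kernel and synthetic Gaussian data,
    # of the quantities the abstract refers to: the minimum norm interpolating
    # solution and the spectrum of the empirical kernel matrix K.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 100                      # overparameterized-style regime: d > n
    X = rng.standard_normal((n, d)) / np.sqrt(d)
    y = rng.standard_normal(n)

    def gaussian_kernel(A, B, gamma=1.0):
        # K[i, j] = exp(-gamma * ||A_i - B_j||^2)
        sq = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-gamma * sq)

    K = gaussian_kernel(X, X)

    # Minimum norm interpolating solution: coefficients c = K^+ y
    # (the ridge-less limit of kernel ridge regression).
    c = np.linalg.pinv(K) @ y

    # Spectral quantities that control the CV_loo stability bound.
    svals = np.linalg.svd(K, compute_uv=False)
    print("smallest singular value:", svals[-1])
    print("condition number:", svals[0] / svals[-1])

    # Crude leave-one-out perturbation: refit without point 0 and compare
    # predictions at x_0, mimicking the CV_loo notion of stability.
    mask = np.arange(n) != 0
    c_loo = np.linalg.pinv(K[np.ix_(mask, mask)]) @ y[mask]
    pred_full = K[0] @ c
    pred_loo = K[0, mask] @ c_loo
    print("LOO prediction change at point 0:", abs(pred_full - pred_loo))

As the abstract notes, gradient algorithms are known to converge to this minimum norm (pseudoinverse) solution, and a small smallest singular value of K, i.e. a large condition number, loosens the bound on how much the solution can change when one training point is removed.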

Details

Language:
English
ISSN:
0219-5305
Volume:
21
Issue:
1
Database:
Academic Search Index
Journal:
Analysis & Applications
Publication Type:
Academic Journal
Accession Number:
161468791
Full Text:
https://doi.org/10.1142/S0219530522400115