On regularization algorithms in learning theory

Authors :
Bauer, Frank
Pereverzev, Sergei
Rosasco, Lorenzo
Source :
Journal of Complexity, Feb. 2007, Vol. 23, Issue 1, pp. 52-72. 21 pp.
Publication Year :
2007

Abstract

In this paper we discuss a relation between learning theory and the regularization of linear ill-posed inverse problems. It is well known that Tikhonov regularization can be profitably used in the context of supervised learning, where it usually goes under the name of the regularized least-squares algorithm. Moreover, the gradient descent algorithm, an analog of the Landweber regularization scheme, was studied recently. In this paper we show that a notion of regularization defined according to what is usually done for ill-posed inverse problems allows us to derive learning algorithms which are consistent and provide fast convergence rates. It turns out that for priors expressed in terms of variable Hilbert scales in reproducing kernel Hilbert spaces, our results for Tikhonov regularization match those in Smale and Zhou [Learning theory estimates via integral operators and their approximations, submitted for publication, retrievable at http://www.tti-c.org/smale.html , 2005] and improve the results for Landweber iterations obtained in Yao et al. [On early stopping in gradient descent learning, Constructive Approximation (2005), submitted for publication]. The remarkable fact is that our analysis shows that the same properties are shared by a large class of learning algorithms, essentially all the linear regularization schemes. The concept of operator monotone functions turns out to be an important tool for the analysis. [Copyright Elsevier]
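
To make the connection concrete, the sketch below (not the authors' code) shows the two schemes named in the abstract in their kernel form: regularized least squares, i.e. Tikhonov regularization of the kernel matrix, and Landweber iteration, i.e. gradient descent on the empirical risk with the number of iterations acting as the regularization parameter. The Gaussian kernel, its width, the regularization parameter, the step size, and the iteration count are illustrative assumptions, not values from the paper.

# Minimal illustrative sketch of Tikhonov (regularized least squares)
# and Landweber (gradient descent / early stopping) regularization
# with a kernel; all parameter values are assumptions for the example.
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    # K[i, j] = exp(-|x_i - y_j|^2 / (2 sigma^2)) for 1-D inputs
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def tikhonov_rls(K, y, lam):
    # Regularized least squares: solve (K + n*lam*I) c = y
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def landweber(K, y, n_iter, step=None):
    # Gradient descent on the empirical squared loss in the RKHS;
    # stopping after n_iter iterations plays the role of regularization.
    n = K.shape[0]
    if step is None:
        step = 1.0 / np.linalg.norm(K / n, 2)  # keeps the iteration stable
    c = np.zeros(n)
    for _ in range(n_iter):
        c = c + step * (y - K @ c) / n
    return c

# Toy usage on synthetic 1-D data (illustrative only)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 50))
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(50)
K = gaussian_kernel(x, x)
c_tik = tikhonov_rls(K, y, lam=1e-3)
c_land = landweber(K, y, n_iter=200)

Both estimators return coefficient vectors for a function of the form f(x) = sum_i c_i k(x_i, x); in the paper's framework they are two instances of the same family of spectral regularization schemes, differing only in the filter applied to the kernel matrix.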

Details

Language :
English
ISSN :
0885-064X
Volume :
23
Issue :
1
Database :
Academic Search Index
Journal :
Journal of Complexity
Publication Type :
Academic Journal
Accession Number :
23956493
Full Text :
https://doi.org/10.1016/j.jco.2006.07.001