The Kernel Least-Mean-Square Algorithm.

Authors :
Liu, Weifeng
Pokharel, Puskal P.
Principe, Jose C.
Source :
IEEE Transactions on Signal Processing; Feb2008, Vol. 56 Issue 2, p543-554, 12p, 1 Black and White Photograph, 1 Chart, 1 Graph
Publication Year :
2008

Abstract

The combination of the famed kernel trick and the least-mean-square (LMS) algorithm provides an interesting sample-by-sample update for an adaptive filter in reproducing kernel Hilbert spaces (RKHS), which is named in this paper the KLMS. Unlike the accepted view in kernel methods, this paper shows that in the finite training data case, the KLMS algorithm is well posed in RKHS without the addition of an extra regularization term to penalize solution norms as was suggested by Kivinen [Kivinen, Smola and Williamson, "Online Learning With Kernels," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2165-2176, Aug. 2004] and Smale [Smale and Yao, "Online Learning Algorithms," Foundations of Computational Mathematics, vol. 6, no. 2, pp. 145-176, 2006]. This result is the main contribution of the paper and enhances the present understanding of the LMS algorithm with a machine learning perspective. The effect of the KLMS step size is also studied from the viewpoint of regularization. Two experiments are presented to support our conclusion that with finite data the KLMS algorithm can be readily used in high dimensional spaces and particularly in RKHS to derive nonlinear, stable algorithms with comparable performance to batch, regularized solutions. [ABSTRACT FROM AUTHOR]
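The abstract describes the KLMS recursion only in words: each new sample becomes a kernel center whose coefficient is the step size times the instantaneous prediction error, with no explicit regularization term. As a rough illustration of that sample-by-sample update, the sketch below implements a minimal KLMS filter in Python. The Gaussian kernel choice and the values of the kernel width and step size are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two input vectors.
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

def klms(inputs, desired, step_size=0.2, sigma=1.0):
    """Sample-by-sample KLMS: every input becomes a kernel center
    with coefficient step_size * error; no regularization term is added."""
    centers, coeffs, predictions = [], [], []
    for u, d in zip(inputs, desired):
        # Evaluate the current RKHS expansion at the new input.
        y = sum(a * gaussian_kernel(c, u, sigma) for a, c in zip(coeffs, centers))
        e = d - y                      # instantaneous error
        centers.append(u)              # store the new kernel center
        coeffs.append(step_size * e)   # stochastic-gradient coefficient
        predictions.append(y)
    return centers, coeffs, predictions
```

For example, feeding the filter a nonlinearly distorted signal (`desired[i] = f(inputs[i]) + noise`) and plotting `desired - predictions` shows the error decaying over samples, the online behavior the experiments in the paper compare against batch, regularized solutions.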

Details

Language :
English
ISSN :
1053-587X
Volume :
56
Issue :
2
Database :
Complementary Index
Journal :
IEEE Transactions on Signal Processing
Publication Type :
Academic Journal
Accession number :
29434029
Full Text :
https://doi.org/10.1109/TSP.2007.907881