
On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning.

Authors:
Drineas, Petros
Mahoney, Michael W.
Source:
Journal of Machine Learning Research. 12/1/2005, Vol. 6 Issue 12, p2153-2175. 23p.
Publication Year:
2005

Abstract

A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n × n Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form G̃_k = CW_k⁺Cᵀ, where C is a matrix consisting of a small number c of columns of G and W_k is the best rank-k approximation to W, the matrix formed by the intersection between those c columns of G and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let ‖·‖_2 and ‖·‖_F denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let G_k be the best rank-k approximation to G. We prove that by choosing O(k/ε⁴) columns, ‖G − G̃_k‖_ξ ≤ ‖G − G_k‖_ξ + ε Σ_{i=1}^n G_ii², both in expectation and with high probability, for both ξ = 2, F, and for all k: 0 ≤ k ≤ rank(W). This approximation can be computed using O(n) additional space and time, after making two passes over the data from external storage. The relationships between this algorithm, other related matrix decompositions, and the Nyström method from integral equation theory are discussed. [ABSTRACT FROM AUTHOR]
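As an illustration of the scheme the abstract describes, here is a minimal NumPy sketch: sample c columns of G with data-dependent probabilities p_i = G_ii²/Σ_j G_jj² (the distribution used in the paper's analysis), form C and W from those columns and their intersection with the corresponding rows, and return CW_k⁺Cᵀ. The function name, the 1/√(c·p_i) rescaling of sampled entries, the with-replacement sampling, and the small-eigenvalue cutoff are illustrative assumptions consistent with standard randomized column-sampling, not the authors' reference implementation.

```python
import numpy as np

def nystrom_rank_k(G, c, k, seed=None):
    """Sketch of a rank-k Nystrom approximation to a PSD Gram matrix G:
    sample c columns with probabilities p_i ~ G_ii^2, then return
    C @ pinv(W_k) @ C.T, where W_k is the best rank-k approximation
    to the c x c intersection matrix W."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]

    # Data-dependent, nonuniform sampling distribution over columns.
    p = np.diag(G) ** 2
    p = p / p.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)

    # Rescale sampled entries by 1/sqrt(c * p_i), the usual weighting
    # in randomized column sampling (an assumption in this sketch).
    scale = 1.0 / np.sqrt(c * p[idx])
    C = G[:, idx] * scale                             # n x c
    W = G[np.ix_(idx, idx)] * np.outer(scale, scale)  # c x c

    # Best rank-k approximation to the symmetric PSD matrix W via its
    # top-k eigenpairs, then its pseudoinverse W_k^+.
    vals, vecs = np.linalg.eigh(W)
    top = np.argsort(vals)[::-1][:k]
    vals_k, vecs_k = vals[top], vecs[:, top]
    inv_vals = np.where(vals_k > 1e-12, 1.0 / vals_k, 0.0)
    Wk_pinv = (vecs_k * inv_vals) @ vecs_k.T

    return C @ Wk_pinv @ C.T

# Toy usage: approximate an RBF Gram matrix on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
G = np.exp(-sq)
G_tilde = nystrom_rank_k(G, c=60, k=10, seed=0)
print(np.linalg.norm(G - G_tilde, "fro") / np.linalg.norm(G, "fro"))
```

The point of the construction is visible in the shapes: only an n × c slice of G is ever needed, so downstream kernel computations can work with the factored form (C, W_k⁺) rather than the full n × n matrix.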

Details

Language:
English
ISSN:
1532-4435
Volume:
6
Issue:
12
Database:
Academic Search Index
Journal:
Journal of Machine Learning Research
Publication Type:
Academic Journal
Accession Number:
20018419