
Generalization Error Bounds for Noisy, Iterative Algorithms

Authors: Varun Jog, Ankit Pensia, Po-Ling Loh
Source: ISIT
Publication Year: 2018
Publisher: arXiv, 2018

Abstract

In statistical learning theory, generalization error is used to quantify the degree to which a supervised machine learning algorithm may overfit to training data. Recent work [Xu and Raginsky (2017)] has established a bound on the generalization error of empirical risk minimization based on the mutual information $I(S;W)$ between the algorithm input $S$ and the algorithm output $W$, when the loss function is sub-Gaussian. We leverage these results to derive generalization error bounds for a broad class of iterative algorithms that are characterized by bounded, noisy updates with Markovian structure. Our bounds are very general and are applicable to numerous settings of interest, including stochastic gradient Langevin dynamics (SGLD) and variants of the stochastic gradient Hamiltonian Monte Carlo (SGHMC) algorithm. Furthermore, our error bounds hold for any output function computed over the path of iterates, including the last iterate of the algorithm or the average of subsets of iterates, and also allow for non-uniform sampling of data in successive updates of the algorithm.

Comment: A shorter version of this paper was submitted to ISIT 2018. 14 pages, 1 figure
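For context, the Xu and Raginsky (2017) result referenced in the abstract can be sketched as follows; the notation here ($n$ training samples, $\sigma$-sub-Gaussian loss) is standard but not spelled out on this page and is assumed:

$$\left| \mathbb{E}\!\left[ \mathrm{gen}(\mu, P_{W|S}) \right] \right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S;W)}.$$

Among the iterative algorithms the paper covers, SGLD is the canonical example; one common form of its update (step size $\eta_t$ and isotropic Gaussian noise, again assumed notation) is

$$W_{t+1} = W_t - \eta_t \,\nabla \ell(W_t, Z_{i_t}) + \xi_t, \qquad \xi_t \sim \mathcal{N}(0, \sigma_t^2 I_d),$$

where $Z_{i_t}$ is the data point sampled at iteration $t$. The paper's bounds control $I(S;W)$ by accounting for how much information each bounded, noisy update can reveal about the training sample $S$.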

Details

Database: OpenAIRE
Journal: ISIT
Full Text: https://doi.org/10.48550/arxiv.1801.04295