
On Optimal Generalizability in Parametric Learning

Authors:
Beirami, Ahmad
Razaviyayn, Meisam
Shahrampour, Shahin
Tarokh, Vahid
Publication Year:
2017

Abstract

We consider the parametric learning problem, where the objective of the learner is determined by a parametric loss function. Under empirical risk minimization, possibly with regularization, the inferred parameter vector is biased toward the training samples. In practice, this bias is measured by cross validation, where the data set is partitioned into a training set used for fitting and a validation set that is held out of training to measure out-of-sample performance. A classical strategy is leave-one-out cross validation (LOOCV), where a single sample is held out for validation, training is performed on the remaining samples, and the process is repeated over all samples. LOOCV is rarely used in practice due to its high computational cost. In this paper, we first develop a computationally efficient approximate LOOCV (ALOOCV) and provide theoretical guarantees on its performance. We then use ALOOCV to build an optimization algorithm for tuning the regularizer in the empirical risk minimization framework. Our numerical experiments illustrate the accuracy and efficiency of ALOOCV as well as our proposed framework for regularizer optimization.

Comment: Proc. of 2017 Advances in Neural Information Processing Systems (NIPS 2017)
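For illustration, here is a minimal numerical sketch contrasting exact LOOCV with a one-step Newton approximation to it, in the spirit of ALOOCV. The specific correction used below is a generic leave-one-out approximation for L2-regularized logistic regression, not necessarily the paper's exact estimator; the function names (fit, losses) and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, lam = 60, 5, 1.0
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.5 * rng.normal(size=n) > 0).astype(float)

def losses(theta, X, y):
    # Numerically stable logistic loss per sample: log(1 + exp(z)) - y*z
    z = X @ theta
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0) - y * z

def objective(theta, X, y):
    # Regularized empirical risk: sum of losses + L2 penalty
    return losses(theta, X, y).sum() + 0.5 * lam * theta @ theta

def fit(X, y):
    res = minimize(objective, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return res.x

theta_hat = fit(X, y)

# Exact LOOCV: n full refits, each with sample i removed.
loo_exact = np.array([
    losses(fit(np.delete(X, i, 0), np.delete(y, i)), X[i:i+1], y[i:i+1])[0]
    for i in range(n)
])

# Approximate LOOCV: one Newton correction around the full-data fit.
# At the optimum the full gradient vanishes, so the leave-one-out
# gradient is -grad_i, and one Newton step gives
# theta_i ~= theta_hat + H_i^{-1} grad_i.
p = 1.0 / (1.0 + np.exp(-(X @ theta_hat)))              # fitted probabilities
grads = (p - y)[:, None] * X                             # per-sample loss gradients
H = X.T @ ((p * (1 - p))[:, None] * X) + lam * np.eye(d) # full-objective Hessian
loo_approx = np.empty(n)
for i in range(n):
    H_i = H - (p[i] * (1 - p[i])) * np.outer(X[i], X[i])  # drop sample i's curvature
    theta_i = theta_hat + np.linalg.solve(H_i, grads[i])  # one-step leave-one-out fit
    loo_approx[i] = losses(theta_i, X[i:i+1], y[i:i+1])[0]

print("exact LOOCV :", loo_exact.mean())
print("approx LOOCV:", loo_approx.mean())
```

For squared loss with ridge regularization this one-step correction recovers the classical closed-form leave-one-out formula exactly; for general smooth losses it trades the n full refits of exact LOOCV for n linear solves around a single full-data fit.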

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1711.05323
Document Type:
Working Paper