
Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization

Authors:
Nagler, Thomas
Schneider, Lennart
Bischl, Bernd
Feurer, Matthias

Publication Year:
2024

Abstract

Hyperparameter optimization is crucial for obtaining peak performance of machine learning models. The standard protocol evaluates various hyperparameter configurations using a resampling estimate of the generalization error to guide optimization and select a final hyperparameter configuration. Without much evidence, paired resampling splits, i.e., either a fixed train-validation split or a fixed cross-validation scheme, are often recommended. We show that, surprisingly, reshuffling the splits for every configuration often improves the final model's generalization performance on unseen data. Our theoretical analysis explains how reshuffling affects the asymptotic behavior of the validation loss surface and provides a bound on the expected regret in the limiting regime. This bound connects the potential benefits of reshuffling to the signal and noise characteristics of the underlying optimization problem. We confirm our theoretical results in a controlled simulation study and demonstrate the practical usefulness of reshuffling in a large-scale, realistic hyperparameter optimization experiment. While reshuffling leads to test performances that are competitive with using fixed splits, it drastically improves results for a single train-validation holdout protocol and can often make holdout competitive with standard CV while being computationally cheaper.

Comment: 39 pages, 4 tables, 29 figures
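The difference between the standard fixed-split protocol and the per-configuration reshuffling studied in the abstract can be sketched as a toy holdout-based tuning loop. This is a minimal illustration with hypothetical helper names (`holdout_split`, `tune`, `evaluate`), not the authors' implementation:

```python
import random


def holdout_split(n, val_frac, rng):
    """Draw a random holdout split of n sample indices."""
    idx = list(range(n))
    rng.shuffle(idx)
    n_val = int(n * val_frac)
    return idx[n_val:], idx[:n_val]  # (train indices, validation indices)


def tune(configs, evaluate, n, val_frac=0.2, reshuffle=True, seed=0):
    """Toy HPO loop scoring each configuration on a holdout split.

    With reshuffle=True, a fresh split is drawn for every configuration
    (the reshuffling protocol studied in the paper); with reshuffle=False,
    a single fixed split is reused, as in the standard protocol.
    `evaluate(cfg, train_idx, val_idx)` is assumed to return a
    validation loss (lower is better).
    """
    rng = random.Random(seed)
    fixed = holdout_split(n, val_frac, rng)  # used only when reshuffle=False
    scores = {}
    for cfg in configs:
        train_idx, val_idx = holdout_split(n, val_frac, rng) if reshuffle else fixed
        scores[cfg] = evaluate(cfg, train_idx, val_idx)
    # Select the configuration with the lowest validation loss.
    return min(scores, key=scores.get)
```

With a deterministic surrogate loss, both protocols select the same configuration; the paper's point is about the statistical behavior of the selection when the validation loss is noisy.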

Details

Database:
arXiv

Publication Type:
Report

Accession Number:
edsarx.2405.15393

Document Type:
Working Paper