Accounting for Variance in Machine Learning Benchmarks

Authors:
Bouthillier, Xavier
Delaunay, Pierre
Bronzi, Mirko
Trofimov, Assya
Nichyporuk, Brennan
Szeto, Justin
Sepah, Naz
Raff, Edward
Madan, Kanika
Voleti, Vikram
Kahou, Samira Ebrahimi
Michalski, Vincent
Serdyuk, Dmitriy
Arbel, Tal
Pal, Chris
Varoquaux, Gaël
Vincent, Pascal
Publication Year:
2021

Abstract

Strong empirical evidence that one machine-learning algorithm A outperforms another algorithm B ideally calls for multiple trials optimizing the learning pipeline over sources of variation such as data sampling, data augmentation, parameter initialization, and hyperparameter choices. This is prohibitively expensive, and corners are cut to reach conclusions. We model the whole benchmarking process, revealing that variance due to data sampling, parameter initialization, and hyperparameter choice markedly impacts the results. We analyze the predominant comparison methods used today in the light of this variance. We show the counter-intuitive result that adding more sources of variation to an imperfect estimator better approximates the ideal estimator, at a 51-times reduction in compute cost. Building on these results, we study the error rate of detecting improvements on five different deep-learning tasks/architectures. This study leads us to propose recommendations for performance comparisons.

Comment: Submitted to MLSys 2021
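The comparison strategy the abstract describes, drawing fresh seeds for data sampling, initialization, and hyperparameter choice on every trial rather than fixing them, can be sketched in a few lines. Below is a minimal, purely illustrative Python sketch; `train_and_eval` is a hypothetical stand-in that simulates a noisy score so the example runs end to end, and is not the authors' code.

```python
import random
import statistics

def train_and_eval(algorithm, data_seed, init_seed, hpo_seed):
    # Hypothetical stand-in for a full training run. A real benchmark would
    # resample the data split, re-initialize weights, and re-tune
    # hyperparameters from these seeds; here we only simulate a noisy score.
    rng = random.Random(hash((algorithm, data_seed, init_seed, hpo_seed)))
    base = 0.80 if algorithm == "A" else 0.78  # toy "true" performances
    return base + rng.gauss(0.0, 0.02)

def benchmark(algorithm, n_trials=20, master_seed=0):
    # One fresh draw of every seed per trial: randomizing all sources of
    # variation together is the cheap estimator that the abstract reports
    # better approximates the ideal (fully re-optimized) estimator.
    rng = random.Random(master_seed)
    scores = [
        train_and_eval(
            algorithm,
            data_seed=rng.randrange(2**31),  # data sampling / splits
            init_seed=rng.randrange(2**31),  # parameter initialization
            hpo_seed=rng.randrange(2**31),   # hyperparameter choice
        )
        for _ in range(n_trials)
    ]
    return statistics.mean(scores), statistics.stdev(scores)

if __name__ == "__main__":
    for algo in ("A", "B"):
        mean, std = benchmark(algo)
        print(f"{algo}: {mean:.3f} +/- {std:.3f} over {20} trials")
```

Under this setup, the reported mean and standard deviation reflect all randomized sources at once, so a difference between A and B can be judged against the full trial-to-trial variance rather than against a single fixed-seed run.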

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1269533526
Document Type:
Electronic Resource