
Prioritized training on points that are learnable, worth learning, and not yet learned (workshop version)

Authors:
Mindermann, Sören
Razzak, Muhammed
Xu, Winnie
Kirsch, Andreas
Sharma, Mrinank
Morisot, Adrien
Gomez, Aidan N.
Farquhar, Sebastian
Brauner, Jan
Gal, Yarin
Source:
ICML 2021 Workshop on Subset Selection in Machine Learning
Publication Year:
2021

Abstract

We introduce Goldilocks Selection, a technique for faster model training that selects a sequence of training points that are "just right". We propose an information-theoretic acquisition function, the reducible validation loss, and compute it with a small proxy model, GoldiProx, to efficiently choose training points that maximize information about a validation set. We show that the "hard" (e.g., high-loss) points usually selected in the optimization literature are typically noisy, while the "easy" (e.g., low-noise) samples often prioritized for curriculum learning confer less information. Further, points with uncertain labels, typically targeted by active learning, tend to be less relevant to the task. In contrast, Goldilocks Selection chooses points that are "just right" and empirically outperforms the above approaches. Moreover, the selected sequence can transfer to other architectures; practitioners can share and reuse it without the need to recreate it.
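The core idea in the abstract can be sketched concretely. Below is a minimal PyTorch sketch of selecting points by reducible loss, assuming a classification setting; the names (model, proxy_model, select_top_k) are illustrative rather than taken from the paper, and the proxy is assumed to have already been trained on the validation (holdout) set so that its per-point loss approximates the irreducible loss. This is a sketch of the idea, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def select_top_k(model, proxy_model, xb, yb, k):
    """Return indices of the k batch points with the highest reducible loss."""
    with torch.no_grad():
        # Per-point loss under the current training model.
        model_loss = F.cross_entropy(model(xb), yb, reduction="none")
        # Per-point loss under the small validation-trained proxy model:
        # an estimate of the loss that cannot be reduced (e.g., label noise).
        proxy_loss = F.cross_entropy(proxy_model(xb), yb, reduction="none")
    # Noisy points score low (both losses high), already-learned points
    # score low (both losses low); "just right" points, learnable but not
    # yet learned, score highest.
    reducible_loss = model_loss - proxy_loss
    return torch.topk(reducible_loss, k).indices

In use, each training step would score a large candidate batch with this function and update the model only on the top-k points, which is how the "just right" sequence of training points described above would be produced; the paper's exact acquisition function and proxy-training procedure may differ in detail.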

Details

Database:
arXiv
Journal:
ICML 2021 Workshop on Subset Selection in Machine Learning
Publication Type:
Report
Accession Number:
edsarx.2107.02565
Document Type:
Working Paper