1. On the reversed bias-variance tradeoff in deep ensembles
- Author
Kobayashi, Seijin; von Oswald, Johannes; Grewe, Benjamin F (https://orcid.org/0000-0001-8560-2120)
- Abstract
Deep ensembles aggregate predictions of diverse neural networks to improve generalisation and quantify uncertainty. Here, we investigate their behavior when increasing the ensemble members' parameter size - a practice typically associated with better performance for single models. We show that under practical assumptions in the overparametrized regime far into the double descent curve, not only does the ensemble test loss degrade, but common out-of-distribution detection and calibration metrics suffer as well. Reminiscent of deep double descent, we observe this phenomenon not only when increasing the single member's capacity but also as we increase the training budget, suggesting that deep ensembles can benefit from early stopping. This sheds light on the success and failure modes of deep ensembles and suggests that averaging finite width models performs better than the neural tangent kernel limit for these metrics.
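The aggregation step the abstract refers to is the standard deep-ensemble procedure of averaging the members' predictive distributions. A minimal sketch of that step, assuming softmax outputs stacked per member (the data below is a random placeholder, not from the paper):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average per-member predictive distributions.

    member_probs: array of shape (n_members, n_samples, n_classes),
    each slice the softmax output of one independently trained network.
    Returns the ensemble's predictive distribution, shape (n_samples, n_classes).
    """
    return np.mean(member_probs, axis=0)

# Toy usage: 3 hypothetical members, 2 samples, 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 2, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(ensemble_predict(probs))  # rows still sum to 1
```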
- Published
- 2021