
Improving Confidence Estimates for Unfamiliar Examples

Authors :
Li, Zhizhong
Hoiem, Derek
Publication Year :
2018

Abstract

Intuitively, unfamiliarity should lead to lack of confidence. In reality, current algorithms often make highly confident yet wrong predictions when faced with relevant but unfamiliar examples. A classifier we trained to recognize gender is 12 times more likely to be wrong with a 99% confident prediction if presented with a subject from a different age group than those seen during training. In this paper, we compare and evaluate several methods to improve confidence estimates for unfamiliar and familiar samples. We propose a testing methodology of splitting unfamiliar and familiar samples by attribute (age, breed, subcategory) or sampling (similar datasets collected by different people at different times). We evaluate methods including confidence calibration, ensembles, distillation, and a Bayesian model, and use several metrics to analyze label, likelihood, and calibration error. While all methods reduce over-confident errors, the ensemble of calibrated models performs best overall, and T-scaling performs best among the approaches with fastest inference. Our code is available at https://github.com/lizhitwo/ConfidenceEstimates . Please see the updated errata below.

Comment: Published in CVPR 2020 (oral). ERRATA: (1) A previous version (v3) included erroneous results for T-scaling, where novel samples were mistakenly included in the validation set for calibration. Please disregard those results. (2) Previous versions (v4, v5) incorrectly stated that Adam was used. In fact, we used SGD.
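
For context, below is a minimal sketch of temperature (T-) scaling, the calibration method the abstract highlights as best among the fast-inference approaches. It assumes a PyTorch setting; the function name `fit_temperature` and its parameters are illustrative assumptions, not the authors' implementation (see the linked repository for that). Per the errata, the validation logits used here should come from familiar samples only.

```python
# Sketch of temperature (T-) scaling: fit a single scalar T on held-out
# validation logits so that softmax(logits / T) is better calibrated.
import torch
import torch.nn as nn
import torch.optim as optim


def fit_temperature(logits: torch.Tensor, labels: torch.Tensor, max_iter: int = 50) -> float:
    """Return a temperature T fit by minimizing NLL of softmax(logits / T).

    logits: (N, C) validation logits from familiar samples only.
    labels: (N,) integer class labels.
    """
    log_t = nn.Parameter(torch.zeros(1))  # optimize log T so T stays positive
    nll = nn.CrossEntropyLoss()
    opt = optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()


# Usage: at test time, report calibrated confidences as
#   torch.softmax(test_logits / T, dim=-1)
# where T = fit_temperature(val_logits, val_labels).
```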

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1804.03166
Document Type :
Working Paper