
Loss Estimators Improve Model Generalization

Authors:
Narayanaswamy, Vivek
Thiagarajan, Jayaraman J.
Rajan, Deepta
Spanias, Andreas

Publication Year:
2021

Abstract

With increased interest in adopting AI methods for clinical diagnosis, a vital step towards the safe deployment of such tools is to ensure that the models not only produce accurate predictions but also avoid generalizing to data regimes where the training data provide no meaningful evidence. Existing approaches for ensuring that the distribution of model predictions is similar to the true distribution rely on explicit uncertainty estimators that are inherently hard to calibrate. In this paper, we propose to train a loss estimator alongside the predictive model, using a contrastive training objective, to directly estimate the prediction uncertainties. Interestingly, we find that, in addition to producing well-calibrated uncertainties, this approach improves the generalization behavior of the predictor. Using a dermatology use-case, we show the impact of loss estimators on model generalization, in terms of both the model's fidelity on in-distribution data and its ability to detect out-of-distribution samples or new classes unseen during training.
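The abstract only sketches the method at a high level, so the following is a minimal, hypothetical illustration of the general idea in PyTorch: a small classifier trained jointly with an auxiliary loss estimator that predicts the classifier's per-sample loss. A pairwise margin-ranking surrogate stands in for the paper's contrastive objective, and the architecture, synthetic data, and hyperparameters are all illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical toy data: 256 samples, 20 features, 3 classes.
X = torch.randn(256, 20)
y = torch.randint(0, 3, (256,))

# Predictor: feature extractor + linear classification head.
feature = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
head = nn.Linear(64, 3)
# Loss estimator: maps the predictor's features to a scalar loss estimate.
loss_estimator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

params = (list(feature.parameters()) + list(head.parameters())
          + list(loss_estimator.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    h = feature(X)
    logits = head(h)
    per_sample_loss = F.cross_entropy(logits, y, reduction="none")
    est = loss_estimator(h).squeeze(1)

    # Pairwise ranking surrogate for the contrastive objective: on random
    # pairs (i, j), the estimator should preserve the ordering of the true
    # per-sample losses (the targets are detached so they act as labels).
    idx = torch.randperm(X.size(0))
    i, j = idx[:128], idx[128:]
    target = torch.sign(per_sample_loss[i] - per_sample_loss[j]).detach()
    rank_loss = F.margin_ranking_loss(est[i], est[j], target, margin=0.1)

    # Note: the ranking loss also back-propagates into `feature`, coupling
    # the estimator and the predictor, which is one way the auxiliary task
    # can shape the predictor's representation.
    total = per_sample_loss.mean() + rank_loss
    opt.zero_grad()
    total.backward()
    opt.step()

# At deployment, loss_estimator(feature(x)) acts as an uncertainty score:
# a large predicted loss flags inputs the model is likely to get wrong,
# e.g., out-of-distribution samples or classes unseen during training.

One design note on this sketch: the ranking formulation only requires the estimator to recover the relative ordering of per-sample losses rather than their absolute values, which is one plausible reason such estimators can be easier to calibrate than explicit uncertainty heads that regress a confidence score directly.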

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2103.03788
Document Type:
Working Paper