1. Model misspecification, Bayesian versus credibility estimation, and Gibbs posteriors
- Author
- Liang Hong and Ryan Martin
- Subjects
Statistics and Probability, Economics and Econometrics, Computer science, Posterior probability, Bayesian probability, Inference, Statistical model, Bayes' theorem, Exponential family, Credibility, Prior probability, Econometrics, Uncertainty quantification
- Abstract
In the context of predicting future claims, a fully Bayesian analysis, one that specifies a statistical model and a prior distribution and updates via Bayes's formula, is often viewed as the gold standard, while Bühlmann's credibility estimator serves as a simple approximation. But the desirable properties that give the Bayesian solution its elevated status depend critically on the posited model being correctly specified. Here we investigate the asymptotic behavior of Bayesian posterior distributions under a misspecified model, and we conclude that misspecification bias generally has damaging effects that can lead to inaccurate inference and prediction. The credibility estimator, on the other hand, is insensitive to model misspecification, giving it an advantage over the Bayesian solution in the practically relevant cases where the model is uncertain. This raises the question: does robustness to model misspecification require abandoning uncertainty quantification based on a posterior distribution? Our answer is no, and we offer an alternative Gibbs posterior construction. Furthermore, we argue that this Gibbs perspective provides a new characterization of Bühlmann's credibility estimator.
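For readers unfamiliar with the credibility estimator the abstract contrasts with the fully Bayesian solution, here is a minimal sketch of the standard Bühlmann credibility formula. This illustration is not taken from the paper; the function name, the data layout (one row of observed claims per risk), and the empirical variance-component estimates are assumptions made for the example.

```python
import numpy as np

def buhlmann_credibility(claims):
    """Bühlmann credibility premiums, one per risk.

    claims: 2D array, rows = risks, columns = observed claim amounts.
    Returns the credibility-weighted estimate Z * X̄_i + (1 - Z) * μ̂
    for each risk i, where Z = n / (n + k) and k = EPV / VHM.
    """
    claims = np.asarray(claims, dtype=float)
    r, n = claims.shape                # r risks, n observations per risk
    risk_means = claims.mean(axis=1)   # per-risk sample means X̄_i
    overall_mean = risk_means.mean()   # grand mean μ̂

    # Expected process variance: average within-risk sample variance
    epv = claims.var(axis=1, ddof=1).mean()
    # Variance of hypothetical means, bias-corrected and truncated at zero
    vhm = max(risk_means.var(ddof=1) - epv / n, 0.0)

    k = epv / vhm if vhm > 0 else np.inf
    z = n / (n + k)                    # credibility factor Z in [0, 1]
    return z * risk_means + (1 - z) * overall_mean
```

Note how the estimator uses only means and variances of the observed claims, with no likelihood anywhere; this is the model-free character that makes it robust to the misspecification the abstract describes.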
- Published
- 2020