1. Priors via imaginary training samples of sufficient statistics for objective Bayesian hypothesis testing
- Author
- Dimitris Fouskakis
- Subjects
Statistics and Probability, Normal distribution, Interpretation (logic), Computer science, Generalization, Prior probability, Algorithm, Equivalence (measure theory), Sufficient statistic, Statistical hypothesis testing, Curse of dimensionality
- Abstract
The expected-posterior prior (EPP) and the power-expected-posterior (PEP) prior are based on random imaginary observations and offer several advantages in objective Bayesian hypothesis testing. The use of sufficient statistics, when these exist, as a way to redefine the EPP and PEP prior is investigated. In this way the dimensionality of the problem can be reduced, by generating samples of sufficient statistics instead of generating full sets of imaginary data. On the theoretical side, it is proved that the new EPP and PEP definitions based on imaginary training samples of sufficient statistics are equivalent to the standard definitions based on individual training samples. This equivalence provides a strong justification and generalization of the definition of both the EPP and the PEP prior, since the resulting criteria coincide whether they are built from individual samples or from samples of sufficient statistics. It also avoids potential inconsistencies or paradoxes when only sufficient statistics are available. The applicability of the new definitions in different hypothesis-testing problems is explored, including the case of an irregular model. Calculations are simplified, and it is shown that when testing the mean of a normal distribution the EPP and PEP prior can be expressed as a beta mixture of normal priors. The paper concludes with a discussion of the interpretation and the benefits of the proposed approach.
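The dimensionality reduction described in the abstract can be illustrated with a minimal sketch, not taken from the paper itself: for a normal model with known variance, the sample mean is a sufficient statistic for the mean, so one can draw a single value per imaginary training sample instead of a full set of observations. All variable names and the specific setting (`mu0`, `sigma`, `n`, `m`) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: data from N(mu, sigma^2) with sigma known, so the
# sample mean xbar is a sufficient statistic for mu (standard result, not
# a construction from the paper).
mu0, sigma, n, m = 0.0, 1.0, 50, 10_000

# Route 1: full imaginary data -- m training samples of n observations
# each (m * n random draws), then reduce each to its sample mean.
full_data = rng.normal(mu0, sigma, size=(m, n))
xbar_from_full = full_data.mean(axis=1)

# Route 2: draw the sufficient statistic directly (only m random draws),
# using its known sampling distribution xbar ~ N(mu0, sigma^2 / n).
xbar_direct = rng.normal(mu0, sigma / np.sqrt(n), size=m)

# Both routes target the same distribution for the statistic, so any
# criterion defined through the sufficient statistic coincides -- the
# equivalence the abstract refers to, here checked only empirically.
print(xbar_from_full.std(), xbar_direct.std())
```

Both printed standard deviations should be close to sigma / sqrt(n) ≈ 0.141, while the second route uses n-fold fewer random draws, which is the practical payoff of working with sufficient statistics.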
- Published
- 2019