1. Bayesian model selection through averaging: applications and theory
- Author
- Pan, Tianyu
- Subjects
- Statistics, Clustering Analysis, Model Averaging, Model Selection Consistency, Nonparametric Bayesian, Posterior Convergence
- Abstract
This thesis centers on developing Bayesian model selection methods through model averaging. Specifically, we investigate the conditions under which model selection consistency can be achieved and how the selected model can be interpreted. Several Bayesian model selection methods are introduced across different topics: Bayesian panel data clustering, biclustering of binary, continuous, and count-valued data matrices, and testing the separability assumption of a Kronecker product covariance. For the proposed models, we establish convergence results for the posterior estimates and connect the convergence rates to frequentist large-sample conclusions. In addition, we use extensive simulations to validate the theoretical justifications and the robustness of the proposed methods. For the first three works, we apply the proposed methods to real data and conclude with interpretable results.

In Chapter 2, we consider a Bayesian panel data clustering method based on a Markov random field-constrained product partition model. The primary goal of the study is to cluster longitudinal observations collected at spatial locations and to leverage geographical information to facilitate clustering. Based on the empirical results and theoretical justifications, our method can identify the true location clustering structure given sufficient longitudinal samples for each location. The main theoretical contribution of this work is the clustering consistency of the studied locations, which is strengthened by incorporating spatial information.

In Chapter 3, we propose a biclustering method for binary data matrices based on a variant of the nested mixture of finite mixtures model. Our research objective is to detect a nested clustering structure that helps explain the interaction between the two main factors; in the motivating educational assessment data example, the two main factors are the examinees' ability and the test questions' difficulty level. Both computational and theoretical results show that the proposed method can identify the clustering structure at the question (column) level and the heterogeneity pattern at the examinee (row) level as the number of examinees increases. The main contribution of this work is that it provides an insightful understanding of the interaction between the two main factors, compared with conventional Item Response Theory.

In Chapter 4, we generalize the idea of Chapter 3 to analyze continuous and count-valued data matrices. Likewise, we present simulation results that justify the practical feasibility of the method and theoretical results on column clustering consistency. In addition, we show that the column clustering structure is equivalent to the independence structure of the associated random variables. To the best of our knowledge, our work is the first to study the connection between a biclustering structure and the independence structure of the associated random variables.

In Chapter 5, we propose a large covariance estimation method under the Kronecker product structure, modeled with sparse Bayesian priors. To examine the goodness of fit of the Kronecker product estimate, we evaluate a sequence of Bayes factors, each given by the ratio of the marginal likelihood of a bivariate normal model to the product of the marginal likelihoods of two univariate normal models. The estimation and testing methods are proven to identify the true covariance and to determine whether all off-diagonal entries are zero, respectively.
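For intuition, a schematic form of the Chapter 5 Bayes factor is sketched below; the notation is ours rather than the thesis's. For a pair of coordinates (j, k) with n paired observations, the test compares the marginal likelihood under a joint bivariate normal model against the product of the two univariate marginal likelihoods, so large values favor a nonzero off-diagonal covariance entry:

\[
\mathrm{BF}_{jk}
= \frac{\displaystyle \int \prod_{i=1}^{n} \mathcal{N}_2\!\left((x_{ij}, x_{ik})^{\top} \mid \boldsymbol{\mu}, \Sigma\right) \pi(\boldsymbol{\mu}, \Sigma)\, d\boldsymbol{\mu}\, d\Sigma}
       {\displaystyle \left[\int \prod_{i=1}^{n} \mathcal{N}\!\left(x_{ij} \mid \mu_j, \sigma_j^2\right) \pi(\mu_j, \sigma_j^2)\, d\mu_j\, d\sigma_j^2\right]
        \left[\int \prod_{i=1}^{n} \mathcal{N}\!\left(x_{ik} \mid \mu_k, \sigma_k^2\right) \pi(\mu_k, \sigma_k^2)\, d\mu_k\, d\sigma_k^2\right]}
\]

Here \(\mathcal{N}_2\) denotes a bivariate normal density with unrestricted covariance \(\Sigma\), \(\mathcal{N}\) a univariate normal density, and \(\pi(\cdot)\) the corresponding priors; the specific prior choices are those of the thesis and are not reproduced here.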
- Published
- 2023