14 results
Search Results
2. A Gibbs sampler for learning DAG: a unification for discrete and Gaussian domains.
- Author
- Zareifard, Hamid, Rezaei Tabar, Vahid, and Plewczynski, Dariusz
- Subjects
- GIBBS sampling, DISTRIBUTION (Probability theory), MARGINAL distributions, ALGORITHMS, GAUSSIAN distribution, PARAMETER estimation
- Abstract
One of the major challenges in modern-day statistics is to formulate models and develop inferential procedures to understand the complex multivariate relationships present in high-dimensional datasets. In this paper, we address the issue of model determination for DAGs, with respect to a given ordering of the variables, together with the corresponding parameter estimation. For this, we use a hierarchical mixture prior and develop a Gibbs sampling algorithm to carry out the posterior computations. We first focus on Gaussian DAG models and calculate the posterior probability of an edge between two nodes. We then extend our idea to construct a DAG for discrete data, under the assumption that the data are generated by discretization of the marginal distributions of a latent multivariate Gaussian distribution via a set of predetermined threshold values. Results show that the proposed method has high accuracy. The source code is available at [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
3. Fully Bayesian logistic regression with hyper-LASSO priors for high-dimensional feature selection.
- Author
- Li, Longhai and Yao, Weixin
- Subjects
- BAYESIAN analysis, FEATURE selection, LOGISTIC regression analysis, MARKOV chain Monte Carlo, HAMILTONIAN systems, ALGORITHMS
- Abstract
Feature selection arises in many areas of modern science. For example, in genomic research, we want to find the genes that can be used to separate tissues of different classes (e.g. cancer and normal). One approach is to fit regression/classification models with certain penalization. In the past decade, hyper-LASSO penalization (priors) has received increasing attention in the literature. However, fully Bayesian methods that use Markov chain Monte Carlo (MCMC) for regression/classification with hyper-LASSO priors are still underdeveloped. In this paper, we introduce an MCMC method for learning multinomial logistic regression with hyper-LASSO priors. Our MCMC algorithm uses Hamiltonian Monte Carlo in a restricted Gibbs sampling framework. We have used simulation studies and real data to demonstrate the superior performance of hyper-LASSO priors compared to LASSO, and to investigate the issues of choosing the heaviness and scale of hyper-LASSO priors. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
4. A Bayesian model for multinomial sampling with misclassified data.
- Author
- Ruiz, M., Girón, F.J., Pérez, C.J., Martín, J., and Rojano, C.
- Subjects
- BAYESIAN analysis, DIFFERENTIAL equations, GIBBS' equation, ALGORITHMS, NOISE, EMPIRICAL research
- Abstract
In this paper the issue of making inferences with misclassified data from a noisy multinomial process is addressed. A Bayesian model for making inferences about the proportions and the noise parameters is developed. The problem is reformulated in a more tractable form by introducing auxiliary or latent random vectors. This allows for an easy-to-implement Gibbs sampling-based algorithm to generate samples from the distributions of interest. An illustrative example related to elections is also presented. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
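The latent-vector augmentation described in entry 4 can be illustrated with a short sketch. This is not the authors' model: the three-category election-style data, the fixed (known) misclassification matrix, and the Dirichlet prior below are assumptions made to keep the example small; the paper additionally treats the noise parameters as unknown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 3-category example: true proportions p_true, and a known
# misclassification matrix M with M[j, k] = P(observe k | truth j).
p_true = np.array([0.5, 0.3, 0.2])
M = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])
n = 2000
truth = rng.choice(3, size=n, p=p_true)
obs = np.array([rng.choice(3, p=M[j]) for j in truth])

# Gibbs with latent true categories: P(c_i = j | obs_i, p) ∝ p_j * M[j, obs_i]
a = np.ones(3)          # Dirichlet(1, 1, 1) prior on the proportions
p = np.full(3, 1 / 3)
draws = []
for t in range(3000):
    probs = p[:, None] * M[:, obs]              # shape (3, n), unnormalized
    probs /= probs.sum(axis=0)
    u = rng.random(n)
    c = (u > probs.cumsum(axis=0)).sum(axis=0)  # vectorized categorical draw
    counts = np.bincount(c, minlength=3)
    p = rng.dirichlet(a + counts)               # conjugate update given truth
    if t >= 1000:
        draws.append(p)

print(np.mean(draws, axis=0))  # posterior means near p_true
```

Because the conditionals are conjugate given the augmented categories, each sweep is two cheap vectorized steps, which is the "easy-to-implement" property the abstract refers to.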
5. Bayesian Analysis of Zero-Inflated Distributions.
- Author
- Rodrigues, Josemar
- Subjects
- DISTRIBUTION (Probability theory), BAYESIAN analysis, ALGORITHMS
- Abstract
In this paper, zero-inflated distributions (ZID) are studied from the Bayesian point of view using the data augmentation algorithm. This type of discrete model arises in count data with an excess of zeros. The zero-inflated Poisson (ZIP) distribution and an illustrative example via an MCMC algorithm are considered. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
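The data-augmentation treatment of zero inflation in entry 5 can be sketched for the ZIP case. The priors (Beta and Gamma) and all numbers below are illustrative assumptions, not taken from the paper: a latent indicator marks each observed zero as structural or Poisson, after which both parameter updates are conjugate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate ZIP data: structural zero with prob pi, else Poisson(lam)
n, true_pi, true_lam = 500, 0.3, 4.0
structural = rng.random(n) < true_pi
y = np.where(structural, 0, rng.poisson(true_lam, n))

# Hypothetical conjugate priors: pi ~ Beta(a, b), lam ~ Gamma(c, rate=d)
a, b, c, d = 1.0, 1.0, 1.0, 1.0

n_iter, burn = 2000, 500
pi_s, lam_s = 0.5, 1.0
draws = []
for t in range(n_iter):
    # Augment: z_i = 1 means "structural zero"; only y_i = 0 can be structural
    p1 = pi_s / (pi_s + (1.0 - pi_s) * np.exp(-lam_s))
    z = np.where(y == 0, rng.random(n) < p1, False)
    # Conjugate updates given the augmented indicators
    pi_s = rng.beta(a + z.sum(), b + n - z.sum())
    lam_s = rng.gamma(c + y[~z].sum(), 1.0 / (d + (~z).sum()))
    if t >= burn:
        draws.append((pi_s, lam_s))

post = np.mean(draws, axis=0)
print(post)  # posterior means near (true_pi, true_lam)
```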
6. Sequential Gibbs Sampling Algorithm for Cognitive Diagnosis Models with Many Attributes.
- Author
- Wang, Juntao, Shi, Ningzhong, Zhang, Xue, and Xu, Gongjun
- Subjects
- GIBBS sampling, MARKOV chain Monte Carlo, ALGORITHMS
- Abstract
Cognitive diagnosis models (CDMs) are useful statistical tools for providing rich information relevant to intervention and learning. As a popular approach to estimating and making inferences about CDMs, the Markov chain Monte Carlo (MCMC) algorithm is widely used in practice. However, when the number of attributes, K, is large, the existing MCMC algorithm may become time-consuming, because O(2^K) calculations are usually needed in the MCMC sampling process to obtain the conditional distribution for each attribute profile. To overcome this computational issue, and motivated by Culpepper and Hudson's earlier work in 2018, we propose a computationally efficient sequential Gibbs sampling method, which needs only O(K) calculations to sample each attribute profile. We use simulation and real data examples to show the good finite-sample performance of the proposed sequential Gibbs sampling and its advantage over existing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
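The O(2^K)-versus-O(K) contrast in entry 6 can be made concrete with a toy binary-profile target. The independent-attribute weight function below is a hypothetical stand-in for a CDM posterior, not the authors' model: enumerating every profile costs 2^K weight evaluations, while one Gibbs sweep costs only two evaluations per attribute.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
K = 10  # number of attributes

# Hypothetical unnormalized log-weight over binary profiles
theta = rng.normal(0, 1, K)
def logw(alpha):
    return float(theta @ alpha)

# Naive approach: enumerate all 2^K profiles and normalize
profiles = np.array(list(product([0, 1], repeat=K)))
w = np.exp(profiles @ theta)
p = w / w.sum()

# Sequential Gibbs: update each attribute from its 2-point conditional,
# so one sweep needs O(K) evaluations instead of O(2^K)
def gibbs_sweep(alpha):
    for k in range(K):
        a0, a1 = alpha.copy(), alpha.copy()
        a0[k], a1[k] = 0, 1
        p1 = 1.0 / (1.0 + np.exp(logw(a0) - logw(a1)))
        alpha[k] = int(rng.random() < p1)
    return alpha

# Long-run frequency of alpha_0 = 1 matches the enumerated marginal
alpha = np.zeros(K, dtype=int)
hits, n_sweeps = 0, 20000
for _ in range(n_sweeps):
    alpha = gibbs_sweep(alpha)
    hits += alpha[0]
print(hits / n_sweeps, p[profiles[:, 0] == 1].sum())
```

Both numbers printed estimate the same marginal; only the cost per profile differs, which is the computational point of the paper.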
7. Geometric ergodicity of a more efficient conditional Metropolis-Hastings algorithm.
- Author
- Hui, Jianan, Flegal, James M., and Johnson, Alicia
- Subjects
- ALGORITHMS, GIBBS sampling, MARKOV chain Monte Carlo
- Abstract
Despite its extensive application in practice, the Metropolis-Hastings sampler can suffer from slow mixing and, in turn, statistical inefficiency. We introduce a modification to the Metropolis-Hastings algorithm that, under specified conditions, encourages more efficient movement on general state spaces while preserving the overall quality of convergence, geometric ergodicity in particular. We illustrate the modified algorithm and its properties for the Metropolis-Hastings algorithm for a toy univariate Normal model and for the Gibbs sampling algorithm in a toy bivariate Normal model and a Bayesian dynamic spatiotemporal model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
8. Particle MCMC With Poisson Resampling: Parallelization and Continuous Time Models.
- Author
- Cąkała, Tomasz, Miasojedow, Błażej, and Niemiro, Wojciech
- Subjects
- CONTINUOUS time models, POISSON distribution, MARKOV chain Monte Carlo, GIBBS sampling, MARKOV processes, CONTINUOUS processing, DETERMINISTIC processes, ALGORITHMS
- Abstract
We introduce a new version of the particle filter in which the number of "children" of a particle at a given time has a Poisson distribution. As a result, the number of particles is random and varies with time. An advantage of this scheme is that descendants of different particles can evolve independently, which makes the computations easy to parallelize. Moreover, the particle filter with Poisson resampling is readily adapted to the case in which the hidden process is a continuous-time, piecewise-deterministic semi-Markov process. We show that the basic techniques of particle MCMC, namely the particle independent Metropolis-Hastings sampler, the particle Gibbs sampler, and its version with ancestor sampling, work under our Poisson resampling scheme. Our version of the particle Gibbs sampler is uniformly ergodic under the same assumptions as its standard counterpart. We present simulation results which indicate that our algorithms can compete with the existing methods. Supplemental materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
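The Poisson-resampling idea in entry 8, where each particle's number of children is Poisson-distributed so the population size is random, can be sketched on a toy linear-Gaussian state-space model. The paper's setting is a continuous-time semi-Markov hidden process; the model, noise levels, and extinction guard below are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: x_t = 0.9 x_{t-1} + N(0, 1),  y_t = x_t + N(0, 0.5^2)
T, phi, sy = 50, 0.9, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()
y = x + rng.normal(0, sy, T)

N0 = 300  # target mean particle count
parts = rng.normal(0, 1, N0)
est = []
for t in range(T):
    parts = phi * parts + rng.normal(0, 1, parts.size)  # propagate
    logw = -0.5 * ((y[t] - parts) / sy) ** 2            # likelihood weights
    w = np.exp(logw - logw.max())
    est.append(np.sum(w * parts) / w.sum())
    # Poisson resampling: particle i gets Poisson(N0 * w_i / sum w) children,
    # so the particle count is random and children evolve independently
    children = rng.poisson(N0 * w / w.sum())
    parts = np.repeat(parts, children)
    if parts.size == 0:  # guard against extinction in this toy sketch
        parts = rng.normal(0, 1, N0)

rmse = np.sqrt(np.mean((np.array(est) - x) ** 2))
print(rmse)
```

Because each particle's offspring count is drawn independently, the resampling step has no cross-particle coupling, which is what makes the scheme straightforward to parallelize.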
9. Not every Gibbs sampler is a special case of the Metropolis–Hastings algorithm.
- Author
- VanDerwerken, Douglas
- Subjects
- GIBBS sampling, ALGORITHMS, MARKOV chain Monte Carlo, DISTRIBUTION (Probability theory), ITERATIVE methods (Mathematics)
- Abstract
It is commonly asserted that the Gibbs sampler is a special case of the Metropolis–Hastings (MH) algorithm. While this statement is true for certain Gibbs samplers, it is not true in general for the version that is taught and used most often, namely, the deterministic scan Gibbs sampler. In this note, I prove that there exist deterministic scan Gibbs samplers that do not exhibit detailed balance and hence cannot be considered MH samplers. The nuances of various Gibbs sampling schemes are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
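The deterministic (fixed-order) scan discussed in entry 9 is easy to write down; the bivariate normal below is a standard textbook target, not an example from the note. Each half-update is an exact conditional draw and hence reversible on its own, but because the two updates are always composed in the same order, the combined kernel need not satisfy detailed balance, which is the note's point.

```python
import numpy as np

rng = np.random.default_rng(3)
rho = 0.8  # target: bivariate normal, zero means, unit variances, correlation rho

n = 50000
xs, ys = np.empty(n), np.empty(n)
x_cur = y_cur = 0.0
s = np.sqrt(1 - rho ** 2)
for i in range(n):
    # Deterministic scan: always x first, then y
    x_cur = rng.normal(rho * y_cur, s)  # x | y ~ N(rho*y, 1 - rho^2)
    y_cur = rng.normal(rho * x_cur, s)  # y | x ~ N(rho*x, 1 - rho^2)
    xs[i], ys[i] = x_cur, y_cur

print(np.corrcoef(xs, ys)[0, 1])  # sample correlation near rho
```

The chain still targets the correct stationary distribution; lack of detailed balance means the kernel is not reversible, not that it is wrong.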
10. Counting with Combined Splitting and Capture–Recapture Methods.
- Author
- Dupuis, Paul, Kaynar, Bahar, Ridder, Ad, Rubinstein, Reuven, and Vaisman, Radislav
- Subjects
- SPLITTING extrapolation method, RANDOM graphs, BINARY number system, ESTIMATION theory, ALGORITHMS, GIBBS sampling, MATHEMATICAL analysis
- Abstract
We apply the splitting method to three well-known counting problems, namely 3-SAT, random graphs with prescribed degrees, and binary contingency tables. We present an enhanced version of the splitting method based on the capture-recapture technique, and show by experiments the superiority of this technique for SAT problems in terms of variance of the associated estimators, and speed of the algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
11. Bayesian Analysis of Diagnostic Test Accuracy When Disease State is Unverified for Some Subjects.
- Author
- Pennello, Gene A.
- Subjects
- BAYESIAN analysis, LIVER disease diagnosis, MEDICAL screening, ALGORITHMS, MISSING data (Statistics), SENSITIVITY analysis
- Abstract
Studies of the accuracy of medical tests to diagnose the presence or absence of disease can suffer from an inability to verify the true disease state in everyone. When verification is missing at random (MAR), the missing data mechanism can be ignored in likelihood-based inference. However, this assumption may not hold even approximately. When verification is nonignorably missing, the most general model of the distribution of disease state, test result, and verification indicator is overparameterized. Parameters are only partially identified, creating regions of ignorance for maximum likelihood estimators. For studies of a single test, we use Bayesian analysis to implement the most general nonignorable model, a reduced nonignorable model with identifiable parameters, and the MAR model. Simple Gibbs sampling algorithms are derived that enable computation of the posterior distribution of test accuracy parameters. In particular, the posterior distribution is easily obtained for the most general nonignorable model, which makes relatively weak assumptions about the missing data mechanism. For this model, the posterior distribution combines two sources of uncertainty: ignorance in the estimation of partially identified parameters, and imprecision due to finite sampling variability. We compare the three models on data from a study of the accuracy of scintigraphy to diagnose liver disease. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
12. Double generalized linear model for tissue culture proportion data: a Bayesian perspective.
- Author
- Vieira, Afrânio M.C., Leandro, Roseli A., Demétrio, Clarice G.B., and Molenberghs, Geert
- Subjects
- MATHEMATICAL models, BAYESIAN analysis, REASONING, LINEAR statistical models, ALGORITHMS
- Abstract
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
13. Adaptive Independent Metropolis–Hastings by Fast Estimation of Mixtures of Normals.
- Author
- Giordani, Paolo
- Subjects
- NUMERICAL solutions for Markov processes, ALGORITHMS, SIMULATION methods & models, REGRESSION analysis, NUMERICAL solutions to equations, MATRICES (Mathematics), MATHEMATICAL statistics
- Abstract
Adaptive Metropolis–Hastings samplers use information obtained from previous draws to tune the proposal distribution automatically and repeatedly. Adaptation needs to be done carefully to ensure convergence to the correct target distribution, because the resulting chain is not Markovian. We construct an adaptive independent Metropolis–Hastings sampler that uses a mixture of normals as a proposal distribution. To take full advantage of the potential of adaptive sampling, our algorithm updates the mixture of normals frequently, starting early in the chain. The algorithm is built for speed and reliability, and its sampling performance is evaluated with real and simulated examples. Our article outlines conditions under which adaptive sampling is valid. An online supplement to the article gives a proof of convergence and Gauss code to implement the algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
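The adaptation loop in entry 13, refitting the proposal from past draws repeatedly and starting early, can be sketched with a single normal proposal standing in for the paper's mixture of normals. The Gamma(3, 1) target, the refit schedule, and the floor on the proposal scale are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Target: Gamma(shape=3, rate=1), log-density up to a constant
def logpi(x):
    return 2.0 * np.log(x) - x if x > 0 else -np.inf

# Independent MH whose N(mu, sd^2) proposal is refit from the chain's
# history every 1000 iterations
mu, sd = 1.0, 2.0
x = 1.0
draws = []
n_iter = 30000
for t in range(1, n_iter + 1):
    prop = rng.normal(mu, sd)
    logq = lambda z: -0.5 * ((z - mu) / sd) ** 2  # proposal log-density (up to a constant)
    # Independence-sampler ratio: pi(x') q(x) / (pi(x) q(x'))
    if np.log(rng.random()) < logpi(prop) + logq(x) - logpi(x) - logq(prop):
        x = prop
    draws.append(x)
    if t % 1000 == 0:  # adapt: refit the proposal to the draws so far
        mu, sd = np.mean(draws), max(np.std(draws), 0.1)

post = np.array(draws[10000:])
print(post.mean(), post.var())  # Gamma(3, 1) has mean 3 and variance 3
```

Freezing mu and sd within each iteration keeps the acceptance ratio consistent with the density the proposal was actually drawn from; the care needed beyond that (diminishing adaptation, tail conditions on the proposal) is exactly what the paper's convergence conditions address.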
14. Influential Observations in the Functional Measurement Error Model.
- Author
- Vidal, Ignacio, Iglesias, Pilar, and Galea, Manuel
- Subjects
- BAYESIAN analysis, MEASUREMENT errors, DISTRIBUTION (Probability theory), PERTURBATION theory, ALGORITHMS, RISK
- Abstract
In this work we propose Bayesian measures to quantify the influence of observations on the structural parameters of the simple measurement error model (MEM). Different influence measures, like those based on q-divergence between posterior distributions and Bayes risk, are studied to evaluate the influence. A strategy based on the perturbation function and MCMC samples is used to compute these measures. The samples from the posterior distributions are obtained by using the Metropolis-Hastings algorithm and assuming specific proper prior distributions. The results are illustrated with an application to a real example modeled with MEM in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library