354 results for "coverage probability"
Search Results
2. Estimation and selection in linear mixed models with missing data under compound symmetric structure
- Author
-
Yi-Ching Lee and Junfeng Shang
- Subjects
Statistics and Probability, Estimation, Model selection, Linear model, Coverage probability, Statistics, Probability and Uncertainty, Missing data, Generalized linear mixed model, Mathematics
- Abstract
It is quite appealing to extend existing theories in classical linear models to correlated responses where linear mixed-effects models are utilized and the dependency in the data is modeled by random effects. In the mixed modeling framework, missing values occur naturally due to dropouts or non-responses, which is frequently encountered when dealing with real data. Motivated by such problems, we aim to investigate the estimation and model selection performance in linear mixed models when missing data are present. Inspired by the property of the indicator function for missingness and its relation to missing rates, we propose an approach that records missingness in an indicator-based matrix and derive the likelihood-based estimators for all parameters involved in the linear mixed-effects models. Based on the proposed method for estimation, we explore the relationship between estimation and selection behavior over missing rates. Simulations and a real data application are conducted for illustrating the effectiveness of the proposed method in selecting the most appropriate model and in estimating parameters.
- Published
- 2021
3. Confidence intervals for means and variances of nonnormal distributions
- Author
-
José Dias Curto
- Subjects
Statistics and Probability, Population mean, Modeling and Simulation, Statistics, Monte Carlo method, Coverage probability, Variance, Geometric mean, Confidence interval, Mathematics
- Abstract
In this article, we propose new confidence intervals for the population mean and variance, the ratio of two population variances, and the difference in the arithmetic averages of two populations wi...
- Published
- 2021
4. A new standardized mortality ratio method for hospital quality evaluation
- Author
-
Jing Chen, Xin Lai, Liu Liu, and Qing Peng
- Subjects
Statistics and Probability, Standardized mortality ratio, Emergency medicine, Hospital quality, Coverage probability, Logistic regression, Medical care, Mathematics
- Abstract
Evaluation of hospital quality is of great significance for the promotion of the development of medical care. Hospital standardized mortality ratio (HSMR) is the ratio of hospital observed mortalit...
- Published
- 2021
5. Confidence bands for exponential distribution functions under progressive type-II censoring
- Author
-
Fabian Mies and Stefan Bedbur
- Subjects
Statistics and Probability, Exponential distribution, Coverage probability, Mathematics - Statistics Theory (math.ST), Statistics - Methodology (stat.ME), Location-scale family, Statistics, Applied Mathematics, Order statistic, Distribution function, Modeling and Simulation, Censoring, Statistics, Probability and Uncertainty, Scale parameter, Mathematics
- Abstract
Based on a progressively type-II censored sample from the exponential distribution with unknown location and scale parameter, confidence bands are proposed for the underlying distribution function by using confidence regions for the parameters and Kolmogorov-Smirnov type statistics. Simple explicit representations for the boundaries and for the coverage probabilities of the confidence bands are analytically derived, and the performance of the bands is compared in terms of band width and area by means of a data example. As a by-product, a novel confidence region for the location-scale parameter is obtained. Extensions of the results to related models for ordered data, such as sequential order statistics, as well as to other underlying location-scale families of distributions are discussed.
- Published
- 2021
6. Comparison of some interval estimation methods for the parameters of the gamma distribution
- Author
-
Edilberto Nájera and Addy Bolívar-Cimé
- Subjects
Statistics and Probability, Interval estimation, Coverage probability, Estimator, Bayesian inference, Confidence interval, Modeling and Simulation, Statistics, Fiducial inference, Gamma distribution, Mathematics
- Abstract
Several methods of finding interval estimators of the parameters of the gamma distribution are considered in the literature. In this work we compare the following methods: Wald confidence intervals...
- Published
- 2021
7. Confidence intervals for the proportion of conformance
- Author
-
Hsiuying Wang and Chung Han Lee
- Subjects
Statistics and Probability, Applied Mathematics, Coverage probability, Confidence interval, Binomial distribution, Modeling and Simulation, Statistics, Quality, Statistics, Probability and Uncertainty, Mathematics
- Abstract
The proportion of conformance is defined as the proportion of products with quality characteristic inside the specification limits. The construction of confidence interval for the proportion of con...
- Published
- 2021
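Since the abstract above is truncated, here is a generic score-type interval for a binomial proportion, which is the usual model for a proportion of conformance. This is a minimal sketch of a standard Wilson interval, not necessarily the interval the paper constructs, and the sample counts (188 of 200 conforming items) are made up.

```python
from statistics import NormalDist
import math

def wilson_interval(x, n, alpha=0.05):
    """Wilson score confidence interval for a binomial proportion x/n."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_hat = x / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical sample: 188 of 200 inspected items fall inside the specification limits.
print(wilson_interval(188, 200))
```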
8. Confidence Intervals for a Population Size Based on Capture-Recapture Data
- Author
-
K. Krishnamoorthy, Bao-Anh Dang, and Shanshan Lv
- Subjects
Mark and recapture, Applied Mathematics, Population size, Statistics, Population, Score method, Coverage probability, Target population, General Business, Management and Accounting, Confidence interval, Mathematics
- Abstract
Capture-recapture is a popular sampling method to estimate the total number of individuals in a population. This method is also used to estimate the size of a target population based on several inc...
- Published
- 2020
9. Fiducial confidence intervals for proportions in finite populations: One- and two-sample problems
- Author
-
K. Krishnamoorthy and Shanshan Lv
- Subjects
Statistics and Probability, Population, Coverage probability, Odds ratio, Confidence interval, Relative risk, Statistics, Two sample, Fiducial inference, Mathematics
- Abstract
The problems of constructing confidence intervals (CIs) for the proportions and functions of proportions in finite populations are considered. For estimating the proportion in a finite population, ...
- Published
- 2020
10. The information domain confidence intervals in univariate linear calibration
- Author
-
Guimei Zhao and Xingzhong Xu
- Subjects
Statistics and Probability, Calibration (statistics), Coverage probability, Univariate, Confidence interval, Information domain, Modeling and Simulation, Statistics, Simple linear model, Mathematics
- Abstract
We consider the confidence interval for the univariate linear calibration, where a response variable is related to an explanatory variable by a simple linear model, and the observations of the resp...
- Published
- 2020
11. Tests and Confidence Intervals for the Mean of a Zero-Inflated Poisson Distribution
- Author
-
Dustin Waguespack, Meesook Lee, and K. Krishnamoorthy
- Subjects
Applied Mathematics, Coverage probability, Poisson distribution, General Business, Management and Accounting, Confidence interval, Statistics, Zero-inflated model, Count data, Mathematics
- Abstract
The zero-inflated Poisson (ZIP) model is often postulated for count data that include excessive zeros. This ZIP distribution can be regarded as the mixture of two distributions, one that degenerate...
- Published
- 2020
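The ZIP mean is (1 − π)λ, so a quick way to see the coverage-probability issues such papers address is to simulate a naive Wald interval for that mean. The sketch below is an assumed baseline check, not the tests or intervals proposed in the paper; the parameter values π = 0.3, λ = 2, n = 50 are illustrative.

```python
import numpy as np

def zip_sample(rng, n, pi, lam):
    """Zero-inflated Poisson draws: structural zero with probability pi, else Poisson(lam)."""
    counts = rng.poisson(lam, size=n)
    counts[rng.random(n) < pi] = 0
    return counts

def wald_coverage(pi=0.3, lam=2.0, n=50, alpha=0.05, reps=20000, seed=1):
    """Monte Carlo coverage of the naive Wald interval for the ZIP mean (1 - pi) * lam."""
    rng = np.random.default_rng(seed)
    true_mean = (1 - pi) * lam
    z = 1.959963984540054  # 97.5% standard-normal quantile
    hits = 0
    for _ in range(reps):
        x = zip_sample(rng, n, pi, lam)
        half = z * x.std(ddof=1) / np.sqrt(n)
        hits += (x.mean() - half) <= true_mean <= (x.mean() + half)
    return hits / reps

print(wald_coverage())  # compare against the nominal 0.95
```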
12. Confidence-credible intervals
- Author
-
Ivair R. Silva and Dionatan W. R. Oliveira
- Subjects
Statistics and Probability, Data set, Frequentist inference, Bayesian probability, Statistics, Interval estimation, Coverage probability, Mathematics
- Abstract
Frequentist and Bayesian approaches for interval estimation usually produce conflicting results if applied to analyze the same data set. Paradoxically, there is no unanimity in the literature on wh...
- Published
- 2020
13. A new approach to precise interval estimation for the parameters of the hypergeometric distribution
- Author
-
Mark Schilling and Alyssa Stanley
- Subjects
Statistics and Probability, Population, Interval estimation, Coverage probability, Hypergeometric distribution, Applied mathematics, Mathematics
- Abstract
We study interval estimation for both parameters of the hypergeometric distribution: (i) the number of successes in a finite population and (ii) the size of the population. In contrast to tradition...
- Published
- 2020
14. Reduce the computation in jackknife empirical likelihood for comparing two correlated Gini indices
- Author
-
Yichuan Zhao and Kangni Alemdjrodo
- Subjects
Statistics and Probability, Inequality, Coverage probability, U-statistic, Confidence interval, Empirical likelihood, Statistics, Statistics, Probability and Uncertainty, Jackknife resampling, Mathematics
- Abstract
The Gini index has been widely used as a measure of income (or wealth) inequality in social sciences. To construct a confidence interval for the difference of two Gini indices from the pair...
- Published
- 2019
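As background for this entry, the sample Gini index and a plain jackknife standard error can be computed as below. This is only an assumed illustration of the two ingredients named in the keywords (Gini index, jackknife resampling), not the paper's jackknife empirical likelihood construction, and the simulated incomes are made up.

```python
import numpy as np

def gini(x):
    """Sample Gini index via the sorted-data formula sum_i (2i - n - 1) x_(i) / (n * sum x)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

def jackknife_se(x, stat=gini):
    """Leave-one-out jackknife standard error of a statistic."""
    x = np.asarray(x, float)
    n = x.size
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

incomes = np.random.default_rng(3).lognormal(mean=10.0, sigma=0.8, size=200)  # made-up incomes
print(gini(incomes), jackknife_se(incomes))
```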
15. Assessing the impact of the economic crises in 1997 and 2008 on suicides in Hong Kong, Taiwan and South Korea using a strata-bootstrap algorithm
- Author
-
Paul S. F. Yip and Mehdi Soleymani
- Subjects
Statistics and Probability, Financial economics, Coverage probability, Bootstrap algorithm, Financial crisis, Economics, Statistics, Probability and Uncertainty, Bootstrap confidence interval
- Abstract
The Asian economic crises of 1997 and the 2008 Global Financial Crisis (GFC) had far-reaching impacts on Asian and other global economies. Turmoil in the banking and finance sectors led to downturns in stock markets, resulting in bankruptcies, house repossessions and high unemployment. These crises have been shown to be correlated with a deterioration in mental health and an increase in suicides, and it is important to understand the implications of these impacts and how such recessions affect the health of affected populations. With the benefit of hindsight, did lessons learned from the negative effects of the 1997 Asian economic recession impact the aftermath of the 2008 GFC in Asian countries? Utilising a framework based on a simple strata-bootstrap algorithm using daily data – where available – we investigate the trend in suicide rates over time in three different populations (Hong Kong, Taiwan and South Korea), and examine whether there were any changes in the pattern of suicide rates in each country subsequent to both the 1997 Asian crisis and the 2008 GFC. We find that each country responded differently to each of the crises and the suicide rates for certain age-gender specific groups in each country were more affected.
- Published
- 2019
16. Interval estimators for ratios of independent quantiles and interquantile ranges
- Author
-
Luke A. Prendergast, Maxwell Cairns, and Chandima N. P. G. Arachchige
- Subjects
Statistics and Probability, Coverage probability, Estimator, Mathematics - Statistics Theory (math.ST), Delta method, Sample size determination, Modeling and Simulation, Statistics, Quantile, Mathematics
- Abstract
Recent research has shown that interval estimators with good coverage properties are achievable for some functions of quantiles, even when sample sizes are not large. Motivated by this, we consider interval estimators for the ratios of independent quantiles and interquantile ranges that will be useful when comparing location and scale for two samples. Simulations show that the intervals have excellent coverage properties for a wide range of distributions, including those that are heavily skewed. Examples are also considered that highlight the usefulness of using these approaches to compare location and scale.
- Published
- 2019
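A simple baseline for the problem described above is a percentile-bootstrap interval for the ratio of two sample medians. The sketch assumes lognormal example data and is not one of the interval estimators studied in the paper, which are likely more refined.

```python
import numpy as np

def ratio_of_medians_ci(x, y, alpha=0.05, B=5000, seed=0):
    """Percentile-bootstrap CI for median(x) / median(y), x and y independent samples."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    ratios = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        ratios[b] = np.median(xb) / np.median(yb)
    return tuple(np.quantile(ratios, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=60)  # skewed sample 1
y = rng.lognormal(mean=0.3, sigma=1.0, size=60)  # skewed sample 2
print(ratio_of_medians_ci(x, y))
```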
17. Sensitivity analysis for the generalized shared-parameter model framework
- Author
-
Freedom Gumedze and Abdul-Karim Iddrisu
- Subjects
Pharmacology, Statistics and Probability, Mean square, Models, Statistical, Time Factors, Coverage probability, Pericarditis, Tuberculous, Missing data, CD4 Lymphocyte Count, Treatment Outcome, Research Design, Robustness, Data Interpretation, Statistical, Statistics, Humans, Multicenter Studies as Topic, Pharmacology (medical), Longitudinal Studies, Sensitivity analyses, Glucocorticoids, Randomized Controlled Trials as Topic, Count data, Mathematics
- Abstract
In this paper, we assess the effect of tuberculosis pericarditis treatment (prednisolone) on CD4 count changes over time and draw inferences in the presence of missing data. We accounted for the missing data and performed sensitivity analyses to assess robustness of inferences, from a model that assumes that the data are missing at random, to models that assume that the data are not missing at random. Our sensitivity approaches are within the shared-parameter model framework. We applied the approach of Creemers and colleagues to the CD4 count data and performed simulation studies to evaluate the performance of this approach. We also assessed the influence of potentially influential subjects on parameter estimates via the global influence approach. Our results revealed that inferences from the missing-at-random analysis model are robust to the not-missing-at-random models, and that influential subjects did not overturn the study conclusions about the prednisolone effect and the missing data mechanism. Prednisolone was found to have no significant effect on CD4 count changes over time and also did not interact with anti-retroviral therapy. The simulation studies produced unbiased estimates of the prednisolone effect with lower mean square errors and coverage probabilities approximately equal to the nominal coverage probability.
- Published
- 2019
18. Estimation of parameters and reliability characteristics for a generalized Rayleigh distribution under progressive type-II censored sample
- Author
-
Suchandan Kayal and Kousik Maiti
- Subjects
Statistics and Probability, Mean squared error, Rayleigh distribution, Coverage probability, Confidence interval, Delta method, Bayes' theorem, Modeling and Simulation, Statistics, Reliability (statistics), Mathematics
- Abstract
In this article, we obtain maximum likelihood and Bayes estimates of the parameters, reliability and hazard functions for generalized Rayleigh distribution when progressive type-II censored sample ...
- Published
- 2019
19. Notes on Interval Estimation Under the AB/BA Design in Multicenter Trials With Binary Responses
- Author
-
Chii-Dean Lin and Kung-Jong Lui
- Subjects
Statistics and Probability, Conditional likelihood, Population, Interval estimation, Coverage probability, Pharmaceutical Science, Binary responses, Random effects model, Multicenter trial, Statistics, Generalizability, Mathematics
- Abstract
A multicenter trial is often applied to facilitate recruiting patients into a trial and enhance the generalizability of results to a population. Under a distribution-free random effects log...
- Published
- 2019
20. Statistical inference problems in sequential parallel comparison design
- Author
-
H. M. James Hung, Yifan Cui, and Semhar Ogbagaber
- Subjects
Statistics and Probability, Interval estimation, Coverage probability, Random Allocation, Statistics, Statistical inference, Humans, Computer Simulation, Pharmacology (medical), Truncation (statistics), Probability, Randomized Controlled Trials as Topic, Pharmacology, Models, Statistical, Mental Disorders, Placebo Effect, Treatment Outcome, Research Design, Sample size determination, Sample Size, Type I and type II errors
- Abstract
The sequential parallel comparison design has recently been considered to address the problems of high placebo response and large required sample sizes in psychiatric clinical trials. One feature of this design is that a difference between the placebo group and the drug group may also arise in the variance-covariance structure of the clinical outcome. Given heterogeneity in the second moment, the treatment effect estimation at the second stage can be biased for the entire randomized patient population that includes patient responders. Our work examines how the coverage probability of the interval estimate of the treatment effect behaves under an unstructured variance-covariance matrix. The interaction between the truncation after the first stage and the heterogeneity of the second moment causes a substantial coverage probability problem. The type I error probability may not be controlled under the weak null due to this bias. This bias can also cause spurious power evaluation under an alternative hypothesis. The coverage probability of the ordinary least squares statistic is shown in different scenarios.
- Published
- 2019
21. Quantile-based reliability comparison of products: Applied to Log-normal distribution
- Author
-
Ahad Malekzadeh and Kamyar Sabri-Laghaie
- Subjects
Statistics and Probability, Coverage probability, Confidence interval, Modeling and Simulation, Log-normal distribution, Statistics, Reliability (statistics), Quantile, Mathematics
- Abstract
In this paper, reliability comparison indices are developed for log-normally distributed populations. In this regard, three indices are proposed for reliability comparison of products respectively ...
- Published
- 2019
22. Yield-based process capability indices for nonnormal continuous data
- Author
-
Piao Chen, Zhi-Sheng Ye, and Bing Xing Wang
- Subjects
Mathematical optimization, Percentile, Strategy and Management, Process capability, Kernel density estimation, Coverage probability, Nonparametric statistics, Estimator, Management Science and Operations Research, Industrial and Manufacturing Engineering, Normal distribution, Safety, Risk, Reliability and Quality, Parametric statistics
- Abstract
Process capability indices (PCIs) are widely used to assess whether an in-control process meets manufacturing specifications. In most applications of classical PCIs, the process characteristic is assumed normally distributed. However, the normal distribution has been found inappropriate in various applications. In the literature, the percentile-based PCIs are widely used to deal with the nonnormal process. One problem associated with the percentile-based PCIs is that they do not provide a quantitative interpretation to the process capability. In this study, new PCIs that have a consistent quantification to the process capability for both normal and nonnormal processes are proposed. The proposed PCIs are generalizations of the classical normal PCIs in the sense that they are the same as the classical PCIs when the process characteristic follows a normal distribution, and they offer the same interpretation to the process capability as the classical PCIs when the process characteristic is nonnormal. We then discuss nonparametric and parametric estimation of the proposed PCIs. The nonparametric estimator is based on the kernel density estimation and confidence limits are obtained by the nonparametric bootstrap, while the parametric estimator is based on the maximum likelihood estimation and confidence limits are constructed by the method of generalized pivots. The proposed methodologies are demonstrated using a real example from a manufacturing factory.
- Published
- 2019
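One concrete piece of the nonparametric route mentioned above is estimating the process yield P(LSL < X < USL) with a kernel density estimate. A minimal sketch follows, with assumed specification limits and simulated gamma-distributed data; the bootstrap confidence limits and the proposed yield-based PCIs themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_yield(x, lsl, usl):
    """Kernel-density estimate of the yield P(LSL < X < USL)."""
    return gaussian_kde(x).integrate_box_1d(lsl, usl)

rng = np.random.default_rng(7)
x = rng.gamma(shape=4.0, scale=1.0, size=200)  # a skewed, nonnormal quality characteristic
print(kde_yield(x, lsl=1.0, usl=9.0))
```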
23. Confidence intervals for the closed population size under inverse sampling without replacement
- Author
-
Mohammad Mohammadi
- Subjects
Statistics and Probability, Interval estimation, Coverage probability, Estimator, Sampling (statistics), Simple random sample, Confidence interval, Sampling distribution, Bias of an estimator, Statistics, Mathematics
- Abstract
Inverse sampling is an appropriate design for the second phase of capture-recapture experiments which provides an exactly unbiased estimator of the population size. However, the sampling distribution of the resulting estimator tends to be highly right skewed for small recapture samples, so, the traditional Wald-type confidence intervals appear to be inappropriate. The objective of this paper is to study the performance of interval estimators for the population size under inverse recapture sampling without replacement. To this aim, we consider the Wald-type, the logarithmic transformation-based, the Wilson score, the likelihood ratio and the exact methods. Also, we propose some bootstrap confidence intervals for the population size, including the with-replacement bootstrap (BWR), the without replacement bootstrap (BWO), and the Rao–Wu’s rescaling method. A Monte Carlo simulation is employed to evaluate the performance of suggested methods in terms of the coverage probability, error rates and standa...
- Published
- 2019
24. Semi-parametric Bayesian regression for subgroup analysis in clinical trials
- Author
-
Ram C. Tiwari and Margaret Gamalo-Siebers
- Subjects
Statistics and Probability, Cystic Fibrosis, Coverage probability, Cystic Fibrosis Transmembrane Conductance Regulator, Subgroup analysis, Statistics, Prior probability, Credible interval, Humans, Computer Simulation, Pharmacology (medical), Pharmacology, Clinical Trials as Topic, Models, Statistical, Reproducibility of Results, Bayes Theorem, Regression analysis, Dirichlet process, Treatment Outcome, Research Design, Data Interpretation, Statistical, Sample Size, Mutation, Linear Models, Bayesian linear regression
- Abstract
Determining whether there are differential treatment effects in subgroups of trial participants remains an important topic in clinical trials as precision medicine becomes ever more relevant. Any assessment of differential treatment effect is predicated on being able to estimate the treatment response accurately while satisfying constraints of balancing the risk of overlooking an important subgroup with the potential to make a decision based on a false discovery. While regression models, such as marginal interaction model, have been widely used to improve accuracy of subgroup parameter estimates by leveraging the relationship between treatment and covariate, there is still a possibility that it can lead to excessively conservative or anti-conservative results. Conceivably, this can be due to the use of the normal distribution as a default prior, which forces outlying subjects to have their means over-shrunk towards the population mean, and the data from such subjects may be excessively influential in estimation of both the overall mean response and the mean response for each subgroup, or a model mis-specification. To address this issue, we investigate the use of nonparametric Bayes, particularly Dirichlet process priors, to create semi-parametric models. These models represent uncertainty in the prior distribution for the overall response while accommodating heterogeneity among individual subgroups. They also account for the effect and variation due to the unaccounted terms. As a result, the models do not force estimates to excessively shrink but still retain the attractiveness of improved precision given by the narrower credible intervals. This is illustrated in extensive simulations investigating bias, mean squared error, coverage probability and credible interval widths. We applied the method on a simulated data based closely on the results of a cystic fibrosis Phase 2 trial.
- Published
- 2019
25. Prediction intervals for hypergeometric distributions
- Author
-
Shanshan Lv and K. Krishnamoorthy
- Subjects
Statistics and Probability, Coverage probability, Score method, Prediction interval, Sample (statistics), Hypergeometric distribution, Applied mathematics, Mathematics
- Abstract
The problem of constructing prediction intervals (PIs) for a future sample from a hypergeometric distribution is addressed. Simple closed-form approximate PIs based on the Wald approach, th...
- Published
- 2019
26. Notes on the two-stage procedure under an incomplete block two-period crossover design
- Author
-
Kung-Jong Lui
- Subjects
Statistics and Probability, Explicit formulae, Modeling and Simulation, Incomplete block, Statistics, Coverage probability, Estimator, Crossover study, Mathematics
- Abstract
Under an incomplete block crossover design with two periods, we derive the least-squares estimators for the period effect, treatment effects and carry-over effects in explicit formulae based on wit...
- Published
- 2019
27. Small sample inference for the common coefficient of variation
- Author
-
Mohammad Reza Kazemi and Ali Akbar Jafari
- Subjects
Statistics and Probability, Coefficient of variation, Coverage probability, Inference, Small sample, Confidence interval, Ratio method, Modeling and Simulation, Statistics, Mathematics
- Abstract
This paper utilizes the modified signed log-likelihood ratio method for the problem of inference about the common coefficient of variation in several independent normal populations. This method is ...
- Published
- 2018
28. Estimation of the smallest scale parameter of two-parameter exponential distributions
- Author
-
Panayiotis Bobotas
- Subjects
Statistics and Probability, Interval estimation, Coverage probability, Estimator, Exponential distribution, Applied mathematics, Entropy, Nuisance parameter, Scale parameter, Mathematics
- Abstract
Improved point and interval estimation of the smallest scale parameter of n independent populations following two-parameter exponential distributions are studied. The model is formulated in such a way that allows for treating the estimation of the smallest scale parameter as a problem of estimating an unrestricted scale parameter in the presence of a nuisance parameter. The classes of improved point and interval estimators are enriched with Stein-type, Brewster and Zidek-type, Maruyama-type and Strawderman-type improved estimators under both quadratic and entropy losses, whereas using as a criterion the coverage probability, with Stein-type, Brewster and Zidek-type, and Maruyama-type improved intervals. The sampling framework considered incorporates important life-testing schemes such as i.i.d. sampling, type-II censoring, progressive type-II censoring, adaptive progressive type-II censoring, and record values.
- Published
- 2018
29. Interval Estimation for the Correlation Coefficient
- Author
-
Xinjie Hu, Gengsheng Qin, and Aekyung Jung
- Subjects
Statistics and Probability, Correlation coefficient, General Mathematics, Fisher transformation, Interval estimation, Coverage probability, Pivotal quantity, Confidence interval, Empirical likelihood, Statistics, Statistics, Probability and Uncertainty, Random variable, Mathematics
- Abstract
The correlation coefficient (CC) is a standard measure of a possible linear association between two continuous random variables. The CC plays a significant role in many scientific disciplines. For ...
- Published
- 2018
30. Exact confidence limits for the probability of response in two-stage designs
- Author
-
Guogen Shan
- Subjects
Statistics and Probability, Binomial distribution, Coverage probability, Binary endpoints, Sample space, Confidence interval, Statistics, Point estimation, Statistics, Probability and Uncertainty, Mathematics
- Abstract
In addition to a point estimate for the probability of response in a two-stage design (e.g., Simon’s two-stage design for a Phase II clinical trial with binary endpoints), confidence limits should be reported; the usual confidence interval calculation does not guarantee the coverage probability in a two-stage setting. The existing exact approach to calculate one-sided limits is based on the overall number of responses to order the sample space. This approach could be conservative because many sample points have the same limits. We propose a new exact one-sided interval based on p-value for the sample space ordering. Exact intervals are computed by using binomial distributions directly, instead of a normal approximation. Both exact intervals preserve the nominal confidence level. The proposed exact interval based on the p-value generally performs better than the other exact interval with regard to expected length and simple average length of confidence intervals. Therefore, the new interval calculation based on p-value is recommended for use in practice.
- Published
- 2018
31. Estimation and prediction of Marshall–Olkin extended exponential distribution under progressively type-II censored data
- Author
-
Mazen Nassar, Raj Kamal Maurya, Yogesh Mani Tripathi, and Sanku Dey
- Subjects
Statistics and Probability, Bayes estimator, Exponential distribution, Applied Mathematics, Coverage probability, Estimator, Markov chain Monte Carlo, Bayes' theorem, Modeling and Simulation, Credible interval, Statistics, Probability and Uncertainty, Fisher information, Mathematics
- Abstract
In this paper, we consider the Marshall–Olkin extended exponential (MOEE) distribution, which is capable of modelling various shapes of failure rates and aging criteria. The purpose of this paper is threefold. First, we derive the maximum likelihood estimators of the unknown parameters and the observed Fisher information matrix from progressively type-II censored data. Next, the Bayes estimates are evaluated by applying Lindley’s approximation method and the Markov Chain Monte Carlo method under the squared error loss function. We have performed a simulation study in order to compare the proposed Bayes estimators with the maximum likelihood estimators. We also compute 95% asymptotic confidence intervals and symmetric credible intervals along with the coverage probability. Third, we consider one-sample and two-sample prediction problems based on the observed sample and provide appropriate predictive intervals under the classical as well as the Bayesian framework. Finally, we analyse a real data set to illustrate...
- Published
- 2018
32. Uncertainty quantification for monotone stochastic degradation models
- Author
-
Zhi-Sheng Ye and Piao Chen
- Subjects
Stochastic modelling, Strategy and Management, Interval estimation, Gamma process, Coverage probability, Management Science and Operations Research, Industrial and Manufacturing Engineering, Inverse Gaussian distribution, Sample size determination, Uncertainty quantification, Safety, Risk, Reliability and Quality
- Abstract
Degradation data are an important source of product reliability information. Two popular stochastic models for degradation data are the Gamma process and the inverse Gaussian (IG) process, both of which possess monotone degradation paths. Although these two models have been used in numerous applications, the existing interval estimation methods are either inaccurate given a moderate sample size of the degradation data or require a significant computation time when the size of the degradation data is large. To bridge this gap, this article develops a general framework of interval estimation for the Gamma and IG processes based on the method of generalized pivotal quantities. Extensive simulations are conducted to compare the proposed methods with existing methods under moderate and large sample sizes. Degradation data from capacitors are used to illustrate the proposed methods.
- Published
- 2018
33. Generalized confidence limits for the performance index of the exponentially distributed lifetime
- Author
-
Danush K. Wijekularathna and Sumith Gunasekera
- Subjects
Statistics and Probability, Exponential distribution, Statistics, Coverage probability, Performance index, Confidence interval, Mathematics
- Abstract
Under a two-parameter exponential distribution, this study constructs the generalized lower confidence limit of the lifetime performance index CL based on type-II right-censored data. The confidenc...
- Published
- 2018
34. Confidence intervals for the mean and a percentile based on zero-inflated lognormal data
- Author
-
K. Krishnamoorthy and Sazib Hasan
- Subjects
Statistics and Probability, Percentile, Applied Mathematics, Population, Coverage probability, Confidence interval, Modeling and Simulation, Statistics, Log-normal distribution, Statistics, Probability and Uncertainty, Mathematics, Quantile
- Abstract
The problems of estimating the mean and an upper percentile of a lognormal population with nonnegative values are considered. For estimating the mean of such a population based on data that include...
- Published
- 2018
35. Assessment of interrater and intermethod agreement in the kinesiology literature
- Author
-
Marilyn A. Looney
- Subjects
Identity line, Kinesiology, Intraclass correlation, Coverage probability, Physical Therapy, Sports Therapy and Rehabilitation, Bivariate analysis, Inter-rater reliability, Concordance correlation coefficient, Statistics, Raw score, Orthopedics and Sports Medicine, Mathematics
- Abstract
The purpose of this article was two-fold: (1) to provide an overview of the commonly reported and under-reported absolute agreement indices in the kinesiology literature for continuous data; and (2) to present examples of these indices for hypothetical data along with recommendations for future use. It is recommended that three types of information be reported as evidence for agreement because no one type is superior to another. Report one graphical display (bivariate plot of raw scores with identity line or Bland-Altman plot), one scaled index (intraclass correlation coefficient with agreement definition or Lin’s Concordance Correlation Coefficient), and at least one unscaled index (root mean squared deviation, total deviation index, coverage probability, or limits of agreement) when there are two methods/raters with no replication. The amount of information recommended exceeds what has commonly been reported in absolute agreement studies published in the kinesiology literature.
- Published
- 2017
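Two of the unscaled indices named in this abstract, the Bland-Altman limits of agreement and the coverage probability (the proportion of paired differences within a pre-chosen bound δ), are easy to compute. The sketch below uses made-up rater data and an assumed bound δ = 0.5.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD of the differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

def coverage_probability_index(a, b, delta):
    """Proportion of paired differences within the acceptable bound delta."""
    return (np.abs(np.asarray(a, float) - np.asarray(b, float)) <= delta).mean()

rater1 = np.array([10.2, 11.5, 9.8, 12.0, 10.9, 11.1])  # made-up ratings
rater2 = np.array([10.0, 11.9, 9.5, 12.4, 10.7, 11.6])
print(limits_of_agreement(rater1, rater2))
print(coverage_probability_index(rater1, rater2, delta=0.5))
```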
36. Inference on the Gamma Distribution
- Author
-
Bing Xing Wang and Fangtao Wu
- Subjects
Statistics and Probability, Applied Mathematics, Cumulative distribution function, Generalized gamma distribution, Coverage probability, Inference, Confidence interval, Modeling and Simulation, Statistics, Gamma distribution, Confidence distribution, Generalized integer gamma distribution, Mathematics
- Abstract
This study develops inferential procedures for a gamma distribution. Based on the Cornish–Fisher expansion and pivoting the cumulative distribution function, an approximate confidence interval for ...
- Published
- 2017
37. On the distribution-free confidence intervals and universal bounds for quantiles based on joint records
- Author
-
M. Noori Asl, Hossein Bevrani, Narayanaswamy Balakrishnan, William Volterman, and R. Arabi Belaghi
- Subjects
Statistics and Probability, Distribution free, Coverage probability, Confidence interval, Data set, Distribution function, Statistics, Econometrics, Random variable, Mathematics, Quantile
- Abstract
In this paper, we consider three distribution-free confidence intervals for quantiles given joint records from two independent sequences of continuous random variables with a common continuous distribution function. The coverage probabilities of these intervals are compared. We then compute the universal bounds of the expected widths of the proposed confidence intervals. These results naturally extend to any number of independent sequences instead of just two. Finally, the proposed confidence intervals are applied for a real data set to illustrate the practical usefulness of the procedures developed here.
- Published
- 2017
38. What Do Interpolated Nonparametric Confidence Intervals for Population Quantiles Guarantee?
- Author
-
Yimin Zhang and Jesse Frey
- Subjects
Statistics and Probability, General Mathematics, Coverage probability, Robust confidence intervals, Confidence interval, Statistics, False coverage rate, Credible interval, Econometrics, Tolerance interval, Statistics, Probability and Uncertainty, CDF-based nonparametric confidence interval, Mathematics, Quantile
- Abstract
The interval between two prespecified order statistics of a sample provides a distribution-free confidence interval for a population quantile. However, due to discreteness, only a small set of exact coverage probabilities is available. Interpolated confidence intervals are designed to expand the set of available coverage probabilities. However, we show here that the infimum of the coverage probability for an interpolated confidence interval is either the coverage probability for the inner interval or the coverage probability obtained by removing the more likely of the two extreme subintervals from the outer interval. Thus, without additional assumptions, interpolated intervals do not expand the set of available guaranteed coverage probabilities.
- Published
- 2017
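The discreteness discussed above comes from the fact that the exact coverage of the order-statistic interval (X(i), X(j)) for the p-quantile of a continuous distribution is a binomial tail sum. A short sketch follows; the values of n, i, j are illustrative assumptions.

```python
from math import comb

def order_stat_coverage(n, i, j, p):
    """Exact coverage of (X_(i), X_(j)) as a CI for the p-quantile of a continuous
    distribution: sum_{k=i}^{j-1} C(n, k) p^k (1 - p)^(n - k), with 1-based i < j."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(i, j))

# For n = 25 observations and the median (p = 0.5), the interval (X_(8), X_(18)):
print(order_stat_coverage(25, 8, 18, 0.5))  # about 0.957; only a discrete set of levels is attainable
```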
39. Constructing Tolerance Intervals for the Number of Defectives Using Both High- and Low-Resolution Data
- Author
-
Hsiuying Wang and Fugee Tsung
- Subjects
Strategy and Management, Low resolution, Coverage probability, Management Science and Operations Research, Sensor fusion, Data type, Industrial and Manufacturing Engineering, Confidence interval, Binomial distribution, Statistics, Tolerance interval, Safety, Risk, Reliability and Quality
- Abstract
Defect inspection is important in many industries, such as in the manufacturing and pharmaceutical industries. Existing methods usually use either low-resolution data, which are obtained from less precise measurements, or high-resolution data, which are obtained from more precise measurements, to estimate the number of defectives in a given amount of goods produced. In this study, a novel approach is proposed that combines the two types of data to construct tolerance intervals with a desired average coverage probability. A simulation study shows that the derived tolerance intervals can lead to better performance than a tolerance interval that is constructed based on only the low-resolution data. In addition, a real-data example shows that the tolerance interval based on only the low-resolution data is more conservative than the tolerance intervals based on both high-resolution and low-resolution data.
- Published
- 2017
40. Confidence intervals for a two-parameter exponential distribution: One- and two-sample problems
- Author
-
Yanping Xia and K. Krishnamoorthy
- Subjects
Statistics and Probability, Exponential distribution, Coverage probability, Pivotal quantity, Confidence interval, Robust confidence intervals, Statistics, Confidence distribution, CDF-based nonparametric confidence interval, Quantile, Mathematics
- Abstract
The problems of interval estimating the mean, quantiles, and survival probability in a two-parameter exponential distribution are addressed. Distribution function of a pivotal quantity whose percen...
- Published
- 2017
41. The performance of model averaged tail area confidence intervals
- Author
-
Rheanna Mainzer, Paul Kabaila, and Alan H. Welsh
- Subjects
Statistics and Probability, Model selection, Statistics, Coverage probability, Confidence interval, Mathematics
- Abstract
We investigate the exact coverage and expected length properties of the model averaged tail area (MATA) confidence interval proposed by Turek and Fletcher, CSDA, 2012, in the context of two nested,...
- Published
- 2017
42. Parametric bootstrap inferences for unbalanced panel data models
- Author
-
Dengkui Wang and Liwen Xu
- Subjects
Statistics and Probability, Interval estimation, Coverage probability, Missing data, Modeling and Simulation, Linear regression, Statistics, Econometrics, Panel data, Parametric statistics, Mathematics, Statistical hypothesis testing
- Abstract
This article presents parametric bootstrap (PB) approaches for hypothesis testing and interval estimation for the regression coefficients of panel data regression models with incomplete panels. Som...
- Published
- 2017
43. On confidence intervals for the mean past lifetime function under random censorship
- Author
-
V. Zardasht
- Subjects
Statistics and Probability, Applied Mathematics, Coverage probability, Estimator, Asymptotic distribution, Confidence interval, Empirical likelihood, Modeling and Simulation, Statistics, Statistics, Probability and Uncertainty, Kaplan–Meier estimator, Central limit theorem, Mathematics
- Abstract
The mean past lifetime (MPL) function (also known as the expected inactivity time function) is of interest in many fields such as reliability theory and survival analysis, actuarial studies and forensic science. For estimation of the MPL function some procedures have been proposed in the literature. In this paper, we give a central limit theorem result for the estimator of the MPL function based on a right-censored random sample from an unknown distribution. The limiting distribution is used to construct a normal approximation-based confidence interval for the MPL. Furthermore, we use the empirical likelihood ratio procedure to obtain a confidence interval for the MPL function. These two intervals are compared with each other through a simulation study in terms of coverage probability. Finally, a couple of numerical examples illustrating the theory are also given.
- Published
- 2017
44. Exact inference for exponential distribution with multiply Type-I censored data
- Author
-
Xiang Jia and Bo Guo
- Subjects
Statistics and Probability, Exact statistics, Exponential distribution, Maximum likelihood, Coverage probability, Inference, Censoring (statistics), Confidence interval, Modeling and Simulation, Statistics, Likelihood function, Mathematics
- Abstract
In this paper, we focus on exact inference for exponential distribution under multiple Type-I censoring, which is a general form of Type-I censoring and represents that the units are terminated at different times. The maximum likelihood estimate of mean parameter is calculated. Further, the distribution of maximum likelihood estimate is derived and it yields an exact lower confidence limit for the mean parameter. Based on a simulation study, we conclude that the exact limit outperforms the bootstrap limit in terms of the coverage probability and average limit. Finally, a real dataset is analyzed for illustration.
- Published
- 2017
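For orientation, the MLE of the exponential mean under (multiply) Type-I censoring is the total time on test divided by the number of observed failures, and a crude parametric percentile-bootstrap lower limit can serve as the kind of benchmark the paper compares against. The sketch below assumes unit-specific termination times and made-up data, and does not reproduce the paper's exact limit.

```python
import numpy as np

def exp_mean_mle(obs_times, is_censored):
    """MLE of the exponential mean: total time on test / number of observed failures."""
    obs_times = np.asarray(obs_times, float)
    failures = np.sum(np.asarray(is_censored) == 0)
    return obs_times.sum() / failures  # assumes at least one failure was observed

def bootstrap_lower_limit(obs_times, is_censored, tau, alpha=0.05, B=2000, seed=0):
    """Crude parametric percentile-bootstrap lower confidence limit for the mean;
    tau holds the fixed Type-I termination time of every unit."""
    rng = np.random.default_rng(seed)
    theta_hat = exp_mean_mle(obs_times, is_censored)
    tau = np.asarray(tau, float)
    boot = np.empty(B)
    for b in range(B):
        t = rng.exponential(theta_hat, size=tau.size)
        cens = (t >= tau).astype(int)
        boot[b] = np.inf if cens.all() else exp_mean_mle(np.minimum(t, tau), cens)
    return np.quantile(boot, alpha)

# Made-up data: 10 units with unit-specific termination times.
tau = np.array([5.0, 5.0, 5.0, 8.0, 8.0, 8.0, 10.0, 10.0, 10.0, 10.0])
obs = np.array([1.2, 4.7, 5.0, 2.3, 8.0, 6.1, 0.9, 10.0, 3.3, 7.8])
cen = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = censored at tau, 0 = failure
print(exp_mean_mle(obs, cen), bootstrap_lower_limit(obs, cen, tau))
```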
45. A general algorithm for computing simultaneous prediction intervals for the (log)-location-scale family of distributions
- Author
-
Yili Hong, William Q. Meeker, Luis A. Escobar, and Yimeng Xie
- Subjects
Statistics and Probability, Applied Mathematics, Coverage probability, Prediction interval, Confidence interval, Location-scale family, Modeling and Simulation, Statistics, Log-normal distribution, Statistics, Probability and Uncertainty, Random variable, Mathematics, Weibull distribution
- Abstract
Making predictions of future realized values of random variables based on currently available data is a frequent task in statistical applications. In some applications, the interest is to obtain a two-sided simultaneous prediction interval (SPI) to contain at least k out of m future observations with a certain confidence level based on n previous observations from the same distribution. A closely related problem is to obtain a one-sided upper (or lower) simultaneous prediction bound (SPB) to exceed (or be exceeded) by at least k out of m future observations. In this paper, we provide a general approach for computing SPIs and SPBs based on data from a particular member of the (log)-location-scale family of distributions with complete or right censored data. The proposed simulation-based procedure can provide exact coverage probability for complete and Type II censored data. For Type I censored data, our simulation results show that our procedure provides satisfactory results in small samples. We us...
- Published
- 2017
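A Monte Carlo calibration in the spirit described above can be sketched for the simplest location-scale case (normal, complete data): choose the factor r so that x̄ ± r·s contains at least k of m future observations with probability 1 − α. The values n = 30, m = 10, k = 8 are assumptions, and this is not the authors' general algorithm for the (log)-location-scale family with censoring.

```python
import numpy as np

def spi_factor(n, m, k, alpha=0.05, reps=20000, seed=1):
    """Monte Carlo factor r so that xbar +/- r*s contains at least k of m future
    normal observations with probability 1 - alpha (pivotal, so mu=0, sigma=1 suffice)."""
    rng = np.random.default_rng(seed)
    needed = np.empty(reps)
    for rep in range(reps):
        x = rng.standard_normal(n)
        xbar, s = x.mean(), x.std(ddof=1)
        future = rng.standard_normal(m)
        t = np.sort(np.abs(future - xbar) / s)
        needed[rep] = t[k - 1]  # smallest factor capturing at least k of the m future points
    return np.quantile(needed, 1 - alpha)

r = spi_factor(n=30, m=10, k=8)
print(r)  # the SPI is then xbar +/- r*s computed from the observed sample
```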
46. Non-asymptotic confidence estimation of the parameters in stochastic regression models with Gaussian noises
- Author
-
Sergey E. Vorobeychikov and Victor Konev
- Subjects
Statistics and Probability, Estimation, Stochastic models, Monte Carlo simulation, Gaussian, Monte Carlo method, Coverage probability, Regression analysis, Threshold autoregression, Linear parameters, Autoregressive model, Modeling and Simulation, Statistics, Applied mathematics, Point estimation, Mathematics, Confidence region
- Abstract
The article considers the problem of estimating linear parameters in stochastic regression models with Gaussian noises, such as an autoregression of the first order, threshold autoregression, and some others. We propose the non-asymptotic technique for constructing a fixed-size confidence region for unknown parameters with any prescribed coverage probability. The construction makes use of some new properties of the sequential point estimates known in the literature. The results of Monte Carlo simulations for AR(1) and TAR(1) models are given. A new version of the sequential point estimate is proposed.
- Published
- 2017
47. Approximate Statistical Limits for a Gamma Distribution
- Author
-
Zhi-Sheng Ye and Piao Chen
- Subjects
Strategy and Management, Coverage probability, Management Science and Operations Research, Stress strength, Industrial and Manufacturing Engineering, Control limits, Statistics, Fiducial inference, Gamma distribution, Applied mathematics, Safety, Risk, Reliability and Quality, Mathematics
- Abstract
This study develops methods for constructing some important statistical limits of a gamma distribution. First, we construct upper prediction limits and tolerance limits for a gamma distribution. In addition, upper prediction limits for at least p of m ...
- Published
- 2017
48. Simultaneous inferences for ordered exponential location parameters under unbalanced data and heteroscedasticity of scale parameters
- Author
-
Vishal Maurya, Anju Goyal, and Amar Nath Gill
- Subjects
Statistics and Probability, Heteroscedasticity, Exponential distribution, Homogeneity (statistics), Coverage probability, Confidence interval, Modeling and Simulation, Statistics, Pairwise comparison, Unbalanced data, Ordered exponential, Mathematics
- Abstract
A test procedure for testing homogeneity of location parameters against simple ordered alternative is proposed for k(k ≥ 2) members of two parameter exponential distribution under unbalanced data and heteroscedasticity of the scale parameters. The relevant one-sided and two-sided simultaneous confidence intervals (SCIs) for all k(k − 1)/2 ordered pairwise differences of location parameters are also proposed. Simulation-based study revealed that the proposed procedure is better than the recently proposed procedure in terms of power, coverage probability, and average volume of SCIs. The implementation of proposed procedure is demonstrated through real life data.
- Published
- 2016
49. On confidence bounds for one-parameter exponential families
- Author
-
Saralees Nadarajah, Mashaallah Mashinchi, Abbas Parchami, M. Alizadeh, and Mahdi Doostparast
- Subjects
Statistics and Probability, Bayesian probability, Coverage probability, Prediction interval, Robust confidence intervals, Confidence interval, Exponential family, Modeling and Simulation, Statistics, Credible interval, CDF-based nonparametric confidence interval, Mathematics
- Abstract
There exist various methods for providing confidence intervals for unknown parameters of interest on the basis of a random sample. Generally, the bounds are derived from a system of non-linear equations. In this article, we present a general solution to obtain an unbiased confidence interval with confidence coefficient 1 − α in one-parameter exponential families. Also we discuss two Bayesian credible intervals, the highest posterior density (HPD) and relative surprise (RS) credible intervals. Standard criteria like the coverage length and coverage probability are used to assess the performance of the HPD and RS credible intervals. Simulation studies and real data applications are presented for illustrative purposes.
- Published
- 2016
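As a small illustration of the HPD credible interval mentioned above, the sketch below computes the shortest (HPD) interval for a Beta posterior numerically. The Beta(8, 4) posterior is an assumed example (e.g., a Beta(1, 1) prior updated with 7 successes and 3 failures), not a case from the paper, which treats general one-parameter exponential families.

```python
import numpy as np
from scipy.stats import beta

def beta_hpd(a, b, cred=0.95, grid=10000):
    """Shortest interval with posterior mass `cred` under a Beta(a, b) posterior;
    for a unimodal density this is the HPD interval."""
    lower_tail = np.linspace(0, 1 - cred, grid)   # candidate lower-tail probabilities
    lo = beta.ppf(lower_tail, a, b)
    hi = beta.ppf(lower_tail + cred, a, b)
    i = np.argmin(hi - lo)                        # pick the narrowest credible interval
    return lo[i], hi[i]

print(beta_hpd(8, 4))
```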
50. Bartlett-corrected two-sample adjusted empirical likelihood via resampling
- Author
-
Lei Wang
- Subjects
Statistics and Probability, Population, Coverage probability, Empirical likelihood, Sample size determination, Resampling, Statistics, Mathematics
- Abstract
To construct confidence regions for the difference of two population means, Liu and Yu (2010) proposed a two-sample adjusted empirical likelihood (AEL) with high-order precision. However, two issues have not been well addressed. The first one is that the AEL ratio function is bounded such that the size of the confidence regions may overly expand when the sample sizes are small and/or the dimension of data is large. The second issue is that its high-order precision relies on accurate estimation of the Bartlett factor, while accurately estimating the Bartlett factor is a serious challenge. In order to address these two problems simultaneously, we propose a two-sample modified AEL to ensure the boundedness of confidence regions and preserve the Bartlett correctability. A two-stage procedure is proposed for constructing accurate confidence regions via resampling. The finite-sample performance of the proposed method is illustrated by simulations and a real-data example.
- Published
- 2016