1,735 results for "62F03"
Search Results
2. Statistical Inference for Chi-square Statistics or F-Statistics Based on Multiple Imputation
- Author
- Wang, Binhuan, Fang, Yixin, and Jin, Man
- Subjects
- Statistics - Methodology, 62F03
- Abstract
Missing data is a common issue in medical, psychiatric, and social studies. In the literature, multiple imputation (MI) was proposed to impute datasets multiple times and to combine the analysis results from the imputed datasets for statistical inference using Rubin's rule. However, Rubin's rule only works for combined inference on statistical tests with point and variance estimates and is not applicable for combining general F-statistics or Chi-square statistics. In this manuscript, we provide a solution for combining F-test statistics from multiply imputed datasets when the F-statistic has an explicit fractional form (that is, both the numerator and denominator of the F-statistic are reported). We then extend the method to combine Chi-square statistics from multiply imputed datasets. Furthermore, we develop methods for two commonly applied F-tests, Welch's ANOVA and Type-III tests of fixed effects in mixed-effects models, which do not have the explicit fractional form. SAS macros are also developed to facilitate applications., Comment: 21 pages
- Published
- 2024
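The Rubin's rule that this abstract contrasts with the F-statistic case has a compact standard form for a scalar estimate; a minimal sketch (function name and interface are my own):

```python
from statistics import mean, variance

def rubin_pool(estimates, variances):
    """Pool a scalar estimate across m imputed datasets by Rubin's rules:
    pooled estimate = average; total variance = within + (1 + 1/m) * between."""
    m = len(estimates)
    q_bar = mean(estimates)    # pooled point estimate
    w_bar = mean(variances)    # within-imputation variance
    b = variance(estimates)    # between-imputation variance (sample variance)
    return q_bar, w_bar + (1 + 1 / m) * b
```

This is exactly the combination rule that breaks down when each imputed analysis reports only an F- or Chi-square statistic rather than a point estimate with its variance.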
3. Testing distribution for multiplicative distortion measurement errors.
- Author
- Cui, Leyi, Zhou, Yue, Zhang, Jun, and Yang, Yiping
- Subjects
- *MONTE Carlo method, *MEASUREMENT errors, *RANDOM variables, *ASYMPTOTIC distribution, *PROBABILITY theory
- Abstract
In this article, we study a goodness-of-fit test for a multiplicative distortion model under a uniformly distributed but unobserved random variable. The unobservable variable is distorted in a multiplicative fashion by an observed confounding variable. The proposed k-th power test statistic is based on logarithmically transformed observations and a correlation-coefficient-based estimator without distortion measurement errors. The proper choice of k is discussed through the empirical coverage probabilities. The asymptotic null distributions of the test statistics are obtained with known asymptotic variances. Next, we propose the conditional-mean-calibrated test statistic for when a variable is distorted in a multiplicative fashion. We conduct Monte Carlo simulation experiments to examine the performance of the proposed test statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2025
4. Estimating changepoints in extremal dependence, applied to aviation stock prices during COVID-19 pandemic.
- Author
- Hazra, Arnab and Bose, Shiladitya
- Subjects
- *COVID-19 pandemic, *LIKELIHOOD ratio tests, *STOCK prices, *CONDITIONAL probability, *RANDOM variables
- Abstract
The dependence in the tails of the joint distribution of two random variables is generally assessed using the χ-measure, the limiting conditional probability that one variable is extremely high given that the other variable is also extremely high. This work is motivated by the structural changes in the χ-measure between the daily rates of return (RoR) of two Indian airlines, IndiGo and SpiceJet, during the COVID-19 pandemic. We model the daily maximum and minimum RoR vectors (potentially transformed) using the bivariate Hüsler-Reiss (BHR) distribution. To estimate the changepoint in the χ-measure of the BHR distribution, we explore two changepoint detection procedures based on the Likelihood Ratio Test (LRT) and the Modified Information Criterion (MIC). We obtain critical values and power curves of the LRT and MIC test statistics for low through high values of the χ-measure. We also numerically explore the consistency of the changepoint estimators based on LRT and MIC. In our data application, for RoR maxima and minima, the most prominent changepoints detected by LRT and MIC are close to the announcements of the first phases of lockdown and unlock, respectively, which is realistic; thus, our study would be beneficial for portfolio optimization in future pandemic situations. [ABSTRACT FROM AUTHOR]
- Published
- 2025
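The χ-measure in this abstract has a simple empirical counterpart; a hedged sketch (a plain order-statistic estimator for illustration, not the paper's BHR-model-based procedure; names are my own):

```python
def empirical_chi(x, y, q=0.9):
    """Empirical tail-dependence estimate: P(X > u, Y > v) / P(Y > v),
    where u and v are the empirical q-quantiles of x and y."""
    n = len(x)
    k = int(q * n)
    u = sorted(x)[k]                       # threshold for x
    v = sorted(y)[k]                       # threshold for y
    joint = sum(1 for a, b in zip(x, y) if a > u and b > v)
    marginal = sum(1 for b in y if b > v)
    return joint / marginal if marginal else float("nan")
```

Values near 1 indicate strong tail dependence; near 0, asymptotic tail independence at that quantile level.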
5. Context-sensitive hypothesis-testing and exponential families.
- Author
- Kelbert, Mark and Suhov, Yuri
- Abstract
We propose a number of concepts and properties related to ‘weighted’ statistical inference, where the observed data are classified in accordance with a ‘value’ of a sample string. The motivation comes from the concepts of weighted information and weighted entropy that have proved useful in industrial/microeconomic and medical statistics. We focus on applications relevant to hypothesis testing and an analysis of exponential families. Several notions, bounds and asymptotics are established, which generalize their counterparts well known in standard statistical research. These include Fisher information, the Neyman–Pearson lemma, the Stein–Sanov theorem, Pinsker's and Bretagnolle–Huber bounds, the Cramér–Rao and van Trees inequalities, and the Bhattacharyya, Bregman, Burbea–Rao, Chernoff, Kullback–Leibler, Rényi and Tsallis divergences. [ABSTRACT FROM AUTHOR]
- Published
- 2025
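The weighted entropy mentioned as motivation in the abstract above has a compact discrete form; a minimal sketch (the standard definition with natural logarithm; function name is my own):

```python
from math import log

def weighted_entropy(probs, weights):
    """Weighted entropy -sum_x w(x) p(x) log p(x); with all weights equal
    to 1 it reduces to the ordinary Shannon entropy (in nats)."""
    return -sum(w * p * log(p) for p, w in zip(probs, weights) if p > 0)
```

The weights w(x) encode the 'value' attached to outcomes, so high-value outcomes contribute more to the uncertainty measure than in the unweighted case.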
6. The AdaptSgenoLasso, an extended version of the SgenoLasso, for gene mapping and for genomic prediction using the extremes.
- Author
- Rabier, Charles-Elie and Delmas, Céline
- Subjects
- *GENE mapping, *GAUSSIAN processes, *GENOMES, *LOCUS (Genetics), *GENETICISTS
- Abstract
We introduce here the AdaptSgenoLasso, a new penalized likelihood method for gene mapping and for genomic prediction, which is an extended version of the SgenoLasso. The AdaptSgenoLasso relies on the original concept of a selective genotyping that varies along the genome. The ‘classical’ selective genotyping on which the SgenoLasso is built consists in genotyping only extreme individuals, in order to increase the signal from genes. However, since the same amount of selection is applied at all genome locations, the signal is increased by the same proportional factor everywhere. With the AdaptSgenoLasso, we allow geneticists to impose more weight on some loci (i.e., locations) of interest, known to be responsible for the variation of the quantitative trait. The resulting signal is now specific to each locus. We propose here a deep theoretical study of the AdaptSgenoLasso, and we show on simulated data the superiority of this new approach over the SgenoLasso. [ABSTRACT FROM AUTHOR]
- Published
- 2025
7. Test for conditional Poissonity in integer-valued conditional autoregressive models.
- Author
- Kang, Jiwon and Song, Junmo
- Subjects
- *CHI-square distribution, *AUTOREGRESSIVE models, *ASYMPTOTIC distribution, *TIME series analysis, *GAUSSIAN distribution, *POISSON distribution
- Abstract
The Poisson distribution is a representative distribution for discrete data, much like the normal distribution is for continuous data. Many time series models for count data have been developed based on the Poisson distribution. However, studies examining the validity of the Poisson assumption in these time series models have been relatively scarce. This study addresses the problem of testing for conditional Poissonity in integer-valued conditional autoregressive models. For this purpose, we introduce a test statistic based on the information matrix of the likelihood function. Under regularity conditions, it is shown that the proposed test statistic has an asymptotic chi-square null distribution. Simulation results demonstrate the validity of the proposed test. Additionally, a real data analysis is provided for illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2025
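For intuition about what a Poissonity check examines, the classical index-of-dispersion diagnostic (not the paper's information-matrix test) exploits that a Poisson distribution has variance equal to its mean; a minimal sketch:

```python
from statistics import mean, variance

def dispersion_statistic(counts):
    """Index-of-dispersion diagnostic for the Poisson assumption:
    (n - 1) * s^2 / xbar, approximately chi-square with n - 1 df
    under an i.i.d. Poisson null."""
    n = len(counts)
    return (n - 1) * variance(counts) / mean(counts)
```

Values far above (n - 1) suggest over-dispersion relative to Poisson, e.g. pointing toward a negative binomial alternative.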
8. Imprecision issues of two conditional powers and six predictive powers when the sample size of the interim data is fixed.
- Author
- Zhang, Ying-Ying
- Subjects
- *SAMPLE size (Statistics), *TREATMENT effectiveness, *CLINICAL trials, *PROBABILITY theory
- Abstract
Imprecisions are often encountered for powers and predictive powers in clinical trials. The imprecision issues of two conditional powers (the classical conditional power (CCP) and the Bayesian conditional power (BCP)) and six predictive powers with interim data are investigated in this article. We begin by evaluating the limits of the probabilities of control superior (CS), treatment superior (TS), and equivocal (E) of the two conditional powers and the six predictive powers at point 0 when the sample size of the interim data is fixed. Moreover, we conduct extensive numerical experiments to exemplify the imprecision issues of the two conditional powers and the six predictive powers. First, we compute the probabilities of CS, TS, and E for the two conditional powers when the true treatment effect favors control, treatment, and equivocal, respectively. Second, we compute the probabilities of CS, TS, and E for the six predictive powers under the sceptical prior and the optimistic prior, respectively. We find that the two conditional powers and the six predictive powers encounter the imprecision issues as long as the parameter values are properly chosen. Finally, a real data example is given to illustrate the imprecision issues. [ABSTRACT FROM AUTHOR]
- Published
- 2025
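The classical conditional power (CCP) the abstract starts from has a well-known closed form on the Brownian-motion (information-time) scale; a sketch under that standard formulation (one-sided test; argument names are my own):

```python
from math import erf, sqrt

def conditional_power(z_interim, t, drift, z_crit=1.959963984540054):
    """Classical conditional power: given the interim z-value at information
    fraction t, the probability of final rejection if the remaining
    increments of the score process have the assumed drift."""
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal CDF
    s = z_interim * sqrt(t)                        # interim score statistic
    return phi((s + drift * (1 - t) - z_crit) / sqrt(1 - t))
```

Evaluating the drift at the null, the interim estimate, or a prior mean yields the different conditional/predictive power variants the article compares.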
9. Detecting weak changes in the mean of a class of nonlinear heteroscedastic models.
- Author
- Ngatchou-Wandji, Joseph and Ltaifa, Marwa
- Subjects
- *TIME series analysis
- Abstract
We study a likelihood ratio test for detecting multiple weak changes in the mean of a class of CHARN models. The locally asymptotically normal (LAN) structure of the family of likelihoods under study is established. It follows that the test is asymptotically optimal, and an explicit form of its asymptotic local power is given as a function of the candidate change locations and change magnitudes. Strategies for weak change-point detection and the estimation of their locations are described. The estimates are obtained as the time indices maximizing an estimate of the local power. A simulation study shows the good performance of our methods compared to some existing approaches. These methods are also applied to three sets of real data. [ABSTRACT FROM AUTHOR]
- Published
- 2025
10. Asymptotic false discovery control of the Benjamini-Hochberg procedure for pairwise comparisons.
- Author
- Liu, Weidong, Leung, Dennis, and Shao, Qi-Man
- Abstract
In a one-way analysis-of-variance (ANOVA) model, the number of pairwise comparisons can become large even with a moderate number of groups. Motivated by this, we consider a regime with a growing number of groups and prove that, when testing pairwise comparisons, the Benjamini-Hochberg (BH) procedure can asymptotically control false discoveries, despite the fact that the involved t-statistics do not exhibit the well-known positive dependence structure required for exact false discovery rate (FDR) control. Following Tukey's perspective that the difference between the means of any two groups cannot be exactly zero, our main result provides control over the directional false discovery rate and directional false discovery proportion. A key technical contribution of our work is demonstrating that the dependence among the t-statistics is sufficiently weak to establish the convergence result typically required for asymptotic FDR control. Our analysis does not rely on conventional assumptions such as normality, variance homogeneity, or a balanced design, thereby offering a theoretical foundation for applications in more general settings. [ABSTRACT FROM AUTHOR]
- Published
- 2025
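For reference, the BH step-up rule whose asymptotic behavior this abstract studies can be stated in a few lines (a generic sketch of the procedure, not the paper's directional variant):

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: find the largest k with p_(k) <= q*k/m and reject the
    hypotheses with the k smallest p-values; returns rejected indices."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank                        # step-up: keep the largest rank
    return sorted(order[:k])
```

With pairwise-comparison t-statistics, `pvals` would hold one p-value per pair of groups, so m grows quadratically in the number of groups.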
11. Rollout designs for lump-sum data.
- Author
- Xu, Qunzhi, Tian, Hongzhen, Sarkar, Ananda, and Mei, Yajun
- Subjects
- *FALSE positive error, *ERROR probability, *PRIVACY
- Abstract
This work studies rollout design problems with a focus on suitable choices of the rollout rate under the standard Type I and Type II error probability control framework. The main challenge of rollout design is that data are often observed in a lump-sum manner from a spatio-temporal point of view: (1) temporally, only the sum of the data in a given sliding time window can be observed; (2) spatially, there are two subgroups of the data at each time step, control and treatment, but one can only observe the total values instead of the individual values from each subgroup. We develop rollout tests for lump-sum data under both fixed-sample-size and sequential settings, subject to constraints on the Type I and Type II error probabilities. Numerical studies are conducted to validate our theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
12. Blinded sample size re-estimation in 2 × 2 crossover trial.
- Author
- Banerjee, Kaustav, Chattopadhyay, Gaurangadeb, and Banerjee, Tathagata
- Subjects
- *CROSSOVER trials, *SAMPLE size (Statistics)
- Abstract
Blinded sample size re-estimation is often considered in order to obtain a realistic estimate of the required sample size during the mid-course of a trial, without compromising the trial's integrity, so as to save a trial from being under- or over-powered. We consider blinded sample size re-estimation in the context of a 2 × 2 crossover trial following permuted block randomization. The underlying likelihood function suffers from identifiability problems. Blinded re-estimation procedures are prescribed under this setting, and it is shown that the associated risk of unblinding may not always be negligible. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. Asymptotic Independence of the Quadratic Form and Maximum of Independent Random Variables with Applications to High-Dimensional Tests.
- Author
- Chen, Da Chuan, Feng, Long, and Liang, De Cai
- Subjects
- *MONTE Carlo method, *ASYMPTOTIC distribution, *RANDOM variables, *INDEPENDENT variables
- Abstract
This paper establishes the asymptotic independence between the quadratic form and the maximum of a sequence of independent random variables. Based on this theoretical result, we find the asymptotic joint distribution of the quadratic form and the maximum, which can be applied to high-dimensional testing problems. By combining the sum-type test and the max-type test, we propose Fisher's combination tests for the one-sample and two-sample mean tests. Under this novel general framework, several strong assumptions in the existing literature have been relaxed. Monte Carlo simulations show that our proposed tests are strongly robust to both sparse and dense data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
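Fisher's combination of two independent p-values, the building block named in the abstract above, can be sketched directly: the statistic -2∑log p is chi-square with 4 df, whose survival function has the closed form exp(-x/2)(1 + x/2):

```python
from math import exp, log

def fisher_combine(p1, p2):
    """Combine two independent p-values via T = -2(log p1 + log p2),
    chi-square with 4 df under the null; the closed-form survival
    function exp(-T/2) * (1 + T/2) gives the combined p-value."""
    x = -2.0 * (log(p1) + log(p2))
    return exp(-x / 2) * (1 + x / 2)
```

Here the two p-values would come from the sum-type and max-type statistics, whose asymptotic independence is what licenses the product-of-independent-p-values step.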
14. Goodness-of-fit tests for the one-sided Lévy distribution based on quantile conditional moments.
- Author
- Pączek, Kewin, Jelito, Damian, Pitera, Marcin, and Wyłomańska, Agnieszka
- Subjects
- *ASYMPTOTIC distribution, *STATISTICS, *LITERATURE, *GOODNESS-of-fit tests
- Abstract
In this paper we introduce a novel statistical framework based on the first two quantile conditional moments that facilitates effective goodness-of-fit testing for one-sided Lévy distributions. The scale-ratio framework introduced in this paper extends our previous results in which we have shown how to extract unique distribution features using conditional variance ratio for the generic class of α-stable distributions. We show that the conditional moment-based goodness-of-fit statistics are a good alternative to other methods introduced in the literature tailored to the one-sided Lévy distributions. The usefulness of our approach is verified using an empirical test power study. For completeness, we also derive the asymptotic distributions of the test statistics and show how to apply our framework to real data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. Empirical likelihood based confidence regions for functional of copulas.
- Author
- Bouzebda, Salim and Keziou, Amor
- Subjects
- *MARGINAL distributions, *STATISTICAL bootstrapping, *CONFIDENCE regions (Mathematics), *DISTRIBUTION (Probability theory), *INFERENTIAL statistics
- Abstract
In the present paper, we are mainly concerned with statistical inference for functionals of nonparametric copula models satisfying linear constraints. The asymptotic properties of the obtained estimates and test statistics are given. Finally, a general notion of bootstrap for the proposed estimates and test statistics, constructed by exchangeably weighting the sample, is presented, which is of independent interest. These results are proved under some standard structural conditions on some classes of functions and some mild conditions on the model, without assuming anything about the marginal distribution functions except continuity. Our theoretical results and numerical examples by simulations demonstrate the merits of the proposed techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
16. On limiting behaviors of stepwise multiple testing procedures.
- Author
- Dey, Monitirtha
- Subjects
- ERROR rates, NULL hypothesis, STATISTICIANS, STATISTICS, SIMPLICITY
- Abstract
Stepwise multiple testing procedures have attracted statisticians for decades and are also quite popular with statistics users because of their technical simplicity. The Bonferroni procedure has been one of the earliest and most prominent testing rules for controlling the familywise error rate (FWER). A recent article established that the FWER for the Bonferroni method asymptotically (i.e., when the number of hypotheses becomes arbitrarily large) approaches zero under any positively equicorrelated multivariate normal framework. However, similar results for the limiting behaviors of the FWER of general stepwise procedures are nonexistent. The present work addresses this gap in a unified manner by elucidating that, under multivariate normal setups with some correlation structures, the probability of rejecting one or more null hypotheses approaches zero asymptotically for any step-down procedure. Consequently, the FWER and power of the step-down procedures also tend to zero asymptotically. We also establish similar limiting zero results on the FWER of other popular multiple testing rules, e.g., Hochberg's and Hommel's procedures. It turns out that, within our chosen asymptotic framework, the Benjamini–Hochberg method can hold the FWER at a strictly positive level asymptotically under equicorrelated normality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
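As a concrete instance of the step-down procedures covered by such results, Holm's rule can be sketched in a few lines (the standard algorithm; a generic illustration, not this paper's contribution):

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm's step-down procedure: sort the p-values and reject while
    p_(k) <= alpha / (m - k + 1); stop at the first failure.
    Returns the indices of rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = []
    for k, i in enumerate(order):           # k = 0, 1, ..., m - 1
        if pvals[i] <= alpha / (m - k):
            rejected.append(i)
        else:
            break                           # step-down: stop at first failure
    return sorted(rejected)
```

With m = 1 the threshold is alpha itself, and with all thresholds forced to alpha/m the rule reduces to Bonferroni, the baseline procedure discussed in the abstract.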
17. Specifications tests for count time series models with covariates.
- Author
- Hudecová, Šárka, Hušková, Marie, and Meintanis, Simos G.
- Abstract
We propose a goodness-of-fit test for a class of count time series models with covariates, which includes the Poisson autoregressive model with covariates (PARX) as a special case. The test criteria are derived from a specific characterization of the conditional probability generating function, and the test statistic is formulated as a weighted L2 norm of the corresponding sample counterpart. The asymptotic properties of the proposed test statistic are provided under the null hypothesis as well as under specific alternatives. A bootstrap version of the test is explored in a Monte Carlo study and illustrated on a real data set on road safety. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Modeling paired binary data by a new bivariate Bernoulli model with flexible beta kernel correlation.
- Author
- Li, Xun-Jian, Li, Shuang, Tian, Guo-Liang, and Shi, Jianhua
- Abstract
Paired binary data often appear in studies of subjects with two sites such as eyes, ears, lungs, kidneys, feet and so on. Three popular models [i.e., Rosner's R model (Biometrics 38:105-114, 1982), Dallal's model (Biometrics 44:253-257, 1988) and Donner's model (Biometrics 45:605-661, 1989)] were proposed to fit such twin data by considering the intra-person correlation. However, Rosner's R model can only fit twin data with an increasing correlation coefficient, Dallal's model may incur the problem of over-fitting, while Donner's model can only fit twin data with a constant correlation. This paper aims to propose a new bivariate Bernoulli model with flexible beta kernel correlation (denoted by Bernoulli 2 bk) for fitting paired binary data with a wide range of group-specific disease probabilities. The correlation coefficient of the Bernoulli 2 bk model can be increasing, decreasing, unimodal, or convex with respect to the disease probability of one eye. To obtain the maximum likelihood estimates (MLEs) of the parameters, we develop a series of minorization-maximization (MM) algorithms by constructing four surrogate functions with closed-form expressions at each iteration of the MM algorithms. Simulation studies are conducted, and two real datasets are analyzed to illustrate the proposed model and methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
19. A Stationary Proportional Hazard Class Process and its Applications.
- Author
- Kundu, Debasis
- Abstract
The motivation for this work came when we were trying to analyze gold price data from the Indian market and the exchange rate data between Indian Rupees and US Dollars. It is observed that in both cases there is a significant amount of time when X_n = X_{n+1}; hence these ties cannot be ignored. In this paper we introduce a very flexible discrete-time and continuous state space stationary stochastic process {X_n}, where X_n has a proportional hazard class of distributions and there is a positive probability that X_n = X_{n+1}. We assume a very flexible piecewise constant hazard function for the baseline distribution of the proportional hazard class. Various properties of the proposed class have been obtained, and various dependency properties have been established. Estimating the cut points of the piecewise constant hazard function is an important problem, and it is addressed here. The maximum likelihood estimators (MLEs) of the unknown parameters cannot be obtained in closed form, and we propose to use the profile likelihood method to compute the estimators. The gold price data set and the exchange rate data set have been analyzed, and the results are quite satisfactory. [ABSTRACT FROM AUTHOR]
- Published
- 2024
20. A bivariate load-sharing model.
- Author
- Kundu, Debasis
- Subjects
- *DIABETIC retinopathy, *SYSTEM failures, *EMPLOYEE motivation, *PEOPLE with diabetes, *DATA analysis
- Abstract
The motivation for this work came from a data set obtained from an experiment performed on diabetic patients with diabetic retinopathy disorder. The aim of this experiment is to test whether there is any significant difference between two different treatments which are being used for this disease. The two eyes can be considered as a two-component load-sharing system. In a two-component load-sharing system, after the failure of one component the surviving component has to shoulder extra load; hence, it is prone to failure at an earlier time than what is expected under the original model. It may also happen that the failure of one component releases extra resources to the survivor, thus delaying the failure. In most of the existing literature, it has been assumed that at the beginning the lifetime distributions of the two components are independently distributed, which may not be very reasonable in this case. In this paper, we introduce a new bivariate load-sharing model in which the independence assumption on the lifetime distributions of the two components at the beginning has been relaxed; in the present model, they may be dependent. Further, there is a positive probability that the two components fail simultaneously. If the two components do not fail simultaneously, it is assumed that the lifetime of the surviving component changes based on the tampered failure rate assumption. The proposed bivariate distribution has a singular component. Likelihood inference for the unknown parameters is provided. Simulation results and the analysis of the data set are presented to show the effectiveness of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
21. A comparative study on p value combination tests for unit roots in multiple time series.
- Author
- Costantini, Mauro and Lupi, Claudio
- Subjects
- *TIME series analysis, *TIME management, *COMPARATIVE studies
- Abstract
This note offers a Monte Carlo investigation of the performance of nine p value combination methods for unit root testing in multiple time series using a novel approach to simulation. Rather than on time series, simulations are based on random draws of Dickey-Fuller p values under the null and under local alternatives. This makes it possible to investigate the properties of the different combiners under ideal conditions in which small sample effects and possible model misspecification issues have no role. The results show that no combination approach delivers a uniformly best test in the presence of independent p values. However, the probit combination method and the sum method on average outperform the other combiners within the unit root setting. The performance of the combination tests is generally unsatisfactory in the presence of cross-correlated p values. [ABSTRACT FROM AUTHOR]
- Published
- 2024
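Of the combiners compared above, the probit (inverse-normal) method is particularly easy to state; a sketch using the standard Stouffer form, with small combined values favoring rejection as for left-tailed unit-root p-values (function name is my own):

```python
from math import sqrt
from statistics import NormalDist

def probit_combine(pvals):
    """Inverse-normal (probit) combination: average the standard normal
    quantiles of the p-values, rescale by sqrt(n), and map back through
    the normal CDF; small output = joint evidence against the null."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(p) for p in pvals) / sqrt(len(pvals))
    return nd.cdf(z)
```

Like the other combiners in the study, this assumes independent p-values; under cross-correlation the null distribution of the combined statistic is no longer correct, matching the poor performance the abstract reports.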
22. Tests for comparing the means of two independent Conway-Maxwell Poisson distributions.
- Author
- Vaidyanathan, V. S. and T., Traison
- Subjects
- *POISSON distribution, *LIKELIHOOD ratio tests, *GENERALIZATION
- Abstract
The problem of comparing the means of two independent Poisson distributions has been addressed in the literature. A generalization of the Poisson distribution that accommodates over- and under-dispersion in the data is the two-parameter Poisson distribution developed by Conway and Maxwell. This distribution has nice properties, and it includes the geometric, Poisson and binomial distributions as special cases. The present work considers the problem of comparing the means of two independent reparametrized Conway-Maxwell Poisson distributions, assuming the dispersion parameter to be known. The proposed test procedures make use of the conditional, exact, asymptotic and likelihood ratio test approaches. The test statistic under each approach is derived, and the respective powers and effect sizes are evaluated and compared through a simulation study. A numerical illustration of the applicability of the tests is provided through real-life data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
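For the plain Poisson special case mentioned in the abstract, the classical conditional test is short: given the total count, the split between the two samples is binomial. A minimal sketch (equal exposure assumed; two-sided p-value by summing outcomes no more probable than the observed one):

```python
from math import comb

def poisson_conditional_pvalue(x, y):
    """Exact conditional test of equal Poisson means (equal exposure):
    given n = x + y, x ~ Binomial(n, 1/2) under H0; the two-sided p-value
    sums the probabilities of all outcomes at most as likely as x."""
    n = x + y
    probs = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    return min(1.0, sum(p for p in probs if p <= probs[x] + 1e-12))
```

The Conway-Maxwell generalization replaces the binomial conditional law with one depending on the dispersion parameter, which is why the paper treats that parameter as known.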
23. Modelling and diagnostic tests for Poisson and negative-binomial count time series.
- Author
- Aleksandrov, Boris, Weiß, Christian H., Nik, Simon, Faymonville, Maxime, and Jentsch, Carsten
- Subjects
- *ASYMPTOTIC normality, *STATIONARY processes, *TIME series analysis, *NULL hypothesis, *DIAGNOSIS methods, *GENERALIZED method of moments
- Abstract
When modelling unbounded counts, their marginals are often assumed to follow either Poisson (Poi) or negative binomial (NB) distributions. To test such null hypotheses, we propose goodness-of-fit (GoF) tests based on statistics relying on certain moment properties. By contrast to most approaches proposed in the count-data literature so far, we do not restrict ourselves to specific low-order moments, but consider a flexible class of functions of generalized moments to construct model-diagnostic tests. These cover GoF-tests based on higher-order factorial moments, which are particularly suitable for the Poi- or NB-distribution where simple closed-form expressions for factorial moments of any order exist, but also GoF-tests relying on the respective Stein's identity for the Poi- or NB-distribution. In the time-dependent case, under mild mixing conditions, we derive the asymptotic theory for GoF tests based on higher-order factorial moments for a wide family of stationary processes having Poi- or NB-marginals, respectively. This family also includes a type of NB-autoregressive model, where we provide clarification of some confusion caused in the literature. Additionally, for the case of independent and identically distributed counts, we prove asymptotic normality results for GoF-tests relying on a Stein identity, and we briefly discuss how its statistic might be used to define an omnibus GoF-test. The performance of the tests is investigated with simulations for both asymptotic and bootstrap implementations, also considering various alternative scenarios for power analyses. A data example of daily counts of downloads of a TeX editor is used to illustrate the application of the proposed GoF-tests. [ABSTRACT FROM AUTHOR]
- Published
- 2024
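The factorial-moment idea above is easy to illustrate: for a Poisson(λ) variable, the r-th factorial moment E[X(X-1)···(X-r+1)] equals λ^r for every order r, so sample factorial moments can be checked against powers of the sample mean. A minimal sketch (the generic estimator, not the paper's test statistic):

```python
def factorial_moment(sample, r):
    """Sample r-th factorial moment: the average of x(x-1)...(x-r+1).
    Under a Poisson(lam) null this should be close to lam ** r."""
    def falling(x):
        out = 1
        for j in range(r):
            out *= x - j
        return out
    return sum(falling(x) for x in sample) / len(sample)
```

A GoF statistic along the abstract's lines would compare such sample moments (for several orders r) with their model-implied values and studentize the differences.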
24. Directed likelihood statistic to test the concentration parameter in von Mises regressions.
- Author
- Lemonte, Artur J.
- Subjects
- *LIKELIHOOD ratio tests, *GAUSSIAN distribution, *REGRESSION analysis, *HYPOTHESIS
- Abstract
We derive explicit expressions for the directed likelihood statistic and its modified version for testing several hypotheses on the concentration parameter in the von Mises regression model. We verify that the standard normal distribution gives a poor approximation to the true distribution of the usual directed likelihood statistic to test the concentration parameter, while its modified version leads to very accurate inference even for very small samples. An empirical application is considered for illustrative purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
25. Bootstrap inference for unbalanced one-way classification model with skew-normal random effects.
- Author
- Ye, Rendao, Du, Weixiao, and Lu, Yiting
- Subjects
- *ANALYSIS of variance, *MONTE Carlo method, *RANDOM effects model, *MATRIX decomposition, *CARBON fibers, *FIXED effects model
- Abstract
In this article, the one-sided hypothesis testing and interval estimation problems for fixed effect and variance component functions are considered in the unbalanced one-way classification model with skew-normal random effects. First, the Bootstrap approach is used to establish test statistics for fixed effects. Second, based on the matrix decomposition technique, the Bootstrap approach and the generalized approach, the test statistics and confidence intervals for the single variance component and the sum of variance components are constructed. Next, the exact test statistics for the ratio of variance components are obtained. The Monte Carlo simulation results indicate that the Bootstrap approach performs better than the generalized approach in most cases. Finally, the above approaches are illustrated with a real example of carbon fibers' strength. [ABSTRACT FROM AUTHOR]
- Published
- 2024
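The generic percentile bootstrap underlying such interval constructions fits in a few lines (a plain nonparametric sketch, not the article's model-specific parametric procedure; names are my own):

```python
import random

def bootstrap_ci(sample, stat, b=1000, alpha=0.05, seed=42):
    """Percentile bootstrap: resample with replacement b times, compute
    the statistic on each resample, and return the alpha/2 and
    1 - alpha/2 empirical quantiles as the confidence interval."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(sample) for _ in sample])
                  for _ in range(b))
    return reps[int(b * alpha / 2)], reps[int(b * (1 - alpha / 2)) - 1]
```

In the article's setting, the resampling respects the model structure (parametric bootstrap of the skew-normal random effects) rather than drawing raw observations as done here.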
26. On Bayesian Hotelling's T2 test for the mean.
- Author
- Al-Labadi, Luai, Fazeli Asl, Forough, and Lim, Kyuson
- Subjects
- *GAUSSIAN distribution, *STATISTICAL sampling, *A priori, *HYPOTHESIS
- Abstract
The multivariate one-sample problem considers an independent random sample from a multivariate normal distribution with mean μ and unknown covariance Σ. For a given real vector μ_1, the interest is to assess the hypothesis H_0: μ = μ_1. This paper proposes a new Bayesian approach to this problem based on comparing the change in the Kullback-Leibler divergence from a priori to a posteriori via the relative belief ratio. Eliciting the prior is also considered. The use of the approach is illustrated through several examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
27. Testing nonlinearity of heavy-tailed time series.
- Author
-
De Gooijer, Jan G.
- Subjects
- *
AUTOREGRESSIVE models , *TIME series analysis , *INFINITE series (Mathematics) , *ETHERNET - Abstract
A test statistic for nonlinearity of a given heavy-tailed time series process is constructed, based on the sub-sample stability of Gini-based sample autocorrelations. The finite-sample performance of the proposed test is evaluated in a Monte Carlo study and compared to a similar test based on the sub-sample stability of a heavy-tailed analogue of the conventional sample autocorrelation function. In terms of size and power properties, our test outperforms a nonlinearity test for heavy-tailed time series processes proposed by [S.I. Resnick and E. Van den Berg, A test for nonlinearity of time series with infinite variance, Extremes 3 (2000), pp. 145–172.]. A nonlinear Pareto-type autoregressive process and a nonlinear Pareto-type moving average process are used as alternative specifications when comparing the power of the proposed test statistic. The efficacy of the test is illustrated via the analysis of a heavy-tailed actuarial data set and two time series of Ethernet traffic. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Types of Stickiness in BHV Phylogenetic Tree Spaces and Their Degree
- Author
-
Lammers, Lars, Van, Do Tran, Nye, Tom M. W., and Huckemann, Stephan F.
- Subjects
Mathematics - Statistics Theory ,62F03 - Abstract
It has been observed that the sample mean of certain probability distributions in Billera-Holmes-Vogtmann (BHV) phylogenetic spaces is confined to a lower-dimensional subspace for large enough sample size. This non-standard behavior has been called stickiness and poses difficulties in statistical applications when comparing samples of sticky distributions. We extend previous results on stickiness to show the equivalence of this sampling behavior to topological conditions in the special case of BHV spaces. Furthermore, we propose to alleviate statistical comparison of sticky distributions by including the directional derivatives of the Fréchet function: the degree of stickiness., Comment: 8 Pages, 1 Figure, conference submission to GSI 2023
- Published
- 2023
29. High-Order Self-excited Threshold Integer-Valued Autoregressive Model: Estimation and Testing.
- Author
-
Yang, Kai, Li, Ang, Li, Han, and Dong, Xiaogang
- Published
- 2025
- Full Text
- View/download PDF
30. Post-selection Inference in Multiverse Analysis (PIMA): an inferential framework based on the sign flipping score test
- Author
-
Girardi, Paolo, Vesely, Anna, Lakens, Daniël, Altoè, Gianmarco, Pastore, Massimiliano, Calcagnì, Antonio, and Finos, Livio
- Subjects
Statistics - Methodology ,Statistics - Applications ,62F03 ,G.3 - Abstract
When analyzing data, researchers make some decisions that are either arbitrary, based on subjective beliefs about the data generating process, or for which equally justifiable alternative choices could have been made. This wide range of data-analytic choices can be abused and has been one of the underlying causes of the replication crisis in several fields. The recently introduced multiverse analysis provides researchers with a method to evaluate the stability of the results across the reasonable choices that could be made when analyzing data. Multiverse analysis is confined to a descriptive role, however, lacking a proper and comprehensive inferential procedure. Specification curve analysis adds an inferential procedure to multiverse analysis, but this approach is limited to simple cases related to the linear model, and only allows researchers to infer whether at least one specification rejects the null hypothesis, not which specifications should be selected. In this paper we present a Post-selection Inference approach to Multiverse Analysis (PIMA), which is a flexible and general inferential approach that accounts for all possible models, i.e., the multiverse of reasonable analyses. The approach allows for a wide range of data specifications (i.e., pre-processing) and any generalized linear model; it allows testing the null hypothesis that a given predictor is not associated with the outcome, by merging information from all reasonable models of multiverse analysis, and provides strong control of the family-wise error rate such that researchers can claim that the null hypothesis is rejected for each specification that shows a significant effect. The inferential proposal is based on a conditional resampling procedure., Comment: 37 pages, 2 figures
- Published
- 2022
31. Higher-order asymptotic refinements in a multivariate regression model with general parameterization.
- Author
-
Melo, Tatiane F. N., Vargas, Tiago M., Lemonte, Artur J., and Patriota, Alexandre G.
- Subjects
- *
ERRORS-in-variables models , *MONTE Carlo method , *NONLINEAR regression , *CORRECTION factors , *REGRESSION analysis , *FIXED effects model - Abstract
This paper derives a general Bartlett correction formula to improve the inference based on the likelihood ratio test in a multivariate model under a quite general parameterization, where the mean vector and the variance-covariance matrix can share the same vector of parameters. This approach includes a number of models as special cases, such as non-linear regression models, errors-in-variables models, mixed-effects models with non-linear fixed effects, and mixtures of the previous models. We also employ the Skovgaard adjustment to the likelihood ratio statistic in this class of multivariate models and derive a general expression of the correction factor based on the Skovgaard approach. Monte Carlo simulation experiments are carried out to verify the performance of the improved tests, and the numerical results confirm that the modified tests are more reliable than the usual likelihood ratio test. Applications to real data are also presented for illustrative purposes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
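The idea behind a Bartlett correction (entry 31) is to rescale the likelihood ratio statistic so that its null mean matches that of the reference chi-square distribution. The sketch below is a generic Monte-Carlo version of that idea for a toy scalar model (Exponential rate, H₀: λ = 1), not the paper's analytic formula for multivariate models; all names are illustrative.

```python
import numpy as np

def lr_exponential(x, lam0=1.0):
    """-2 log likelihood ratio for H0: rate = lam0 in an Exponential model."""
    n, xbar = x.size, x.mean()
    return 2.0 * n * (lam0 * xbar - 1.0 - np.log(lam0 * xbar))

def bartlett_factor(n, lam0=1.0, reps=20000, seed=1):
    """Monte-Carlo estimate of E[LR] under H0.

    The Bartlett-corrected statistic LR * q / E[LR] has null mean q
    (here q = 1 degree of freedom), matching chi^2_q more closely.
    """
    rng = np.random.default_rng(seed)
    m = rng.exponential(1.0 / lam0, size=(reps, n)).mean(axis=1)
    lr = 2.0 * n * (lam0 * m - 1.0 - np.log(lam0 * m))
    return lr.mean()        # approximately q * (1 + b/n)

n = 10
c = bartlett_factor(n)      # slightly above 1 for small n
rng = np.random.default_rng(2)
lr = lr_exponential(rng.exponential(1.0, size=n))
lr_corrected = lr / c       # q = 1, so divide by the estimated null mean
```

For this model the theoretical factor is approximately 1 + 1/(6n); the Monte-Carlo estimate stands in for the closed-form expressions the paper derives.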
32. Estimation and hypothesis test for varying coefficient single-index multiplicative models.
- Author
-
Zhang, Jun, Zhu, Xuehu, and Li, Gaorong
- Subjects
- *
PARAMETER estimation , *HYPOTHESIS , *STATISTICAL bootstrapping - Abstract
Estimation and hypothesis testing for varying coefficient single-index multiplicative models are considered in this paper. To estimate an unknown single-index parameter, a profile product relative error estimation is proposed for the single-index parameter with a leave-one-component-out estimation method. A Wald-type test statistic is proposed to test a linear hypothesis about the single-index parameter. We employ the smoothly clipped absolute deviation penalty to simultaneously select variables and estimate regression coefficients. To study the model checking problem, we propose a variant of the integrated conditional moment test statistic by using a linear projection weighting function, and we also suggest a bootstrap procedure for calculating critical values. Simulation studies are conducted to demonstrate the performance of the proposed procedure, and a real example is analysed for illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Covariance structure tests for multivariate t-distribution.
- Author
-
Filipiak, Katarzyna and Kollo, Tõnu
- Subjects
LIKELIHOOD ratio tests ,CHI-square distribution ,FALSE positive error ,MAXIMUM likelihood statistics ,ASYMPTOTIC distribution - Abstract
We derive an equation system for finding Maximum Likelihood Estimators (MLEs) for the parameters of a p-dimensional t-distribution with ν degrees of freedom, t_{p,ν}, and use the MLEs for testing covariance structures for the t_{p,ν}-distributed population. The likelihood ratio test (LRT), Rao score test (RST) and Wald test (WT) statistics are derived under the general null hypothesis H₀: Σ = Σ₀, using a matrix derivative technique. Here the p × p matrix Σ is a dispersion/scale parameter. Convergence to the asymptotic chi-square distribution under the null hypothesis is examined in extensive simulation experiments. The convergence to the chi-square distribution is also studied empirically in the situation when the MLEs of a t_{p,ν}-distribution are replaced by the corresponding estimators for a normal population. Type I errors and the power of the tests are also examined by simulation. In the simulation study the RST behaved more adequately than all remaining statistics when the dimensionality p was growing. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
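Entry 33 tests H₀: Σ = Σ₀ under a multivariate t model and also examines the normal-based plug-in. The sketch below shows only the classical normal-theory LRT for the same null hypothesis, which serves as the reference point; the t_{p,ν} likelihood machinery of the paper is not reproduced, and the function name and data are illustrative.

```python
import numpy as np
from scipy import stats

def lrt_cov_structure(X, Sigma0):
    """Normal-theory LRT of H0: Sigma = Sigma0 (mean unrestricted).

    -2 log Lambda = N * [tr(Sigma0^{-1} S) - log det(Sigma0^{-1} S) - p],
    with S the Gaussian MLE of Sigma (divisor N); asymptotically chi^2
    with p(p+1)/2 degrees of freedom under H0.
    """
    N, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / N
    A = np.linalg.solve(Sigma0, S)           # Sigma0^{-1} S
    stat = N * (np.trace(A) - np.log(np.linalg.det(A)) - p)
    df = p * (p + 1) // 2
    return stat, stats.chi2.sf(stat, df)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), size=200)
stat, pval = lrt_cov_structure(X, np.eye(3))
```

The statistic is nonnegative because tr(A) − log det(A) − p = Σᵢ(λᵢ − log λᵢ − 1) ≥ 0 over the eigenvalues λᵢ of Σ₀⁻¹S.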
34. Testing many constraints in possibly irregular models using incomplete U-statistics.
- Author
-
Sturma, Nils, Drton, Mathias, and Leung, Dennis
- Subjects
FALSE positive error ,NULL hypothesis ,U-statistics ,SAMPLE size (Statistics) ,CONFORMANCE testing ,GOODNESS-of-fit tests - Abstract
We consider the problem of testing a null hypothesis defined by equality and inequality constraints on a statistical parameter. Testing such hypotheses can be challenging because the number of relevant constraints may be on the same order or even larger than the number of observed samples. Moreover, standard distributional approximations may be invalid due to irregularities in the null hypothesis. We propose a general testing methodology that aims to circumvent these difficulties. The constraints are estimated by incomplete U-statistics, and we derive critical values by Gaussian multiplier bootstrap. We show that the bootstrap approximation of incomplete U-statistics is valid for kernels that we call mixed degenerate when the number of combinations used to compute the incomplete U-statistic is of the same order as the sample size. It follows that our test controls type I error even in irregular settings. Furthermore, the bootstrap approximation covers high-dimensional settings making our testing strategy applicable for problems with many constraints. The methodology is applicable, in particular, when the constraints to be tested are polynomials in U-estimable parameters. As an application, we consider goodness-of-fit tests of latent-tree models for multivariate data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
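The two ingredients named in entry 34 can be illustrated in miniature: an incomplete U-statistic averages a kernel over a random subset of index pairs instead of all O(n²) of them, and a Gaussian multiplier bootstrap perturbs the centered kernel evaluations to approximate the null distribution. This is a heavily simplified toy sketch with an illustrative product kernel; the paper's mixed-degenerate theory and multi-constraint setup are not reproduced.

```python
import numpy as np

def incomplete_u(x, kernel, n_pairs, rng):
    """Incomplete U-statistic: average the kernel over a random subset
    of distinct index pairs rather than all O(n^2) pairs."""
    n = x.shape[0]
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    keep = i != j                        # U-statistics use distinct indices
    vals = kernel(x[i[keep]], x[j[keep]])
    return vals, vals.mean()

def multiplier_bootstrap_quantile(vals, level=0.95, B=2000, rng=None):
    """Simplified Gaussian multiplier bootstrap: resample the centered
    mean as sum_k g_k * (h_k - hbar) / m with g_k ~ N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    centered = vals - vals.mean()
    g = rng.standard_normal((B, centered.size))
    draws = g @ centered / centered.size
    return np.quantile(np.abs(draws), level)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
kernel = lambda a, b: a * b              # E[h] = 0 for independent N(0, 1)
vals, u_hat = incomplete_u(x, kernel, n_pairs=2000, rng=rng)
crit = multiplier_bootstrap_quantile(vals, rng=rng)
reject = abs(u_hat) > crit               # test of the constraint E[h] = 0
```

The paper's validity result concerns precisely the regime where `n_pairs` is of the same order as the sample size; the naive bootstrap above ignores the degeneracy structure that the authors handle carefully.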
35. Multiple change point detection for high-dimensional data.
- Author
-
Zhao, Wenbiao, Zhu, Lixing, and Tan, Falong
- Abstract
This research investigates the detection of multiple change points in high-dimensional data without particular sparse or dense structure, where the dimension can be of exponential order in relation to the sample size. The estimation approach proposed employs a signal statistic based on a sequence of signal screening-based local U-statistics. This technique avoids costly computations that exhaustive search algorithms require and mitigates false positives, which hypothesis testing-based methods need to control. Consistency of estimation can be achieved for both the locations and number of change points, even when the number of change points diverges at a certain rate as the sample size increases. Additionally, the visualization nature of the proposed approach makes plotting the signal statistic a useful tool to identify locations of change points, which distinguishes it from existing methods in the literature. Numerical studies are performed to evaluate the effectiveness of the proposed technique in finite sample scenarios, and a real data analysis is presented to illustrate its application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Weighted least squares: A robust method of estimation for sinusoidal model.
- Author
-
Kundu, Debasis
- Subjects
- *
LEAST squares , *RANDOM variables , *EIGENFUNCTIONS , *ASYMPTOTIC normality - Abstract
In this article, we consider the weighted least squares estimators (WLSEs) of the unknown parameters of a multiple sinusoidal model. Although the least squares estimators (LSEs) are known to be the most efficient estimators for the multiple sinusoidal model, they are quite susceptible to outliers. In the presence of outliers, robust estimators like the least absolute deviation estimators (LADEs) or Huber's M-estimators (HMEs) may be used. But implementation of the LADEs and HMEs is quite challenging for a sinusoidal model, and the problem becomes more severe for the multiple sinusoidal model. Moreover, to derive the theoretical properties of the robust estimators, one needs stronger assumptions on the error random variables than what are needed for the LSEs. The proposed WLSEs are used as robust estimators, and they have the following two major advantages. First, they can be implemented very easily for the multiple sinusoidal model, and their properties can be obtained under the same set of error assumptions as the LSEs. Extensive simulation results suggest that in the presence of outliers, the WLSEs behave better than the LSEs, and at par with the LADEs and HMEs. It is observed that the performance of the WLSEs depends on the weight function, and we discuss how to choose a proper weight function for a given data set. We have analyzed one synthetic data set to show how the proposed methods can be implemented in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
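A minimal version of the weighted least squares idea in entry 36: for each candidate frequency, the model y_t ≈ A cos(ωt) + B sin(ωt) is linear in (A, B), so one can solve a weighted linear problem and keep the frequency with the smallest weighted residual sum of squares. The paper's weight functions are the object of study; the hard-rejection weights below are purely illustrative, as are the simulated data.

```python
import numpy as np

def wlse_sinusoid(y, weights, omega_grid):
    """Weighted least squares fit of y_t ≈ A cos(w t) + B sin(w t).

    For each candidate frequency, solve the weighted linear problem in
    (A, B); return the frequency minimizing the weighted RSS."""
    t = np.arange(y.size)
    w = np.asarray(weights, float)
    best = None
    for omega in omega_grid:
        Z = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
        beta = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * y))
        rss = np.sum(w * (y - Z @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, omega, beta)
    return best[1], best[2]

# one sinusoid (A = 2, B = 1, omega = 0.7) plus a few gross outliers
rng = np.random.default_rng(0)
t = np.arange(200)
y = 2.0 * np.cos(0.7 * t) + 1.0 * np.sin(0.7 * t) + 0.3 * rng.standard_normal(200)
y[[10, 50, 90]] += 15.0
# illustrative robust-style weights: zero out grossly deviant points
w = np.where(np.abs(y - np.median(y)) > 5.0, 0.0, 1.0)
omega_hat, (A_hat, B_hat) = wlse_sinusoid(y, w, np.linspace(0.1, 1.5, 1401))
```

With the outliers downweighted to zero, the fit recovers the planted frequency and amplitudes closely; with unit weights throughout, the same routine reduces to the ordinary LSE.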
37. Group sequential hypothesis tests with variable group sizes: Optimal design and performance evaluation.
- Author
-
Novikov, Andrey
- Subjects
- *
ALGORITHMS , *ERROR probability , *PROGRAMMING languages , *HYPOTHESIS , *SEQUENTIAL analysis - Abstract
In this article, we propose a computer-oriented method of construction of optimal group sequential hypothesis tests with variable group sizes. In particular, for independent and identically distributed observations, we obtain the form of optimal group sequential tests, which turn out to be a particular case of sequentially planned probability ratio tests (SPPRTs, see Schmitz 1993). Formulas are given for computing the numerical characteristics of general SPPRTs, like error probabilities, average sampling cost, etc. A numerical method of designing the optimal tests and evaluating the performance characteristics is proposed, and computer algorithms for its implementation are developed. For the particular case of sampling from a Bernoulli population, the proposed method is implemented in the R programming language, and the code is available in a public GitHub repository. The proposed method is compared numerically with other known sampling plans. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
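The SPPRTs of entry 37 generalize Wald's classical sequential probability ratio test, which is their fully sequential, one-observation-at-a-time special case. The sketch below implements that classical special case for a Bernoulli stream, using Wald's threshold approximations; it is not the paper's optimal variable-group-size design, and the function name is illustrative.

```python
import numpy as np

def sprt_bernoulli(x, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT of H0: p = p0 vs H1: p = p1 on a Bernoulli stream.

    Thresholds use Wald's approximations A ≈ (1-beta)/alpha and
    B ≈ beta/(1-alpha) on the log scale; returns (decision, n_used)."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for n, xi in enumerate(x, start=1):
        llr += np.log(p1 / p0) if xi else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(x)

rng = np.random.default_rng(0)
stream = rng.random(1000) < 0.7          # true success probability 0.7
decision, n_used = sprt_bernoulli(stream, p0=0.4, p1=0.7)
```

A group sequential test of the kind the paper designs would instead look at the log likelihood ratio only after each batch of observations, with the batch sizes themselves chosen to minimize expected sampling cost.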
38. Uniformly most accurate confidence intervals under weak restrictions.
- Author
-
Zhang, Jin
- Subjects
- *
INVARIANT measures , *PROBABILITY theory , *SIMPLICITY - Abstract
The natural and commonly used measure of accuracy for a confidence interval (CI) is its length, but it applies only to bounded CIs. More seriously, it is not an invariant measure, which creates chaos in selecting CIs. Using the probability of false coverage as a finite and invariant measure of accuracy for a CI, we establish the uniformly most accurate (UMA) CI under weak restrictions, which substantially improves on the classical UMA unbiased CI in both simplicity and optimality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. On local power of likelihood-based tests in von Mises regressions.
- Author
-
Lemonte, Artur J.
- Subjects
- *
LIKELIHOOD ratio tests , *CUMULATIVE distribution function , *REGRESSION analysis , *TEST scoring , *STATISTICS - Abstract
The von Mises distribution has played a central role as a distribution on the circle. Its associated circular regression model has been applied in a number of areas. In this paper, we consider the von Mises regression model and, under a sequence of Pitman alternatives, derive the nonnull asymptotic expansions of the cumulative distribution functions of the likelihood ratio, Wald, Rao score, and gradient test statistics for testing a subset of the von Mises regression parameters, as well as for testing the concentration parameter. We then compare analytically the local power of these likelihood-based tests on the basis of the asymptotic expansions and provide conditions where one test can be more locally powerful than the other one in this class of regression models. Consequently, on the basis of the general conditions established, the user can choose the most powerful test to make inferences on the model parameters. We also provide a numerical example to illustrate the usefulness and applicability of the general result. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Goodness-of-fit test for the one-sided Lévy distribution.
- Author
-
Kumari, Aditi and Bhati, Deepesh
- Subjects
- *
GOODNESS-of-fit tests , *ASYMPTOTIC distribution , *MONTE Carlo method , *NULL hypothesis , *ASYMPTOTIC normality , *GAMMA distributions - Abstract
The main aim of this work is to develop a new goodness-of-fit test for the one-sided Lévy distribution. The proposed test is based on the scale-ratio approach in which two estimators of the scale parameter of one-sided Lévy distribution are confronted. The asymptotic distribution of the test statistic is obtained under null hypotheses. The performance of the test is demonstrated using simulated observations from various known distributions. Finally, two real-world datasets are analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Confidence distributions and hypothesis testing.
- Author
-
Melilli, Eugenio and Veronese, Piero
- Subjects
CONFIDENCE ,HYPOTHESIS - Abstract
The traditional frequentist approach to hypothesis testing has recently come under extensive debate, raising several critical concerns. Additionally, practical applications often blend the decision-theoretical framework pioneered by Neyman and Pearson with the inductive inferential process relying on the p-value, as advocated by Fisher. The combination of the two methods has led to interpreting the p-value both as an observed error rate and as a measure of empirical evidence for the hypothesis. Unfortunately, both interpretations pose difficulties. In this context, we propose that resorting to confidence distributions can offer a valuable solution to address many of these critical issues. Rather than suggesting an automatic procedure, we present a natural approach to tackle the problem within a broader inferential context. Through the use of confidence distributions, we show the possibility of defining two statistical measures of evidence that align with different types of hypotheses under examination. These measures, unlike the p-value, exhibit coherence, simplicity of interpretation, and ease of computation, as exemplified by various illustrative examples spanning diverse fields. Furthermore, we provide theoretical results that establish connections between our proposal, other measures of evidence given in the literature, and standard testing concepts such as size, optimality, and the p-value. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Some additional remarks on statistical properties of Cohen's d in the presence of covariates.
- Author
-
Groß, Jürgen and Möller, Annette
- Subjects
CONFIDENCE intervals ,REGRESSION analysis ,INDEPENDENT variables - Abstract
The size of the effect of the difference between two groups with respect to a variable of interest may be estimated by the classical Cohen's d. A recently proposed generalized estimator allows conditioning on further independent variables within the framework of a linear regression model. In this note, it is demonstrated how unbiased estimation of the effect size parameter, together with a corresponding standard error, may be obtained based on the non-central t distribution. The portrayed estimator may be considered a natural generalization of the unbiased Hedges' g. In addition, confidence interval estimation for the unknown parameter is demonstrated by applying the so-called inversion confidence interval principle. The regarded properties reduce to already known ones in the absence of any additional independent variables. The stated remarks are illustrated with a publicly available data set. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
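The covariate-free case that entry 42 generalizes is the classical Cohen's d with its exact unbiasing factor, giving Hedges' g. A minimal sketch (the two tiny samples are illustrative):

```python
import numpy as np
from math import gamma, sqrt

def cohens_d(x1, x2):
    """Classical Cohen's d with pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    s2 = ((n1 - 1) * np.var(x1, ddof=1)
          + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / sqrt(s2)

def hedges_g(x1, x2):
    """Unbiased Hedges' g: Cohen's d times the exact small-sample factor
    J(nu) = Gamma(nu/2) / (sqrt(nu/2) * Gamma((nu-1)/2)), nu = n1+n2-2."""
    nu = len(x1) + len(x2) - 2
    J = gamma(nu / 2) / (sqrt(nu / 2) * gamma((nu - 1) / 2))
    return J * cohens_d(x1, x2)

x1 = [5.1, 4.8, 6.0, 5.5, 5.9]
x2 = [4.0, 4.4, 3.9, 4.7, 4.1]
d, g = cohens_d(x1, x2), hedges_g(x1, x2)
```

In the regression setting of the note, the same correction applies with ν replaced by the residual degrees of freedom of the model, which is what makes the estimator a natural generalization of g.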
43. Goodness-of-fit tests for multinomial models with inverse sampling.
- Author
-
Cho, Hokwon
- Subjects
- *
GOODNESS-of-fit tests , *DISTRIBUTION (Probability theory) , *SAMPLE size (Statistics) , *PROBABILITY theory , *EMPIRICAL research - Abstract
This article proposes goodness-of-fit tests for multinomial models using an inverse sampling scheme. From the multiple decision-theoretic perspective, we devise a test statistic and stopping rule that satisfy a prespecified probability level P* and obtain corresponding optimal sample sizes. Incomplete Dirichlet type II distribution functions are used to develop the procedure and to express the probability of correct decisions for various cell configurations for multinomial models. For empirical studies, Monte Carlo experiments are conducted, and for illustrations, various cell configurations of a wheel of fortune are demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Power of goodness-of-fit tests and some competitive proposals based on progressively type-II censored data from a location-scale distribution.
- Author
-
Nadeb, Hossein, Estabraqi, Javad, and Torabi, Hamzeh
- Subjects
- *
GOODNESS-of-fit tests , *CENSORING (Statistics) , *MONTE Carlo method , *GAUSSIAN distribution - Abstract
In this paper, we review some existing methods for testing goodness-of-fit based on progressively type-II censored samples in the location-scale family of distributions. Also, some similar procedures and new modifications are proposed. Using Monte Carlo simulation, the powers of the reviewed and proposed tests are compared for the normal and Gumbel distributions against several alternatives. Then, we present some results based on the simulation studies. Finally, an application to two datasets is presented for numerical illustration. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Flexible control of the median of the false discovery proportion
- Author
-
Hemerik, Jesse, Solari, Aldo, and Goeman, Jelle J
- Subjects
Statistics - Methodology ,62F03 - Abstract
We introduce a multiple testing procedure that controls the median of the proportion of false discoveries (FDP) in a flexible way. The procedure only requires a vector of p-values as input and is comparable to the Benjamini-Hochberg method, which controls the mean of the FDP. Our method allows freely choosing one or several values of alpha after seeing the data -- unlike Benjamini-Hochberg, which can be very liberal when alpha is chosen post hoc. We prove these claims and illustrate them with simulations. Our procedure is inspired by a popular estimator of the total number of true hypotheses. We adapt this estimator to provide simultaneously median unbiased estimators of the FDP, valid for finite samples. This simultaneity allows for the claimed flexibility. Our approach does not assume independence. The time complexity of our method is linear in the number of hypotheses, after sorting the p-values.
- Published
- 2022
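Entry 45 contrasts its median-FDP control with the Benjamini-Hochberg method, which controls the mean of the FDP (the FDR). A minimal sketch of the standard BH step-up procedure it is compared against (the p-values are illustrative):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest p-values, where
    k = max{ i : p_(i) <= i * alpha / m }. Controls the mean of the FDP
    (the FDR) at level alpha under independence."""
    p = np.asarray(pvals, float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest passing index
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
rej = benjamini_hochberg(pvals, alpha=0.05)
```

The point of the abstract is that BH's guarantee is tied to the single α fixed in advance, whereas the proposed median-FDP procedure permits choosing α after seeing the data.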
46. First Betti number of the path homology of random directed graphs
- Author
-
Chaplin, Thomas
- Published
- 2024
- Full Text
- View/download PDF
47. Subexponential-Time Algorithms for Sparse PCA.
- Author
-
Ding, Yunzi, Kunisky, Dmitriy, Wein, Alexander S., and Bandeira, Afonso S.
- Subjects
- *
POLYNOMIAL time algorithms , *THRESHOLDING algorithms , *ALGORITHMS , *SEARCH algorithms , *INTERPOLATION algorithms , *RANDOM matrices , *RANDOM graphs - Abstract
We study the computational cost of recovering a unit-norm sparse principal component x ∈ ℝⁿ planted in a random matrix, in either the Wigner or Wishart spiked model (observing either W + λxx^⊤ with W drawn from the Gaussian orthogonal ensemble, or N independent samples from N(0, Iₙ + βxx^⊤), respectively). Prior work has shown that when the signal-to-noise ratio (λ or β√(N/n), respectively) is a small constant and the fraction of nonzero entries in the planted vector is ‖x‖₀/n = ρ, it is possible to recover x in polynomial time if ρ ≲ 1/√n. While it is possible to recover x in exponential time under the weaker condition ρ ≪ 1, it is believed that polynomial-time recovery is impossible unless ρ ≲ 1/√n. We investigate the precise amount of time required for recovery in the "possible but hard" regime 1/√n ≪ ρ ≪ 1 by exploring the power of subexponential-time algorithms, i.e., algorithms running in time exp(n^δ) for some constant δ ∈ (0, 1). For any 1/√n ≪ ρ ≪ 1, we give a recovery algorithm with runtime roughly exp(ρ²n), demonstrating a smooth tradeoff between sparsity and runtime. Our family of algorithms interpolates smoothly between two existing algorithms: the polynomial-time diagonal thresholding algorithm and the exp(ρn)-time exhaustive search algorithm. Furthermore, by analyzing the low-degree likelihood ratio, we give rigorous evidence suggesting that the tradeoff achieved by our algorithms is optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
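The polynomial-time endpoint of the tradeoff in entry 47 is the diagonal thresholding algorithm: keep the coordinates with the largest sample variances, then take the top eigenvector of the restricted covariance. A sketch on a small spiked-Wishart instance (the parameters are illustrative, and the true sparsity k is assumed known):

```python
import numpy as np

def diagonal_thresholding(Y, k):
    """Diagonal thresholding for sparse PCA: keep the k coordinates with
    the largest sample variances, then take the top eigenvector of the
    restricted sample covariance, embedded back into R^n."""
    n_samples, n = Y.shape
    C = Y.T @ Y / n_samples
    support = np.argsort(np.diag(C))[-k:]      # k largest variances
    sub = C[np.ix_(support, support)]
    w, V = np.linalg.eigh(sub)                 # ascending eigenvalues
    x_hat = np.zeros(n)
    x_hat[support] = V[:, -1]                  # top eigenvector
    return x_hat

# spiked Wishart: N samples from N(0, I + beta * x x^T) with sparse x
rng = np.random.default_rng(0)
n, N, k, beta = 200, 400, 10, 4.0
x = np.zeros(n)
x[:k] = 1.0 / np.sqrt(k)
cov = np.eye(n) + beta * np.outer(x, x)
Y = rng.multivariate_normal(np.zeros(n), cov, size=N)
x_hat = diagonal_thresholding(Y, k)
corr = abs(x_hat @ x)                          # alignment with the truth
```

The subexponential algorithms of the paper replace the single-coordinate variance screen with searches over larger coordinate subsets, which is where the exp(ρ²n) runtime comes from.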
48. On classes of consistent tests for the Type I Pareto distribution based on a characterization involving order statistics.
- Author
-
Ngatchou–Wandji, Joseph, Nombebe, Thobeka, Santana, Leonard, and Allison, James
- Subjects
- *
PARETO distribution , *ORDER statistics , *CHARACTERISTIC functions , *CONFORMANCE testing , *GOODNESS-of-fit tests - Abstract
We propose new classes of goodness-of-fit tests for the Pareto Type I distribution. These tests are based on a characterization of the Pareto distribution involving order statistics. We derive the limiting null distribution of the tests and also show that the tests are consistent against fixed alternatives. The finite-sample performance of the newly proposed tests is evaluated and compared to that of some existing tests, where it is found that the new tests are competitive in terms of power. The paper concludes with an application to a real-world data set, namely the earnings of the 22 highest-paid participants in the inaugural season of LIV Golf. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. The evaluation of the p-value as an estimator for the null hypothesis in the exponential distribution.
- Author
-
Babadi, Masoumeh, Hormozinejad, Farshin, and Zaherzadeh, Ali
- Subjects
- *
DISTRIBUTION (Probability theory) , *NULL hypothesis , *BAYES' estimation , *CONFORMANCE testing , *DECISION theory - Abstract
This paper is concerned with investigating the adequacy of using the p-value as an estimator for the set specified by the null hypothesis in the Exponential distribution. It is shown that the p-value is an admissible estimator in the one-sided test of the location parameter. When the one-sided test of the scale parameter is considered, the p-value is found to be a generalized Bayes estimator with infinite Bayes risk. However, it is very difficult to find an estimator that dominates it. When the parameter space is restricted, the modified p-value is an admissible estimator in the one-sided test of the scale parameter and performs better than the usual p-value. Although the usual p-value is generally inadmissible in the two-sided test, it can be useful as an estimator in this type of test for the scale parameter. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. Bounds on generalized family-wise error rates for normal distributions.
- Author
-
Dey, Monitirtha and Bhandari, Subir Kumar
- Subjects
GAUSSIAN distribution ,ERROR rates - Abstract
The Bonferroni procedure has been one of the foremost frequentist approaches for controlling the family-wise error rate (FWER) in simultaneous inference. However, many scientific disciplines often require less stringent error rates. One such measure is the generalized family-wise error rate (gFWER) proposed by Lehmann and Romano (Ann Stat 33(3):1138–1154, 2005, https://doi.org/10.1214/009053605000000084). FWER- or gFWER-controlling methods are considered highly conservative in problems with a moderately large number of hypotheses. However, the existing literature lacks a theory on the extent of the conservativeness of gFWER-controlling procedures under dependent frameworks. In this note, we address this gap in a unified manner by establishing upper bounds for the gFWER under arbitrarily correlated multivariate normal setups with moderate dimensions. Towards this, we derive a new probability inequality which, in turn, extends and sharpens a classical inequality. Our results also generalize a recent related work by the first author. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
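The gFWER of entry 50 is P(at least k false rejections); Lehmann and Romano showed that the natural generalization of Bonferroni, rejecting p_i ≤ kα/m, controls it under arbitrary dependence. A minimal sketch (the p-values are illustrative):

```python
import numpy as np

def generalized_bonferroni(pvals, k=1, alpha=0.05):
    """Lehmann-Romano generalized Bonferroni: rejecting p_i <= k*alpha/m
    controls the k-FWER, P(at least k false rejections), at level alpha
    under arbitrary dependence; k = 1 recovers ordinary Bonferroni."""
    p = np.asarray(pvals, float)
    return p <= k * alpha / p.size

pvals = [0.0004, 0.003, 0.004, 0.012, 0.2, 0.5, 0.7, 0.8, 0.9, 0.95]
r1 = generalized_bonferroni(pvals, k=1)   # cutoff 0.005: 3 rejections
r3 = generalized_bonferroni(pvals, k=3)   # cutoff 0.015: 4 rejections
```

Tolerating k − 1 false rejections buys a k-fold larger cutoff, which is exactly the relaxation whose conservativeness under correlated normal setups the note quantifies.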