277 results for "ARITHMETIC mean"
Search Results
2. Parsimonious Model Averaging With a Diverging Number of Parameters.
- Author
-
Zhang, Xinyu, Zou, Guohua, Liang, Hua, and Carroll, Raymond J.
- Subjects
- *
ASYMPTOTIC distribution , *PARSIMONIOUS models , *FORECASTING , *ARITHMETIC mean , *NUMERICAL analysis , *PREDICTION models - Abstract
Model averaging generally provides better predictions than model selection, but existing model averaging methods do not lead to parsimonious models. Parsimony is an especially important property when the number of parameters is large. To achieve a parsimonious model averaging coefficient estimator, we suggest a novel criterion for choosing weights. Asymptotic properties are derived in two practical scenarios: (i) one or more correct models exist in the candidate model set and (ii) all candidate models are misspecified. Under the former scenario, it is proved that our method puts weight one on the smallest correct model, and the resulting model averaging estimators of coefficients have many zeros and thus lead to a parsimonious model. The asymptotic distribution of the estimators is also provided. Under the latter scenario, the focus is mainly on prediction, and we prove that the proposed procedure is asymptotically optimal in the sense that its squared prediction loss and risk are asymptotically identical to those of the best—but infeasible—model averaging estimator. Numerical analysis shows the promise of the proposed procedure over existing model averaging and selection methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
3. Discussion of "From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation".
- Author
-
Shen, Xiaotong and Huang, Hsin-Cheng
- Subjects
- *
FORECASTING , *ARITHMETIC mean , *APPLIED mathematics - Abstract
In Equation (1), assume that [formula] and [formula]. In Lemma 1, the assumption that [formula] may be relaxed. One issue with such an approach is that the effective sample available for estimating [formula] is reduced for the sake of estimating the optimal I i . However, one may use the same training data for estimating [formula] and tuning I i . [Extracted from the article]
- Published
- 2020
- Full Text
- View/download PDF
4. Marginal Inferential Models: Prior-Free Probabilistic Inference on Interest Parameters.
- Author
-
Martin, Ryan and Liu, Chuanhai
- Subjects
- *
INFERENTIAL statistics , *PROBABILISTIC inference , *MATHEMATICAL statistics , *MATHEMATICAL variables , *ARITHMETIC mean - Abstract
The inferential models (IM) framework provides prior-free, frequency-calibrated, and posterior probabilistic inference. The key is the use of random sets to predict unobservable auxiliary variables connected to the observable data and unknown parameters. When nuisance parameters are present, a marginalization step can reduce the dimension of the auxiliary variable which, in turn, leads to more efficient inference. For regular problems, exact marginalization can be achieved, and we give conditions for marginal IM validity. We show that our approach provides exact and efficient marginal inference in several challenging problems, including a many-normal-means problem. In nonregular problems, we propose a generalized marginalization technique and prove its validity. Details are given for two benchmark examples, namely, the Behrens–Fisher and gamma mean problems. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
5. Inference in Semiparametric Regression Models Under Partial Questionnaire Design and Nonmonotone Missing Data.
- Author
-
Chatterjee, Nilanjan and Yan Li
- Subjects
- *
ARITHMETIC mean , *QUESTIONNAIRES , *MISSING data (Statistics) , *MULTIPLE regression analysis , *EXPERIMENTAL design - Abstract
In epidemiologic studies, partial questionnaire design (PQD) can reduce cost, time, and other practical burdens associated with lengthy questionnaires by assigning different subsets of the questionnaire to different, but overlapping, subsets of the study participants. In this article, we describe methods for semiparametric inference for regression models under PQD and other study settings that can generate nonmonotone missing data in covariates. In particular, motivated by methods for multiphase designs, we develop three estimators, namely mean score, pseudo-likelihood, and semiparametric maximum likelihood, each of which has some unique advantages. We develop the asymptotic theory and a sandwich variance estimator for each of the estimators under the underlying semiparametric model that allows the distribution of the covariates to remain nonparametric. We study the finite sample performances and relative efficiencies of the methods using simulation studies. We illustrate the methods using data from a case-control study of non-Hodgkin’s lymphoma where the data on the main chemical exposures of interest are collected using two different instruments on two different, but overlapping, subsets of the participants. This article has supplementary material online. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
6. Multiple Model Evaluation Absent the Gold Standard Through Model Combination.
- Author
-
IVERSEN Jr., Edwin S., PARMIGIANI, Giovanni, and Sining CHEN
- Subjects
- *
MATHEMATICAL statistics , *CANCER genes , *MATHEMATICAL models , *FORECASTING , *ESTIMATES , *ARITHMETIC mean - Abstract
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, true outcome--the "gold standard"--and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and to calibrate these estimates and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate true genotype for each individual in the data set, operating characteristics of the commonly used genotyping procedures, and a relative weighting of the scores. Finally, we compare the scores against the gold standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered and that the comparison is sensitive to measurement error in the gold standard. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
7. From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation: Rejoinder.
- Author
-
Rosset, Saharon and Tibshirani, Ryan J.
- Subjects
- *
FORECASTING , *MARGINAL distributions , *APPLIED mathematics , *ARITHMETIC mean , *NONLINEAR regression , *ERROR analysis in mathematics , *SELECTION bias (Statistics) , *BIAS correction (Topology) - Abstract
The PB scheme is based on assuming a parametric linear model with iid normal errors: [formula]. To complete the framework, the authors also need to estimate the marginal distribution of I x i , and this is also assumed to be normal. We can demonstrate that this is not the case for "high-dimensional" cases, [formula], by explicitly analyzing least squares regression with similar concepts. Consider the simplest case [formula] with [formula] normally distributed. [Extracted from the article]
- Published
- 2020
- Full Text
- View/download PDF
8. Rejoinder.
- Author
-
Hjort, Nils Lid and Claeskens, Gerda
- Subjects
- *
STATISTICS , *MATHEMATICAL models , *ARITHMETIC mean , *REGRESSION analysis , *PROBABILITY theory , *SIMULATION methods & models - Abstract
This article discusses aspects of focused information criteria (FIC) and frequentist model averaging (FMA) in the model selection process. The local neighborhood framework allows one to extend familiar standard iid and regression models in several parametric directions. This may be utilized for robustness purposes and sensitivity analyses and leads to a theory for model averaging and focused model selection criteria. Regression analyses have different aims on different occasions, and even the same dataset may be analyzed with different goals in mind. The three main strands of references in Bayesian model averaging (BMA) literature are: General Bayes theory, simulations and cross-validation-type predictive performance. The development of the FMA methodology includes understanding the behavior of BMA strategies. Comments from Cook and Li as well as Raftery and Zheng point to the usefulness of developing the FIC and FMA apparatus to assess prediction quality when averaged in suitable ways, rather than for one focus parameter at a time.
- Published
- 2003
- Full Text
- View/download PDF
9. Frequentist Model Average Estimators.
- Author
-
Hjort, Nils Lid and Claeskens, Gerda
- Subjects
- *
STATISTICAL sampling , *BAYESIAN analysis , *STATISTICS , *RIDGE regression (Statistics) , *ARITHMETIC mean - Abstract
The traditional use of model selection methods in practice is to proceed as if the final selected model had been chosen in advance, without acknowledging the additional uncertainty introduced by model selection. This often means underreporting of variability and too optimistic confidence intervals. We build a general large-sample likelihood apparatus in which limiting distributions and risk properties of estimators post-selection as well as of model average estimators are precisely described, also explicitly taking modeling bias into account. This allows a drastic reduction in complexity, as competing model averaging schemes may be developed, discussed, and compared inside a statistical prototype experiment where only a few crucial quantities matter. In particular, we offer a frequentist view on Bayesian model averaging methods and give a link to generalized ridge estimators. Our work also leads to new model selection criteria. The methods are illustrated with real data applications. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
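The model average estimator discussed in this abstract is a weighted combination of per-model estimates. As a minimal, generic sketch (using smooth-AIC weights, a common convention that is not the FIC-based weighting developed by Hjort and Claeskens; the estimates and AIC values below are hypothetical):

```python
import numpy as np

def aic_model_average(estimates, aics):
    """Combine per-model estimates of a focus parameter with
    smooth-AIC weights w_k proportional to exp(-AIC_k / 2).
    A generic model-averaging scheme for illustration only."""
    aics = np.asarray(aics, dtype=float)
    w = np.exp(-(aics - aics.min()) / 2.0)   # shift for numerical stability
    w /= w.sum()
    return float(np.dot(w, estimates)), w

# Three candidate models give different estimates of the same parameter.
est, w = aic_model_average([1.0, 1.2, 2.0], [10.0, 10.0, 20.0])
```

With near-equal weights on the first two models, the averaged estimate lands between their individual estimates, while the poorly supported third model is almost ignored.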
10. Discussion: Performance of Bayesian Model Averaging.
- Author
-
Raftery, Adrian E. and Yingye Zheng
- Subjects
- *
STATISTICS , *MATHEMATICAL models , *ARITHMETIC mean , *REGRESSION analysis , *BAYESIAN analysis , *PROBABILITY theory - Abstract
This article discusses the performance of Bayesian model averaging (BMA) relative to other methods for real datasets. The performance of Bayesian model selection and BMA has been studied. There are three main strands of results: (1) General theoretical results from Jeffreys in 1939, (2) Simulation studies and (3) Results on out-of-sample predictive performance. There are several realistic simulation studies of the performance of BMA relative to other methods in a variety of situations, including linear regression (George and McCulloch 1993 and Raftery, Madigan and Hoeting 1997), log-linear models (Clyde 1999), logistic regression (Viallefont, Raftery and Richardson 2001) and wavelets (Clyde and George 2000). The BMA distribution of a quantity of interest can be viewed as the posterior distribution from the full model, with a mixed discrete-continuous prior distribution that assigns weight to the events that the individual components are zero. A simple normal example has been presented to assess numerical differences in performance and also the extent to which results hold even if the prior distribution is not the same as the practical distribution. Posterior model probabilities can be computed in four ways.
- Published
- 2003
- Full Text
- View/download PDF
11. Discussion: Inference and Interpretability Considerations in Frequentist Model Averaging and Selection.
- Author
-
Xiaotong Shen, Dougherty, Daniel P., and Johnson, Wesley O.
- Subjects
- *
STATISTICS , *ARITHMETIC mean , *PROBABILITY theory , *MATHEMATICAL models , *SIMULATION methods & models - Abstract
This article presents information on a study which analyzed inference and interpretability considerations in frequentist model averaging (FMA) and selection. Model averaging and model selection, as well as their associated inferential aspects, are important in both statistical theory and practice. One key ingredient of statistical data analysis is model development; some believe model development should be separated from the data and based on the theory of the process involved, or derived from a completely algorithmic or black-box approach. To highlight the differences in possible goals, the study considered the model for mixed enzyme inhibition from applied biochemistry, which describes a wide class of reaction kinetics. If predictive accuracy of the model is the stated goal, then some parameters of interest are better than others in achieving that goal via FMA when using the focused information criterion. In many practical situations the number of models involved in averaging is so large that the computation of model averaging is prohibitive. A fundamental tenet of frequentist analysis is that unplanned contrasts should be avoided. FMA yields a biased estimator having smaller variance than the single-model estimator that is unbiased.
- Published
- 2003
- Full Text
- View/download PDF
12. Pay Phones, Parking Meters, Vending Machines, and Optimal Bayesian Decisions on Collection Times.
- Author
-
Maitra, Ranjan and Dalal, Siddhartha R.
- Subjects
- *
VENDING machines , *COIN changing machines , *ARITHMETIC mean , *STANDARD deviations , *RETAIL equipment & supplies , *COIN-operated machines - Abstract
Payphones, parking meters, and vending machines illustrate the modern business practice of substituting machinery for manpower. They do not eliminate manual labor completely, because full coin-boxes still must be replaced and vending machines still must be stocked. Deciding when to replace a coin-box is important, with unequal losses resulting from underestimation and overestimation. This article derives optimal methodology for this problem by incorporating collection history and specifying common prior distributions over average daily fill rate and standard deviation at each box. The approach is implemented and analyzed on collection records from 11,308 pay phones over a large geographical region. When the loss from overestimation is 19 times that from underestimation, our methods outperform the one in current use at least 69.9% of the time, translating into average potential collection-cost reductions exceeding 21%. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
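The unequal-loss collection problem in this abstract has a classical structure: under asymmetric linear loss, the Bayes-optimal point decision is a quantile of the predictive distribution, with losses in ratio 19:1 giving the 1/20 quantile. A hedged sketch on hypothetical normal predictive draws (not the paper's actual hierarchical model):

```python
import numpy as np

def bayes_point_estimate(samples, c_under, c_over):
    """Bayes decision under asymmetric linear loss
    L(t, x) = c_over*(t - x) if t > x, else c_under*(x - t):
    the optimal t is the c_under/(c_under + c_over) quantile
    of the (posterior) predictive distribution."""
    q = c_under / (c_under + c_over)
    return float(np.quantile(samples, q))

rng = np.random.default_rng(0)
fill_days = rng.normal(30.0, 5.0, size=100_000)  # hypothetical predictive draws
# Overestimating the collection day costs 19x underestimating it,
# so collect at the 1/20 = 0.05 quantile, well before the mean day.
t = bayes_point_estimate(fill_days, c_under=1.0, c_over=19.0)
```

The 19:1 loss ratio pulls the collection decision far into the lower tail, which matches the intuition that an overflowing coin-box is much costlier than a slightly early visit.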
13. A Voyage of Discovery.
- Author
-
Billard, Lynne
- Subjects
- *
STATISTICS , *ASSOCIATIONS, institutions, etc. , *PERIODICALS , *CENSUS , *ARITHMETIC mean , *VARIATIONAL principles , *DISTRIBUTION (Probability theory) , *STATISTICAL correlation - Abstract
This article highlights the historical events that took place within 50 years since the American Statistical Association was founded in 1839 and the Journal of the American Statistical Association (JASA) was first published in 1888. For the first years, JASA contained almost exclusively nonmathematical papers. Many were mere repositories of extensive data sets, including many compilations from census counts with interpretations of what these data purportedly revealed. Others were from investigations undertaken by sociologists, economists, political scientists and historians. One area that attracted theoretical attention dealt with the concepts of averages, variation and distributions. The second area that received theoretical attention during these years was correlation and related concepts.
- Published
- 1997
- Full Text
- View/download PDF
14. Blind Deconvolution via Sequential Imputations.
- Author
-
Liu, Jun S. and Rong Chen
- Subjects
- *
MULTIPLE imputation (Statistics) , *COMPUTER networks , *DIGITAL signal processing , *STATISTICAL sampling , *ARITHMETIC mean , *SIMULATION methods & models - Abstract
The sequential imputation procedure is applied to adaptively and sequentially reconstruct discrete input signals that are blurred by an unknown linear moving average channel and contaminated by additive Gaussian noises, a problem known as blind deconvolution in digital communication. A rejuvenation procedure for improving the efficiency of sequential imputation is introduced and theoretically justified. The proposed method does not require the channel to be minimum phase and can be used in real-time signal restoration. Two simulated systems are studied to illustrate the proposed method. Our result shows that the ideas of multiple imputations and flexible simulation techniques are as powerful in engineering as in survey sampling. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
15. Generalized S-Estimators.
- Author
-
Croux, Christophe, Rousseeuw, Peter J., and Hössjer, Ola
- Subjects
- *
REGRESSION analysis , *ESTIMATION theory , *STANDARD deviations , *THEORY of distributions (Functional analysis) , *ALGORITHMS , *ESTIMATES , *SQUARE , *ARITHMETIC mean , *MATHEMATICAL functions - Abstract
In this article we introduce a new type of positive-breakdown regression method, called a generalized S-estimator (or GS-estimator), based on the minimization of a generalized M-estimator of residual scale. We compare the class of GS-estimators with the usual S-estimators, including least median of squares. It turns out that GS-estimators attain a much higher efficiency than S-estimators, at the cost of a slightly increased worst-case bias. We investigate the breakdown point, the maxbias curve, and the influence function of GS-estimators. We also give an algorithm for computing GS-estimators and apply it to real and simulated data. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
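Least median of squares, mentioned in this abstract as a positive-breakdown baseline, is commonly computed by random two-point subsampling. The sketch below illustrates that baseline only; it is not the authors' GS-estimator algorithm:

```python
import numpy as np

def lms_line(x, y, n_trials=500, rng=None):
    """Least median of squares (LMS) fit of a line y = a + b*x by the
    classical random two-point subsampling algorithm: fit a line through
    each random pair of points and keep the one minimizing the median
    squared residual."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.asarray(x, float), np.asarray(y, float)
    best, best_med = (0.0, 0.0), np.inf
    n = len(x)
    for _ in range(n_trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue  # vertical candidate line, skip
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        med = np.median((y - (a + b * x)) ** 2)
        if med < best_med:
            best_med, best = med, (a, b)
    return best

rng = np.random.default_rng(0)
x = np.arange(20.0)
y = 2.0 * x + 1.0
y[:5] = 100.0          # 25% gross outliers
a, b = lms_line(x, y, rng=rng)
```

Because the median squared residual ignores up to half the points, the 25% of gross outliers above do not pull the fitted line off the clean trend, illustrating the positive-breakdown property the article builds on.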
16. Bootstrap Methods for Finite Populations.
- Author
-
Booth, James G., Butler, Ronald W., and Hall, Peter
- Subjects
- *
POPULATION research , *STATISTICAL bootstrapping , *CONFIDENCE intervals , *RESAMPLING (Statistics) , *STATISTICAL sampling , *STATISTICAL hypothesis testing , *SAMPLE size (Statistics) , *ARITHMETIC mean , *MEDIAN (Mathematics) - Abstract
We show that the familiar bootstrap plug-in rule of Efron has a natural analog in finite population settings. In our method a characteristic of the population is estimated by the average value of the characteristic over a class of empirical populations constructed from the sample. Our method extends that of Gross to situations in which the stratum sizes are not integer multiples of their respective sample sizes. Moreover, we show that our method can be used to generate second-order correct confidence intervals for smooth functions of population means, a property that has not been established for other resampling methods suggested in the literature. A second resampling method is proposed that also leads to second-order correct confidence intervals and is less computationally intensive than our bootstrap. But a simulation study reveals that the second method can be quite unstable in some situations, whereas our bootstrap performs very well. [ABSTRACT FROM AUTHOR]
- Published
- 1994
- Full Text
- View/download PDF
17. M Estimators of Location for Gaussian and Related Processes With Slowly Decaying Serial Correlations.
- Author
-
Beran, Jan
- Subjects
- *
GAUSSIAN processes , *ESTIMATION theory , *STOCHASTIC processes , *DEPENDENCE (Statistics) , *MATHEMATICS - Abstract
We investigate the behavior of M estimators of the location parameter for stochastic processes with long-range dependence. The processes considered are Gaussian or one-dimensional transformations of Gaussian processes. It turns out that, up to a constant, all M estimators are asymptotically equivalent to the arithmetic mean. For Gaussian processes this constant is always equal to one, independently of the ψ function. In view of the case of iid observations, the results are surprising. They are related to earlier work by Gastwirth and Rubin. Some simulations illustrate the results. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
18. Local Bandwidth Selection for Kernel Estimates.
- Author
-
Staniswalis, Joan G.
- Subjects
- *
DIGITAL communications , *PARAMETER estimation , *KERNEL functions , *REGRESSION analysis , *ARITHMETIC mean , *BANDWIDTHS , *BROADBAND communication systems , *DATA transmission systems - Abstract
This article presents a kernel estimate of a curve that uses an adaptive procedure for local selection of the bandwidth. A two-step procedure is proposed for estimating the local bandwidth that minimizes the mean squared error (MSE) of a kernel estimator for nonparametric regression. First, a consistent estimate of the exact MSE is constructed. Then the bandwidth that minimizes the estimate of the MSE is calculated. Sufficient conditions under which this bandwidth is asymptotically optimal and normally distributed are given. The local bandwidth selection procedure was implemented on some simulated data and compared to a global bandwidth selection procedure. A 68%-91% reduction in the average MSE of a kernel estimator was realized with the local bandwidth selection procedure. A similar scheme was also studied by another statistician, who termed it a direct pilot estimator approach; the result here is slightly more general in the specification of the interval searched for the optimizing bandwidth.
- Published
- 1989
- Full Text
- View/download PDF
19. Rank-Based Tests for Randomness Against First-Order Serial Dependence.
- Author
-
Hallin, Marc and Mélard, Guy
- Subjects
- *
AUTOREGRESSION (Statistics) , *STATISTICAL correlation , *BOX-Jenkins forecasting , *HYPOTHESIS , *ARITHMETIC mean , *OPERATIONS research , *PROBABILITY theory , *METHODOLOGY - Abstract
Optimal rank-based procedures were derived in Hallin, Ingenbleek, and Puri (1985, 1987) and Hallin and Puri (1988) for some fundamental testing problems arising in time series analysis. The optimality properties of these procedures are of an asymptotic nature, however, whereas much of the attractiveness of rank-based methods lies in their small-sample applicability and robustness features. Accordingly, the objective of this article is twofold: (a) a study of the finite-sample behavior of the asymptotically optimal tests for randomness against first-order autoregressive moving average dependence proposed in Hallin et al. (1985), both under the null hypothesis (tables of critical values) and under alternatives of serial dependence (evaluation of the power function), and (b) an (heuristic) investigation of the robustness properties of the proposed procedures (with emphasis on the identification problem in the presence of "outliers"). We begin (Sec. 2) with a brief description of the rank-based measures of serial dependence to be considered throughout: (a) Van der Waerden, (b) Wilcoxon, (c) Laplace, and (d) Spearman-Wald-Wolfowitz autocorrelations. The article is mainly concerned with first-order (lag 1) coefficients of these types. Tables of the critical values required for performing tests of randomness are provided (Sec. 3), and the finite-sample power of the resulting tests is compared with that of their parametric competitors (Sec. 4). Although the exact level of classical parametric procedures is only approximately correct (whereas the distribution-free rank tests are of the correct size), the proposed rank-based tests compare quite favorably with the classical ones, and appear to perform at least as well as (often strictly better than) their classical counterparts. The examples of Section 5 emphasize the... [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
20. A Graphical Procedure for Determining Nonstationarity in Time Series.
- Author
-
Cressie, Noel
- Subjects
- *
ANALYSIS of covariance , *HYPOTHESIS , *BOX-Jenkins forecasting , *REGRESSION analysis , *ARITHMETIC mean , *STATISTICS , *MATHEMATICS , *GRAPHIC methods - Abstract
Integrated processes as models for time series data have proved to be an important component of the highly flexible class of ARIMA(p, d, q) models. Determining the amount of differencing, d, has been a difficult task: too little and the process is not yet second-order stationary; too much and the process is more variable than it need be. It is shown that by introducing the notion of generalized covariances, developed by Matheron (1973) for spatial processes, the amount of differencing needed can be read easily from a sequence of graphs showing averages of squares of primary data increments. Formal inference to determine if the last difference really is necessary can then be carried out. Time series data are analyzed in this way and compared with the hypothesis-testing approach illustrated by Dickey, Bell, and Miller (1986). Once the order of differencing has been diagnosed, either the differenced time series can be analyzed or the generalized covariance of the undifferenced series can be estimated. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
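The diagnostic described, reading the needed amount of differencing from averages of squares of data increments, can be mimicked numerically: compute the sample variance of the d-th difference for increasing d and look for the point where it stops dropping. A rough sketch of that idea only (not Cressie's generalized-covariance machinery):

```python
import numpy as np

def diff_variances(x, max_d=3):
    """Sample variance of the d-th difference for d = 0..max_d.
    For an integrated series the values drop sharply until the series
    is stationary and then level off (differencing white noise roughly
    doubles its variance), mimicking the graphical diagnostic."""
    return [float(np.var(np.diff(x, n=d))) for d in range(max_d + 1)]

rng = np.random.default_rng(0)
rw = np.cumsum(rng.normal(size=2000))   # ARIMA(0,1,0): one difference needed
v = diff_variances(rw)
```

For the random walk above, the variance collapses from d = 0 to d = 1 and then roughly doubles at each further difference, signaling that d = 1 suffices and more differencing only inflates variability.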
21. Negative Regret, Optional Stopping, and the Elimination of Outliers.
- Author
-
Martinsek, Adam T.
- Subjects
- *
ESTIMATION theory , *FIX-point estimation , *DISTRIBUTION (Probability theory) , *STATISTICAL sampling , *ARITHMETIC mean , *ANALYSIS of variance , *SAMPLE size (Statistics) , *MATHEMATICAL statistics - Abstract
This article examines the phenomenon of negative regret in sequential point estimation, in which a stopping rule used in ignorance of the variance of the distribution sometimes performs better than the best fixed sample size when the variance is known. An explanation in terms of a certain robustifying effect of the stopping rule is given. Results of simulation studies illustrating both negative regret and its explanation are also given. Suppose X_1, X_2, . . . are iid with mean μ and variance σ². One may stop observing the sequence of X_i's after any number of observations n and estimate μ by the sample mean X̄_n, subject to the loss L_n = A(X̄_n - μ)² + n (A > 0). If σ is known, then the best fixed (i.e., nonrandom) sample size, in the sense of minimum risk, can be used. If σ is unknown, the best fixed sample size is unknown, necessitating a sequential procedure. For a sequential procedure of the type first proposed by Robbins (1959), the "regret"--the expected additional loss from not knowing σ and using the sequential procedure instead of the best fixed sample size--can take arbitrarily large negative values. This is explained by noting that the sequential procedure acts to control extreme values and can be viewed as a sequential analog of trimming and Winsorizing in the nonsequential setting. I illustrate analytically by showing that the sequential procedure reduces the mean squared error of the sample mean when the distribution of the X_i's is symmetric about μ and has large kurtosis. Simulation results show that both negative regret and its explanation can occur when the expected sample size is as small as 25 or 30. [ABSTRACT FROM AUTHOR]
- Published
- 1988
- Full Text
- View/download PDF
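The loss L_n = A(X̄_n - μ)² + n above has risk Aσ²/n + n, minimized by the fixed size n* = √A·σ; the Robbins-type rule plugs in the running estimate s_n and stops at the first n with n ≥ √A·s_n. A minimal simulation sketch of that rule:

```python
import numpy as np

def sequential_estimate(xs, A, n0=5):
    """Robbins-type stopping rule for estimating a mean under loss
    L_n = A*(Xbar_n - mu)^2 + n.  With sigma known, the risk
    A*sigma^2/n + n is minimized at n* = sqrt(A)*sigma; with sigma
    unknown, stop at the first n >= n0 with n >= sqrt(A) * s_n."""
    xs = np.asarray(xs, dtype=float)
    for n in range(n0, len(xs) + 1):
        s = xs[:n].std(ddof=1)
        if n >= np.sqrt(A) * s:
            return n, float(xs[:n].mean())
    return len(xs), float(xs.mean())

rng = np.random.default_rng(0)
A = 400.0                      # with sigma = 1, best fixed size n* = 20
xs = rng.normal(0.0, 1.0, size=200)
n_stop, est = sequential_estimate(xs, A)
```

The rule tends to stop early when the observed spread is small and late after extreme values inflate s_n, which is the "control of extreme values" behavior the abstract credits for negative regret.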
22. First-Order Deletion Designs and the Construction of Efficient Nearly Orthogonal Factorial Designs in Small Blocks.
- Author
-
Voss, Daniel T.
- Subjects
- *
FACTORIAL experiment designs , *STATISTICAL correlation , *FACTORIALS , *REPLICATION (Experimental design) , *ARITHMETIC mean , *ORTHOGONAL functions , *RATIO analysis , *FACTORS (Algebra) , *MATHEMATICAL statistics - Abstract
Single-replicate factorial designs for incomplete block experiments are obtained by first constructing a single-replicate preliminary design in incomplete blocks for the same number of factors but an excessive number of levels of the first factor, then deleting the excess treatment combinations to obtain a deletion design. Any single-replicate preliminary design yields a single-replicate deletion design. Furthermore, if the preliminary design is orthogonal, then the resulting deletion design is shown to be nearly orthogonal and, under certain reduced models, to provide efficient estimation of lower-order effects and in some cases an orthogonal analysis. For example, a 2 × 3² deletion design is constructed in three blocks of size 6, the 2 df confounded being the sum of interaction effects of F₂ and F₃ and second-order interaction effects. If second-order interactions are assumed negligible, then the deletion design provides efficient estimation of interactions between F₂ and F₃ and an orthogonal analysis. In another example, a 2 × 3² deletion design is constructed in nine blocks of size 2 with main effects of F₁ unconfounded. In a main-effects model, main effects of F₂ and F₃ are estimable with optimal average efficiency. Experimental settings involving more factor levels are also considered, tables are given showing the efficiency of the resulting deletion designs to compare favorably with optimal upper bounds, and the deletion designs are shown by comparison to be more efficient on lower-order effects than competing orthogonal designs. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
23. Exact Simultaneous Confidence Intervals for Pairwise Comparisons of Three Normal Means.
- Author
-
Spurrier, John D. and Isham, Steven P.
- Subjects
- *
ARITHMETIC mean , *SIMULTANEOUS equations , *GAUSSIAN distribution , *PARAMETERS (Statistics) , *ESTIMATION theory , *PROBABILITY theory , *CONFIDENCE intervals , *SAMPLE size (Statistics) , *APPROXIMATION theory , *STATISTICS - Abstract
The problem of simultaneously estimating the pairwise differences of means of three independent normal populations with equal variances is considered. A computational method involving a bivariate t density is used to form confidence intervals with simultaneous coverage probability equal to 1 − α. For equal sample sizes, the method is the Tukey studentized range procedure. With unequal sample sizes, the method is superior to the various generalized Tukey methods. A table of probability points is presented for small sample sizes. A large-sample approximation based on the bivariate normal and studentized range distributions is given. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
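For the equal-sample-size case, where the abstract notes the method reduces to the Tukey studentized range procedure, simultaneous intervals can be formed from the studentized range distribution (available in SciPy). A sketch for three equally sized groups, not the authors' bivariate-t computation:

```python
import numpy as np
from scipy.stats import studentized_range

def tukey_intervals(groups, alpha=0.05):
    """Simultaneous Tukey studentized-range confidence intervals for
    all pairwise mean differences of equally sized normal samples with
    a common variance (the equal-n case discussed in the abstract)."""
    k = len(groups)
    n = len(groups[0])
    means = np.array([np.mean(g) for g in groups])
    df = k * (n - 1)                                  # pooled error df
    s2 = sum(np.var(g, ddof=1) for g in groups) / k   # pooled variance
    q = studentized_range.ppf(1 - alpha, k, df)       # critical value
    half = q * np.sqrt(s2 / n)
    return {(i, j): (means[i] - means[j] - half, means[i] - means[j] + half)
            for i in range(k) for j in range(i + 1, k)}

rng = np.random.default_rng(0)
groups = [rng.normal(m, 1.0, size=10) for m in (0.0, 0.0, 3.0)]
ci = tukey_intervals(groups)
```

An interval excluding zero, such as the one for the first and third groups above, declares that pair of means different while keeping the familywise coverage at 1 − α across all three comparisons.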
24. Multiple Comparisons and Type III Errors.
- Author
-
Bofinger, Eve
- Subjects
- *
MULTIPLE comparisons (Statistics) , *PROBABILITY theory , *ERRORS , *ORDER statistics , *ERROR functions , *PARAMETERS (Statistics) , *ARITHMETIC mean , *CONFIDENCE intervals , *STATISTICAL hypothesis testing , *STATISTICS - Abstract
In multiple comparison procedures, often the ordering of populations is of more interest than the testing of hypotheses. In such a situation it is important to control the probability of a Type III error (introduced by Harter 1957 to indicate that one population is concluded to be better than another when actually it is worse). The studentized differences between means may be used to give ordering conclusions on all pairs of populations under consideration. Although this may be done by setting up confidence intervals using Tukey's honest significant difference, a more efficient method (when the confidence intervals themselves are not of interest) is suggested here, and tables are provided for its implementation. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
25. Multiple Probability Assessments by Dependent Experts.
- Author
-
Agnew, Carson E.
- Subjects
- *
CALIBRATION , *PROBABILITY theory , *RANDOM variables , *DECISION making , *BAYESIAN analysis , *ARITHMETIC mean , *MULTIVARIATE analysis , *STATISTICS - Abstract
When two or more information sources ("experts") provide a decision maker with information on two or more random variables, the decision maker using Bayes's rule has an opportunity to (a) update a prior about the random variables and (b) calibrate the experts. (Calibration is the process of adjusting the decision maker's likelihood about the experts' assessments.) This article presents a model for this two-way process and specializes to the case in which the experts' assessment errors have a multivariate normal density. In general, we find that variables which the decision maker and the experts regard as independent a priori will be dependent a posteriori because of dependence in the assessment errors. Formulas for posterior densities are given for the normal model. In this model the posterior density of the random variables depends on only a weighted average of the expert's means, with weights that depend on the experts' assessments of previously known quantities. I also present a special case of the model for which the mean of the posterior density is correctly given by a simple (unweighted) average of assessments. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
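The weighted-average behavior described in the abstract above can be illustrated in the simplest special case: with a normal prior and independent normal expert assessments, the posterior mean is a precision-weighted average of the prior mean and the experts' means. The sketch below uses a hypothetical function name and covers only this independent-errors case; Agnew's full model additionally handles dependent assessment errors and calibration against previously known quantities.

```python
def combine_expert_means(means, variances, prior_mean, prior_var):
    """Posterior mean and variance for a normal quantity given independent
    normal expert assessments: a precision-weighted average.  Textbook
    special case only, not the paper's dependent-experts model."""
    # Total precision = prior precision + sum of expert precisions.
    prec = 1.0 / prior_var + sum(1.0 / v for v in variances)
    post_mean = (prior_mean / prior_var
                 + sum(m / v for m, v in zip(means, variances))) / prec
    return post_mean, 1.0 / prec
```

With a very diffuse prior and two equally precise experts, the posterior mean reduces to the simple (unweighted) average of the assessments, as in the special case the abstract mentions.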
26. The G-Spectral Estimator.
- Author
-
Morton, M. J. and Gray, H. L
- Subjects
- *
ESTIMATION theory , *SPECTRAL energy distribution , *AUTOREGRESSION (Statistics) , *MOMENTS method (Statistics) , *ARITHMETIC mean , *BOX-Jenkins forecasting , *STATISTICAL sampling , *MATHEMATICAL statistics - Abstract
In this article a modified definition of the G-spectral estimator is given. It is shown that the resulting estimator is a method of moments autoregressive moving average (ARMA) spectral estimator that does not require an estimate of the moving average parameters. As a result, a new formula for the power spectrum of an ARMA process is given that does not explicitly involve the moving average (MA) parameters. This formula then leads to a closed-form expression for the MA parameters and their corresponding moment estimators. [ABSTRACT FROM AUTHOR]
- Published
- 1984
- Full Text
- View/download PDF
27. Simplified Expressions for Obtaining Approximately Optimum System-Reliability Confidence Bounds from Exponential Data.
- Author
-
Mann, Nancy R.
- Subjects
- *
CONFIDENCE intervals , *EXPONENTS , *RELIABILITY (Personality trait) , *APPROXIMATION theory , *ARITHMETIC mean , *STATISTICS , *PROBABILITY theory - Abstract
Expressions derived by Mann and Grubbs [9] for obtaining confidence bounds on series-system reliability from exponential subsystem Type-II censored life-test data are simplified. The simplification allows calculation of confidence bounds based on weighted averages of only the first and second powers of the inverses of "total time on test" for the various independent subsystems tested. These approximate well (within about a unit in the second significant figure) optimum (uniformly most accurate unbiased) confidence bounds obtained iteratively by computer techniques from results of Lentner and Buehler [5] and El Mawaziny [1]. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
28. The Effect of Aggregation on Prediction in the Autoregressive Model.
- Author
-
Amemiya, Takeshi and Wu, Roland Y.
- Subjects
- *
MATHEMATICAL variables , *PATH analysis (Statistics) , *REGRESSION analysis , *AUTOREGRESSION (Statistics) , *NUMERICAL analysis , *MATHEMATICAL statistics , *ARITHMETIC mean , *STATISTICS - Abstract
The article shows that if the original variable follows a pth order autoregressive system then the non-overlapping moving sum follows a pth order autoregression with at most a pth order moving-average of an independent sequence regardless of the length of the summation. From such an aggregate model we derive the optimal predictor of the aggregate variable and show that it performs remarkably well compared to the optimal disaggregate predictor. The article contains both theoretical and numerical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
29. Optimum Sample Size and Sampling Interval for Controlling the Mean of Non-Normal Variables.
- Author
-
Nagendra, Y. and Rai, G.
- Subjects
- *
MATHEMATICAL variables , *DENSITY functionals , *SAMPLE size (Statistics) , *COST control , *EDGEWORTH expansions , *STATISTICAL sampling , *ARITHMETIC mean , *PROCESS control systems , *VALUES (Ethics) - Abstract
Following Duncan [3], the per hour cost of the process under the surveillance of a mean chart for controlling the mean of non-normal variables whose density function is represented by the first four terms of an Edgeworth series is obtained. The sample size and sampling interval which minimize the cost for detecting a particular shift in the process average are derived when the width of the control limits is specified. They are numerically determined for different non-normal situations by assuming various cost values when a particular shift in the process average is to be detected. They are presented in a table which compares the changes in optimum sample size and sampling interval due to changes in (a) the non-normal situation, (b) the shifts to be detected in the process average and (c) width of the control limits specified. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
30. Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization.
- Author
-
Denton, Frank T.
- Subjects
- *
MATHEMATICAL optimization , *TIME series analysis , *QUADRATIC equations , *QUADRATIC forms , *PROBLEM solving , *ARITHMETIC mean , *STATISTICAL correlation , *MATHEMATICAL statistics - Abstract
This article considers the problem of adjusting monthly or quarterly time series to make them accord with independent annual totals or averages without introducing artificial discontinuities. A general approach and some specific procedures involving constrained minimization of a quadratic form in the differences between revised and unrevised series are proposed. Some computational advantages are noted. Attention is given to the relationships between the adjustment problem and earlier work by other authors on the creation of monthly or quarterly series when only annual figures are available. An example is provided to illustrate the application of the proposed adjustment procedures. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
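The constrained quadratic minimization described in the abstract above can be sketched numerically: minimize the sum of squared first differences of the revision (x - p) subject to the annual-total constraints, and solve the resulting KKT linear system. This is an illustrative reconstruction under stated assumptions (additive variant, four quarters per year, hypothetical function name), not Denton's exact published procedure.

```python
import numpy as np

def denton_additive(p, annual_totals):
    """Adjust a quarterly series p to match given annual totals by
    minimizing the sum of squared first differences of the revision
    (x - p), so no artificial discontinuities are introduced."""
    p = np.asarray(p, float)
    a = np.asarray(annual_totals, float)
    n, m = len(p), len(a)
    assert n == 4 * m, "four quarters per annual total"
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # first-difference operator
    Q = D.T @ D                                 # smoothness penalty matrix
    J = np.kron(np.eye(m), np.ones(4))          # sums each year's quarters
    # KKT system for: min (x - p)' Q (x - p)  subject to  J x = a
    K = np.block([[Q, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([Q @ p, a])
    return np.linalg.solve(K, rhs)[:n]
```

For example, `denton_additive([10, 10, 10, 10, 12, 12, 12, 12], [44.0, 52.0])` spreads the annual discrepancies smoothly across quarters while each year's quarters sum exactly to its annual total.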
31. Sequential Modification of the UMP Test for Binomial Probabilities.
- Author
-
Breslow, Norman
- Subjects
- *
BINOMIAL distribution , *PROBABILITY theory , *SEQUENTIAL analysis , *APPROXIMATION theory , *MATHEMATICAL formulas , *ARITHMETIC mean , *DECISION making - Abstract
The early decision version of the uniformly most powerful (UMP) fixed-sample-size test for binomial probabilities, as described for instance by Alling [1], is modified by changing all continuation points which lie on or below a line with specified slope into acceptance points. The modification reduces the average sample number (ASN) for small values of the binomial probability with only a slight loss of power. Approximate formulas for the ASN and power function of the resulting test are presented and a small sample comparison made with other sequential tests. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
32. THE MEDIAN SIGNIFICANCE LEVEL AND OTHER SMALL SAMPLE MEASURES OF TEST EFFICACY.
- Author
-
Joiner, Brian L.
- Subjects
- *
STATISTICAL sampling , *DISTRIBUTION (Probability theory) , *SAMPLE size (Statistics) , *MEDIAN (Mathematics) , *PROBABILITY theory , *ARITHMETIC mean , *STATISTICS - Abstract
The concepts of the "median significance level" (MSL) and the "significance level of the average" (SLA) are introduced and some relationships among these measures and the recently introduced "expected significance level" (ESL), "average critical value" (ACV), and "median critical value" (MCV) are considered. The median significance level is defined as the median of the distribution of the observed significance level for a given alternative and is shown to be equivalent to the significance level attained by the median of the test statistic for one-sided tests. The "significance level of the average" is analogously defined to be the significance level attained by the average (expectation) of the test statistic, and the MSL and SLA are shown to be inverse functions of Geary's MCV and ACV. Some relations between these small sample measures of test efficacy and Pitman's and Bahadur's asymptotic measures are described. The MCV is shown to be formally related to Hamaker's "indifference quality" method of classifying acceptance sampling plans. Several simple examples are given illustrating some relationships among the several criteria. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
33. COMPUTATION AND STRUCTURE OF OPTIMAL RESET POLICIES.
- Author
-
Johnson, Ellis L.
- Subjects
- *
DISCOUNT prices , *MODULES (Algebra) , *MATRICES (Mathematics) , *ARITHMETIC mean , *STATISTICS , *PROBABILITY theory , *STANDARD deviations - Abstract
Suppose there are a finite number of possible states of a system, the state is observable at the beginning of each period, and it can be changed at that time to any other state with a cost for changing the state or resetting. In addition, an immediate loss is incurred depending on the state at the beginning of the period, but after resetting. The problem is, then, for which states to reset and where to reset to. For the infinite horizon problem with discount factor 0 ≤ β < 1 and for average cost per period, a computational procedure is given which amounts essentially to inverting a matrix the size of the number of states for which it is optimal to not reset. In terms of the corresponding linear program, once certain columns enter the basis, they do not subsequently drop from the basis. Some results on structure of optimal policies are also given. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
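For a small instance of the reset problem described in the abstract above, the optimal discounted values can be found by plain value iteration. The paper's contribution is a far more efficient procedure exploiting the structure; this brute-force sketch (hypothetical interface, assumed conventions: `cost[i][i] = 0`, loss incurred after resetting, then a Markov transition by `P`) only illustrates the model.

```python
def reset_value_iteration(P, loss, cost, beta=0.9, iters=500):
    """Value iteration for a discounted reset problem: in state i you may
    reset to any state j at cost[i][j], incur loss[j], then transition
    according to row j of the chain P.  Returns the value function."""
    n = len(loss)
    V = [0.0] * n
    for _ in range(iters):
        V = [min(cost[i][j] + loss[j]
                 + beta * sum(P[j][k] * V[k] for k in range(n))
                 for j in range(n))
             for i in range(n)]
    return V
```

In a two-state example where state 1 carries a heavy loss and resetting costs 1, the optimal policy resets out of state 1 every period, and the values reflect that one-time reset cost.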
34. ORDER STATISTICS ESTIMATORS OF THE LOCATION OF THE CAUCHY DISTRIBUTION.
- Author
-
Barnett, V. D.
- Subjects
- *
ESTIMATION theory , *DISTRIBUTION (Probability theory) , *STATISTICS , *ARITHMETIC mean , *ORDER statistics , *VARIANCES , *STATISTICAL sampling , *MEDIAN (Mathematics) - Abstract
In a recent paper in this Journal, Rothenberg, Fisher and Tilanus [1] discuss a class of estimators of the location parameter of the Cauchy distribution, taking the form of the arithmetic average of a central subset of the sample order statistics. They show that the average of roughly the middle quarter of the ordered sample has minimum asymptotic variance within this class, and that asymptotically it eliminates about 36 per cent of the efficiency loss of the median (the most commonly used estimator) in comparison to the maximum likelihood estimator (m.l.e.). Of course both the m.l.e. and the best linear unbiased estimator based on the order statistics (BLUE) achieve full asymptotic efficiency in the Cramer-Rao sense, and there can be no dispute about the relative merits of the three estimators asymptotically, or about the inferiority of the median (with asymptotic efficiency 8/π² ≈ 0.81, compared with about 0.88 for the estimator of Rothenberg et al.). In any practical situation, however, we will be concerned with estimation from samples of finite size, and asymptotic properties will not necessarily give any guidance here. We are essentially concerned with two points in assessing the relative merits of estimators in small samples: their ease of application and their "small-sample efficiency," which is conveniently measured as the ratio of the Cramer-Rao lower bound to the variance of the estimator. In this paper various estimators of the location of the Cauchy distribution are compared in these two respects for samples of up to 20 observations. The small-sample properties of the m.l.e. have been extensively discussed elsewhere (Barnett [2]) and relevant results are summarized where necessary. The main purpose of the paper is to discuss general linear estimators based on the order statistics, and to assess their utility in the present context.
Since this paper was prepared a further interesting 'quick estimator', b [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
35. Computer-Intensive Methods for Tests about the Mean of an Asymmetrical Distribution
- Author
-
Clifton D. Sutton
- Subjects
Statistics and Probability ,Distribution (mathematics) ,Skewness ,Sample size determination ,Monte Carlo method ,Statistics ,Statistics, Probability and Uncertainty ,Student's t-test ,Arithmetic mean ,Mathematics ,Type I and type II errors ,Test (assessment) - Abstract
For one-sided tests about the mean of a skewed distribution, the t test is asymptotically robust for validity; however, it can be quite inaccurate and inefficient with small sample sizes. Results presented here confirm that a procedure due to Johnson should be preferred to the t test when the parent distribution is asymmetrical, because it reduces the probability of type I error in cases where the t test has an inflated type I error rate and it is more powerful in other situations. But if the skewness is severe and the sample size is small, then Johnson's test can also be appreciably inaccurate. For such situations, computer-intensive test procedures using bootstrap resampling are proposed, and with an extensive Monte Carlo study it is shown that these procedures are remarkably robust and can result in reduced probabilities of type I and type II errors compared to Johnson's test.
- Published
- 1993
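A generic bootstrap-t test of the mean, in the spirit of the computer-intensive procedures discussed in the abstract above, can be sketched as follows: center the data to impose the null hypothesis, resample with replacement, and compare the observed t statistic against the resampled ones. This is a standard resampling sketch with a hypothetical interface, not Sutton's exact proposal.

```python
import numpy as np

def bootstrap_t_pvalue(x, mu0, n_boot=2000, seed=0):
    """One-sided bootstrap-t test of H0: mean = mu0 vs H1: mean > mu0.
    Resamples from the centered data so the resampling population
    satisfies the null, then returns the bootstrap p-value."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    n = len(x)
    t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    centered = x - x.mean()              # impose the null on the resamples
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(centered, size=n, replace=True)
        t_star[b] = xb.mean() / (xb.std(ddof=1) / np.sqrt(n))
    return float(np.mean(t_star >= t_obs))
```

Because the reference distribution is built from the (skewed) data themselves rather than a symmetric t table, the p-value automatically adapts to the asymmetry that inflates the ordinary t test's type I error rate.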
36. Post-Stratification: A Modeler's Perspective
- Author
-
Roderick J. A. Little
- Subjects
Statistics and Probability ,education.field_of_study ,Population ,Survey sampling ,Sampling distribution ,Sample size determination ,Statistics ,Econometrics ,Population proportion ,Truncation (statistics) ,Statistics, Probability and Uncertainty ,Marginal distribution ,education ,Arithmetic mean ,Mathematics - Abstract
Post-stratification is a common technique in survey analysis for incorporating population distributions of variables into survey estimates. The basic technique divides the sample into post-strata, and computes a post-stratification weight w_ih = rP_h/r_h for each sample case in post-stratum h, where r_h is the number of survey respondents in post-stratum h, P_h is the population proportion from a census, and r is the respondent sample size. Survey estimates, such as functions of means and totals, then weight cases by w_h. Variants and extensions of the method include truncation of the weights to avoid excessive variability and raking to a set of two or more univariate marginal distributions. Literature on post-stratification is limited and has mainly taken the randomization (or design-based) perspective, where inference is based on the sampling distribution with population values held fixed. This article develops Bayesian model-based theory for the method. A basic normal post-stratification mod...
- Published
- 1993
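With the weights w_ih = rP_h/r_h defined in the abstract above, the post-stratified mean reduces to the census-weighted average of the stratum means, Σ_h P_h ȳ_h. A minimal sketch with a hypothetical interface:

```python
import numpy as np

def post_stratified_mean(y, stratum, pop_props):
    """Post-stratified estimate of a population mean.  pop_props maps each
    post-stratum label h to its census proportion P_h.  Each respondent in
    stratum h gets weight w_h = r * P_h / r_h, so the estimate equals
    sum_h P_h * (mean of y within stratum h)."""
    y = np.asarray(y, float)
    stratum = np.asarray(stratum)
    r = len(y)
    est = 0.0
    for h, P_h in pop_props.items():
        mask = stratum == h
        r_h = mask.sum()                 # respondents in post-stratum h
        w_h = r * P_h / r_h              # post-stratification weight
        est += np.sum(w_h * y[mask]) / r
    return est
```

When the sample proportions happen to equal the census proportions, the weights are all 1 and the estimate collapses to the ordinary sample mean.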
37. Calculating the geometric mean from a large amount of data
- Author
-
Zenon Szatrowski
- Subjects
Statistics and Probability ,Discrete mathematics ,Biometry ,Harmonic mean ,Statistics as Topic ,Weighted geometric mean ,Assumed mean ,Geometric–harmonic mean ,Quasi-arithmetic mean ,Applied mathematics ,Statistics, Probability and Uncertainty ,Geometric mean ,Pythagorean means ,Mathematics ,Arithmetic mean - Abstract
This note presents a short procedure for calculating the geometric mean from a large amount of data. The method involves a) using a grouping of the frequency distribution such that the ratios of the class limits are equal, and b) an application of the "short method" for calculating the arithmetic average. By this procedure the calculation of the geometric mean takes more time than that of the arithmetic mean only to the extent of getting the logarithms of four values and the antilogarithm of one value. An analogous procedure can be used for the harmonic and other means.
- Published
- 2010
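The procedure above rests on the identity that the geometric mean is the antilogarithm of the frequency-weighted arithmetic average of the log class values. The direct form is sketched below (the note's "short method" shortcut with an assumed mean gives the same answer with less hand arithmetic; the function name is hypothetical):

```python
import math

def grouped_geometric_mean(midpoints, freqs):
    """Geometric mean of grouped data: average the logarithms of the class
    midpoints, weighted by class frequencies, then take the antilog."""
    n = sum(freqs)
    log_mean = sum(f * math.log(m) for m, f in zip(midpoints, freqs)) / n
    return math.exp(log_mean)
```

Replacing `math.log(m)` with `1/m` and the final `exp` with a reciprocal gives the analogous grouped harmonic mean the note alludes to.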
38. On the Power of Tests for Multiple Comparison of Three Normal Means
- Author
-
Joachim Kunert
- Subjects
Statistics and Probability ,Discrete mathematics ,Studentization ,Conjecture ,F-test ,Large set (Ramsey theory) ,Multiple comparisons problem ,Statistics ,Pairwise comparison ,Statistics, Probability and Uncertainty ,Statistical hypothesis testing ,Arithmetic mean ,Mathematics - Abstract
This article gives theoretical reasons for the findings of a simulation study by Shaffer (1981). Two well-established procedures for multiple comparison of three normal means are compared—an F procedure and an R procedure. It turns out that for a large set of parameters the average number of rejected pairwise hypotheses is larger when the F procedure is applied. To achieve our result we have to prove a special instance of a conjecture by David, Lachenbruch, and Brandis (1972).
- Published
- 1990
39. The Exact Distribution of Bartlett's Test Statistic for Homogeneity of Variances with Unequal Sample Sizes
- Author
-
Ronald E. Glaser and Min-Te Chao
- Subjects
Statistics and Probability ,Sampling distribution ,Sample size determination ,Levene's test ,Statistics ,F-test of equality of variances ,Statistics, Probability and Uncertainty ,Weighted geometric mean ,Bartlett's test ,Statistic ,Arithmetic mean ,Mathematics - Abstract
An expression is derived for the null density of the Bartlett test statistic for testing equality of variances of n normal populations, when random samples not necessarily of the same size are taken. The expression permits computation of exact Bartlett critical values. As a generalization, an expression is derived for the density of the normalized ratio of an arbitrarily weighted geometric mean to the unweighted arithmetic mean, based on a sample of n independently distributed gamma random variables having common scale parameter but different shape parameters.
- Published
- 1978
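The statistic in the abstract above is, up to normalization, minus the log of the ratio of a weighted geometric mean of the sample variances to their pooled (arithmetic-mean) variance. The standard textbook form with Bartlett's chi-squared correction factor is sketched below; the article's contribution is the exact null density and exact critical values, which this sketch does not compute.

```python
import math

def bartlett_statistic(samples):
    """Bartlett's test statistic for equality of variances across k groups
    with possibly unequal sample sizes (asymptotic chi-square form)."""
    k = len(samples)
    n_i = [len(s) for s in samples]
    N = sum(n_i)
    v_i = [n - 1 for n in n_i]                          # degrees of freedom
    means = [sum(s) / len(s) for s in samples]
    s2_i = [sum((x - m) ** 2 for x in s) / v
            for s, m, v in zip(samples, means, v_i)]    # group variances
    s2_p = sum(v * s2 for v, s2 in zip(v_i, s2_i)) / (N - k)  # pooled
    stat = ((N - k) * math.log(s2_p)
            - sum(v * math.log(s2) for v, s2 in zip(v_i, s2_i)))
    C = 1 + (sum(1 / v for v in v_i) - 1 / (N - k)) / (3 * (k - 1))
    return stat / C
```

The statistic is zero exactly when all group variances coincide and grows as they diverge, mirroring the geometric-to-arithmetic-mean ratio falling below one.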
40. Uniformly More Powerful Tests for Hypotheses concerning Linear Inequalities and Normal Means
- Author
-
Roger L. Berger
- Subjects
Statistics and Probability ,Combinatorics ,Linear inequality ,Multivariate random variable ,Covariance matrix ,Likelihood-ratio test ,Statistics ,Multivariate normal distribution ,Statistics, Probability and Uncertainty ,Majorization ,Mathematics ,Arithmetic mean ,Statistical hypothesis testing - Abstract
This article considers some hypothesis-testing problems regarding normal means. In these problems, the hypotheses are defined by linear inequalities on the means. We show that in certain problems the likelihood ratio test (LRT) is not very powerful. We describe a test that has the same size, α, as the LRT and is uniformly more powerful. The test is easily implemented, since its critical values are standard normal percentiles. The increase in power with the new test can be substantial. For example, the new test's power is 1/(2α) times bigger (10 times bigger for α = .05) than the LRT's power for some parameter points in a simple example. Specifically, let X = (X_1, …, X_p)′ (p ≥ 2) be a multivariate normal random vector with unknown mean μ = (μ_1, …, μ_p)′ and known, nonsingular covariance matrix Σ. We consider testing the null hypothesis H_0: b′_iμ ≤ 0 for some i = 1, …, k versus the alternative hypothesis H_1: b′_iμ > 0 for all i = 1, …, k. Here b_1, …, b_k (k ≥ 2) are specified p-dimensional ve...
- Published
- 1989
41. Estimating a Product of Means: Bayesian Analysis with Reference Priors
- Author
-
José M. Bernardo and James O. Berger
- Subjects
Statistics and Probability ,Estimation theory ,Product (mathematics) ,Bayesian probability ,Posterior probability ,Statistics ,Prior probability ,Inference ,Statistics, Probability and Uncertainty ,Mathematics ,Arithmetic mean ,Jeffreys prior - Abstract
Suppose that we observe X ∼ N(α, 1) and, independently, Y ∼ N(β, 1), and are concerned with inference (mainly estimation and confidence statements) about the product of means θ = αβ. This problem arises, most obviously, in situations of determining area based on measurements of length and width. It also arises in other practical contexts, however. For instance, in gypsy moth studies, the hatching rate of larvae per unit area can be estimated as the product of the mean of egg masses per unit area times the mean number of larvae hatching per egg mass. Approximately independent samples can be obtained for each mean (see Southwood 1978). Noninformative prior Bayesian approaches to the problem are considered, in particular the reference prior approach of Bernardo (1979). An appropriate reference prior for the problem is developed, and relatively easily implementable formulas for posterior moments (e.g., the posterior mean and variance) and credible sets are derived. Comparisons with alternative noninf...
- Published
- 1989
42. The Ratio of the Geometric Mean to the Arithmetic Mean for a Random Sample from a Gamma Distribution
- Author
-
Ronald E. Glaser
- Subjects
Statistics and Probability ,Combinatorics ,Discrete mathematics ,Contraharmonic mean ,Geometric–harmonic mean ,Gamma distribution ,Interval (graph theory) ,Statistics, Probability and Uncertainty ,Weighted geometric mean ,Geometric mean ,Mathematics ,Arithmetic mean - Abstract
Let X_1, …, X_n denote a random sample from an unknown member of the family of two-parameter gamma densities with x > 0, α > 0, k > 0. Define U to be the ratio of the geometric mean to the arithmetic mean, U = (Π X_i)^{1/n} / (Σ X_i/n). The density of U is derived in a usable form which is exact for the interval e^{-2π/n} < u < 1. Large-sample properties of U are also considered.
- Published
- 1976
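Computing the statistic U itself is elementary; the article's contribution is its exact density. By the AM-GM inequality, U always lies in (0, 1], with U = 1 exactly when all observations are equal. A short sketch (hypothetical function name):

```python
import math

def gm_am_ratio(xs):
    """U = geometric mean / arithmetic mean of a positive sample.
    The log form avoids overflow in the product for large samples."""
    n = len(xs)
    gm = math.exp(sum(math.log(x) for x in xs) / n)
    am = sum(xs) / n
    return gm / am
```

Small values of U indicate strong dispersion relative to the mean, which is why U (and the weighted variant of Nandi's article below) appears in tests about gamma shape parameters.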
43. An Analysis of Some Properties of Alternative Measures of Income Inequality Based on the Gamma Distribution Function
- Author
-
James B. McDonald and Bartell C. Jensen
- Subjects
Statistics and Probability ,education.field_of_study ,Population ,Generalized gamma distribution ,Estimator ,Standard error ,Sample size determination ,Statistics ,Statistical inference ,Gamma distribution ,Statistics, Probability and Uncertainty ,education ,Mathematics ,Arithmetic mean - Abstract
The Gini, Theil entropy, and Pietra measures of inequality associated with the gamma distribution function are expressed in terms of the parameters defining the gamma distribution. Method of moments (MME) and maximum likelihood estimators (MLE) of these measures are obtained along with expressions for the asymptotic standard errors of the MLE measures. A table is presented that facilitates the calculation of MLE of inequality measures and the associated asymptotic standard errors by expressing each as a function of the ratio of the arithmetic mean and geometric mean. This table also facilitates the calculation of MME estimates of inequality measures. The results of a Monte Carlo study are used to compare the performance of the MME and MLE for data generated from a population characterized by a gamma distribution and to consider questions of statistical inference and requisite sample size.
- Published
- 1979
44. On the Exact Distribution of a Normalized Ratio of the Weighted Geometric Mean to the Unweighted Arithmetic Mean in Samples from Gamma Distributions
- Author
-
S. B. Nandi
- Subjects
Statistics and Probability ,Ratio distribution ,Mathematical analysis ,Generalized integer gamma distribution ,Statistics, Probability and Uncertainty ,Geometric distribution ,Weighted geometric mean ,Geometric mean ,Beta distribution ,Inverse distribution ,Mathematics ,Arithmetic mean - Abstract
The distribution of the product of several independent beta random variables as a mixture of beta distributions is derived by using a solution of Wilks's type B integral equation. This distribution is applied to finding the distribution of a normalized ratio of the weighted geometric mean (GM) to the unweighted arithmetic mean (AM) in random samples consisting of observations from gamma distributions, one observation being taken from each distribution. Two upper bounds for the truncation error related to the mixture representation are obtained. Application of these results to problems in testing statistical hypotheses is indicated.
- Published
- 1980
45. A Note on Estimation from a Cauchy Sample
- Author
-
Thomas J. Rothenberg, C. B. Tilanus, and Franklin M. Fisher
- Subjects
Statistics and Probability ,Delta method ,Order statistic ,Statistics ,Estimator ,Cauchy distribution ,Sample (statistics) ,Center (group theory) ,Statistics, Probability and Uncertainty ,Asymptotic theory (statistics) ,Arithmetic mean ,Mathematics - Abstract
A class of estimators is proposed for the estimation of the center of the Cauchy distribution. Each estimator in the class is the arithmetic average of a central subset of the sample order statistics. Although the sample median is a member of the proposed class, it is not the most efficient. The average of roughly the middle quarter of the ordered sample has the lowest asymptotic variance.
- Published
- 1964
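The class of estimators in the abstract above averages a central subset of the order statistics, and the asymptotically best member keeps roughly the middle quarter. A sketch is below; the function name is hypothetical, and the rounding of the trim count is an implementation choice not specified by the abstract.

```python
def midmean_cauchy(xs, frac=0.25):
    """Estimate the center of a Cauchy sample by averaging roughly the
    middle `frac` of the order statistics (frac ~ 0.25 is asymptotically
    best in this class; frac -> 0 recovers the sample median)."""
    xs = sorted(xs)
    n = len(xs)
    k = int(round(n * (1 - frac) / 2))   # observations trimmed per tail
    middle = xs[k:n - k]
    return sum(middle) / len(middle)
```

Unlike the full-sample arithmetic mean, which is useless for Cauchy data (it has the same distribution as a single observation), this trimmed average is a consistent and fairly efficient estimator of the center.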
46. The Relation between the Arithmetic and Geometric Average of two Index Numbers
- Author
-
R. von Huhn
- Subjects
Statistics and Probability ,Discrete mathematics ,Combinatorics ,Index (economics) ,Relation (database) ,Harmonic mean ,Geometric standard deviation ,Statistics, Probability and Uncertainty ,Geometric mean ,Inequality of arithmetic and geometric means ,Geometric progression ,Mathematics ,Arithmetic mean - Published
- 1930
47. A Table for Estimating the Mean of a Lognormal Distribution
- Author
-
Hanspeter Thöni
- Subjects
Statistics and Probability ,Minimum-variance unbiased estimator ,Logarithm ,Sample size determination ,Log-normal distribution ,Statistics ,Truncated mean ,Function (mathematics) ,Statistics, Probability and Uncertainty ,Mathematics ,Variable (mathematics) ,Arithmetic mean - Abstract
If experimental data have been analyzed using a logarithmic transformation of the actual observations, taking the antilog of the mean of the transformed variables yields a biased estimate of the mean μ of the original variable. To obtain an unbiased estimate of this mean a correction for bias must be applied. The table at the end of this note provides values of a function of the sample size and variance of data transformed to logarithms to base 10 which, when added to the mean of the transformed data, yields an unbiased estimate of the mean of the original variable after transforming back. This estimate is more efficient than the mean of the untransformed observations.
- Published
- 1969
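Thöni's table supplies an exact finite-sample correction in base-10 logs. The familiar large-sample version of the same back-transformation, written in natural logs, estimates the mean as exp(m + s²/2), where m and s² are the mean and variance of the log data. A sketch of that approximation (not the table's exact unbiased estimator):

```python
import math

def lognormal_mean_estimate(data):
    """Large-sample bias-corrected estimate of the mean of a lognormal
    variable: exp(mean of logs + variance of logs / 2).  The naive
    back-transform exp(m) alone would underestimate the mean."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    m = sum(logs) / n
    s2 = sum((v - m) ** 2 for v in logs) / (n - 1)
    return math.exp(m + s2 / 2)
```

The correction term s²/2 is always nonnegative, so this estimate never falls below the geometric mean exp(m), consistent with the bias the note describes.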
48. Applications of a New Graphic Method in Statistical Measurement
- Author
-
Jacob Mincer
- Subjects
Statistics and Probability ,Computer science ,media_common.quotation_subject ,Computation ,Moving average ,Point (geometry) ,Simplicity ,Statistics, Probability and Uncertainty ,Frequency distribution ,Algorithm ,Pencil (mathematics) ,media_common ,Simple (philosophy) ,Arithmetic mean - Abstract
By virtue of simplicity and generality of the principle it supplies, a note by S. I. Askovitz which appeared in Science early in 1955 promises to become a new starting point toward the development of a "geometry of statistics." While ad hoc graphic procedures have been used in statistics all along, they involve either theoretically crude approximations, "free-hand drawing," or specially prepared scales (nomographs). The new method requires no special scales and leaves no room for "free-hand." The only practical limitation on its theoretical precision is the sharpness of eye and pencil. In his note, Askovitz presents a method for determining the mean value of n observations. While the need for labor saving or for visual demonstration is not particularly great in the computation of a simple arithmetic average, several applications to the calculation of other measures, most of them of importance in economic statistics, will illustrate the fruitfulness of the method more strikingly. We shall start with an exposition of the Askovitz method and show how it can be applied to such diverse problems as calculation of average deviations, geometric means, factorials, means of frequency distributions, Gini concentration ratios, moving averages, and seasonal adjustments.
- Published
- 1957
49. Approximate Distribution of Extremes for Nonsample Cases
- Author
-
John E. Walsh
- Subjects
Statistics and Probability ,education.field_of_study ,Basis (linear algebra) ,Cumulative distribution function ,Population ,Univariate ,Expression (mathematics) ,Set (abstract data type) ,Distribution (mathematics) ,Statistics ,Statistics, Probability and Uncertainty ,education ,Mathematics ,Arithmetic mean - Abstract
The data are n univariate observations that are not necessarily from the same population, from continuous populations, or independent. The problem is to develop an approximate expression for a specified one of the upper (or lower) extremes of this set of observations when a type of m-dependence occurs. General expressions are developed that depend on n, the extreme considered, and the arithmetic average of the cumulative distribution functions (cdf's) for the individual observations. These results seem to be in a form that is satisfactory for practical applications, and a rule is given for deciding when n is large enough for their use. Also, a basis is furnished for deciding on the suitability of the model from the characteristics of the experimental situation, and consistent estimation of the average of the cdf's is considered. In practice, the asymptotic distributions that occur for the case of samples from continuous populations also seem to frequently occur for nonsample cases involving data ...
- Published
- 1964
50. A Correlated Probit Model for Joint Modeling of Clustered Binary and Continuous Responses
- Author
-
Gueorguieva, Ralitza V. and Agresti, Alan
- Published
- 2001