118 results
Search Results
2. The Birth of the Journal of Applied Probability
- Author
-
Gani, J. and Spier, A.
- Published
- 1965
- Full Text
- View/download PDF
3. Operational Research Today and Tomorrow
- Author
-
Swan, A. W.
- Published
- 1958
- Full Text
- View/download PDF
4. SUMMARIES OF PAPERS DELIVERED AT THE 129TH ANNUAL MEETING OF THE AMERICAN STATISTICAL ASSOCIATION, NEW YORK, 19-22, 1969.
- Subjects
- *
ABSTRACTS of title , *STATISTICS , *ANNUAL meetings , *SCIENCE associations , *EXPERIMENTAL design , *LEAST squares , *REGRESSION analysis , *PROBABILITY theory , *CONFERENCES & conventions - Abstract
The article presents abstracts and titles of abstracts submitted for papers presented at the 1969 Annual Meetings of the American Statistical Association and Biometric Society in New York. Some of the abstracts are "Financial and Non-Financial Factors Affecting Post-High School Plans, 1939-1965," by Walter Adams, "Experimental Designs and Mathematical Models," by Sidney Addelman, "Consistency of Linear Least Squares Estimate in a Regression Model with Lagged Variable," by M. Ahsanullah, "The Use of Prior Information in Testing of New Drugs," by David M. Allen, "A Comparison of Probability Density Estimates," by Gary Anderson, "Stochastic Models of City Population Density," by Robert T. Amsden, "Identification and Input Signal Synthesis Problems in Discrete-Time Stochastic Control Systems," by Masanao Aoki, "A Computer Language for the Analysis of Variance," by David J. Armor, and "A Generalized Mathematical Model for Biological Recovery with Applications in Radiation Biology," by George M. Angelton.
- Published
- 1970
- Full Text
- View/download PDF
5. An Experiment in Probability Estimation.
- Author
-
Green, Paul E., Halbert, Michael H., and Robinson, Patrick J.
- Subjects
MARKETING research ,PROBABILITY theory ,RESEARCH ,MARKETING ,PROBABILITY measures ,ESTIMATION theory ,STATISTICS ,CONSUMERS ,BEHAVIOR ,MARKETING models - Abstract
While the activity of marketing research can be fruitfully viewed within a statistical decision theoretic model, relatively little is known concerning the descriptive aspects of how people (managers or consumers) revise probabilities in the light of new information. This paper reports the results of a behavioral study in probability revision and the implications of these findings for the operational use of decision theoretic concepts in prescriptive and descriptive choice-making models. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
6. Variance Reduction by Antithetic Variates in GI/G/1 Queuing Simulations.
- Author
-
Mitchell, Bill
- Subjects
ANALYSIS of variance ,VARIANCES ,STATISTICS ,QUEUING theory ,PROBABILITY theory ,RANDOM variables ,OPERATIONS research - Abstract
This paper considers the use of antithetic variates to reduce the variance of estimates obtained in the simulation of a GI/G/1 queue. Two experimental configurations are considered: in the first, 2n observations are taken in a single run; in the second, n observations are taken in each of two runs. If the sequences of uniform random variables that generate the realizations of the queuing system in the two runs are antithetic, we show that the variance of estimates of the mean and distribution of stationary waiting time and number in the queue is less in the second configuration than in the first. We also obtain sufficient conditions for the covariance of functions of a vector of uniform random variables to be nonnegative. Experimental results are given for M/M/1 queuing simulations to illustrate the magnitude of the variance reduction. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
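The two-run antithetic scheme the abstract describes can be sketched in a few lines. This is only an illustration under assumed parameters (an M/M/1 queue driven through the Lindley recursion by explicit uniforms), not the paper's analysis; the function name is ours.

```python
import math
import random

def mm1_waits(u_arrival, u_service, lam=0.5, mu=1.0):
    # Lindley recursion W[k+1] = max(0, W[k] + S[k] - A[k+1]), with
    # interarrival and service times generated by inverse transform
    # from explicit uniforms so a run can be made antithetic.
    w, waits = 0.0, []
    for ua, us in zip(u_arrival, u_service):
        a = -math.log(1.0 - ua) / lam   # Exp(lam) interarrival time
        s = -math.log(1.0 - us) / mu    # Exp(mu) service time
        waits.append(w)
        w = max(0.0, w + s - a)
    return waits

random.seed(1)
n = 2000
u_a = [random.random() for _ in range(n)]
u_s = [random.random() for _ in range(n)]

run1 = mm1_waits(u_a, u_s)
# Second run driven by the antithetic uniforms 1 - u.
run2 = mm1_waits([1.0 - u for u in u_a], [1.0 - u for u in u_s])

mean_wait = (sum(run1) + sum(run2)) / (2 * n)
```

Because the two runs are negatively correlated, the pooled estimate of the mean wait is typically less variable than one run of 2n observations, which is the configuration comparison the paper makes precise.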
7. ON SINGLE-SERVER BULK-QUEUING PROCESSES WITH BINOMIAL INPUT.
- Author
-
Bhat, U. Narayan
- Subjects
BINOMIAL distribution ,BULK queues ,QUEUING theory ,CONSUMERS ,DISTRIBUTION (Probability theory) ,PROBABILITY theory ,STATISTICS ,MATHEMATICS ,MATHEMATICAL models - Abstract
In this paper, treating time as a discrete variable, we study the transient behavior of a bulk-service queuing system in which the number of customers arriving within a fixed time interval follows a binomial probability distribution. The service times are assumed to be identically distributed and statistically independent. The busy period and queue length distributions have been obtained in terms of probability generating functions. [ABSTRACT FROM AUTHOR]
- Published
- 1964
- Full Text
- View/download PDF
8. Estimates of induced abortion in urban North Carolina.
- Author
-
Abernathy, James R., Greenberg, Bernard G., Horvitz, Daniel G., Abernathy, J R, Greenberg, B G, and Horvitz, D G
- Subjects
ABORTION ,BIRTH control ,FETAL death ,INTERVIEWING ,WHITE people ,ABORTION laws ,AGE distribution ,ATTITUDE (Psychology) ,BLACK people ,RESEARCH methodology ,META-analysis ,PROBABILITY theory ,STATISTICS ,CITY dwellers ,SOCIOECONOMIC factors ,PARITY (Obstetrics) - Abstract
In 1965, Warner developed an interviewing procedure designed to eliminate evasive answer bias when questions of a sensitive nature are asked. He called the procedure "randomized response." The authors have been studying the technique for several years and, in this paper, are reporting some of the estimates of induced abortion in urban North Carolina using randomized response. Estimates of the proportion of women having an abortion during the past year among women 18-44 years of age are reported. For the study population indices were developed relating induced abortion to total conceptions for whites and nonwhites. The illegal abortion rate per 100 conceptions was estimated to be 14.9 for whites and 32.9 for nonwhites. Estimates of the proportion of women having an abortion during their lifetime among women 18 years old or over are also shown. Among ever married women, the proportion having an abortion during their lifetime declined as education increased. Estimates were high for women with 5 or more pregnancies. Most of the respondents stated that they were satisfied that the randomized response approach would not reveal their personal situation. Furthermore, they did not think their friends would truthfully respond to a direct question regarding abortion. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
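Warner's original randomized-response estimator, on which the abstract builds, is a one-line inversion. This sketch shows the original two-question design (the study itself may have used a variant, such as the unrelated-question technique); the function name is ours.

```python
def warner_estimate(yes_fraction, p):
    # Warner's design: with probability p the respondent is asked the
    # sensitive question, with probability 1 - p its negation; everyone
    # answers truthfully.  Then P(yes) = p*pi + (1 - p)*(1 - pi),
    # so solving for the sensitive proportion pi gives:
    assert p != 0.5, "p = 0.5 makes the estimator undefined"
    return (yes_fraction - (1.0 - p)) / (2.0 * p - 1.0)

# If the true proportion is 0.2 and p = 0.7, the expected 'yes'
# fraction is 0.7*0.2 + 0.3*0.8 = 0.38, and inversion recovers 0.2.
pi_hat = warner_estimate(0.38, 0.7)
```

The privacy protection comes from the interviewer never knowing which of the two questions a given "yes" answered.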
9. MEASURING RISK ON CONSUMER INSTALMENT CREDIT.
- Author
-
Smith, Paul F.
- Subjects
CREDIT management ,PORTFOLIO management (Investments) ,CREDIT risk ,BUSINESS losses ,FINANCIAL risk management ,PROBABILITY theory ,RISK management in business ,BANK management ,STATISTICS ,RISK assessment ,MANAGEMENT science ,MATHEMATICAL models in business - Abstract
The advantages of statistical measures for grading credit risks in lending to consumers have been widely recognized but relatively little use has been made of such systems. The paper develops a relatively simple statistical method for measuring risk on individual accounts that can also be used for measuring and controlling portfolio quality and for estimating loss rates. The procedure entails four steps: 1. Comparison of good and bad accounts in the search for characteristics that are associated with bad accounts; 2. Calculation of bad account probabilities for discriminating characteristics; 3. Development of a risk index from bad account probabilities to be used in grading accounts; 4. Evaluation of the risk index. A test of the method on the accounts of a commercial bank is described and the judgements implied by the risk index are compared to the criteria used by interviewers in rejecting applicants. A great many similarities are found between the results of the two methods but a number of striking dissimilarities are observed. The last section of the paper illustrates the ways in which the risk index can be used to adjust credit quality to the desired volume and loss experience. It also demonstrates its use in measuring portfolio quality and in estimating loss rates. [ABSTRACT FROM AUTHOR]
- Published
- 1964
- Full Text
- View/download PDF
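The four-step procedure in the abstract can be sketched roughly as below. The additive index is only an illustration of the idea; the paper's exact index construction is not reproduced here, and all names and counts are hypothetical.

```python
def bad_rates(good_counts, bad_counts):
    # Steps 1-2: estimate a bad-account probability for each
    # discriminating characteristic from historical good/bad counts.
    return {c: bad_counts[c] / (bad_counts[c] + good_counts[c])
            for c in good_counts}

def risk_index(account_characteristics, rates):
    # Step 3: combine the per-characteristic probabilities into a
    # single risk index for an account (here, a simple sum; step 4,
    # evaluating the index against outcomes, is not shown).
    return sum(rates[c] for c in account_characteristics)

rates = bad_rates({'renter': 90, 'no_phone': 50},
                  {'renter': 10, 'no_phone': 50})
index = risk_index(['renter', 'no_phone'], rates)
```

Grading accounts then reduces to comparing the index against cutoffs chosen for the desired volume and loss experience.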
10. THE SYSTEMATIC BIAS EFFECTS OF INCOMPLETE RESPONSES IN ROTATION SAMPLES.
- Author
-
Williams, W. H.
- Subjects
SURVEYS ,STATISTICAL sampling ,ROTATION groups ,PROBABILITY theory ,STATISTICS - Abstract
Rotation samples are frequently used in continuing surveys in order to obtain estimates of changes in a characteristic over time as well as separate estimates of the characteristic at specific points in time. Rotation designs involve the retention of some sampling units and the replacement of others. It has been observed in some studies that there are systematic changes in the estimate of a characteristic, depending on the frequency of appearance of a rotation group in the sample. It is shown in this paper that these systematic changes must occur provided (1) the probability of a selected unit actually appearing in the sample is monotonically related to the characteristic under measurement, and (2) the probability of a selected unit actually appearing in the sample changes monotonically from one observation point to the next. Some numerical examples showing the form and magnitude of the potential biases are included. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
11. Probabilities for the Size of Largest Clusters and Smallest Intervals.
- Author
-
Wallenstein, Sylvan R. and Naus, Joseph I.
- Subjects
STATISTICS ,STATISTICAL correlation ,LEAST squares ,MATHEMATICAL statistics ,PROBABILITY theory ,REGRESSION analysis - Abstract
Given N points distributed at random on [0,1), let n[sub p] be the size of the largest number of points clustered within an interval of length p. Previous work finds Pr (n[sub p] ≥ n) for n > N/2, and for n < N/2, p = 1/L, L an integer. The formula for the case p = 1/L is in terms of the sum of L × L determinants and is not computationally feasible for large L. The present paper derives such a computational formula. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
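The statistic n[sub p] itself is simple to compute for a given sample by scanning windows anchored at each point; the hard part, which the abstract addresses, is its exact distribution. A sketch of the statistic only (function name ours):

```python
def largest_cluster(points, p):
    # n_p: the largest number of points falling in any subinterval of
    # length p.  A maximizing window can always be taken to start at
    # one of the points, so scanning those N windows suffices.
    pts = sorted(points)
    best = 0
    for i, x in enumerate(pts):
        j = i
        while j < len(pts) and pts[j] <= x + p:
            j += 1
        best = max(best, j - i)
    return best

# largest_cluster([0.10, 0.15, 0.90], 0.1) -> 2
```

This runs in O(N log N) for the sort plus a linear scan, so it is feasible even where the L × L determinant formula for the distribution is not.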
12. A Bayesian Look at Inverse Linear Regression.
- Author
-
Hoadley, Bruce
- Subjects
- *
REGRESSION analysis , *INVERSE functions , *MATHEMATICAL statistics , *BAYESIAN analysis , *STATISTICS , *STATISTICAL decision making , *PROBABILITY theory - Abstract
The model considered in this paper is simple linear regression (Ey[sub i] = beta[sub 1] + beta[sub 2] x[sub i], i = 1, ..., n), and the problem is to make statistical inferences about an unknown value of x corresponding to one or more additional observed values of y. The maximum likelihood estimator of x and the classical (1 - alpha) 100% confidence set S for x have some undesirable properties. For example, the estimator has infinite mean square error and P {S = (- infinity, + infinity)} > 0. The purpose of this paper is to demonstrate that insight and understanding, as well as a useful class of solutions, can be obtained by looking at the problem from a Bayesian point of view. A result which follows from a general Bayes solution is that the inverse estimator [4] is Bayes with respect to a particular informative prior. [ABSTRACT FROM AUTHOR]
- Published
- 1970
- Full Text
- View/download PDF
13. MOMENTS OF THE DISTRIBUTION OF SAMPLE SIZE IN A SPRT.
- Author
-
Ghosh, B. K.
- Subjects
- *
MOMENTS method (Statistics) , *STATISTICAL sampling , *ARITHMETIC , *DISTRIBUTION (Probability theory) , *DIFFERENTIABLE functions , *PROBABILITY theory , *APPROXIMATION theory , *EQUATIONS , *STATISTICS - Abstract
The article discusses moments of the distribution of the sample size N in a sequential probability ratio test (SPRT). The present paper provides the variance and the third and fourth moments of N, with the details worked out in five common applications of the SPRT. The relation of the variance of N to the truncation of a SPRT is also discussed in the paper. A. Wald indicated in passing how one can obtain the moments of N, but in the only published literature in which the author encountered a general expression for the variance of N, that expression is incorrect; the correct expression can be obtained from J. Wolfowitz's results or by differentiating Wald's fundamental identity twice. In many practical applications of the SPRT, the moments are differentiable functions of a real-valued parameter, and the limiting expressions for the moments can then be determined by standard methods of mathematical analysis. However, for the third and fourth moments the actual technique may involve an excessive amount of arithmetic.
- Published
- 1969
- Full Text
- View/download PDF
14. A COMPARISON BETWEEN THE POWER OF THE DURBIN-WATSON TEST AND THE POWER OF THE BLUS TEST.
- Author
-
Abrahamse, A. P. J. and Koerts, J.
- Subjects
- *
PROBABILITY theory , *DISTRIBUTION (Probability theory) , *STATISTICS , *VON Neumann algebras , *HYPOTHESIS , *STATISTICAL hypothesis testing , *DECISION making - Abstract
In an earlier paper [5] the authors compared the power of the BLUS test with the probability of a correct decision of the Durbin-Watson bounds test. A method to compute the distribution of the Von Neumann ratio under the null hypothesis and under the alternative hypothesis was given. In the present paper the latter method is used to tabulate the BLUS-test statistic and to compute the exact significance points of the Durbin-Watson test for several examples. Powers of both tests are computed and compared. It appears that, for the cases considered, the power of the exact Durbin-Watson test exceeds that of the BLUS procedure, while the latter is greater than the probability of a correct decision in the Durbin-Watson bounds test. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
15. TIES IN PAIRED-COMPARISON EXPERIMENTS: A GENERALIZATION OF THE BRADLEY-TERRY MODEL.
- Author
-
Rao, P. V. and Kupper, L. L.
- Subjects
- *
PROBABILITY theory , *MATHEMATICS , *BAYESIAN analysis , *PARAMETERS (Statistics) , *STATISTICS - Abstract
The Bradley-Terry model for a paired-comparison experiment with t treatments postulates a set of t 'true' treatment ratings π1, π2, ..., πt such that πi ≥ 0, Σπi = 1, and the probability of preferring treatment i to treatment j is πi(πi + πj)^-1. Thus, according to this model, every comparison of two treatments results in a definite preference for one of the two. This is an unrealistic restriction, since when there is no difference between the responses due to two treatments, any method of expressing preference for one over the other is somewhat arbitrary. This paper considers a modification of the Bradley-Terry model by introducing an additional parameter, called the threshold parameter, into the model. This permits 'ties' in the model. The problem of estimation and tests of hypotheses for the parameters of the modified model is also dealt with in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
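The threshold modification is usually written with a parameter θ ≥ 1: treatment i is preferred to j with probability πi/(πi + θπj), and whatever probability remains after both one-sided preferences is the tie probability. A minimal sketch under that formulation (assumed here, as the abstract does not give the formula):

```python
def rao_kupper_probs(pi_i, pi_j, theta):
    # Probabilities of 'i preferred', 'j preferred', and 'tie' under
    # the Bradley-Terry model with threshold parameter theta >= 1;
    # theta = 1 recovers the original Bradley-Terry model (no ties).
    p_i = pi_i / (pi_i + theta * pi_j)
    p_j = pi_j / (pi_j + theta * pi_i)
    return p_i, p_j, 1.0 - p_i - p_j
```

Larger θ widens the band of rating ratios that a judge reports as a tie, which is exactly the behavior the unmodified model cannot express.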
16. SOME PROBABILITIES, EXPECTATIONS AND VARIANCES FOR THE SIZE OF LARGEST CLUSTERS AND SMALLEST INTERVALS.
- Author
-
Naus, J. I.
- Subjects
- *
UNIFORM distribution (Probability theory) , *DISTRIBUTION (Probability theory) , *ANALYSIS of variance , *VARIANCES , *ESTIMATION theory , *STATISTICS , *PROBABILITY theory - Abstract
Given N points independently drawn from the uniform distribution on (0, 1), let p[sub n] be the size of the smallest interval that contains n out of the N points; let n[sub p] be the largest number of points to be found in any subinterval of (0, 1) of length p. This paper uses a result of Karlin, McGregor, Barton and Mallows to determine the distribution of n[sub p] for p = 1/k, k an integer. The paper gives simple determinations for the expectations and variances of p[sub n] for all fixed n > (N + 1)/2, and of n[sub 1/2]. The distribution and expectation of n[sub p] are estimated and tabulated for the cases p = 0.1(0.1)0.9, N = 2(1)10. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
17. SYSTEMATIC SAMPLING WITH UNEQUAL PROBABILITY AND WITHOUT REPLACEMENT.
- Author
-
Hartley, H. O.
- Subjects
- *
ESTIMATION theory , *STATISTICS , *PROBABILITY theory , *STATISTICAL sampling , *ANALYSIS of variance , *SAMPLE size (Statistics) - Abstract
Given a population of N units, it is required to draw a sample of n distinct units in such a way that the probability for the ith unit to be in the sample is proportional to its 'size' x[sub i]. From the alternative methods of achieving this we consider here only the so-called systematic method which, to the best of our knowledge, was first developed by W. G. Madow (1949): the units in the population are listed in a 'particular' order, their x[sub i] accumulated, and a systematic selection of n elements from a 'random start' is then made on the accumulation. In a more recent paper (H. O. Hartley and J. N. K. Rao (1962)) an asymptotic estimation theory (for large N) associated with this procedure was developed for the case when the order of the listed units is random. In this paper we draw attention to certain properties of Madow's estimator: we utilize the fact that with systematic sampling the total number of different samples is N (rather than the N-choose-n samples arising with completely random sampling). This simplification in the definition of the variance of the estimator in repeated sampling enables us to identify the exact variance of Madow's estimator with a 'between sample mean square' in a special analysis of variance (see section 4) and compare it with the variance of the pps estimator in sampling with replacement as well as in other sampling procedures. We also develop two approximate methods of variance estimation (see section 5). We pay particular attention to the case when the units are listed in the order of their size.
With this particular arrangement our method can be described as 'systematic with random start' and the gain in precision that we accomplish has, of course, analogues in systematic sampling with equal probabilities employing ratio estimators in which there is a relation between the ratio r[sub i] = y[sub i]/x[sub i] and x[sub i]. Compared with other methods the present procedure combines the advantage of ease of systematic sample selection with the availability of exact variance formulas for any n and N. Moreover, it usually leads to a more efficient estimate. Its shortcoming resides in the fact that the estimation of the variance is based on certain assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
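Madow's selection step, as the abstract describes it, accumulates the sizes and takes every (total/n)-th point after a random start. A sketch, assuming no unit's size exceeds total/n (so the n selected units are distinct); the function name is ours:

```python
import random

def systematic_pps(sizes, n, rng=random):
    # Accumulate the sizes, choose a random start in [0, step), and
    # select the unit whose cumulative-size interval contains each of
    # the n equally spaced marks.  Each unit's selection probability
    # is proportional to its size.
    total = sum(sizes)
    step = total / n
    start = rng.uniform(0.0, step)
    marks = [start + k * step for k in range(n)]
    chosen, cum, i = [], 0.0, 0
    for idx, x in enumerate(sizes):
        cum += x
        while i < n and marks[i] < cum:
            chosen.append(idx)
            i += 1
    return chosen
```

Only the single start is random, which is why the number of possible samples is N rather than N-choose-n, the fact the paper exploits for exact variance formulas.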
18. ESTIMATION OF MULTIPLE CONTRASTS USING t-DISTRIBUTIONS.
- Author
-
Dunn, Olive Jean and Massey Jr, Frank J.
- Subjects
- *
TIME series analysis , *CHARACTERISTIC functions , *MATHEMATICAL statistics , *PROBABILITY theory , *CONFIDENCE intervals , *DISTRIBUTION (Probability theory) , *MATHEMATICAL models , *STATISTICAL sampling , *MULTIVARIATE analysis , *STATISTICS - Abstract
Various methods based on Student t variates have been suggested and used for obtaining simultaneous confidence intervals for several means, or for several contrasts among means. Determination of an overall confidence level for such intervals involves evaluating the probability mass of a multivariate t distribution over a hypercube centered at the origin, with sides paralleling the coordinate planes, or obtaining bounds for this probability mass. Since such distributions involve many nuisance parameters, an impossible number of tables would be necessary in order to make exact confidence intervals. In the virtual absence of tables, approximations and bounds become important. In this paper, an attempt has been made to investigate the adequacy of certain suggested approximations [2], [5], [8] by computing the exact distributions for some particular cases. These exact distributions have been compared with approximations. This paper is concerned with two-sided confidence intervals, rather than one-sided intervals. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
19. A HISTORY OF DISTRIBUTION SAMPLING PRIOR TO THE ERA OF THE COMPUTER AND ITS RELEVANCE TO SIMULATION.
- Author
-
Teichroew, Daniel
- Subjects
- *
SIMULATION methods & models , *PROBABILITY theory , *STATISTICAL sampling , *DISTRIBUTION (Probability theory) , *STATISTICS , *TIME series analysis , *DIGITAL computer simulation , *METHODOLOGY - Abstract
The use of simulation, as a technique for attacking difficult problems, has increased greatly with the availability of the digital computer. This is illustrated by the large number of references in Shubik's (1960) bibliography[sup 2] and in the large number of studies published since then. Simulation is essentially an extension of a technique known as empirical sampling, or distribution sampling, which has been used in the field of statistics for many years. The limitations of the technique, which are well known to statisticians, are apparently not as well known, or at least not as well recognized, by those using simulation today. The first part of this paper contains an historical survey of distribution sampling as used by statisticians. The material was originally prepared in 1953 and is reproduced here in slightly revised form to bring this history to the attention of present day simulators in order that the lessons that can be learned from this part can more readily be incorporated in the development of methodology today. The second part of this paper discusses the relevance of empirical sampling to the present day state of the art of simulation. The technique of generating random numbers, developed for empirical sampling, can be applied directly to simulation. However, in other aspects simulation is more difficult than empirical sampling, and here the theory of distribution sampling does not have much to offer. The difficulties are due to lack of independence among time series, non-stationarity of the time series, and the large number of parameters. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
20. RANDOM WALKS, FIRE DAMAGE AMOUNT AND OTHER PARETIAN RISK PHENOMENA.
- Author
-
Mandelbrot, Benoit
- Subjects
FIRES ,CASUALTY insurance ,RISK ,FIRE victims ,INCOME inequality ,LIABILITY for fire damages ,PROPERTY damage ,PROBABILITY theory ,STATISTICS - Abstract
Being one of the oldest branches of operations research, actuarial science has accumulated a substantial store of knowledge about the risks associated with living. The present paper will discuss one such question. Although it is relative to a specific problem of fire casualty, it illustrates more generally why the Paretian distribution of incomes and fortunes should constitute 'a source of anxiety for the risk theory of insurance.' Very similar mechanisms apply in many other problems. [ABSTRACT FROM AUTHOR]
- Published
- 1964
- Full Text
- View/download PDF
21. BETTER ESTIMATES OF CONFIDENCE INTERVALS FOR VERY LOW ERROR RATE POPULATION.
- Author
-
Birnberg, Jacob G. and Pratt, Robert J. A.
- Subjects
POPULATION ,ESTIMATION theory ,GRAPHIC methods ,POPULATION research ,PROBABILITY theory ,ERRORS ,CONFIDENCE intervals ,STATISTICAL sampling ,STATISTICS ,BOUNDARY element methods ,BOUNDARY value problems ,POPULATION forecasting - Abstract
In the estimation of error rates for populations whose error rate is quite small, the usual assumption of normality can lead to erroneous results. The confidence interval that is actually calculated for any given probability will have too low a lower bound, and not a high enough upper bound. Thus, the user may be misled into too optimistic a view of the population being sampled. This article discusses the nature of the problem and provides graphs from which the more accurate interval can be read. The final section of the paper deals with the related problem of confidence intervals when the observed error rate is zero. Tables are provided which facilitate the developing of confidence statements for such samples. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
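For the zero-observed-errors case treated in the paper's final section, the exact one-sided upper bound has a standard closed form. This sketch shows that standard result, not the paper's own tables; the function name is ours.

```python
def zero_error_upper_bound(n, confidence=0.95):
    # With zero errors observed in n sampled items, the upper
    # confidence bound p is the rate that would produce zero errors
    # with probability 1 - confidence:  (1 - p)**n = 1 - confidence.
    return 1.0 - (1.0 - confidence) ** (1.0 / n)
```

For n = 300 at 95% confidence this gives roughly 0.0099, close to the familiar "rule of three" approximation 3/n = 0.01, and well above the misleading bound of zero a normal approximation would suggest.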
22. Models for the Estimation of the Probability of Dying between Birth and Exact Ages of Early Childhood.
- Author
-
Sullivan, Jeremiah M.
- Subjects
FERTILITY ,ESTIMATION theory ,PROBABILITY theory ,CHILDREN ,STATISTICS ,CENSUS - Abstract
This paper develops two models, each of which is designed to estimate the probability of surviving from birth to selected exact ages of early childhood: namely ages two, three and five. The models are designed for use in areas with deficient registration systems. They require, as input, statistics which can be derived from retrospective data supplied by census or survey respondents. The first model, the age model, converts statistics on the proportion dead of children ever born to women in age groups 20-24, 25-29 and 30-34 into estimates of q[sub 2], q[sub 3] and q[sub 5]. The second model, the marriage model, converts statistics on the proportion dead of children ever born to women of five-year marriage duration intervals into these estimates. The models can be used independently or simultaneously. These models were developed from data generated by a large number of empirical fertility and mortality schedules. Regression analysis was used to determine the parameter values of the relationships specified, and several sets of equations for estimating values of q[sub a] for a = 2, 3 and 5 comprise the final product of the paper. It should be noted that the conceptual basis for the models was first derived by William Brass. The data generated for the regression analysis provided an opportunity to test the original Brass model. We are able to report that the model performed well over the wide range of fertility and mortality conditions included in the test. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
23. Planning the 41st ORSA Meeting: The Visiting-Fireman Problem, I.
- Author
-
Beckwith, Richard E.
- Subjects
STATISTICAL sampling ,SAMPLING (Process) ,PROBABILITY theory ,CONFERENCES & conventions ,MEETINGS ,STATISTICS - Abstract
A polling technique based on subjective probabilities was employed to produce advance estimates of the number of ORSA members attending the 41st National Meeting. The predicted figures compare favorably with the actual attendance figure, tending to support the credibility of such an approach. Action taken as a consequence of the poll's forewarning is believed to have had a significant salutary effect on the financial health of the meeting. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
24. BOUNDS AND APPROXIMATIONS FOR THE MOMENTS OF ORDER STATISTICS.
- Author
-
Joshi, Prakash C.
- Subjects
- *
APPROXIMATION theory , *NONPARAMETRIC statistics , *MOMENTS method (Statistics) , *MATHEMATICAL statistics , *STATISTICS , *DISTRIBUTION (Probability theory) , *NUMERICAL calculations , *PROBABILITY theory - Abstract
In this paper methods for obtaining approximations and bounds for the moments of order statistics from a continuous parent distribution are discussed. These bounds and approximations depend on the distribution function only through certain moments of order statistics in small samples. It is shown that for the Cauchy distribution bounds and approximations of all finite moments can be obtained. Some numerical calculations for normal and Cauchy distributions are also given. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
25. STATISTICAL PROBLEMS IN SCIENCE. THE SYMMETRIC TEST OF A COMPOSITE HYPOTHESIS.
- Author
-
Neyman, Jerzy
- Subjects
- *
PROBABILITY theory , *STATISTICS , *CLUSTER analysis (Statistics) , *RANDOM variables , *THEORY , *ERROR , *PROBLEM solving - Abstract
The purpose of the paper is to illustrate a "give and take" type of interaction between the frequentist theory of probability and statistics, on the one hand, and research in science, on the other. Invariably, research in science involves some observations x and the unknown "true" state of nature sigma. Quite often, replication of the experiments reveals considerable variation in x, indicating that no mathematical treatment of the problem is possible without the assumption that x is a sample value of a random variable X, treated in terms of frequentist theory of probability. As to the true state of nature, sigma, situations vary. Indeed, there are cases where it appears natural to consider that sigma is selected at random out of a certain known set SIGMA, with either known or unknown probability law. Section 2 lists three typical examples in which the frequentist theory of probability can "give" something to science. In each example, however, the assumption of the randomness of sigma appears extraneous. Section 3 describes another example of research in science, again with a non-random sigma, which happened to "give" something to the frequentist theory of statistics. The motivation to reduce the frequency of erroneous conclusions in the general circumstances of the biological study had led to considering the possibility that a certain derivative may fail to exist. This created a problem of testing (Section 4) which does not appear to have been considered. A solution of the problem, in the form of optimal symmetric (C alpha) tests, is given in Section 5. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
26. ON THE CLASSICAL RUIN PROBLEMS.
- Author
-
Takács, Lajos
- Subjects
- *
ARITHMETIC series , *GAME theory , *PROBABILITY theory , *MATHEMATICAL models , *STATISTICS , *DISTRIBUTION (Probability theory) , *RANDOM walks , *WIENER processes - Abstract
In the early years of the eighteenth century A. De Moivre, N. Bernoulli, and P. R. Montmort found three solutions for the following problem of games of chance: Two players, A and B, play a series of games. In each game, independently of the others, either A wins a counter from B with probability p or B wins a counter from A with probability q (p + q = 1). The series ends if either A wins a total number of a counters from B or B wins a total number of b counters from A. What is the probability that A wins the series in at most n games? Denote this probability by P[sub n](a,b). In this paper simple and elementary proofs are given for the various formulas for P[sub n](a,b). Furthermore, it is shown how these formulas can be applied in the theories of order statistics, random walks, storage, queues, Brownian motion, and dams. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
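The quantity P[sub n](a,b) can also be computed directly by dynamic programming over A's net gain; this is a numerical sketch, not one of the paper's elementary closed-form proofs, and the function name is ours.

```python
def p_win_within(a, b, n, p):
    # Probability that A wins a counters from B within at most n games.
    # Track the distribution of A's net gain, absorbing at +a (A wins
    # the series) and at -b (B wins the series, mass discarded).
    q = 1.0 - p
    dist = {0: 1.0}
    won = 0.0
    for _ in range(n):
        nxt = {}
        for g, pr in dist.items():
            for g2, step_p in ((g + 1, p), (g - 1, q)):
                if g2 == a:
                    won += pr * step_p
                elif g2 > -b:
                    nxt[g2] = nxt.get(g2, 0.0) + pr * step_p
        dist = nxt
    return won
```

With a = b = 1 the series ends on the first game, so p_win_within(1, 1, 1, 0.6) returns 0.6; letting n grow recovers the classical (untimed) ruin probability.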
27. TABLES OF CRITICAL VALUES OF SOME RENYI TYPE STATISTICS FOR FINITE SAMPLE SIZES.
- Author
-
Birnbaum, Z. W. and Lientz, B. P.
- Subjects
- *
FINITE groups , *STATISTICAL sampling , *DISTRIBUTION (Probability theory) , *SAMPLE size (Statistics) , *STATISTICS , *RANDOM variables , *PROBABILITY theory - Abstract
Let X[sub (1)] ≤ X[sub (2)] ≤ ... ≤ X[sub (n)] be an ordered sample of a random variable X which has continuous probability distribution function F(x), and let F[sub n](x) be the corresponding empirical distribution function. The following three statistics, introduced by A. Renyi, are considered: [Multiple line equation(s) cannot be represented in ASCII text] The paper presents tables of exact probabilities for these statistics for finite sample sizes. The limiting distributions of these statistics for sample size n → ∞ are discussed, and sample sizes are indicated for which these limiting distributions can be used instead of the exact distributions. Numerical examples for the use of the tables are presented, as well as applications to testing hypotheses on life distributions and to one-sided estimation of probability distribution functions from censored data. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
28. TESTS FOR RANDOMNESS OF DIRECTIONS AGAINST TWO CIRCULAR ALTERNATIVES.
- Author
-
Stephens, M. A.
- Subjects
- *
DISTRIBUTION (Probability theory) , *UNIFORM distribution (Probability theory) , *CIRCLE , *RANDOM polynomials , *CLUSTER analysis (Statistics) , *PROBABILITY theory , *SAMPLE size (Statistics) , *STATISTICS , *MULTILEVEL models - Abstract
Tests for randomness are described for data consisting of points on the circumference of a unit circle, where the alternative to the uniform distribution (randomness) is either a unimodal or a bimodal distribution: the von Mises distribution or an adaptation of it. Tables of significance points are given for the test statistics, and the power is used to give a table of sample sizes needed to detect a given degree of clustering, measured by the parameters of the distributions. This paper is a close parallel, for two dimensions, to Stephens [15], which gave corresponding tests and tables for three dimensions. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
29. BOUNDS FOR THE ERROR-VARIANCE OF AN ESTIMATOR IN SAMPLING WITH VARYING PROBABILITIES FROM A FINITE POPULATION.
- Author
-
Ajgaonkar, S. G. Prabhu
- Subjects
- *
ANALYSIS of variance , *ESTIMATION theory , *PROBABILITY theory , *VARIANCES , *ERROR analysis in mathematics , *MATHEMATICAL statistics , *STATISTICS - Abstract
This paper presents three upper bounds for the variance of an estimator, based on observations selected with varying probabilities from a finite population, the elements of which are ranked with respect to the Y values. Accordingly, the usefulness of these bounds relates to the pre-enumeration analysis where one may well know the intended probabilities and joint probabilities corresponding to the sampling scheme but does not know the Y values. If, however, one can make a conservative guess at the largest Y value, one can use these bounds. Some examples are included to illustrate the theory. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
30. EFFICIENCY LOSS DUE TO GROUPING IN DISTRIBUTION-FREE TESTS.
- Author
-
McNeil, D. R.
- Subjects
- *
DISTRIBUTION (Probability theory) , *NONPARAMETRIC statistics , *HYPOTHESIS , *PITMAN'S measure of closeness , *ESTIMATION theory , *NUMERICAL analysis , *PROBABILITY theory , *STATISTICS - Abstract
While distribution-free procedures are often appropriate when testing statistical hypotheses, they may become complicated or involve loss of power when the data are grouped. For rank tests the ties caused by grouping are generally broken either by using a randomization procedure or averaging the tied ranks. In this paper the power loss due to equi-spaced grouping (in terms of Pitman asymptotic relative efficiency) is investigated for some commonly used tests, for each method of tie-breaking. The tests considered are Wilcoxon's and Mood's tests for the two-sample problem, Mann's test for randomness, and Pitman's independence test. It is shown how the power loss depends on the width of the grouping intervals and the distribution of the data, and some numerical studies are given. The results seem to indicate that the power loss is small even for a sizable group interval, and that it may be preferable to break ties by randomization than by averaging ranks. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
31. ON THE METHOD OF INCLUSION AND EXCLUSION.
- Author
-
Takács, Lajos
- Subjects
- *
PROBABILITY theory , *THEORY , *INTEGRAL theorems , *INTEGRALS , *STATISTICS - Abstract
Let Ω be an arbitrary set, 𝒜 a σ-field of subsets of Ω, and V(A) a finite, countably additive set function defined on 𝒜. Let A_1, A_2, ... be a sequence of subsets of Ω belonging to 𝒜. Denote by H_k, k = 0, 1, 2, ..., the set of elements of Ω which belong to exactly k sets among A_1, A_2, .... In the theory of probability, in combinatorial analysis, and in the theory of numbers there frequently arises the problem of finding V(H_k), k = 0, 1, 2, .... In this paper V(H_k) is found when V(Ω) and V(A_{i_1} A_{i_2} ... A_{i_r}), 1 ≤ i_1 < i_2 < ... < i_r, are known. [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
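For the special case where V is the counting measure on a finite Ω, the quantity V(H_k) described in this abstract is the number of elements lying in exactly k of the sets, and the classical identity |H_k| = Σ_{r ≥ k} (−1)^(r−k) C(r,k) S_r (with S_r the sum of all r-wise intersection sizes) computes it directly. A minimal sketch, assuming this standard identity rather than the paper's general result:

```python
from itertools import combinations
from math import comb

def exactly_k_by_inclusion_exclusion(sets, k):
    """Number of elements belonging to exactly k of the given sets,
    via |H_k| = sum_{r >= k} (-1)^(r-k) * C(r, k) * S_r, where S_r
    sums the sizes of all r-wise intersections.  (V is specialized to
    the counting measure; the paper treats general finite, countably
    additive set functions, and k = 0 additionally needs V(Ω).)"""
    m = len(sets)
    total = 0
    for r in range(k, m + 1):
        s_r = sum(len(set.intersection(*combo))
                  for combo in combinations(sets, r))
        total += (-1) ** (r - k) * comb(r, k) * s_r
    return total

A = [{1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6}]
# Elements in exactly 1, 2, 3 of the sets: {1,2,6}, {3,5}, {4}.
counts = [exactly_k_by_inclusion_exclusion(A, k) for k in range(1, 4)]
```

A brute-force tally over the union of the sets gives the same answer and is a useful cross-check on the alternating sum.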
32. PROBABILITY SAMPLING WITH QUOTAS.
- Author
-
Sudman, Seymour
- Subjects
- *
STATISTICAL sampling , *PROBABILITY measures , *PROBABILITY theory , *STATISTICS , *MATHEMATICS , *INTERVIEWING , *AGE & employment - Abstract
This paper describes certain quota sampling procedures and attempts to show that they are very close to traditional probability sampling. Quotas are shown to depend on availability for interviewing and evidence is presented to show that sex, age, and employment status are reasonable predictors of availability. Quota sampling methods are not unbiased but data are presented which suggest that the bias is generally of the order of 3 to 5 per cent. It is shown, however, that the cost differentials between these quota samples and call-back samples are small. The major advantage of this new procedure may well be the speed with which interviewing may be completed during crises such as the Kennedy assassination. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
33. ON MODELS AND HYPOTHESES WITH RESTRICTED ALTERNATIVES.
- Author
-
Hogg, Robert V.
- Subjects
- *
GAUSSIAN distribution , *DISTRIBUTION (Probability theory) , *HYPOTHESIS , *PROBABILITY theory , *STATISTICS , *METHODOLOGY , *STATISTICAL hypothesis testing - Abstract
A number of examples which involve restricted alternatives are given; each of these, except one, deals with normal distributions. The examples demonstrate the following: (a) one way to construct a simple test when the alternative hypothesis is restricted, (b) a method by which the additional restrictions can be justified, and (c) a procedure that can be used to defend the original model. An interesting relationship between Bartholomew's χ² and a test statistic, which is used in this paper, is pointed out. Finally, a certain distribution-free problem and a solution of it are proposed. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
34. PERCENTILE MODIFICATIONS OF TWO-SAMPLE RANK TESTS.
- Author
-
Gastwirth, Joseph L.
- Subjects
- *
GAUSSIAN distribution , *RATIONAL numbers , *DISTRIBUTION (Probability theory) , *STATISTICAL sampling , *METHODOLOGY , *PROBABILITY theory , *STATISTICS , *PROBLEM solving - Abstract
This paper presents a simple method for increasing the limiting Pitman efficiency of rank tests, relative to the best tests for samples from normal distributions, without using complicated scoring systems. Our proposal is to select two numbers p and r (0 < p, r < 1) and then score, with integer weights, the data in the top p-th and bottom r-th fractions of the combined sample. The percentile modified tests for scale are quite effective. When p = r = 1/8 (i.e., we score only the extreme quarter of the combined sample) the A.R.E. of the test relative to the F test is .85. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
35. A BAYESIAN INDIFFERENCE PROCEDURE.
- Author
-
Novick, Melvin R. and Hall, W. J.
- Subjects
- *
PROBABILITY theory , *DISTRIBUTION (Probability theory) , *FAMILIES , *STATISTICAL sampling , *THEORY of knowledge , *STATISTICS , *BAYESIAN analysis - Abstract
In a logical probability approach to inference, distributions on a parameter space are interpretable as representing states of knowledge, and any prevailing state of knowledge may be taken to have been arrived at from a previous state of ignorance (indifference) followed by an accumulation of prior data. In this paper an indifference procedure is introduced that requires postulating what size and what kind of samples will and will not (in a special sense) permit statistical inference and prediction--e.g., one observation from a two-parameter normal model is not (in our special sense) sufficient to permit inference about the variance but two observations are. In essence, the procedure stipulates that prior indifference distributions be improper but become proper after an appropriate minimal sample. With some limitation on the family of priors considered, this procedure permits unique specification of indifference for the more commonly encountered statistical models. Furthermore, these specifications are affected neither by change of the scale of measurement of the observations, nor by the sampling rule. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
36. PREDICTION AND DECISION PROBLEMS IN REGRESSION MODELS FROM THE BAYESIAN POINT OF VIEW.
- Author
-
Zellner, Arnold and Chetty, V. Karuppan
- Subjects
- *
REGRESSION analysis , *STATISTICAL decision making , *GAME theory , *PREDICTION theory , *MATHEMATICAL models of investments , *STATISTICS , *BAYESIAN analysis , *MATHEMATICAL models , *PROBABILITY theory , *DISTRIBUTION (Probability theory) - Abstract
In this paper we review the derivation of the predictive density function for the normal multiple regression model, state and prove a general theorem on optimal point prediction, and show how the predictive density can be employed in the analysis of an illustrative investment problem. Then we derive the predictive density function for the multivariate normal regression model and indicate how it can be used in the analysis of several problems. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
37. FOURIER METHODS FOR EVOLVING SEASONAL PATTERNS.
- Author
-
Nettheim, Nigel F.
- Subjects
- *
FOURIER analysis , *ECONOMIC statistics , *MATHEMATICAL analysis , *ECONOMIC seasonal variations , *MATHEMATICAL statistics , *STATISTICS , *TIME series analysis , *PROBABILITY theory - Abstract
The use of Fourier or spectral methods for the seasonal adjustment of economic and other time series has recently been suggested; see, for example, Hannan [3], [4]. The case in which the seasonal pattern does not change appreciably from year to year has been covered in some detail, but methods appropriate to an evolving seasonal pattern have received only a little attention. The present paper discusses the nature of evolving seasonal patterns which may arise in economic series and examines a method due to Harman which may be used when the pattern is changing slowly over time. Although the suggested method is not readily mechanized for routine application on a large scale, since some personal judgments are called for, it is found useful in detailed studies and might well be applied to any series for which traditional methods are not adequate; it is also applicable to the problem of prediction. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
38. SOME GRAPHS USEFUL FOR STATISTICAL INFERENCE.
- Author
-
Guenther, William C. and Thomas, P. O.
- Subjects
- *
GRAPHIC methods , *GRAPHIC methods in statistics , *STATISTICAL sampling , *CONFIDENCE intervals , *HYPOTHESIS , *SAMPLE size (Statistics) , *PROBABILITY theory , *STATISTICS , *MATHEMATICAL statistics - Abstract
The determination of sample size to meet certain probability requirements is a problem which often faces an experimenter when obtaining a confidence interval or testing a hypothesis. This paper includes some new graphs useful in selecting sample size and gives a reference to a new textbook which includes a number of other similar type graphs. The method of construction is explained in detail and examples are included. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
39. ON THE F-TEST IN THE INTRABLOCK ANALYSIS OF A CLASS OF TWO ASSOCIATE PBIB DESIGNS.
- Author
-
Giri, N.
- Subjects
- *
DISTRIBUTION (Probability theory) , *MOMENT problems (Mathematics) , *CHARACTERISTIC functions , *F-distribution , *APPROXIMATION theory , *PROBABILITY theory , *STATISTICAL sampling , *STATISTICS - Abstract
In this paper the first two moments of the ratio (treatment sum of squares)/(treatment sum of squares + error sum of squares), taken over all possible random assignments of treatments to the experimental plots, are obtained for a class of two-associate PBIB designs. These two moments are compared with the corresponding moments of a continuous beta distribution to settle the question of approximating the randomization test by the usual F-test. It is shown that a reasonable approximation to the randomization test based on the statistic F is equivalent to modifying the normal theory test by multiplying the numbers of d.f. of the F-distribution by a factor depending on the heterogeneity of the blocks. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
40. MINIMAL SUFFICIENT STATISTICS FOR THE TWO-WAY CLASSIFICATION MIXED MODEL DESIGN.
- Author
-
Hultquist, Robert A. and Graybill, Franklin A.
- Subjects
- *
SUFFICIENT statistics , *DISCRIMINANT analysis , *STATISTICS , *DISTRIBUTION (Probability theory) , *MODELS & modelmaking , *PROBABILITY theory , *LINEAR statistical models , *HYPOTHESIS - Abstract
This paper presents theorems which can be used to obtain sufficient and minimal sufficient statistics for the two-way classification mixed model design. Using the general linear hypothesis model Y = Xτ + Zβ + e, the authors prove that the dimension of a minimal sufficient statistic is a function of the ranks of certain submatrices of Z'X. Minimal sufficient statistics are presented in tabular form for some two-way classification designs and some of the distributional properties are given. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
41. PRINCIPAL COMPONENTS REGRESSION IN EXPLORATORY STATISTICAL RESEARCH.
- Author
-
Massy, William F.
- Subjects
- *
REGRESSION analysis , *DISTRIBUTION (Probability theory) , *PROBABILITY theory , *MATHEMATICAL variables , *STATISTICS , *INCOME , *ESTIMATION theory , *HOUSEHOLD surveys , *CENSUS - Abstract
Regression upon principal components of the percentage points of the income and education distributions for 1950 census tracts in the city of Chicago led to the estimation of "beta coefficient profiles" for television receiver and refrigerator ownership, for central heating system usage, and for a measure of dwelling unit overcrowding. The betas are standardized coefficients of regression of a dependent variable upon the proportions of families in the classes of the marginal income and education distributions. They measure the relative contribution of families in these classes to the over-all per cent saturation of the dependent variable in the tract. The coefficients were estimated by techniques developed in the first portion of the paper; estimation by classical regression methods would have been impossible because of multicollinearity. The empirical results are in substantial agreement with findings from regressions of the dependent variables upon the mean values of income and education, and their squares. The statistical devices appear to be useful in exploratory empirical research. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
42. SYSTEMATIC STATISTICS USED FOR DATA COMPRESSION IN SPACE TELEMETRY.
- Author
-
Eisenberger, Isidore and Posner, Edward C.
- Subjects
- *
NONPARAMETRIC statistics , *STATISTICS , *GOODNESS-of-fit tests , *ESTIMATION theory , *STATISTICAL sampling , *AEROSPACE telemetry , *ESTIMATION bias , *DATA compression , *STATISTICAL hypothesis testing , *DISTRIBUTION (Probability theory) , *PROBABILITY theory - Abstract
The need for data compression, a consequence of the demands made on the telemetry system of a space vehicle, prompts consideration of the use of sample quantiles in estimating population parameters and obtaining tests of goodness of fit for large samples. In this paper optimal unbiased estimators of the mean and standard deviation are given using up to twenty quantiles when the parent population is normal. Moreover, the estimators are relatively insensitive to deviations from normality. A distribution-free goodness-of-fit test is presented based on the sum of the squares of four quantiles after an orthogonal transformation to independent normal deviates. If a frequency function is of the form f(x; p) = p f_1(x) + (1 − p) f_2(x), 0 < p < 1, where f_1 and f_2 are normal frequency functions, the distribution is likely to be bimodal. Another goodness-of-fit test is obtained using four quantiles, which is likely to have considerable power with a null hypothesis of normality and the alternative hypothesis of bimodality. The "data compression ratios" obtained with the use of a quantile system can be on the order of 100 to 1. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
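The paper's optimal twenty-quantile estimators are not reproduced in the abstract; the sketch below shows only the simplest two-quantile version of the idea for a normal parent, with p = 0.25 chosen by the writer for illustration. For N(μ, σ), the p and 1 − p quantiles sit at μ ∓ z_{1−p}σ, so averaging them recovers μ and rescaling their spread recovers σ:

```python
import random
import statistics

def quantile_mean_sd(sample, p=0.25):
    """Two-quantile estimates of a normal mean and standard deviation
    (a minimal sketch of the quantile-based approach, not the paper's
    optimal unbiased estimators).  Uses the sample p and 1-p quantiles
    and the standard normal quantile z_{1-p}."""
    x = sorted(sample)
    n = len(x)
    lo = x[int(p * n)]        # sample p-quantile (crude index rule)
    hi = x[int((1 - p) * n)]  # sample (1-p)-quantile
    z = statistics.NormalDist().inv_cdf(1 - p)  # z_{1-p}; ~0.6745 for p = 0.25
    return (lo + hi) / 2, (hi - lo) / (2 * z)

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(10_000)]
mu_hat, sigma_hat = quantile_mean_sd(data)
```

Transmitting two (or twenty) quantiles instead of the whole sample is what yields the large data compression ratios the abstract cites.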
43. DETERMINATION OF CONSUMER UNIT SCALES.
- Author
-
Singh, Balvir and Nagar, A. L.
- Subjects
SURVEYS ,ESTIMATION theory ,PROBABILITY theory ,STATISTICAL correlation ,STATISTICS ,ECONOMICS ,CONSUMERS ,REGRESSION analysis ,ANALYSIS of variance - Abstract
This paper develops an iterative procedure for estimating "specific" and "income" consumer unit scales in Engel curve analysis. The proposed procedure is essentially a modification of the Prais and Houthakker method and is illustrated by means of a numerical example based on the Indian National Sample Survey data. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
44. THE EXISTENCE OF MOMENTS OF THE ORDINARY LEAST SQUARES AND TWO-STAGE LEAST SQUARES ESTIMATORS.
- Author
-
Mariano, Roberto S.
- Subjects
LEAST squares ,ESTIMATION theory ,STATISTICAL correlation ,MATHEMATICAL statistics ,PROBABILITY theory ,REGRESSION analysis ,STATISTICS ,ANALYSIS of variance ,MATHEMATICS - Abstract
This paper deals with two single-equation estimators in a set of simultaneous linear stochastic equations, namely, ordinary least squares (OLS) and two-stage least squares (2SLS). Under the assumption that all predetermined variables in the model are exogenous, necessary and sufficient conditions are obtained for the existence of even moments of the above estimators. It is shown that for the general case with an arbitrary number of included endogenous variables, even moments of the 2SLS estimator are finite if and only if the order is less than K_2 − G_1 + 1. Furthermore, even moments of the OLS estimator exist if and only if the order is less than N − K_1 − G_1 + 1, where N is the sample size, G_1 + 1 is the number of included endogenous variables, and K_1 and K_2 respectively are the numbers of included and excluded exogenous variables in the equation to be estimated. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
45. HEURISTIC METHODS FOR ESTIMATING THE GENERALIZED VERTEX MEDIAN OF A WEIGHTED GRAPH.
- Author
-
Teitz, Michael B. and Bart, Polly
- Subjects
OPERATIONS research ,MEDIAN (Mathematics) ,ARITHMETIC mean ,STANDARD deviations ,STATISTICS ,PROBABILITY theory - Abstract
The generalized vertex median of a weighted graph may be found by complete enumeration or by some heuristic method. This paper investigates alternatives and proposes a method that seems to perform well in comparison with others found in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
46. THE UTILITY OF STATISTICS OF RANDOM NUMBERS.
- Author
-
Clark, Charles E.
- Subjects
RANDOM numbers ,STATISTICAL sampling ,MONTE Carlo method ,NUMERICAL analysis ,QUEUING theory ,SAMPLE variance ,PROBABILITY theory ,STATISTICS ,PARAMETERS (Statistics) - Abstract
The utility of tables of random numbers is enhanced when statistics of the numbers are known. Such statistics permit, inter alia, the efficiencies of stratified sampling to be realized. This fact is indicated by numerical examples. Computer Monte Carlo is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1960
- Full Text
- View/download PDF
47. Certain properties of bivariate distributions associated with generalized hypergeometric functions.
- Author
-
Saxena, R. K. and Sethi, P. L.
- Subjects
HYPERGEOMETRIC distribution ,HYPERGEOMETRIC functions ,DISTRIBUTION (Probability theory) ,PROBABILITY theory ,STATISTICS - Abstract
Copyright of Canadian Journal of Statistics is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 1973
- Full Text
- View/download PDF
48. A Simplified Form of the ASN for a Curtailed Sampling Plan.
- Author
-
Shah, D.K. and Phatak, A.G.
- Subjects
STATISTICAL sampling ,STATISTICS ,PROBABILITY theory - Abstract
In this paper we have obtained, following Craig's procedure, a simplified form for the ASN of a single sampling plan under curtailment, which arises from observing enough nondefectives to accept a lot or enough defectives to reject a lot. [ABSTRACT FROM AUTHOR]
- Published
- 1972
- Full Text
- View/download PDF
49. Estimating by Sample the Size and Age-Sex Structure of a Population.
- Author
-
Silcock, H.
- Subjects
AGE ,GENDER ,STATISTICAL correlation ,POPULATION ,PROBABILITY theory ,STATISTICS - Abstract
1. A brief description is given of a population sample of the County Borough of Dudley in which households were sampled with probabilities proportional to the number of adults in the household. 2. The variance of estimates for specific age-sex groups is considered and an empirical regression obtained relating the coefficient of variation to the size of the group. 3. Estimates are given of the variance that would result from a number of alternative schemes of sampling and estimation. 4. The possibility of increased precision by district stratification is examined and shown to be negligible. 5. The variance to be expected in the age-sex tabulations from the 1% sample of the 1951 Population Census is estimated from the results of the previous analysis. [ABSTRACT FROM AUTHOR]
- Published
- 1952
- Full Text
- View/download PDF
50. Comments on the Economic Design of X-Charts.
- Author
-
Chiu, W. K.
- Subjects
- *
CHARTS, diagrams, etc. , *ECONOMICS , *GRAPHIC methods , *MATHEMATICAL statistics , *PROBABILITY theory , *MATHEMATICAL variables , *STATISTICS - Abstract
This article provides some corrections to the numerical results obtained in a recent paper by Duncan [2] and suggests a more efficient procedure for determining the optimum control parameters. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF