Search Results (113 results)
2. A Study in Newspaper Sampling.
- Author
-
Coats, Wendell J. and Mulkey, Steve W.
- Subjects
STATISTICAL sampling, NEWSPAPER circulation, STATISTICAL correlation, MASS media research, STATISTICS
- Abstract
This article reports on a study conducted in the United States to determine whether effective newspaper sampling is possible. To select the sample purposively in two equal increments of 50 newspapers each, the universe was stratified with respect to certain objective characteristics, the characteristics of an ideal 50-paper sub-sample were determined, and two sub-samples were selected deliberately with characteristics matching those of the ideal sub-sample. These were then coded for certain items of military interest, and the results from the two were compared. In conducting the study, the sampling universe was limited to general-circulation newspapers, and the two samples were compared using correlation methods. Correlation between the content of the two samples was close enough to justify the conclusion that effective newspaper sampling is possible, based on the consistent, day-to-day, subject-by-subject correlation between the two sub-samples.
- Published
- 1950
- Full Text
- View/download PDF
3. Scheduling the 41st-ORSA-Meeting Sessions: The Visiting-Fireman Problem, II.
- Author
-
Arnold, Larry R., Beckwith, Richard E., and Jones, Carl M.
- Subjects
PRODUCTION scheduling, MATHEMATICAL programming, MEETINGS, CONFERENCES & conventions, STATISTICAL sampling, STATISTICS
- Abstract
The invited-paper program of the New Orleans ORSA meeting was organized so as to minimize conflict among co-scheduled sessions. A survey of a random sample of ORSA members was conducted, with respondents ranking their top session choices from a preliminary program. The results were processed into a weighted session-conflict matrix, with an optional override to reflect session chairmen's scheduling restrictions and multiple-presentation author constraints. The minimum-conflict schedule was produced heuristically by a combination of random schedule generation and discrete optimization. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
4. Design of household sample surveys to test death registration completeness.
- Author
-
Sirken, Monroe G.
- Subjects
HOUSEHOLD surveys, DEMOGRAPHIC surveys, POPULATION research, POPULATION geography, DEATH rate, CENSUS, HOUSING, RESEARCH methodology, META-analysis, MORTALITY, STATISTICAL sampling, STATISTICS, VITAL statistics, RESIDENTIAL patterns, EVALUATION research
- Abstract
This paper studies the design effect of counting rules (the rules that link deaths to the housing units where they are enumerated in the survey) on the sampling variance of dual-system and single-system estimators of death registration completeness. It investigates estimators based on conventional rules that uniquely link each death to a single housing unit as well as estimators based on multiplicity rules which permit deaths to be linked to more than one housing unit. Sampling variance formulas are derived containing parameters that reflect the efficiency of the counting rule. Estimates of these parameters for different counting rules are compared utilizing information that was collected in a mortality survey experiment. Finally, the design of a national death registration test is considered and the sample size implications of different counting rules are compared. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
5. THE SYSTEMATIC BIAS EFFECTS OF INCOMPLETE RESPONSES IN ROTATION SAMPLES.
- Author
-
Williams, W. H.
- Subjects
SURVEYS, STATISTICAL sampling, ROTATION groups, PROBABILITY theory, STATISTICS
- Abstract
Rotation samples are frequently used in continuing surveys in order to obtain estimates of changes in a characteristic over time as well as separate estimates of the characteristic at specific points in time. Rotation designs involve the retention of some sampling units and the replacement of others. It has been observed in some studies that there are systematic changes in the estimate of a characteristic, depending on the frequency of appearance of a rotation group in the sample. It is shown in this paper that these systematic changes must occur provided (1) the probability of a selected unit actually appearing in the sample is monotonically related to the characteristic under measurement, and (2) the probability of a selected unit actually appearing in the sample changes monotonically from one observation point to the next. Some numerical examples showing the form and magnitude of the potential biases are included. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
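The bias mechanism described in this abstract lends itself to a small Monte Carlo illustration. In the sketch below (all distributions, probabilities, and function names are illustrative assumptions, not taken from the paper), response probability is made monotone in the measured characteristic and rises with each repeat appearance of a rotation group; the responder mean then drifts systematically with the group's tenure in the sample, as the two conditions predict.

```python
import random
import statistics

def simulate_rotation_bias(n_units=200_000, seed=1):
    """Monte Carlo sketch of rotation-group bias: response probability is
    monotone in the characteristic y (condition 1) and rises with each
    repeat appearance of the group (condition 2, panel conditioning)."""
    rng = random.Random(seed)
    means = []
    for appearance in (1, 2, 3):  # number of times the group has appeared
        responders = []
        for _ in range(n_units):
            y = rng.gauss(50.0, 10.0)            # characteristic measured
            base = min(0.9, max(0.1, y / 100))   # response prob., monotone in y
            p = min(1.0, base + 0.1 * (appearance - 1))  # rises with tenure
            if rng.random() < p:
                responders.append(y)
        means.append(statistics.fmean(responders))
    return means
```

With these illustrative settings the first-appearance mean sits highest and successive appearances pull the estimate back toward the population mean of 50, reproducing the systematic change the abstract describes.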
6. Sex ratio at birth: values, variance, and some determinants.
- Author
-
Markle, Gerald E.
- Subjects
SEX ratio, FIRST-born children, POPULATION research, HOUSEHOLD surveys, SOCIAL indicators, ATTITUDE (Psychology), BLACK people, SOCIAL dominance, FAMILIES, OCCUPATIONS, PRAYER, STATISTICAL sampling, SEX distribution, SOCIAL role, SOCIAL values, STATISTICS, EDUCATIONAL attainment
- Abstract
This paper examines the values, variance and some possible determinants of sex ratios for the first child and for all children in expected and desired families. For adults in Tallahassee, Florida, it was found that a large majority of respondents within sixty demographic categories chose males for their first child. Of those who actually had girls for their first child, a plurality would, nevertheless, prefer a first boy in their desired family. It was hypothesized and demonstrated that sex-role ideologies were a strong predictor of variance in first-child sex preferences. Sex ratios for all children in expected and desired families were 116 and 113, respectively. If people could choose the sex of their future children, these data suggest that several population parameters might be significantly altered; a preliminary model is outlined which might project some of these changes. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
7. MOMENTS OF THE DISTRIBUTION OF SAMPLE SIZE IN A SPRT.
- Author
-
Ghosh, B. K.
- Subjects
- *MOMENTS method (Statistics), *STATISTICAL sampling, *ARITHMETIC, *DISTRIBUTION (Probability theory), *DIFFERENTIABLE functions, *PROBABILITY theory, *APPROXIMATION theory, *EQUATIONS, *STATISTICS
- Abstract
The article discusses moments of the distribution of the sample size N in a sequential probability ratio test (SPRT). The paper provides the variance and the third and fourth moments of N, with the details worked out for five common applications of the SPRT. The relation of the variance of N to the truncation of an SPRT is also discussed. A. Wald indicated in passing how one can obtain the moments of N, but in the only published work in which the author has encountered a general expression for the variance of N, that expression is incorrect. The variance can be obtained either from J. Wolfowitz's results or by differentiating Wald's fundamental identity twice. In many practical applications of the SPRT the moments are differentiable functions of a real-valued parameter, and limiting expressions for the moments can then be determined by standard methods of mathematical analysis; for the third and fourth moments, however, the technique may involve an excessive amount of arithmetic.
- Published
- 1969
- Full Text
- View/download PDF
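As a companion sketch (not taken from the article), the distribution of the SPRT sample size N can be explored empirically. The code below simulates Wald's SPRT for a Bernoulli parameter, using the standard boundary approximations A = (1 − β)/α and B = β/(1 − α); all parameter values and the function name are illustrative assumptions.

```python
import math
import random

def sprt_sample_size(p_true, p0=0.4, p1=0.6, alpha=0.05, beta=0.05,
                     seed=7, reps=5000):
    """Monte Carlo sketch of the SPRT sample size N for a Bernoulli
    parameter: accumulate log-likelihood-ratio increments until the
    sum leaves (log B, log A), then record N; returns (mean, variance)."""
    log_a = math.log((1 - beta) / alpha)
    log_b = math.log(beta / (1 - alpha))
    inc1 = math.log(p1 / p0)                # increment when x = 1
    inc0 = math.log((1 - p1) / (1 - p0))    # increment when x = 0
    rng = random.Random(seed)
    sizes = []
    for _ in range(reps):
        s, n = 0.0, 0
        while log_b < s < log_a:
            s += inc1 if rng.random() < p_true else inc0
            n += 1
        sizes.append(n)
    mean = sum(sizes) / reps
    var = sum((n - mean) ** 2 for n in sizes) / (reps - 1)
    return mean, var
```

Wald's first-moment approximation puts the expected sample size near 33 for these settings; the simulated mean and the (substantial) variance of N illustrate why the higher moments studied in the paper matter.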
8. THE EXCEEDANCE TEST FOR TRUNCATION OF A SUPPLIER'S DATA.
- Author
-
Deely, J. J., Amos, D. E., and Steck, G. P.
- Subjects
- *DISTRIBUTORS (Commerce), *STATISTICAL sampling, *DATABASE management, *SUPPLIERS, *ELECTRONIC data processing, *SUPPLY chains, *SAMPLE size (Statistics), *STATISTICS, *TESTING
- Abstract
The purpose of this paper is to present an easily applied test useful in determining whether or not a supplier's data have been truncated. The proposed test has the following desirable properties: (i) it is the uniformly most powerful rank test, (ii) it is asymptotically uniformly most powerful, and (iii) power computations can easily be made for arbitrary sample sizes, formulas for such computations being given in the paper. Although formulated in the context of verifying a supplier's data, the test can be applied to other situations in which false representation of data in the form of truncation is important. Such is the case, for example, in reliability demonstrations or legal suits involving physical measurements. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
9. ORDER STATISTICS ESTIMATORS OF THE LOCATION OF THE CAUCHY DISTRIBUTION.
- Author
-
Barnett, V. D.
- Subjects
- *ESTIMATION theory, *DISTRIBUTION (Probability theory), *STATISTICS, *ARITHMETIC mean, *ORDER statistics, *VARIANCES, *STATISTICAL sampling, *MEDIAN (Mathematics)
- Abstract
In a recent paper in this Journal, Rothenberg, Fisher and Tilanus [1] discuss a class of estimators of the location parameter of the Cauchy distribution, taking the form of the arithmetic average of a central subset of the sample order statistics. They show that the average of roughly the middle quarter of the ordered sample has minimum asymptotic variance within this class, and that asymptotically it eliminates about 36 per cent of the efficiency loss of the median (the most commonly used estimator) in comparison to the maximum likelihood estimator (m.l.e.). Of course both the m.l.e. and the best linear unbiased estimator based on the order statistics (BLUE) achieve full asymptotic efficiency in the Cramer-Rao sense, and there can be no dispute about the relative merits of the three estimators asymptotically, or about the inferiority of the median (with asymptotic efficiency 8/pi^2, about 0.81, compared with about 0.88 for the estimator of Rothenberg et al.). In any practical situation, however, we will be concerned with estimation from samples of finite size, and asymptotic properties will not necessarily give any guidance here. We are essentially concerned with two points in assessing the relative merits of estimators in small samples: their ease of application, and their "small-sample efficiency," which is conveniently measured as the ratio of the Cramer-Rao lower bound to the variance of the estimator. In this paper various estimators of the location of the Cauchy distribution are compared in these two respects for samples of up to 20 observations. The small-sample properties of the m.l.e. have been extensively discussed elsewhere (Barnett [2]) and relevant results are summarized where necessary. The main purpose of the paper is to discuss general linear estimators based on the order statistics, and to assess their utility in the present context.
Since this paper was prepared, a further interesting 'quick estimator'… [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
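Barnett's comparison can be sketched by Monte Carlo. The code below is illustrative only: it uses n = 100 rather than the paper's samples of up to 20, and generates standard Cauchy variates by the inverse-CDF transform tan(π(U − 1/2)). It estimates the sampling variance of the median against that of the average of roughly the middle quarter of the order statistics.

```python
import math
import random
import statistics

def compare_cauchy_estimators(n=100, reps=5000, seed=3):
    """Monte Carlo sketch: sampling variance of the sample median vs. the
    mean of roughly the middle quarter of the order statistics (the
    estimator of Rothenberg et al.) for standard Cauchy samples."""
    rng = random.Random(seed)
    lo, hi = int(0.38 * n), int(0.62 * n)   # middle ~24% of the ordered sample
    med_est, mid_est = [], []
    for _ in range(reps):
        x = sorted(math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n))
        med_est.append(statistics.median(x))
        mid_est.append(statistics.fmean(x[lo:hi]))
    return statistics.variance(med_est), statistics.variance(mid_est)
```

At this sample size the middle-quarter average shows a modestly smaller variance than the median, in line with the asymptotic efficiencies (about 0.88 versus 0.81) quoted in the abstract.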
10. ON ROBUST PROCEDURES.
- Author
-
Gastwirth, Joseph L.
- Subjects
- *DENSITY functionals, *ROBUST statistics, *DISTRIBUTION (Probability theory), *PITMAN'S measure of closeness, *STATISTICAL sampling, *STATISTICS, *PARAMETER estimation, *RANKING
- Abstract
This paper discusses a procedure for finding robust estimators of the location parameter of symmetric unimodal distributions. The estimators are based on robust rank tests, and the methods used are applicable to other one-parameter problems. To every density function there corresponds an asymptotically most powerful rank test (a.m.p.r.t.). For a set F of density functions the maximin rank test, R, maximizes the minimum limiting Pitman efficiency of R relative to the a.m.p.r.t. for each member of F. This maximin test, R, can be used to construct estimators according to the proposal of Hodges and Lehmann; it generates another estimator T in the following manner: if the test based on R is the a.m.p.r.t. for samples from a density function g(x - theta), then the estimator T will be the best linear unbiased estimate (b.l.u.e.) of the location parameter for samples from g(x). Unfortunately, the estimator T is not necessarily consistent for all members of F. A class of rank tests which generate linear combinations of a few order statistics is introduced, and a simple estimator using the 33 1/3rd, 50th and 66 2/3rd percentiles is proposed. The relationship of the present paper to the work of Huber is discussed, and it is shown that the b.l.u.e. corresponding to his least favorable distribution is the trimmed mean. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
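The percentile-based estimator mentioned in the abstract can be sketched as follows. The weights 0.3, 0.4, 0.3 on the 33 1/3rd, 50th, and 66 2/3rd percentiles are the ones usually quoted for Gastwirth's estimator, but treat the whole function, including its deliberately crude quantile rule, as an illustrative assumption rather than the paper's exact definition.

```python
def gastwirth_location(sample):
    """Sketch of a location estimate built from three order statistics:
    a 0.3 / 0.4 / 0.3 weighted average of the 33 1/3rd, 50th, and
    66 2/3rd percentiles (weights as usually quoted for Gastwirth)."""
    x = sorted(sample)
    n = len(x)

    def q(p):
        # crude rule: order statistic just at or below the p-th fractile
        return x[min(n - 1, int(p * n))]

    return 0.3 * q(1 / 3) + 0.4 * q(0.5) + 0.3 * q(2 / 3)
```

Because only three order statistics enter, the estimate is cheap to compute and insensitive to the extreme observations that dominate heavy-tailed samples.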
11. A NOTE ON THE ESTIMATION OF THE LOCATION PARAMETER OF THE CAUCHY DISTRIBUTION.
- Author
-
Bloch, Daniel
- Subjects
- *ASYMPTOTIC theory in estimation theory, *ESTIMATION theory, *CAUCHY integrals, *LOCATION analysis, *ASYMPTOTIC expansions, *DISTRIBUTION (Probability theory), *STATISTICAL sampling, *VARIANCES, *STATISTICS
- Abstract
Recently Professors T. J. Rothenberg, F. M. Fisher, and C. B. Tilanus published a paper proposing the class of trimmed means as estimators of the location parameter of the Cauchy distribution [5]. They showed that the asymptotic sampling variance of the estimators in this class is essentially minimized by using the middle 24% of the sample order statistics. The corresponding estimate has an asymptotic relative efficiency to the best estimator for complete samples (A.R.E.) of .87796, as compared to an A.R.E. of .81057 for the sample median. In this paper a few "quick estimators" are considered for the location parameter of the Cauchy distribution. A "quick estimate" is a linear combination (a weighted average) of one or more order statistics. Our goal is to find a simple estimator, i.e. an estimator based on only a few order statistics, which has an A.R.E. of at least 90%. We found an estimator based on five order statistics which is considerably better than the optimum trimmed mean (using the middle 24% of the sample order statistics) and much better than the sample median. The A.R.E. of the optimum censored estimate with censored fractiles .38 and .62 is also found, and a comparison between the trimmed, censored, and proposed estimators is made. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
12. SYSTEMATIC SAMPLING WITH UNEQUAL PROBABILITY AND WITHOUT REPLACEMENT.
- Author
-
Hartley, H. O.
- Subjects
- *ESTIMATION theory, *STATISTICS, *PROBABILITY theory, *STATISTICAL sampling, *ANALYSIS of variance, *SAMPLE size (Statistics)
- Abstract
Given a population of N units, it is required to draw a sample of n distinct units in such a way that the probability for the ith unit to be in the sample is proportional to its 'size' x_i. Of the alternative methods of achieving this we consider here only the so-called systematic method which, to the best of our knowledge, was first developed by W. G. Madow (1949): the units in the population are listed in a 'particular' order, their x_i accumulated, and a systematic selection of n elements from a 'random start' is then made on the accumulation. In a more recent paper (H. O. Hartley and J. N. K. Rao (1962)) an asymptotic estimation theory (for large N) associated with this procedure was developed for the case when the order of the listed units is random. In this paper we draw attention to certain properties of Madow's estimator: we utilize the fact that with systematic sampling the total number of different samples is N (rather than the binomial coefficient C(N, n), as with completely random sampling). This simplification in the definition of the variance of the estimator in repeated sampling enables us to identify the exact variance of Madow's estimator with a 'between sample mean square' in a special analysis of variance (see section 4) and compare it with the variance of the pps estimator in sampling with replacement as well as in other sampling procedures. We also develop two approximate methods of variance estimation (see section 5). We pay particular attention to the case when the units are listed in the order of their size.
With this particular arrangement our method can be described as 'systematic with random start', and the gain in precision that we accomplish has, of course, analogues in systematic sampling with equal probabilities employing ratio estimators, in which there is a relation between the ratio r_i = y_i/x_i and x_i. Compared with other methods the present procedure combines the advantage of ease of systematic sample selection with the availability of exact variance formulas for any n and N. Moreover, it usually leads to a more efficient estimate. Its shortcoming resides in the fact that the estimation of the variance is based on certain assumptions. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
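Madow's selection step, as summarized in the abstract (list the units, accumulate their sizes, take every (X/n)-th point from a random start on the cumulated scale), can be sketched directly. The function below is an illustrative implementation, assuming n·x_i/X ≤ 1 for every unit so that no unit can be selected twice.

```python
import random

def systematic_pps(sizes, n, rng):
    """Madow's systematic PPS selection: accumulate the sizes x_i and
    take n equally spaced points (spacing X/n) from a uniform random
    start; each point falls in one unit's segment of the cumulated
    scale, selecting that unit. Requires n * x_i / X <= 1 for all i."""
    total = sum(sizes)
    step = total / n
    start = rng.uniform(0, step)
    points = [start + k * step for k in range(n)]
    sample, cum, i = [], 0.0, 0
    for p in points:                      # points are already in order
        while cum + sizes[i] <= p:        # advance to the segment holding p
            cum += sizes[i]
            i += 1
        sample.append(i)
    return sample
```

Because unit i occupies a length-x_i segment of the cumulated scale, its inclusion probability is exactly n·x_i/X, which is the property the estimator relies on.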
13. SAMUEL S. WILKS.
- Author
-
Stephan, Frederick F., Tukey, John W., Mosteller, Frederick, Mood, Alex M., Hansen, Morris H., Simon, Leslie E., and Dixon, W. J.
- Subjects
- *STATISTICIANS, *MATHEMATICAL statistics, *RANDOM variables, *MATHEMATICIANS, *NONPARAMETRIC statistics, *STATISTICAL sampling, *STATISTICS
- Abstract
The article presents information about Samuel S. Wilks, a great contributor to statistics. Wilks' behavior toward applications was peculiarly split: he encouraged his students to work on applications, not always an easy thing to do in a strongly theoretical mathematics department; he often told students about applications that he regarded as "neat" or "cute" or "clever"; the statistical colloquium he guided often had speakers on practical applications of mathematical statistics; and he himself wrote some practical papers in statistics, but in the classroom he rarely discussed applications. Repeatedly, however, practical problems explicitly influenced both his own publications and those of his students, and Wilks often used applications as motivation for the discussion of distribution theory, usually going well beyond the needs of the original problem. The papers to be discussed exhibit incompletely and fragmentarily a major influence on the work of the man and his students. Wilks watched the development of the statistical theory of order statistics closely; indeed, he wrote a masterful summary of the literature. The general area involves the study of the statistical properties of ordered measurements.
- Published
- 1965
- Full Text
- View/download PDF
14. ESTIMATION OF MULTIPLE CONTRASTS USING t-DISTRIBUTIONS.
- Author
-
Dunn, Olive Jean and Massey Jr, Frank J.
- Subjects
- *TIME series analysis, *CHARACTERISTIC functions, *MATHEMATICAL statistics, *PROBABILITY theory, *CONFIDENCE intervals, *DISTRIBUTION (Probability theory), *MATHEMATICAL models, *STATISTICAL sampling, *MULTIVARIATE analysis, *STATISTICS
- Abstract
Various methods based on Student t variates have been suggested and used for obtaining simultaneous confidence intervals for several means, or for several contrasts among means. Determination of an overall confidence level for such intervals involves evaluating the probability mass of a multivariate t distribution over a hypercube centered at the origin, with sides paralleling the coordinate planes, or obtaining bounds for this probability mass. Since such distributions involve many nuisance parameters, an impossibly large number of tables would be necessary in order to construct exact confidence intervals. In the virtual absence of tables, approximations and bounds become important. In this paper, an attempt has been made to investigate the adequacy of certain suggested approximations [2], [5], [8] by computing the exact distributions for some particular cases. These exact distributions have been compared with the approximations. This paper is concerned with two-sided confidence intervals, rather than one-sided intervals. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
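One simple bound of the kind compared in work on simultaneous t-intervals is the Bonferroni approach: run each of k intervals at level α/k, so the overall confidence is at least 1 − α without evaluating the multivariate t mass. The sketch below is an illustrative stand-in, not the paper's method, and it substitutes the normal quantile for the Student t quantile (an assumption that is adequate only for large error degrees of freedom; with SciPy one would use scipy.stats.t.ppf instead).

```python
import statistics

def bonferroni_intervals(estimates, se, alpha=0.05):
    """Bonferroni-type simultaneous two-sided intervals for k contrast
    estimates with common standard error `se`: each interval is run at
    level alpha/k, so joint coverage is at least 1 - alpha. The normal
    quantile stands in for Student's t (large-df assumption)."""
    k = len(estimates)
    z = statistics.NormalDist().inv_cdf(1 - alpha / (2 * k))
    return [(m - z * se, m + z * se) for m in estimates]
```

For k = 3 and α = 0.05 each interval uses the 1 − 0.05/6 quantile (about 2.39 instead of 1.96), which is the price paid for simultaneous coverage.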
15. R.A. FISHER AND THE LAST FIFTY YEARS OF STATISTICAL METHODOLOGY.
- Author
-
Bartlett, M. S.
- Subjects
- *STATISTICS, *ECONOMICS, *INTERVAL analysis, *GENETICS, *STATISTICAL sampling, *ESTIMATION theory
- Abstract
In this article, the author discusses R. A. Fisher's contributions to statistics. It is well known that his work in genetics was of comparable status. It is largely represented by his book "The Genetical Theory of Natural Selection," though in his subsequent work his further association with ecological and experimental studies in evolutionary genetics, and his share in the development of studies in the human blood groups, might especially be recalled. Fisher's contributions to statistics began with a paper in 1912 advocating the method of maximum likelihood for fitting frequency curves, although the first paper of substance was his 1915 paper in the journal "Biometrika," on the sampling distribution of the correlation coefficient. Fisher tried to pose problems of analysis as the reduction and simplification of statistical data. He put forward his well-known concept of amount of information in estimation theory, such that information might be lost, but never gained, by analysis. His concept has been of great practical value, especially in large sample theory.
- Published
- 1965
- Full Text
- View/download PDF
16. A HISTORY OF DISTRIBUTION SAMPLING PRIOR TO THE ERA OF THE COMPUTER AND ITS RELEVANCE TO SIMULATION.
- Author
-
Teichroew, Daniel
- Subjects
- *SIMULATION methods & models, *PROBABILITY theory, *STATISTICAL sampling, *DISTRIBUTION (Probability theory), *STATISTICS, *TIME series analysis, *DIGITAL computer simulation, *METHODOLOGY
- Abstract
The use of simulation, as a technique for attacking difficult problems, has increased greatly with the availability of the digital computer. This is illustrated by the large number of references in Shubik's (1960) bibliography and in the large number of studies published since then. Simulation is essentially an extension of a technique known as empirical sampling, or distribution sampling, which has been used in the field of statistics for many years. The limitations of the technique, which are well known to statisticians, are apparently not as well known, or at least not as well recognized, by those using simulation today. The first part of this paper contains an historical survey of distribution sampling as used by statisticians. The material was originally prepared in 1953 and is reproduced here in slightly revised form to bring this history to the attention of present-day simulators, in order that the lessons that can be learned from this part can more readily be incorporated in the development of methodology today. The second part of this paper discusses the relevance of empirical sampling to the present-day state of the art of simulation. The technique of generating random numbers, developed for empirical sampling, can be applied directly to simulation. However, in other respects simulation is more difficult than empirical sampling, and here the theory of distribution sampling does not have much to offer. The difficulties are due to lack of independence among time series, non-stationarity of the time series, and the large number of parameters. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
17. BETTER ESTIMATES OF CONFIDENCE INTERVALS FOR VERY LOW ERROR RATE POPULATION.
- Author
-
Birnberg, Jacob G. and Pratt, Robert J. A.
- Subjects
POPULATION, ESTIMATION theory, GRAPHIC methods, POPULATION research, PROBABILITY theory, ERRORS, CONFIDENCE intervals, STATISTICAL sampling, STATISTICS, BOUNDARY element methods, BOUNDARY value problems, POPULATION forecasting
- Abstract
In the estimation of error rates for populations whose error rate is quite small, the usual assumption of normality can lead to erroneous results. The confidence interval that is actually calculated for any given probability will have too low a lower bound and not a high enough upper bound. Thus, the user may be misled into too optimistic a view of the population being sampled. This article discusses the nature of the problem and provides graphs from which the more accurate interval can be read. The final section of the paper deals with the related problem of confidence intervals when the observed error rate is zero. Tables are provided which facilitate the development of confidence statements for such samples. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
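For the zero-observed-errors case discussed in the final section, a standard exact bound exists in closed form (shown here as a sketch consistent with, but not copied from, the article's tables): if a sample of n items shows no errors, the one-sided upper 1 − α confidence bound on the error rate p is the largest p still consistent with that outcome, i.e. the solution of (1 − p)^n = α.

```python
def upper_bound_zero_errors(n, alpha=0.05):
    """Exact one-sided upper 1-alpha confidence bound on an error rate
    when a sample of n items shows zero errors: solve (1 - p)**n = alpha
    for p, giving p = 1 - alpha**(1/n)."""
    return 1 - alpha ** (1 / n)
```

For n = 100 and α = 0.05 this gives about 0.0295, close to the familiar "rule of three" approximation 3/n = 0.03, and well above the (degenerate) normal-theory interval around an observed rate of zero.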
18. A Dual Purpose Cost Based Quality Control System.
- Author
-
Schmidt, J.W. and Taylor, R.E.
- Subjects
QUALITY control, ACCEPTANCE sampling, STATISTICAL sampling, STATISTICS
- Abstract
This paper considers the development of a cost-based industrial quality control system which functions both as an acceptance sampling plan and as a means for determining an assignable cause of quality variation in the manufacturing process. The manufacturing facility studied normally produces items at a relatively constant proportion defective. The facility is subject, however, to random failures in time which cause the proportion defective to shift upwards. The objective of the paper is to develop a sampling plan which minimizes the expected total annual cost of quality control by determining the sample size, acceptance number, and time period between successive samples. The optimization technique employed is a combination of a multivariate search technique and a partial enumeration. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
19. Comparative sociology: some problems of theory and method.
- Author
-
Payne, Geoff
- Subjects
SOCIAL sciences, COMPARATIVE sociology, COMPARATIVE studies, SOCIAL facts, STATISTICAL sampling, STATISTICS
- Abstract
The article focuses on comparative sociology. The purpose of this paper is in part to explain this apparently paradoxical situation by looking at the historical development of comparative sociology, but more important, it tries to assess the role of comparison and the inter-relationship of theory and method in the generation of social theory, through a discussion of the methodological problems inherent in comparative analysis. The association of evolutionism with comparison was a major reason for the comparative method being devalued, if not discredited. Unfortunately comparison has also been linked with functionalism, a link which is not inevitable, and is certainly undesirable. Sampling is another attempt at solving the central problem of how one compares a potentially vast quantity of social facts. Choice of unit or level, specification, quantification, and randomization are all attempts to handle this basic dilemma, while controlling against the chance of generalizing from the atypical.
- Published
- 1973
- Full Text
- View/download PDF
20. Patterns in Recent Social Research.
- Author
-
Florence, P. Sargant
- Subjects
SOCIAL science research, RESEARCH methodology, METHODOLOGY, STATISTICS, STATISTICAL sampling
- Abstract
This paper offers a look at patterns in methods of social inquiry or research as of September 1950. Apart from pure economics and some political science, the social sciences today proceed, like the natural sciences, at least partly inductively. All the inquiries use some statistical methods as against purely literary statements. This is not due to sheer perversity or exhibitionism, but to scientific necessity--the necessity, in short, of summarizing the main general trends among a wide variety of observed findings. The chief substitute of the social sciences for experiment is statistical strategy. Both in the natural and the realistic social sciences, the narrower and the more intensive the study and the less the variability of behaviour under study, the less statistics need come into it. Variability is reduced by experimenting under laboratory conditions and thus isolating the factors, but the alternative strategy is statistical and isolates the factors at issue by selection of numerically stable conditions or by control of variations. A further common feature of the social research pattern and the feature which, in spite of its great importance, has escaped attention is that of comprehensiveness. The lack of attention to this rule of discipline has been due largely to the inordinate attention bestowed upon the practical devices for random sampling.
- Published
- 1950
- Full Text
- View/download PDF
21. The Sampling Problem.
- Author
-
Furfey, Paul Hanly
- Subjects
SOCIAL science research, STATISTICAL sampling, MARRIAGE, SOCIOLOGY, HEART beat, STATISTICS
- Abstract
The article focuses on the sampling problem in studying some social phenomenon. It happens much more commonly that the investigator cannot study all the instances of the phenomenon which he has selected for research. In this case he studies a limited number of them, called a sample. The total number of instances from which this sample is drawn is called a statistical universe, or simply a universe. In the present paper an adequate sample is defined as a sample accompanied by sampling errors so small that they do not invalidate whatever conclusions the investigator draws. The sampling problem is defined as the problem of obtaining a sample adequate for a given investigation. A sample which represents a universe adequately for some purposes may be quite inadequate for certain others. For a study of the sociology of marriage, a sample of three married couples from a modern city in the U.S. would be very inadequate. Yet the same sample of three men and three women might be fairly adequate for certain physiological studies, say, a study of the effect of a new drug on the heart rate.
- Published
- 1947
- Full Text
- View/download PDF
22. PROBLEMS IN EXPERIMENTING WITH THE APPLICATION OF STATISTICAL TECHNIQUES IN AUDITING.
- Author
-
Neter, John
- Subjects
AUDITING, MATHEMATICAL statistics, STATISTICAL sampling, STATISTICS, SAMPLE size (Statistics), ACCOUNTS, CORPORATION reports
- Abstract
The article focuses on the problems in experimenting with the application of statistical techniques in auditing. Before statistical sampling techniques can be applied to any area, a clear statement of the purpose of sampling has to be developed. This statement of purpose is essential for determining the alternative decisions from which a choice is to be made, the criteria according to which the appropriate decision is to be chosen, and the relevant statistical model for the population under study. Problems which arise in this initial stage may be extremely difficult ones. This paper discusses some of the problems of this initial stage in introducing statistical techniques in auditing, which must be solved before the more technical statistical problems, such as determination of the optimum sampling procedure, the necessary sample size, and the like, can be studied. Sampling is used extensively in auditing, for instance in verifying transactions and in confirming balances of accounts receivable. It is important to distinguish between sampling in auditing and sampling of accounting records in general. In the latter area, some interesting results on the usefulness of statistical sampling techniques have been reported.
- Published
- 1954
23. SOME TIME AND FREQUENCY DOMAIN DISTRIBUTED LAG ESTIMATORS: A COMPARATIVE MONTE CARLO STUDY.
- Author
-
Cargill, Thomas F. and Meyer, Robert A.
- Subjects
ESTIMATION theory, DISTRIBUTED lags (Economics), SAMPLE size (Statistics), STATISTICAL sampling, ECONOMETRICS, ECONOMICS, ECONOMIC models, MATHEMATICAL economics, MATHEMATICAL models, STATISTICS, ECONOMETRIC models
- Abstract
This paper presents a comparison of three distributed lag estimators: OLS, the Almon procedure, and the Hannan "inefficient" method. Each method is compared for sample sizes of 50 and 100 for several alternative distributed lag shapes and residual process structures. The results not only reveal the relative performance of these estimators, but also provide evidence on each method's performance under misspecification with respect to lag length and the residual process. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
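The comparative design described above (repeated estimation at sample sizes 50 and 100 under a known lag shape) can be sketched for the OLS case alone; the lag coefficients, noise level, and replication count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, 0.8, 0.5, 0.2])   # hypothetical 4-term lag shape

def rmse(n, reps=200):
    """Root mean squared estimation error of OLS over `reps` series of length n."""
    errs = []
    for _ in range(reps):
        x = rng.normal(size=n + 3)
        # regressor matrix with columns x_t, x_{t-1}, x_{t-2}, x_{t-3}
        X = np.column_stack([x[3 - i : n + 3 - i] for i in range(4)])
        y = X @ beta + rng.normal(scale=0.5, size=n)
        bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs.append(np.sum((bhat - beta) ** 2))
    return float(np.sqrt(np.mean(errs)))

print(rmse(50), rmse(100))   # the error shrinks as the sample size doubles
```

The same harness extends to the Almon or Hannan estimators by swapping the estimation line, which is essentially the comparison the study performs.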
24. COMBINING MICROSIMULATION AND REGRESSION: A 'PREPARED' REGRESSION OF POVERTY INCIDENCE ON UNEMPLOYMENT AND GROWTH.
- Author
-
Bergmann, Barbara R.
- Subjects
STATISTICAL matching ,STATISTICAL sampling ,REGRESSION analysis ,POVERTY ,UNEMPLOYMENT ,ECONOMIC development ,EMPLOYMENT ,MATHEMATICAL statistics ,STATISTICS - Abstract
In most empirical work, the investigator's understanding of the economic process under study is only minimally reflected in the econometric methodology. This paper suggests that in many cases, the construction of a small-scale simulation can "prepare" the data for regression in a manner which takes cognizance of the theory of the process. Regression is then used to scale the output of the simulation up to observed magnitudes of the variable to be predicted. The simulation has the function of exploring the nature of the non-linearities and interactions and thus replaces the usual search for a form which maximizes R². The simulation may also be helpful where collinear data are a problem. An example is presented in which the effects of wages, unemployment rates, and labor turnover on poverty are studied through a "prepared" regression. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
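The two-step idea (simulate the theorized nonlinearity, then regress observed magnitudes on the simulation output) can be illustrated with a toy model; the square-root relationship and all numbers below are invented for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical data: poverty incidence driven nonlinearly by unemployment
unemployment = rng.uniform(3.0, 10.0, size=40)
poverty = 2.0 + 1.5 * np.sqrt(unemployment) + rng.normal(0.0, 0.2, size=40)

# step 1: a tiny "simulation" that encodes the theorized nonlinearity
simulated = np.sqrt(unemployment)

# step 2: regression scales the simulation output up to observed magnitudes
A = np.column_stack([np.ones_like(simulated), simulated])
coef, *_ = np.linalg.lstsq(A, poverty, rcond=None)
print(coef)   # intercept and scale land near the generating values (2.0, 1.5)
```

The regression here only has to find a scale, because the simulation already carries the nonlinearity; that is the division of labor the abstract proposes.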
25. Planning the 41st ORSA Meeting: The Visiting-Fireman Problem, I.
- Author
-
Beckwith, Richard E.
- Subjects
STATISTICAL sampling ,SAMPLING (Process) ,PROBABILITY theory ,CONFERENCES & conventions ,MEETINGS ,STATISTICS - Abstract
A polling technique based on subjective probabilities was employed to produce advance estimates of the number of ORSA members attending the 41st National Meeting. The predicted figures compare favorably with the actual attendance figure, tending to support the credibility of such an approach. Action taken as a consequence of the poll's forewarning is believed to have had a significant salutary effect on the financial health of the meeting. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
26. A Conservative Confidence Interval for a Likelihood Ratio.
- Author
-
Mitchell, Ann F. S. and Payne, Clive D.
- Subjects
- *
ANALYSIS of variance , *GAUSSIAN distribution , *CONFIDENCE intervals , *PARAMETER estimation , *RATIO analysis , *SIMULATION methods & models , *STATISTICAL sampling , *RATIO measurement , *STATISTICS - Abstract
A method is described for assigning an observation to one of two normal populations with differing, unknown means and differing, unknown variances. The classification procedure rests on the likelihood ratio, which, for a given observation, is a function of four unknown parameters. Sample information is used to obtain a confidence region for these parameters. From this confidence region, a conservative confidence interval for the likelihood ratio is derived. Interpreting the likelihood ratio as a measure of the odds in favor of each population as the source of the observation, the interval can be used, in an obvious manner, for classification purposes. Simulation techniques are employed to examine the conservative nature of the interval. Finally, as an illustration of the method, the results are applied to the determination of authorship of the disputed Federalist papers. [ABSTRACT FROM AUTHOR]
- Published
- 1971
- Full Text
- View/download PDF
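The classification rule rests on a likelihood ratio of two normal densities. A minimal point-estimate version (without the paper's conservative confidence interval) looks like this; all parameter values are assumed for illustration.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_ratio(x, mu1, s1, mu2, s2):
    """Odds in favor of population 1 as the source of observation x."""
    return normal_pdf(x, mu1, s1) / normal_pdf(x, mu2, s2)

# two populations with differing means AND differing variances
print(likelihood_ratio(0.2, 0.0, 1.0, 3.0, 2.0))   # > 1: assign to population 1
print(likelihood_ratio(3.0, 0.0, 1.0, 3.0, 2.0))   # < 1: assign to population 2
```

The paper replaces these plugged-in parameters with a confidence region, yielding a conservative interval of ratios rather than the single value computed here.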
27. A CLASS OF SEQUENTIAL TESTS FOR AN EXPONENTIAL PARAMETER.
- Author
-
Hoel, D. G. and Mazumdar, M.
- Subjects
- *
STATISTICAL sampling , *BAYESIAN analysis , *POPULATION , *BERNOULLI numbers , *STATISTICS , *PROBLEM solving , *METHODOLOGY , *RESEARCH - Abstract
Weiss [5] and Freeman and Weiss [2] have shown how to construct sampling plans which at least approximately minimize the maximum expected sample size in the case of Bernoulli and Normal populations. In this paper we construct sequential procedures of this type for testing an exponential parameter. An invariance property of the stopping region is presented which simplifies the construction of the Bayes regions. Several sampling plans are given and some of their properties are obtained. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
28. RELATIVE COSTS OF COMPUTERIZED ERROR INSPECTION PLANS.
- Author
-
O'Reagan, Robert T.
- Subjects
- *
COST effectiveness , *DATA editing , *STATISTICS , *QUALITY control , *ELECTRONIC data processing , *PROBLEM solving , *STATISTICAL sampling , *COMPUTER software - Abstract
Data editing by a computer program can be regarded as a form of statistical quality control. In that perspective, costs and benefits of alternative programs can be compared to one another and, of course, to no control plan at all. This paper outlines one approach to cost-benefit comparison and utilizes a realistic example to suggest that computerized data editing is generally superior to clerical editing. It also warns against edit programs which provide for record rejections and off-line review. Most importantly, it urges that a systematic comparison of costs can assist significantly in edit planning. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
29. THE EXACT SAMPLING DISTRIBUTION OF ORDINARY LEAST SQUARES AND TWO-STAGE LEAST SQUARES ESTIMATORS.
- Author
-
Sawa, Takamitsu
- Subjects
- *
STATISTICAL sampling , *STOCHASTIC processes , *DISTRIBUTION (Probability theory) , *STATISTICS , *LEAST squares , *STOCHASTIC analysis , *REGRESSION analysis , *DENSITY functionals - Abstract
This paper presents the exact sampling distributions of the ordinary and the two-stage least squares estimators of a structural parameter in a structural equation with two endogenous variables in a complete system of stochastic equations. The results show that the distributions of the two estimators are essentially similar to each other. It can also be seen that both distributions depend crucially upon the deviation of a regression coefficient of disturbance terms of two endogenous variables from a structural parameter, and that the first estimator possesses moments up to the order N-2, while the second possesses them up to the order K-1, where N is the sample size and K is the number of exogenous variables excluded from the equation to be estimated. The small sample properties of the estimators are investigated by numerical evaluations of the density functions. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
30. TABLES OF CRITICAL VALUES OF SOME RENYI TYPE STATISTICS FOR FINITE SAMPLE SIZES.
- Author
-
Birnbaum, Z. W. and Lientz, B. P.
- Subjects
- *
FINITE groups , *STATISTICAL sampling , *DISTRIBUTION (Probability theory) , *SAMPLE size (Statistics) , *STATISTICS , *RANDOM variables , *PROBABILITY theory - Abstract
Let X_(1) ≤ X_(2) ≤ ... ≤ X_(n) be an ordered sample of a random variable X which has continuous probability distribution function F(x), and let F_n(x) be the corresponding empirical distribution function. Three statistics introduced by A. Rényi are considered [their multi-line definitions cannot be represented in ASCII text]. The paper presents tables of exact probabilities for these statistics for finite sample sizes. The limiting distributions of these statistics as the sample size n → ∞ are discussed, and sample sizes are indicated for which these limiting distributions can be used instead of the exact distributions. Numerical examples for the use of the tables are presented, as well as applications to testing hypotheses on life distributions and to one-sided estimation of probability distribution functions from censored data. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
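The statistics' definitions did not survive extraction; as an assumed representative, one Rényi-type statistic is R_n^+(a) = sup over {x : F(x) ≥ a} of (F_n(x) − F(x))/F(x). For uniform(0,1) data (so F(x) = x) its finite-sample critical values can be approximated by simulation, which is the brute-force counterpart of the exact tables the paper provides.

```python
import random

def renyi_plus(u, a):
    """sup_{x >= a} (F_n(x) - x) / x for a sample u from uniform(0,1)."""
    u = sorted(u)
    n = len(u)
    fn = lambda x: sum(1 for v in u if v <= x) / n
    candidates = [a] + [v for v in u if v >= a]   # the sup occurs at a jump or at a
    return max((fn(x) - x) / x for x in candidates)

random.seed(1)
n, a, reps = 20, 0.25, 2000
sims = sorted(renyi_plus([random.random() for _ in range(n)], a) for _ in range(reps))
print(sims[int(0.95 * reps)])   # approximate 95% critical value for n = 20
```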
31. THE COMPUTATION OF THE UNRESTRICTED AOQL WHEN DEFECTIVE MATERIAL IS REMOVED BUT NOT REPLACED.
- Author
-
Endres, Allen C.
- Subjects
- *
STATISTICAL sampling , *PRODUCT quality , *MANUFACTURING defects , *STATISTICS , *HYPOTHESIS , *PLANNING , *FRACTIONS - Abstract
The choice of a Dodge-type continuous sampling plan is usually based upon the requirement of a maximum limit on the expected fraction of accepted defective material (AOQL). This limit is contingent upon the process producing the material being in "a state of statistical control". White (1966) has provided a method by which a corresponding limit (UAOQL) can be obtained without the assumption that the process is in statistical control. White assumed that all defective items were replaced by good items. This paper presents a method for calculating UAOQLs when defective items are removed but not replaced by good units. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
32. THE UNRELATED QUESTION RANDOMIZED RESPONSE MODEL: THEORETICAL FRAMEWORK.
- Author
-
Greenberg, Barnard G., Abul-Ela, Abdel-latif A., Simmons, Walt R., and Horvitz, Daniel G.
- Subjects
- *
RANDOM variables , *STATISTICS , *DEMOGRAPHIC surveys , *STATISTICAL sampling , *MATHEMATICAL models , *PARAMETER estimation , *METHODOLOGY , *MULTILEVEL models , *MATHEMATICAL statistics - Abstract
This paper develops a theoretical framework for the unrelated question randomized response technique suggested by Walt R. Simmons. The statistical efficiency of this technique is compared with the Warner technique under situations of both truthful and untruthful responses. Methods of allocating the total sample to each of two subsamples required by the unrelated question approach are developed. Recommendations are made concerning choices of values for those parameters which can be assigned at the discretion of the investigator. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
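A minimal simulation of the unrelated-question device, in the simpler variant where the unrelated question's prevalence π_Y is known (the paper's two-subsample design handles unknown π_Y); the design probability, prevalences, and sample size below are all assumptions.

```python
import random

random.seed(7)
p, pi_A, pi_Y, n = 0.7, 0.30, 0.50, 20000   # design prob., true prevalences, sample size

yes = 0
for _ in range(n):
    if random.random() < p:
        yes += random.random() < pi_A    # respondent answers the sensitive question
    else:
        yes += random.random() < pi_Y    # respondent answers the unrelated question
lam_hat = yes / n                        # observed proportion of "yes" answers
pi_A_hat = (lam_hat - (1 - p) * pi_Y) / p
print(pi_A_hat)   # recovers the sensitive prevalence, true value 0.30
```

The unmixing step works because P(yes) = p·π_A + (1 − p)·π_Y, so π_A is identified once π_Y and p are known; no individual respondent's answer reveals which question was drawn.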
33. PLAY THE WINNER RULE AND THE CONTROLLED CLINICAL TRIAL.
- Author
-
Zelen, M.
- Subjects
- *
CLINICAL trials , *STATISTICAL sampling , *THERAPEUTICS , *SAMPLE size (Statistics) , *PATIENTS , *STATISTICS - Abstract
Consider a clinical trial to compare two treatments where response is dichotomous and patients enter the trial sequentially. This paper investigates the conduct of such a trial where the "Play the Winner Rule" (PWR) is used to assign patients to the different therapies. The implementation of the PWR in a clinical trial tends to place more patients on the better treatment. Both theoretical and numerical investigations show that over a wide range of situations this rule leads to near optimum results when used in a two-stage manner. Furthermore, these results are insensitive to optimum sample size requirements. [ABSTRACT FROM AUTHOR]
- Published
- 1969
- Full Text
- View/download PDF
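The Play the Winner Rule itself is one line of state logic: stay on a treatment after a success, switch after a failure. A sketch with invented response rates shows how the rule shifts patients toward the better arm.

```python
import random

random.seed(3)
p_success = {"A": 0.7, "B": 0.4}   # hypothetical dichotomous response rates
counts = {"A": 0, "B": 0}
current = "A"
for _ in range(1000):              # patients enter the trial sequentially
    counts[current] += 1
    if random.random() >= p_success[current]:   # failure: switch treatments
        current = "B" if current == "A" else "A"
print(counts)   # the better treatment A receives the larger share of patients
```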
34. THE FIRST-MEDIAN TEST: A TWO-SIDED VERSION OF THE CONTROL MEDIAN TEST.
- Author
-
Gastwirth, J. L.
- Subjects
- *
DISTRIBUTION (Probability theory) , *HYPOTHESIS , *STATISTICAL sampling , *MEDIAN (Mathematics) , *STATISTICS , *TESTING - Abstract
Consider two independent samples (called x's and y's) of mutually independent observations from populations with c.d.f.'s F(x) and G(x) respectively. Let v be the number of x's which are smaller than the median of the y-sample and let u be the number of y's which are less than the median of the x-sample. If the x-sample is regarded as the control group, then the control median test proposed by Kimball et al. [9] rejects the hypothesis that F(x) is equivalent to G(x) if u is small. The present paper discusses a symmetrized version of this test, the first-median test, which is based on u if the median of the x-sample precedes the median of the y-sample and on v otherwise. The asymptotic distribution theory of both tests is developed. The tests are useful in analyzing life trial data because they permit the experimenter to reach a decision early. In the life trial situation it is important to minimize the expected number of observations required to reach a decision. It is shown that in large samples, when curtailed sampling is utilized, these procedures reach a decision before the standard median test [12, 13]. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
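The counts u and v that drive both tests are easy to compute directly from the definitions above; the small data vectors below are illustrative.

```python
import statistics

def control_median_counts(x, y):
    """u = # of y's below the x-median; v = # of x's below the y-median."""
    mx, my = statistics.median(x), statistics.median(y)
    u = sum(1 for yi in y if yi < mx)
    v = sum(1 for xi in x if xi < my)
    # first-median test: base the test on u if the x-median precedes the
    # y-median, and on v otherwise
    stat = u if mx < my else v
    return u, v, stat

print(control_median_counts([1.0, 2.0, 3.0, 4.0, 5.0], [2.5, 3.5, 4.5, 5.5, 6.5]))
```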
35. AN INVESTIGATION INTO THE SMALL SAMPLE PROPERTIES OF A TWO SAMPLE TEST OF LEHMANN'S.
- Author
-
Afifi, A. A., Elashoff, R. M., and Langley, P. G.
- Subjects
- *
ASYMPTOTIC distribution , *STATISTICAL sampling , *GAUSSIAN distribution , *SAMPLE size (Statistics) , *DISTRIBUTION (Probability theory) , *APPROXIMATION theory , *MATHEMATICAL statistics , *STATISTICS - Abstract
In this paper we examine how well the asymptotic null distribution of a two sample test due to Lehmann approximates the small sample distribution of the test, compare the validity of this Lehmann test with the validity of the two sample t test under the null hypothesis of equal means, and compare the power of this Lehmann test with the power of the t test. Our general conclusion is that experimenters will prefer to use the t test when the underlying distribution is the scale contaminated compound normal distribution and the sample sizes are less than thirty. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
36. ESTIMATES IN SUCCESSIVE SAMPLING USING A MULTI-STAGE DESIGN.
- Author
-
Singh, S.
- Subjects
- *
STATISTICAL sampling , *EXPERIMENTAL design , *STATISTICS , *ESTIMATION theory , *MATHEMATICAL statistics , *SAMPLE variance - Abstract
In this paper, successive sampling procedures using a multi-stage sampling design have been developed. It is found that it is generally advantageous to retain a fraction of the sample selected on previous occasions for improving the estimate of a mean on the most recent occasion. This type of partial replacement may sometimes be recommended for estimating a mean over all occasions. This is especially true if the experimenter is interested not only in obtaining an overall estimate for the entire period but also in separate estimates for each occasion. In a sampling enquiry repeated on three occasions it has been observed that for estimating the mean on the third occasion it would be preferable to repeat the same sample fraction from one occasion to the next, while for estimating the mean over all occasions the sample fraction repeated on the second occasion should not be repeated on the third but in its place a sub-sample from the sample selected afresh on the second occasion should be retained. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
37. SOME NONRESPONSE SAMPLING THEORY WHEN THE FRAME CONTAINS AN UNKNOWN AMOUNT OF DUPLICATION.
- Author
-
Rao, J. N. K.
- Subjects
- *
STATISTICAL sampling , *STATISTICS , *THEORY , *SURVEYS , *BEEF cattle , *BUSINESS partnerships - Abstract
Hansen and Hurwitz [3] have developed some non-response sampling theory, using the double sampling method. In this paper, this theory is extended to the case where there is an unknown amount of duplication in the available frame. The theory is developed in the context of a sample survey of beef cattle producers. The available frame was a list of names and addresses of beef cattle producers, whereas the unit of interest was a beef cattle producing operation which could be operated individually or in partnership. [ABSTRACT FROM AUTHOR]
- Published
- 1968
- Full Text
- View/download PDF
38. ON SOME MULTISAMPLE PERMUTATION TESTS BASED ON A CLASS OF U-STATISTICS.
- Author
-
Sen, Pranab Kumar
- Subjects
- *
U-statistics , *DISTRIBUTION (Probability theory) , *STATISTICAL sampling , *STATISTICS , *SAMPLE size (Statistics) , *COST analysis , *VARIANCES - Abstract
A class of permutation tests based on appropriate U-statistics is proposed for the multisample location, scale, and association problems. These U-statistics need not be functions solely of the ranks of the different sample observations and for the application of the tests the parent distributions need not be continuous. The proposed tests are based on the existence of some equally likely permutations of the sample observations and are valid for small as well as large sample sizes. Further, the underlying permutation principle yields optimum estimators of the variances and covariances of U-statistics entering into the expressions for the test statistics. The large sample test procedure is thereby simplified. This paper consists partly of expository material and partly of extensions of some allied two-sample results, considered earlier by the author (Sen [28]). [ABSTRACT FROM AUTHOR]
- Published
- 1967
- Full Text
- View/download PDF
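A two-sample instance of the idea, using the Mann-Whitney kernel 1{x_i < y_j} as the U-statistic and the equally likely permutations of the pooled observations as the reference set; the data are invented, and the shuffle approximates the full permutation distribution.

```python
import random

def mann_whitney_u(x, y):
    """U-statistic with kernel 1{x_i < y_j}, averaged over all pairs."""
    return sum(xi < yj for xi in x for yj in y) / (len(x) * len(y))

def permutation_pvalue(x, y, reps=2000, seed=0):
    rng = random.Random(seed)
    observed = mann_whitney_u(x, y)
    pooled = list(x) + list(y)
    n = len(x)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)          # each rearrangement is equally likely under H0
        if abs(mann_whitney_u(pooled[:n], pooled[n:]) - 0.5) >= abs(observed - 0.5):
            hits += 1
    return hits / reps

x = [0.1, 0.4, 0.5, 0.9, 1.2]
y = [1.8, 2.1, 2.4, 3.0, 3.3]
print(permutation_pvalue(x, y))   # small p-value: the samples differ in location
```

Note that, as the abstract emphasizes, the statistic is not a function of ranks alone and the populations need not be continuous; only the exchangeability of the pooled observations under the null is used.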
39. MINIMUM VARIANCE UNBIASED AND MAXIMUM LIKELIHOOD ESTIMATORS OF RELIABILITY FUNCTIONS FOR SYSTEMS IN SERIES AND IN PARALLEL.
- Author
-
Zacks, S. and Even, M.
- Subjects
- *
ESTIMATION theory , *STATISTICS , *ANALYSIS of variance , *VARIANCES , *POISSON processes , *EXPONENTIAL families (Statistics) , *EXPONENTIAL sums , *DISTRIBUTION (Probability theory) , *STATISTICAL sampling - Abstract
This paper investigates the properties of the minimum variance unbiased (M.V.U.) and maximum likelihood (M.L.) estimators of the reliability functions of systems composed of two subsystems connected in series. The study falls into two parts, one for the Poisson case and one for the exponential case. In each of these cases a distinction is made between situations where the two subsystems are identical and situations where they are different. In the Poisson case, under minimum variance unbiased estimation, a system A is considered which is composed of two subsystems connected in series. Failure time points of each subsystem follow a Poisson process with a given intensity. An experiment is performed on n independent replicates of each of the considered subsystems over a period of fixed length. In the exponential case a system A is considered which, as in the Poisson case, consists of two subsystems connected in series. Failure time points of the two subsystems follow Poisson processes. Independent observations are available on the interfailure time lengths, namely, the life-lengths of the subsystems.
- Published
- 1966
- Full Text
- View/download PDF
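For the exponential case with different subsystems, the M.L. version reduces to estimating each failure rate by n/Σ(life-lengths) and multiplying the subsystem reliabilities; the rates, mission time, and sample size below are assumptions made for the sketch.

```python
import math
import random

random.seed(11)
lam1, lam2, n, t = 0.5, 0.2, 400, 1.0   # true failure rates, replicates, mission time

x1 = [random.expovariate(lam1) for _ in range(n)]   # life-lengths, subsystem 1
x2 = [random.expovariate(lam2) for _ in range(n)]   # life-lengths, subsystem 2
lam1_hat, lam2_hat = n / sum(x1), n / sum(x2)       # M.L. estimates of the rates
r_hat = math.exp(-(lam1_hat + lam2_hat) * t)        # M.L. series-system reliability
print(r_hat)   # the true value is exp(-0.7)
```

The series structure enters only through the sum of the rates: the system survives time t exactly when both subsystems do.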
40. THE EVALUATION OF H 106 CONTINUOUS SAMPLING PLANS UNDER THE ASSUMPTION OF WORST CONDITIONS.
- Author
-
White, Leon S.
- Subjects
- *
STATISTICAL sampling , *CHARTS, diagrams, etc. , *STATISTICS , *QUALITY , *PRODUCTION (Economic theory) , *PRODUCTION control , *GRAPHIC methods - Abstract
Dodge-type continuous sampling plans are used to screen mass-produced items so that long run quality requirements will be met. The choice of a particular Dodge-type plan is often based on the plan's AOQL, the long run proportion of defectives that remains in the output after inspection under the assumption that the production process is in a "state of statistical control." Another measure by which a plan can be evaluated is the UAOQL, the long run proportion of defectives remaining under the assumption that the production process operates in the worst possible way relative to the plan. This paper presents five graphs that can be used to provide UAOQL values for a large number of H 106 plans. The linear program used to calculate these graphs is also included. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
41. PROBABILITY SAMPLING WITH QUOTAS.
- Author
-
Sudman, Seymour
- Subjects
- *
STATISTICAL sampling , *PROBABILITY measures , *PROBABILITY theory , *STATISTICS , *MATHEMATICS , *INTERVIEWING , *AGE & employment - Abstract
This paper describes certain quota sampling procedures and attempts to show that they are very close to traditional probability sampling. Quotas are shown to depend on availability for interviewing and evidence is presented to show that sex, age, and employment status are reasonable predictors of availability. Quota sampling methods are not unbiased but data are presented which suggest that the bias is generally of the order of 3 to 5 per cent. It is shown, however, that the cost differentials between these quota samples and call-back samples are small. The major advantage of this new procedure may well be the speed with which interviewing may be completed during crises such as the Kennedy assassination. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
42. AN INTRODUCTION TO RANKING AND SELECTION PROCEDURES.
- Author
-
Barr, Capt. D. R. and Rizvi, M. Haseeb
- Subjects
- *
RANKING (Statistics) , *STATISTICAL sampling , *PROBABILITY measures , *SAMPLE size (Statistics) , *STATISTICS , *SELECTION theorems , *MATHEMATICAL statistics - Abstract
This is an expository paper on the fixed sample size indifference zone formulation of the ranking and selection problem. A particularly simple proof of a theorem which is useful in this formulation is given, and its use is illustrated. [ABSTRACT FROM AUTHOR]
- Published
- 1966
- Full Text
- View/download PDF
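Under the indifference-zone formulation, the fixed-sample rule "select the population with the largest sample mean" has a probability of correct selection that can be checked by simulation; the means, unit variances, and sample size below are assumptions chosen so the best mean exceeds the rest by an indifference amount of 0.5.

```python
import random

def prob_correct_selection(means, n, reps=2000, seed=2):
    """Estimate P(correct selection) for the largest-sample-mean rule."""
    rng = random.Random(seed)
    best = max(range(len(means)), key=lambda i: means[i])
    correct = 0
    for _ in range(reps):
        sample_means = [sum(rng.gauss(m, 1.0) for _ in range(n)) / n for m in means]
        correct += max(range(len(means)), key=lambda i: sample_means[i]) == best
    return correct / reps

print(prob_correct_selection([0.0, 0.0, 0.5], n=30))   # high P(CS) at this separation
```

In the indifference-zone approach one chooses n so that this probability exceeds a prescribed level whenever the best mean leads by at least the stipulated amount.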
43. PERCENTILE MODIFICATIONS OF TWO-SAMPLE RANK TESTS.
- Author
-
Gastwirth, Joseph L.
- Subjects
- *
GAUSSIAN distribution , *RATIONAL numbers , *DISTRIBUTION (Probability theory) , *STATISTICAL sampling , *METHODOLOGY , *PROBABILITY theory , *STATISTICS , *PROBLEM solving - Abstract
This paper presents a simple method for increasing the limiting Pitman efficiency of rank tests relative to the best tests for samples from normal distributions, without using complicated scoring systems. Our proposal is to select two numbers p and r (0 < p, r < 1) and then score, with integer weights, the data in the top p-th and bottom r-th fractions of the combined sample. The percentile modified tests for scale are quite effective. When p = r = 1/8 (i.e., we score only the extreme quarter of the combined sample) the A.R.E. of the test relative to the F test is .85. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
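One way to realize "score, with integer weights, only the extreme fractions of the combined sample" is the scheme below, with signs chosen as in a location test; the paper's exact scoring may differ, so treat this as an assumed illustration of the idea.

```python
def percentile_modified_scores(n_total, p, r):
    """Integer scores by combined-sample rank (0 = smallest observation):
    only the bottom r-fraction and top p-fraction receive nonzero scores."""
    top = int(p * n_total)
    bottom = int(r * n_total)
    scores = [0] * n_total
    for j in range(bottom):
        scores[j] = -(bottom - j)          # most extreme low rank scores -bottom
    for j in range(top):
        scores[n_total - 1 - j] = top - j  # most extreme high rank scores +top
    return scores

# p = r = 1/4 on 16 observations: only the extreme half of the ranks is scored
print(percentile_modified_scores(16, 0.25, 0.25))
```

The test statistic is then the sum of the scores attached to one sample's observations, referred to its permutation distribution.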
44. A BAYESIAN INDIFFERENCE PROCEDURE.
- Author
-
Novick, Melvin R. and Hall, W. J.
- Subjects
- *
PROBABILITY theory , *DISTRIBUTION (Probability theory) , *FAMILIES , *STATISTICAL sampling , *THEORY of knowledge , *STATISTICS , *BAYESIAN analysis - Abstract
In a logical probability approach to inference, distributions on a parameter space are interpretable as representing states of knowledge, and any prevailing state of knowledge may be taken to have been arrived at from a previous state of ignorance (indifference) followed by an accumulation of prior data. In this paper an indifference procedure is introduced that requires postulating what size and what kind of samples will and will not (in a special sense) permit statistical inference and prediction--e.g., one observation from a two-parameter normal model is not (in our special sense) sufficient to permit inference about the variance but two observations are. In essence, the procedure stipulates that prior indifference distributions be improper but become proper after an appropriate minimal sample. With some limitation on the family of priors considered, this procedure permits unique specification of indifference for the more commonly encountered statistical models. Furthermore, these specifications are affected neither by change of the scale of measurement of the observations, nor by the sampling rule. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
45. SOME GRAPHS USEFUL FOR STATISTICAL INFERENCE.
- Author
-
Guenther, William C. and Thomas, P. O.
- Subjects
- *
GRAPHIC methods , *GRAPHIC methods in statistics , *STATISTICAL sampling , *CONFIDENCE intervals , *HYPOTHESIS , *SAMPLE size (Statistics) , *PROBABILITY theory , *STATISTICS , *MATHEMATICAL statistics - Abstract
The determination of sample size to meet certain probability requirements is a problem which often faces an experimenter when obtaining a confidence interval or testing a hypothesis. This paper includes some new graphs useful in selecting sample size and gives a reference to a new textbook which includes a number of other similar type graphs. The method of construction is explained in detail and examples are included. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
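The probability requirement behind such graphs is, in the simplest known-σ confidence-interval case, z·σ/√n ≤ E for a desired half-width E; solving for n gives a one-liner (the numbers are illustrative, not from the paper's graphs).

```python
import math

def sample_size_for_mean(sigma, half_width, z=1.96):
    """Smallest n with z * sigma / sqrt(n) <= half_width (known-sigma CI for a mean)."""
    return math.ceil((z * sigma / half_width) ** 2)

print(sample_size_for_mean(sigma=10.0, half_width=2.0))   # 97 observations
```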
46. ON THE F-TEST IN THE INTRABLOCK ANALYSIS OF A CLASS OF TWO ASSOCIATE PBIB DESIGNS.
- Author
-
Giri, N.
- Subjects
- *
DISTRIBUTION (Probability theory) , *MOMENT problems (Mathematics) , *CHARACTERISTIC functions , *F-distribution , *APPROXIMATION theory , *PROBABILITY theory , *STATISTICAL sampling , *STATISTICS - Abstract
In this paper the first two moments of the ratio (treatment sum of squares)/(treatment sum of squares + error sum of squares), taken over all possible random assignments of treatments to the experimental plots, are obtained for a class of two-associate PBIB designs. These two moments are compared with the corresponding moments of a continuous beta distribution to settle the question of approximating the randomization test by the usual F-test. It is shown that a reasonable approximation to the randomization test based on the statistic F is equivalent to modifying the normal theory test by multiplying the numbers of d.f. of the F-distribution by a factor depending on the heterogeneity of the blocks. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
47. ON A METHOD OF USING MULTI-AUXILIARY INFORMATION IN SAMPLE SURVEYS.
- Author
-
Raj, Des
- Subjects
- *
ESTIMATION theory , *STATISTICAL sampling , *SURVEYS , *ANALYSIS of variance , *STATISTICS , *MATHEMATICAL statistics , *VARIANCES - Abstract
Usually auxiliary information based on just one variate is used to improve the precision of estimators of population totals, means, etc. In this paper a method is proposed of using information on several variates to achieve higher precision. The technique of difference estimation is employed throughout. It is shown that the variances of difference estimators are comparable to those of ratio estimators. The results are extended to double sampling procedures and sampling over two occasions. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
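The multi-auxiliary difference estimator takes the form Ŷ_D = ȳ + Σ_i k_i (X̄_i − x̄_i), where the X̄_i are known population means of the auxiliary variates; the sample values and coefficients below are made up for illustration.

```python
y_bar = 52.0               # sample mean of the study variate
x_bar = [9.5, 48.0]        # sample means of two auxiliary variates
X_bar = [10.0, 50.0]       # known population means of the auxiliaries
k = [2.0, 0.5]             # chosen difference coefficients

y_hat = y_bar + sum(ki * (Xi - xi) for ki, Xi, xi in zip(k, X_bar, x_bar))
print(y_hat)   # 52 + 2*0.5 + 0.5*2 = 54.0
```

Each correction term pulls the estimate toward what the sample "should" have shown on that auxiliary, which is where the variance reduction comes from.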
48. OPTIMUM ALLOCATION OF SAMPLING UNITS TO STRATA WHEN THERE ARE R RESPONSES OF INTEREST.
- Author
-
Folks, John Leroy and Antle, Charles E.
- Subjects
- *
MATHEMATICAL variables , *STATISTICAL sampling , *MATHEMATICAL functions , *MATHEMATICS , *SAMPLE size (Statistics) , *STATISTICS - Abstract
A common problem in many areas is to achieve a combination of control variables which will simultaneously maximize all of the responses which are functions of the control variables. The fact that in general we cannot simultaneously maximize all of the responses has led to the formulation of compromise solutions to this vector maximum problem. A point in the control variable space is better than a second point if each response at the first point is greater than or equal to the corresponding response at the second point. If a point is such that there is no better point, it is said to be admissible, or efficient. A set of points is complete if given any point not in the set there is a point in the set which is better. In this paper this formulation is used to consider the allocation of sample size to several strata when there are several characteristics of interest. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
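The "better point" relation and the admissible set can be coded directly from the definitions in the abstract; the candidate response vectors are invented stand-ins for allocations evaluated on two characteristics.

```python
def is_better(p, q):
    """p is better than q: every response at p is >= the one at q, one strictly."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def admissible(points):
    """Points for which no better point exists (the efficient set)."""
    return [p for p in points if not any(is_better(q, p) for q in points)]

pts = [(3, 1), (2, 2), (1, 3), (1, 1)]   # response vectors at candidate allocations
print(admissible(pts))   # (1, 1) is dominated by (2, 2) and drops out
```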
49. ERRORS OF CLASSIFICATION IN A BINOMIAL POPULATION.
- Author
-
Bryson, Marion R.
- Subjects
- *
BINOMIAL distribution , *CLASSIFICATION , *DISCRIMINANT analysis , *ERROR analysis in mathematics , *MATHEMATICAL statistics , *STATISTICAL sampling , *STATISTICS , *NUMERICAL analysis - Abstract
In the classification of the elements of a binomial population, there is in many cases a chance of misclassifying an item. When an item in a sample from a binomial population is misclassified, it causes a bias, independent of the standard error, in the estimate of the population parameter P. This paper analyzes this bias when each of two interviewers independently classifies the items in a single sample. Upper and lower bounds for the bias are derived. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
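For a single fallible classifier the bias has a closed form: E[p̂] = P(1 − θ₁) + (1 − P)θ₂, so bias = (1 − P)θ₂ − Pθ₁; the paper's two-interviewer bounds build on this single-classifier case. The error rates below are assumptions.

```python
def misclassification_bias(P, theta1, theta2):
    """theta1 = P(classified 0 | truly 1); theta2 = P(classified 1 | truly 0)."""
    expected_p_hat = P * (1 - theta1) + (1 - P) * theta2
    return expected_p_hat - P

print(misclassification_bias(0.4, 0.05, 0.10))   # bias of 0.04, regardless of n
```

The bias vanishes only when Pθ₁ = (1 − P)θ₂, so unlike sampling error it cannot be reduced by enlarging the sample.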
50. SYSTEMATIC STATISTICS USED FOR DATA COMPRESSION IN SPACE TELEMETRY.
- Author
-
Eisenberger, Isidore and Posner, Edward C.
- Subjects
- *
NONPARAMETRIC statistics , *STATISTICS , *GOODNESS-of-fit tests , *ESTIMATION theory , *STATISTICAL sampling , *AEROSPACE telemetry , *ESTIMATION bias , *DATA compression , *STATISTICAL hypothesis testing , *DISTRIBUTION (Probability theory) , *PROBABILITY theory - Abstract
The need for data compression, a consequence of the demands made on the telemetry system of a space vehicle, prompts consideration of the use of sample quantiles in estimating population parameters and obtaining tests of goodness of fit for large samples. In this paper optimal unbiased estimators of the mean and standard deviation are given using up to twenty quantiles when the parent population is normal. Moreover, the estimators are relatively insensitive to deviations from normality. A distribution-free goodness-of-fit test is presented based on the sum of the squares of four quantiles after an orthogonal transformation to independent normal deviates. If a frequency function is of the form f(x; p) = p f_1(x) + (1 − p) f_2(x), 0 < p < 1, where f_1 and f_2 are normal frequency functions, the distribution is likely to be bimodal. Another goodness-of-fit test is obtained using four quantiles, which is likely to have considerable power with a null hypothesis of normality and the alternative hypothesis of bimodality. The "data compression ratios" obtained with the use of a quantile system can be on the order of 100 to 1. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
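A two-quantile version of the idea (far cruder than the paper's optimal twenty-quantile estimators) already recovers a normal mean and standard deviation from heavily compressed data; the generating parameters below are illustrative.

```python
import random

random.seed(5)
data = sorted(random.gauss(10.0, 2.0) for _ in range(100000))

def quantile(q):
    """Empirical q-quantile of the sorted sample."""
    return data[int(q * len(data))]

# only two quantiles need be telemetered, not the full sample
mu_hat = 0.5 * (quantile(0.25) + quantile(0.75))         # quartile midpoint
sigma_hat = (quantile(0.75) - quantile(0.25)) / 1.349    # normal IQR ≈ 1.349 sigma
print(mu_hat, sigma_hat)   # near the generating values (10.0, 2.0)
```

Transmitting a handful of quantiles in place of 100,000 readings is exactly the kind of compression ratio the abstract cites.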