113 results
Search Results
2. Markov Chain Monte Carlo: 10 Years and Still Running!
- Author
- Cappé, Olivier and Robert, Christian P.
- Subjects
- MARKOV processes, MONTE Carlo method, ALGORITHMS, PROBABILITY theory, STATISTICS
- Abstract
This article presents the Markov chain Monte Carlo (MCMC) statistical technique. The impact on the discipline is deep and durable, because these methods have opened new horizons in the scale of the problems that one can deal with, thus enhancing the position of statistics in most applied fields. The MCMC revolution has in particular boosted Bayesian statistics to new heights by providing a virtually universal tool for dealing with integration problems. This can be seen in the explosion of papers dealing with complex models, hierarchical modelings, nonparametric Bayesian estimation, and spatial statistics. This trend has also created new synergies with mathematicians and probabilists, as well as econometricians, engineers, ecologists, astronomers, and others, for theoretical requests and practical implications of MCMC techniques. The main factor in the success of MCMC algorithms is that they can be implemented with little effort in a large variety of settings. This is obviously true of the Gibbs sampler, which can be implemented provided some conditional distributions are available, as shown by the BUGS software. We mentioned BUGS and CODA as existing software dedicated to MCMC algorithms, but much remains to be done before MCMC becomes part of commercial software.
- Published
- 2000
- Full Text
- View/download PDF
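The abstract's point that MCMC methods "can be implemented with little effort" is easy to illustrate with the Gibbs sampler it mentions. The sketch below (illustrative, not from the paper) samples a zero-mean, unit-variance bivariate normal with correlation ρ, whose full conditionals are N(ρ · other, 1 − ρ²):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=10_000, seed=42):
    """Gibbs sampling for (X, Y) ~ bivariate normal with zero means,
    unit variances, and correlation rho: each full conditional is
    normal with mean rho * (the other coordinate) and variance 1 - rho**2."""
    rng = random.Random(seed)
    sd = math.sqrt(1 - rho ** 2)
    x = y = 0.0
    samples = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)  # draw X | Y = y
        y = rng.gauss(rho * x, sd)  # draw Y | X = x
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(0.8)
mean_x = sum(x for x, _ in samples) / len(samples)  # close to 0
```

The two-line update loop is the entire sampler, which is exactly the "little effort" the abstract refers to; BUGS automates the derivation of such conditionals for general hierarchical models.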
3. An Evaluation of Model-Dependent and Probability-Sampling Inferences in Sample Surveys.
- Author
- Hansen, Morris H., Madow, William G., and Tepping, Benjamin J.
- Subjects
- *STATISTICAL sampling, *PROBABILITY theory, *SURVEYS, *POPULATION, *SCALING laws (Statistical physics), *STATISTICS, *ESTIMATION theory
- Abstract
In this paper we are concerned with inferences from a sample survey to a finite population. We contrast inferences that are dependent on an assumed model with inferences based on the randomization induced by the sample selection plan. Randomization consistency for finite population estimators is defined and adopted as a requirement of probability sampling. A numerical example is examined to illustrate the dangers in the use of model-dependent estimators even when the model is apparently consonant with the sample data. The paper concludes with a summary of principles that we believe should guide the practitioner of sample surveys of finite populations. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
4. Comment.
- Author
- Kempthorne, Oscar
- Subjects
- *MATHEMATICAL statistics, *STOCHASTIC processes, *ECONOMICS, *RANDOM variables, *PROBABILITY theory, *STATISTICIANS, *ANALYSIS of variance, *RANDOM sets, *STATISTICS
- Abstract
The article presents the author's comments on a paper by researcher D. Basu related to randomization analysis of experimental data. Basu writes entertainingly, perhaps, but not informatively. Basu's paper discusses prerandomization, postrandomization, and unrecorded randomization. This discussion is irrelevant. But it is useful, perhaps, to make a remark. It also discusses the sufficiency principle. As Basu has written, this is a data-reduction principle. Basu also discusses researcher R.A. Fisher's randomization test. It is obvious that the population in a randomization test of a randomized experiment is "the product of the statistician's imagination." With respect to Basu's writing on "the physical act of randomization," the author believes Basu is merely plain wrong. The paper also describes a randomized pair trial. In his paper, Basu gives a hypothetical interchange between a statistician and a scientist. The author suggests that this serves no useful purpose. The author finds the lack of knowledge that underlies Basu's thesis rather surprising, incongruous, and deplorable.
- Published
- 1980
- Full Text
- View/download PDF
5. Density Estimation and Bump-Hunting by the Penalized Likelihood Method Exemplified by Scattering and Meteorite Data: Comment.
- Author
- Parzen, Emanuel
- Subjects
- *CLUSTER analysis (Statistics), *ESTIMATION theory, *PROBABILITY theory, *STATISTICAL sampling, *DENSITY functionals, *ECONOMISTS, *STATISTICAL correlation, *MULTIVARIATE analysis, *LEAST squares, *SAMPLE size (Statistics), *STATISTICS
- Abstract
The author is pleased to discuss a paper on the estimation of probability density functions and the location of bumps. There is an extensive literature on density estimation, but many statisticians seem doubtful about the usefulness of these techniques because their application seems subjective and complicated. A major criticism the author would make of this paper is that it does not help to dispel this negative attitude of statisticians toward density estimation. One cannot help but be impressed by the ingenuity of I. J. Good and R. A. Gaskins and even to believe that they may be able to successfully fit probability densities to data. The existence of bumps at the extremes of a sample can be investigated for large sample sizes by treating the bottom and top ends of the original sample as two new samples to be analyzed by themselves. Statistical scientists should heed the flippant advice of Sir Arthur Eddington: "Never trust an experimental result until it has been confirmed by theory."
- Published
- 1980
- Full Text
- View/download PDF
6. Probabilities for the Size of Largest Clusters and Smallest Intervals.
- Author
- Wallenstein, Sylvan R. and Naus, Joseph I.
- Subjects
- STATISTICS, STATISTICAL correlation, LEAST squares, MATHEMATICAL statistics, PROBABILITY theory, REGRESSION analysis
- Abstract
Given N points distributed at random on [0,1), let n_p be the size of the largest number of points clustered within an interval of length p. Previous work finds Pr(n_p ≥ n) for n > N/2, and for n < N/2 with p = 1/L, L an integer. The formula for the case p = 1/L is in terms of the sum of L × L determinants and is not computationally feasible for large L. The present paper derives such a computational formula. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
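For a concrete sample, the cluster statistic n_p itself (though not the probability Pr(n_p ≥ n) the paper computes) can be found by a simple sliding-window pass over the sorted points; a minimal sketch:

```python
from bisect import bisect_left

def largest_cluster(points, p):
    """Size of the largest number of points contained in some
    half-open window [x, x + p) of length p; it suffices to slide
    the window's left edge over the sorted points themselves."""
    xs = sorted(points)
    best = 0
    for i, x in enumerate(xs):
        # index of the first point >= x + p; everything in between clusters with x
        j = bisect_left(xs, x + p)
        best = max(best, j - i)
    return best

print(largest_cluster([0.05, 0.1, 0.12, 0.5, 0.52, 0.55, 0.9], 0.1))  # 3
```

The O(N log N) pass handles any p; the paper's combinatorial problem is the much harder one of the distribution of this statistic under uniform sampling.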
7. Review of statistical actuarial risk modelling.
- Author
- Shiraishi, Hiroshi and Lu, Zudi
- Subjects
- *ACTUARIAL risk, *DIVIDENDS, *STATISTICS, *PROBABILITY theory, *MATHEMATICAL functions
- Abstract
In this paper, we review some results for insurance risk theory. We first introduce a variety of the insurance risk models proposed thus far. Then, we show that the expected discounted penalty function (the so-called Gerber-Shiu function) can describe some risk indicators. Next, the dividend problem is discussed; more precisely, the (approximated) optimal dividend barrier is derived and other extended dividend strategies introduced. In addition, some modified models depending on reinsurance or tax are introduced. Finally, we discuss the statistical estimation of the ruin probability and the Gerber-Shiu function. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
8. Comment.
- Author
- Dempster, A. P.
- Subjects
- *PROBABILITY theory, *FUZZY sets, *FUZZY logic, *FUZZY systems, *STATISTICS
- Abstract
This article comments on a paper by Nozer D. Singpurwalla and Jane M. Booker which aimed to develop a line of argument that demonstrates that probability theory has a sufficiently rich structure for incorporating fuzzy sets within its framework. Google reports about 670,000 websites that connect with the phrase "fuzzy logic." From a mathematical perspective, the concept of a fuzzy set is somewhat disconcerting, because a fuzzy set is generally not a set, but rather a variable. A crucial issue, which was debated already in the earliest days of fuzzy logic, concerns whether or not membership values are just probabilities in disguise. Certainly, the original motivating examples deal primarily with formalizing variables that correspond to natural language concepts that are in everyday use. The paper's section 4 introduces Lotfi Zadeh's rejection of the law of the excluded middle, with reference to differentiating between membership values and probabilities. The commenter's reading of the paper's section 4, and of formula (1) in particular, suggests that Zadeh in 1968 might well have approved a solution along the lines of the preceding paragraph, except that he would have used the posterior expectation of m(x) as a summary of the posterior distribution.
- Published
- 2004
- Full Text
- View/download PDF
9. Comment.
- Author
- Lindley, D. V. and Laviolette, Michael
- Subjects
- *PROBABILITY theory, *FUZZY sets, *FUZZY logic, *FUZZY systems, *MATHEMATICS, *STATISTICS
- Abstract
This article comments on a paper by Nozer D. Singpurwalla and Jane M. Booker which aimed to develop a line of argument that probability theory has a sufficiently rich structure for incorporating fuzzy sets within its framework. According to the commenter, the paper provides a real advance in the understanding of fuzzy sets, by providing a sensible connection between membership functions and likelihood, and thereby probability. He says the paper's modification of the basic question is answered most convincingly. However, the answer raises an apparent conflict between the two calculi of fuzzy logic and probability. What is now needed is a second article to resolve this conflict. The commenter's conjecture is that the resolution will show that the rules of fuzzy logic are untenable. According to him, his principal reason behind the conjecture is that the rules of the probability calculus follow from simple, obvious assumptions, whereas those of the fuzzy calculus have been arbitrarily selected. He points out that this arbitrariness is fine for a pure mathematician. But, according to him, an applied mathematician, faced with the reality of uncertainty in the real world, must take into account that world and cannot ignore the basic assumptions about uncertainty that underlie the probability calculus and reflect basic truths about the world.
- Published
- 2004
- Full Text
- View/download PDF
10. Rejoinder.
- Author
- Bayarri, M. J. and Berger, J. O.
- Subjects
- BAYESIAN analysis, PROBABILITY theory, SET theory, MATHEMATICAL models, STATISTICS
- Abstract
This article presents a response to several commentaries regarding a study which proposed computations for Bayesian p values in composite null models. On the double use of data and conditioning, Michael Evans and Hal S. Stern both discussed what constitutes double use of the data in computing a p value. They take opposite perspectives on this issue. Evans argues that the only way to avoid double use of the data is to separate the data into two independent subsets, using one subset to fit the model and the other to assess the validity of the model. Because the procedures that we recommend do not formally split the data into two parts, Evans suggests that they can be guilty of double use of the data. In contrast, Stern does not feel that it is necessary to worry about double use of the data to the extent that we do; he does not feel that even the posterior predictive p value involves double use of the data. These contrasting positions perfectly highlight what we view to be the main motivation for our paper. We feel that it is very important to avoid a double use of the data, but that separating the data into two independent subsets is too drastic a solution that can lose considerable power. We consider each of these issues in turn.
- Published
- 2000
- Full Text
- View/download PDF
11. Comment.
- Author
- Rubin, Donald B.
- Subjects
- *MATHEMATICAL statistics, *STOCHASTIC processes, *RANDOM variables, *STATISTICS, *PROBABILITY theory, *ANALYSIS of variance, *RANDOM sets
- Abstract
The article presents the author's comments on researcher D. Basu's paper related to randomization analysis of experimental data. Basu's paper on researcher R.A. Fisher's randomization test for experimental data (FRTED) is certainly entertaining. Although much of the paper is devoted to the thesis that Fisher changed his views on FRTED, apparently the primary point of the paper is to argue that FRTED is "not logically viable." Admittedly, FRTED is not the ultimate statistical weapon, even in randomized experiments, but calling it illogical is rather bizarre. Basu criticizes FRTED through two primary arguments. His first line of criticism follows from his attack on a nonparametric test labeled as "Fisher's randomization test." Basu's second line of criticism of FRTED takes the form of a discussion between a statistician and a scientist. The author sees nothing illogical about FRTED; it is relevant for those rare situations when a purely confirmatory test of a sharp a priori hypothesis is to be made using an a priori defined statistic having an associated a priori definition of extremeness. FRTED cannot adequately handle the full variety of real data problems that practicing statisticians face when drawing causal inferences, and for this reason it might be illogical to try to rely solely on it in practice.
- Published
- 1980
- Full Text
- View/download PDF
12. Comment.
- Author
- Hinkley, David V.
- Subjects
- *MATHEMATICAL statistics, *STOCHASTIC processes, *RANDOM variables, *STATISTICS, *PROBABILITY theory, *ANALYSIS of variance, *RANDOM sets
- Abstract
The article presents the author's comments on researcher D. Basu's paper related to randomization analysis of experimental data. Basu has provided researchers with an interesting and provocative critique of significance tests related to randomized experiments. It does seem to be true that there is not a unified mathematical theory of significance tests developed by researcher R.A. Fisher. Nevertheless, it is important to point out a fallacy in Basu's criticism of nonunique significance level. After confessing to a "ruthless cross-examination" of the wrong topic, the non-Fisherian nonparametric tests, Basu suggests that Fisher's silence in 1956 may be used to condemn the randomization test. The empirical evidence confronting Fisher certainly suggested the necessity of randomization in most field experiments, if the standard methods of analysis were to be used. The final substantial issue of Basu's paper is that of the ancillarity of the design outcome. Technically Basu is quite correct: if the randomization has validated a parametric model, the design outcome is then ancillary by design. It would, however, be as well not to forget the purpose of an ancillary statistic.
- Published
- 1980
- Full Text
- View/download PDF
13. Comment.
- Author
- Kruskal, William
- Subjects
- *STATISTICS, *WEATHER control, *MULTIPLICITY (Mathematics), *STATISTICAL hypothesis testing, *PROBABILITY theory, *RANDOM variables, *RAIN-making
- Abstract
This article presents the views of the author on a paper by Roscoe R. Braham, Jr. which focused on involvement of statisticians in weather modification work. At two or three points in Braham's paper, distinctions are made between physical and statistical experiments (and observational programs), or between physical and statistical modes of thought. The comparison is sometimes to the discredit of statistics. In one broad sense of the word "statistics," such a distinction is otiose, for the meteorologist certainly makes inferences from quantitative data and thus does statistics, whether or not the word is used. So presumably some narrower sense of "statistics" is intended. Another sense in which the distinction might be intended is that of divergence between the result of a conventional statistical analysis, perhaps a significance test, on the one hand, and standard physical knowledge or possibly the intuition of one or more meteorologists on the other hand. Certainly cumulative scientific theory and the intuitions of scientists should be given great weight; yet scientific intuitions often vary widely, and there are many cases in which accepted doctrine has turned out to be wrong, sometimes after carrying out controlled randomized trials.
- Published
- 1979
- Full Text
- View/download PDF
14. The GLM framework of the Lee–Carter model: a multi-country study.
- Author
- Azman, Shafiqah and Pathmanathan, Dharini
- Subjects
- MORTALITY, STATISTICS, PROBABILITY theory
- Abstract
The Lee–Carter model is a well-known model for mortality modeling. We aim to compare three probability models (Poisson, negative binomial and binomial) based on the Generalized Linear Model (GLM) framework of the Lee–Carter model. These models are applied to mortality data for 10 selected countries (Japan, United States, United Kingdom, Australia, Sweden, Spain, Belgium, Canada, Netherlands and Bulgaria) and the fit of these models is assessed using deviance statistics and plots of standardized residuals against fitted values. Among these three models, the negative binomial Lee–Carter model gave the best fit based on the deviance statistics and estimates of the log of deaths. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. A criterion for comparing and selecting batsmen in limited overs cricket.
- Author
- Barr, G. D. I. and Kantor, B. S.
- Subjects
- SPORTS, CRICKET (Sport), PROBABILITY theory, WORLD Cup (Cricket), AUSTRALIANS, STATISTICS
- Abstract
The batting average statistic has been used almost exclusively to assess the worth of a batsman. It reveals a great deal about the potential performance of batsmen in cricket played at the first class level. However, in the one-day game, strict limits on the number of balls bowled have introduced a very important additional dimension to performance. In the one-day game, it is clearly not good enough for a batsman to achieve a high batting average with a low strike rate. Runs scored slowly, even without the loss of wickets, will generally result in defeat rather than victory in the one-day game. Assessing batting performance in the one-day game, therefore, requires the application of at least a two-dimensional measurement approach because of the time dimension imposed on limited overs cricket. In this paper, we use a new graphical representation with strike rate on one axis and the probability of getting out on the other, akin to the risk-return framework used in portfolio analysis, to obtain useful, direct and comparative insights into batting performance, particularly in the context of the one-day game. Within this two-dimensional framework we develop a selection criterion for batsmen, which combines the average and the strike rate. As an example of the application, we apply this criterion to the batting performances of the 2003 World Cup. We demonstrate the strong and consistent performances of the Australian and Indian batsmen as well as provide a ranking of batting prowess for the top 20 run scorers in the tournament. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
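The two axes of the framework described in the abstract are straightforward to compute from standard scorecard figures. A small sketch follows; the per-ball dismissal probability is a simplifying assumption for illustration, not the paper's exact estimator, and the figures used are hypothetical:

```python
def batting_profile(runs, balls, dismissals):
    """Two-dimensional batting summary for limited overs cricket:
    strike rate (runs per 100 balls) on one axis, a crude per-ball
    probability of getting out on the other, plus the classical
    batting average (runs per dismissal) for reference."""
    strike_rate = 100.0 * runs / balls
    p_out = dismissals / balls          # naive chance of dismissal on any ball
    average = runs / dismissals if dismissals else float("inf")
    return strike_rate, p_out, average

# hypothetical career figures, not from the 2003 World Cup data
sr, p_out, avg = batting_profile(runs=1400, balls=1000, dismissals=20)
print(sr, p_out, avg)  # 140.0 0.02 70.0
```

Plotting (p_out, strike_rate) pairs for a squad reproduces the paper's risk-return picture: an attractive batsman sits toward high strike rate and low dismissal probability.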
16. Applications of Transportation Theory to Statistical Problems.
- Author
- Causey, Beverley D., Cox, Lawrence H., and Ernst, Lawrence R.
- Subjects
- *STATISTICAL sampling, *TRANSPORTATION, *STATISTICS, *PROBABILITY theory, *DISTRIBUTION (Probability theory), *MATHEMATICAL statistics
- Abstract
The two-dimensional controlled selection problem and the problem of maximizing the overlap of old and new primary sampling units after restratification and change of selection probabilities have been studied for several decades but have never been completely solved until now. Using transportation theory, complete solutions are obtained here for these and other problems. The solution to the controlled selection problem is based on a specific transportation model that was originally developed, in a previous paper by Cox and Ernst (1982), to solve completely the controlled rounding problem, namely the problem of optimally rounding real-valued entries in a two-way tabular array to adjacent integer values in a manner that preserves the tabular (additive) structure of the array. This model is also applied to other statistical problems, such as raking and statistical disclosure for frequency count tabulations and microdata. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
17. Rejoinder.
- Author
- Pratt, J.W. and Schlaifer, Robert
- Subjects
- *MATHEMATICAL models, *CAUSATION (Philosophy), *STRUCTURAL frame models, *ESTIMATION theory, *MATHEMATICAL statistics, *MATHEMATICAL variables, *RESEARCH, *OBSERVATION (Psychology), *STATISTICAL correlation, *PROBABILITY theory, *STATISTICS
- Abstract
The article presents the authors' reply to comments on their paper on nonexperimental data and the estimation of structural effects. The authors thank researcher John Geweke for his instructive comments and in reply want only to make sure that the relation between Geweke's third paragraph and their article is correctly understood. Their language did suggest, however, that meaningful statements about correlation with excluded variables would usually involve a very large sufficient set, and in that case they find Geweke's example instructive. In response to the comment of researcher A.P. Dawid, the authors are glad that he likes parts of their paper, but they disagree with Dawid's comment on observational studies. They feel that selection of observations is not the important new feature in observational studies. The one and only important new feature is that the "treatments" or "factors" are not randomized. Selection of observations is not even new, because missing values can make estimates of causal effects inconsistent even when treatments are randomized.
- Published
- 1984
- Full Text
- View/download PDF
18. The Most Powerful Invariant Test of Normal Versus Cauchy With Applications to Stable Alternatives.
- Author
- Franck, Wallace E.
- Subjects
- *INVARIANTS (Mathematics), *CAUCHY integrals, *SYMMETRIC functions, *DISTRIBUTION (Probability theory), *STATISTICS, *PROBABILITY theory
- Abstract
This paper gives a derivation of the most powerful scale and location invariant test of normal versus Cauchy. A simulation study of this test shows that in testing normality versus symmetric stable alternatives it comes closer to being uniformly more powerful than any of five tests previously studied. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
19. A New Maximum Likelihood Algorithm for Piecewise Regression.
- Author
- Tishler, Asher and Zang, Israel
- Subjects
- *REGRESSION analysis, *ALGORITHMS, *ESTIMATION theory, *MATHEMATICAL variables, *PROBABILITY theory, *ANALYSIS of variance, *STATISTICS
- Abstract
This paper presents a piecewise regression method for continuous models containing max or min operators, or both. This method does not require knowledge of the zone in which a shift in regimes occurs. Moreover, it allows the application of analytical derivatives to maximize the likelihood function, which greatly simplifies the estimation of the model. The method proposed exhibits fast convergence and can be used for an arbitrary number of regimes and variables. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
20. On Partitioning a Sample With Binary-Type Questions in Lieu of Collecting Observations.
- Author
- Arrow, Kenneth J., Pesotchinsky, Leon, and Sobel, Milton
- Subjects
- *PROBABILITY theory, *STATISTICAL sampling, *DISTRIBUTION (Probability theory), *RANDOM variables, *MATHEMATICAL variables, *STATISTICS
- Abstract
The problem is to search for the t largest observations in a random sample of size n by asking binary-type questions of the people (or items) in the sample without collecting any exact data whatever. The unordered and ordered cases are both considered; in the latter case the complete ranking is of special interest. Two different criteria of optimality are considered: (a) to minimize the expected number of questions required and (b) to maximize the probability of terminating the search in at most r questions for specified r. Optimal procedures are found and compared; in some sense the solutions for these two criteria are close to each other. The analysis is nonparametric in the sense that it holds for any underlying sampling distribution, but the actual optimal procedures depend on the specified distribution. In the above, we count (the cost of) a question as one regardless of the number of people addressed; other models in which the cost depends on the number of people are considered only briefly here and will be treated in a separate paper. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
21. Probabilities Based on Circumstantial Evidence.
- Author
- Finney, D. J.
- Subjects
- *GUILT (Psychology), *PROBABILITY theory, *STATISTICS, *TRAIT intercorrelations, *EVIDENCE, *MATHEMATICAL statistics, *DISTRIBUTION (Probability theory)
- Abstract
A recent paper by Smith and Charrow (1975) shows how easily misunderstandings about statistical independence can confuse assessment of the extent to which a combination of unusual traits reinforces evidence against a suspect. A simple example illustrates the importance to the calculation of a probability of exact specification of the way in which data and ancillary information are obtained. [ABSTRACT FROM AUTHOR]
- Published
- 1977
- Full Text
- View/download PDF
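The independence pitfall the abstract describes is easy to demonstrate numerically: multiplying marginal trait frequencies misstates the joint frequency whenever the traits are correlated. The trait frequencies below are entirely made up for illustration:

```python
# Hypothetical population of 1000 with two heavily overlapping traits,
# e.g. trait A = owns a distinctive car, trait B = drives it at night.
population = 1000
trait_a = 50     # people with trait A
trait_b = 60     # people with trait B
both = 40        # people with both traits (the traits are correlated)

naive = (trait_a / population) * (trait_b / population)  # independence assumed
actual = both / population                               # observed joint frequency

print(f"naive product: {naive:.4f}")   # 0.0030
print(f"actual joint:  {actual:.4f}")  # 0.0400
```

Here the independence assumption understates the joint frequency by more than a factor of ten, which is exactly the kind of error that can distort the evidential weight of a combination of unusual traits.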
22. A Purchase Incidence Model with Inverse Gaussian Interpurchase Times.
- Author
- Banerjee, A. K. and Bhattacharyya, G. K.
- Subjects
- *INCIDENCE functions, *ARITHMETIC functions, *GAUSSIAN distribution, *HOUSEHOLDS, *INVERSE Gaussian distribution, *CONSUMER attitudes, *STATISTICS, *DISTRIBUTION (Probability theory), *PROBABILITY theory
- Abstract
This paper deals with a new purchase incidence model where the interpurchase time of an individual household is described by a two-parameter inverse Gaussian distribution, and the population heterogeneity is modeled by the natural conjugate family which has truncated t and modified gamma marginals. The model, more flexible than the exponential and one-parameter gamma models previously used for purchase incidence, is applied to consumer panel data on toothpaste purchases and an excellent fit is obtained. A more logical approach is employed for the assessment of consumer heterogeneity than the methods in existing literature. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
23. Rejoinder.
- Author
- Landwehr, James M., Pregibon, Daryl, and Shoemaker, Anne C.
- Subjects
- *BINARY number system, *DISTRIBUTION (Probability theory), *LOGISTIC distribution (Probability), *STANDARD deviations, *PROBABILITY theory, *STATISTICS, *METHODOLOGY, *MATHEMATICS
- Abstract
The paper presents the authors' reply to various discussions of their article on logistic models. The authors thank the discussants for their careful and thought-provoking examination of the proposed methods. The discussants' agreement that the authors have tried to tackle an important problem, and their interest in the authors' work, is gratifying. In response, the authors first consider the alternative approach suggested by researcher Donald B. Rubin; then some of the specific comments regarding the authors' three methods are discussed. Rubin proposes an intriguing alternative methodology for assessing a fitted logistic model based on a theorem concerned with conditional probabilities. The authors agree with Rubin that the crucial question for new methods is their performance with data. Statisticians S.E. Fienberg and G.D. Gong comment on the fact that the distribution of the deviance contributions depends on probability and that the order in which the authors cumulate local deviances can affect interpretations. The authors recognize this feature of the display but have no simple way of ameliorating it. However, the authors recommend basing interpretations on points to the right as much as possible without including large clusters.
- Published
- 1984
- Full Text
- View/download PDF
24. Invariant Sequential Estimation of the Exponential Mean Life From the Ordered Observations.
- Author
- Takada, Yoshikazu
- Subjects
- *ORDER statistics, *SEQUENTIAL analysis, *ESTIMATION theory, *NONPARAMETRIC statistics, *DISTRIBUTION (Probability theory), *STATISTICAL sampling, *STATISTICS, *LEAST squares, *PROBABILITY theory, *SAMPLE size (Statistics)
- Abstract
This paper derives the best invariant sequential estimate of exponential mean life for the situation where observations become available sequentially. If a cost function proportional to the observed time and relative squared error loss are adopted, it turns out that the best invariant rule is a fixed sample size rule. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
25. Rejoinder.
- Author
- Basu, D.
- Subjects
- *MATHEMATICAL statistics, *STOCHASTIC processes, *RANDOM variables, *STATISTICS, *PROBABILITY theory, *ANALYSIS of variance, *RANDOM sets
- Abstract
The article presents the author's response to comments on his paper discussing randomization analysis of experimental data developed by researcher R.A. Fisher. The author challenges Fisher's nonparametric test. The logic of the test is the same as that of the paired-comparison test. When the author said that the Fisher randomization test is not logically viable, he only meant that the logic of the test procedure is not viable. The author has no objection to prerandomization as such. Indeed, he thinks that the scientist ought to prerandomize and have the physical act of randomization properly witnessed and notarized. It is the randomization-test argument that rests on an infinitesimal slice of the sample space by holding fixed everything but the design outcome. Bayesian statistical decision theory recommends holding the data fixed and speculating about the still-variable parameters. In the end, the author suggests that when it comes to changing one's opinion on a scientific paradigm, the mind of a stubborn scientist, or for that matter the minds of a whole community of trained scientists, certainly does not follow any logic.
- Published
- 1980
- Full Text
- View/download PDF
26. Comment.
- Author
- Lane, David A.
- Subjects
- *MATHEMATICAL statistics, *STOCHASTIC processes, *RANDOM variables, *STATISTICS, *PROBABILITY theory, *ANALYSIS of variance, *RANDOM sets
- Abstract
The article presents the author's comments on researcher D. Basu's paper related to randomization analysis of experimental data. The scientist's experimental results contain evidence bearing on the superiority of the improved diet. He asks the statistician to evaluate this evidence. The statistician answers by computing a significance probability by means of researcher R.A. Fisher's randomization test. One of the scientist's goals is to obtain public confirmation for the superiority of the improved diet. If this can be accomplished with a minimum of fuss and assumption, preliminary to the detailed, model-based analysis, and without contradicting explicitly or implicitly the results of that analysis, so much the better. Here, the randomization test may be of use. The way in which "chance variability" enters into scientist's experiment should be carefully explicated by the scientist when he constructs the statistical model he will use for analyzing his results. The randomization test ignores this model and substitutes an alternative relation between chance and the experiment, based on a frequency distribution induced by the physical act that assigns animals to diets.
- Published
- 1980
- Full Text
- View/download PDF
27. Matrix Completion With Covariate Information.
- Author
- Mao, Xiaojun, Chen, Song Xi, and Wong, Raymond K. W.
- Subjects
- DATA corruption, STATISTICS, COVARIANCE matrices, LOW-rank matrices, PROBABILITY theory, NUMERICAL analysis
- Abstract
This article investigates the problem of matrix completion from corrupted data when additional covariates are available. Despite being seldom considered in the matrix completion literature, these covariates often provide valuable information for completing the unobserved entries of the high-dimensional target matrix A0. Given a covariate matrix X with its rows representing the row covariates of A0, we consider a column-space-decomposition model A0 = Xβ0 + B0, where β0 is a coefficient matrix and B0 is a low-rank matrix orthogonal to X in terms of column space. This model facilitates a clear separation between the interpretable covariate effects (Xβ0) and the flexible hidden factor effects (B0). Besides, our work allows the probabilities of observation to depend on the covariate matrix, and hence a missing-at-random mechanism is permitted. We propose a novel penalized estimator for A0 by utilizing both Frobenius-norm and nuclear-norm regularizations with an efficient and scalable algorithm. Asymptotic convergence rates of the proposed estimators are studied. The empirical performance of the proposed methodology is illustrated via both numerical experiments and a real data application. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
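The column-space-decomposition model in the abstract above (A0 = Xβ0 + B0, with a nuclear-norm penalty on the low-rank part) lends itself to a soft-impute-style iteration. The sketch below is an illustrative reconstruction under simplifying assumptions, not the authors' actual algorithm; the function names and the fixed-point imputation scheme are invented for the example.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def fit_covariate_completion(A_obs, mask, X, tau=1.0, n_iter=200):
    """Estimate A0 = X @ beta + B with B low-rank and orthogonal to col(X).

    A_obs : matrix with arbitrary values where mask == 0
    mask  : 1 where an entry is observed, 0 where it is missing
    """
    n, m = A_obs.shape
    P = X @ np.linalg.pinv(X)          # projector onto the column space of X
    Q = np.eye(n) - P                  # projector onto its orthogonal complement
    A_hat = mask * A_obs               # initialize missing entries at 0
    for _ in range(n_iter):
        XB = P @ A_hat                 # interpretable covariate-effect part
        B = svt(Q @ A_hat, tau)        # low-rank hidden-factor part
        A_fit = XB + B
        A_hat = mask * A_obs + (1 - mask) * A_fit  # re-impute missing entries
    beta = np.linalg.pinv(X) @ A_hat
    return beta, B, XB + B
```

With `tau = 0` and a fully observed matrix, the fit reproduces the input exactly (the two projections sum to the identity), and the recovered B is orthogonal to X by construction; the nuclear-norm shrinkage only matters once entries are missing.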
28. Can municipality-based post-discharge follow-up visits including a general practitioner reduce early readmission among the fragile elderly (65+ years old)? A randomized controlled trial.
- Author
-
Thygesen, Lau Caspar, Fokdal, Sara, Gjørup, Thomas, Taylor, Rod S., and Zwisler, Ann-Dorthe
- Subjects
CHI-squared test ,COMMUNITY health nursing ,FRAIL elderly ,PATIENT aftercare ,HEALTH outcome assessment ,PHYSICIAN-patient relations ,GENERAL practitioners ,PROBABILITY theory ,RESEARCH funding ,STATISTICS ,T-test (Statistics) ,DATA analysis ,PATIENT readmissions ,DATA analysis software ,DESCRIPTIVE statistics ,KAPLAN-Meier estimator ,LOG-rank test ,OLD age - Abstract
Objective. To evaluate how municipality-based post-discharge follow-up visits including a general practitioner and municipal nurse affect early readmission among high-risk older people discharged from a hospital department of internal medicine. Design and setting. Centrally randomized single-centre pragmatic controlled trial comparing intervention and usual care with investigator-blinded outcome assessment. Intervention. The intervention was home visits with a general practitioner and municipal nurse within seven days of discharge focusing on medication, rehabilitation plan, functional level, and need for further health care initiatives. The visit was concluded by planning one or two further visits. Controls received standard health care services. Patients. People aged 65+ years discharged from Holbæk University Hospital, Denmark, in 2012 who were considered at high risk of readmission. Main outcome measures. The primary outcome was readmission within 30 days. Secondary outcomes at 30 and 180 days included readmission, primary health care, and municipal services. Outcomes were register-based and analysis used the intention-to-treat principle. Results. A total of 270 and 261 patients were randomized to intervention and control groups, respectively. The groups were similar in baseline characteristics. In all, 149 planned discharge follow-up visits were carried out (55%). Within 30 days, 24% of the intervention group and 23% of the control group were readmitted (p = 0.93). No significant differences were found for any other secondary outcomes except that the intervention group received more municipal nursing services. Conclusion. This municipality-based follow-up intervention was only feasible in half the planned visits. The intervention as delivered had no effect on readmission or subsequent use of primary or secondary health care services. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
29. Impact of an ABCDE team triage process combined with public guidance on the division of work in an emergency department.
- Author
-
Kantonen, Jarmo, Lloyd, Robert, Mattila, Juho, Kauppila, Timo, and Menezes, Ricardo
- Subjects
ANALYSIS of variance ,CHI-squared test ,CONSUMER attitudes ,EMERGENCY medical services ,HEALTH attitudes ,HELP-seeking behavior ,EVALUATION of medical care ,MULTIVARIATE analysis ,PROBABILITY theory ,RESEARCH funding ,SOCIAL psychology ,STATISTICS ,T-test (Statistics) ,MEDICAL triage ,DATA analysis ,DATA analysis software - Abstract
Objective. To study the effects of applying an emergency department (ED) triage system, combined with extensive publicity in local media about the “right” use of emergency services, on the division of work between ED nurses and general practitioners (GPs). Design. An observational and quasi-experimental study based on before–after comparisons. Setting. Implementation of the ABCDE triage system in a Finnish combined ED where secondary care is adjacent, and in a traditional primary care ED where secondary care is located elsewhere. Subjects. GPs and nurses from two different primary care EDs. Main outcome measures. Numbers of monthly visits to different professional groups before and after intervention in the studied primary care EDs and numbers of monthly visits to doctors in the local secondary care ED. Results. The introduction of the triage process temporarily increased the number of independent consultations and patient record entries by ED nurses in both types of studied primary care EDs and reduced the number of patient visits to a doctor compared with previous years, but had no effect on doctor visits in the adjacent secondary care ED. No further decrease in the number of nurse or GP visits was observed by inhibiting the entrance of non-urgent patients. Conclusion. The ABCDE triage system combined with public guidance may reduce non-urgent patient visits to doctors in different kinds of primary care EDs without increasing visits in the secondary care ED. However, the additional work to implement the ABCDE system is mainly directed to nurses, which may pose a challenge for staffing. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
30. Comment.
- Author
-
Schervish, Mark J.
- Subjects
- *
CALIBRATION , *FORECASTING , *RANKING (Statistics) , *MATHEMATICAL sequences , *PROBABILITY theory , *INDUCTION (Logic) , *UNCERTAINTY , *STATISTICS - Abstract
In this article, the author presents his views on a paper by David Oakes on statistical analysis. Oakes has shown that for each forecasting system, there exists a sequence of outcomes for which the system will not be calibrated. This implies that no forecasting system can be self-calibrating in the sense of A. P. Dawid. The situation with regard to the existence of calibrated forecasters is much more serious than either Dawid or Oakes makes it out to be. There are enough noncalibrable sequences so that noncalibrability is as much the normal state of affairs as calibrability. According to Oakes's theorem, the cardinality of the set of noncalibrable sequences is that of the continuum. There are at least two ways to interpret the preceding theorem and Oakes's result. The first is to claim that it is impossible to guarantee the existence of "empirically valid" (i.e., calibrated) sequential forecasts. A more reasonable interpretation is simply that these results are more evidence that (long run) calibration is not a useful measure of the goodness of forecasts.
- Published
- 1985
- Full Text
- View/download PDF
31. Comment.
- Author
-
Smith, Adrian F. M.
- Subjects
- *
BAYESIAN analysis , *CANCER , *CANCER risk factors , *METHODOLOGY , *STATISTICS , *PROBABILITY theory , *BIOMETRY , *BIOMATHEMATICS - Abstract
The article comments on the paper "Bayes Methods for Combining the Results of Cancer Studies in Humans and Other Species," by William H. DuMouchel and Jeffrey E. Harris. Anyone who engages in the process of extracting and summarizing information from complex data sets by means of statistical models is painfully aware of the dilemma involved in choosing the level of detail to be incorporated in the model. The author has no specific comments to make on the biological background to the problem or on the choice of the data sets. He would, however, like to discuss a couple of issues related to the hierarchical Bayesian methodology. There seem to be some contexts in which the assumption of a particular form for the second stage of a hierarchy is unobjectionable and commands widespread assent. There are other contexts, however, where a critical evaluation of the particular form of assumption adopted is of central importance in the analysis. In such situations, the author thinks it is important to search, in the appropriate stage of the hierarchy, for "elaborated" forms of representation.
- Published
- 1983
- Full Text
- View/download PDF
32. Comment: M.E. Thompson.
- Author
-
Thompson, M. E.
- Subjects
- *
STATISTICAL sampling , *STATISTICIANS , *ESTIMATION theory , *STATISTICS , *PROBABILITY theory , *DISTRIBUTION (Probability theory) - Abstract
Over the past few years, the work of R.M. Royall and his colleagues has had a significant influence on the way in which many statisticians think about sampling inference. This paper in particular is likely to become the focal point for a great deal of discussion, especially because of its highly suggestive empirical results. The author presents comments on what other authors have said about the role of randomization in estimation. The Randomization Principle as formulated by other authors states that "the process of random sampling creates the only probability distribution on which reliable statistical inferences can be based."
- Published
- 1981
- Full Text
- View/download PDF
33. Comment: Carl Erik Sarndal.
- Author
-
Särndal, Carl Erik
- Subjects
- *
STATISTICAL sampling , *PREDICTION theory , *STATISTICIANS , *DISTRIBUTION (Probability theory) , *PROBABILITY theory , *STATISTICS - Abstract
This paper adds to a growing list of important contributions likely to evoke fruitful discussion of opposing principles in the foundations of survey sampling. The contrast here is between the balanced sampling prediction theory (PT, for short) point of view of R.M. Royall and W.G. Cumberland and the design-based view of traditional randomization theory (RT). Royall and Cumberland's incisive arguments seem to derive extra force from RT statisticians' hesitation to appeal to models. The author's comments intend to show how one could interpret some of the results of Royall and Cumberland if one were more willing than they to use the randomization distribution, and at the same time less unwilling to appeal to models than the typical RT statistician.
- Published
- 1981
- Full Text
- View/download PDF
34. Comment.
- Author
-
Lindley, D.V.
- Subjects
- *
MATHEMATICAL statistics , *STOCHASTIC processes , *RANDOM variables , *STATISTICS , *PROBABILITY theory , *ANALYSIS of variance , *RANDOM sets - Abstract
The article presents the author's comments on researcher D. Basu's paper on the randomization analysis of experimental data. Randomization is widely recognized as a basic principle of statistical experimentation. Researcher R.A. Fisher's classic text "The Design of Experiments" is the principal source of inspiration for a mode of data interpretation that may be characterized as randomization analysis of data. Basu uses Fisher's design of experiments for a mode of data interpretation that is usually characterized as randomization analysis. Two variants of the randomization test are discussed. Basu concludes that the Fisher randomization test is not logically viable. According to the author, the randomization test does not pass the test of common sense. The author questions why researchers have to randomize the data. Randomization is defined as the incorporation of a fully controlled bit of randomness in the process of data generation. Basu's definition of randomization is in terms of randomness. Prerandomization has a place in coherent analysis; Basu shows that postrandomization is incoherent.
- Published
- 1980
- Full Text
- View/download PDF
35. Comment.
- Author
-
Tukey, John W.
- Subjects
- *
EXTRAPOLATION , *STATISTICIANS , *DATA analysis , *STATISTICAL smoothing , *DISTRIBUTION (Probability theory) , *GRAPHIC methods , *PROBABILITY theory , *STATISTICS - Abstract
This article presents the views of the author on a paper by Emanuel Parzen which introduced medians and hinges to reproducing kernel Hilbert spaces. The author believes that "exploratory data analysis" is an attitude, a state of flexibility, a willingness to look for those things that are believed not to be there, as well as for those that might be there. Except for emphasis on graphs, its tools are secondary to its purposes. The author agrees that there is a great need for the whole statistician in one body, for the analyst of data as well as for the probability model maker and the inferential theorist/practitioner. One cannot, however, make a whole man by claiming that one can subsume one important class of mental activity under another class whose style and purposes are not only different but incompatible. To be "whole statisticians" or to be "whole statistician-data analysts" means to be single persons who can take quite different views and adopt quite different styles as the needs change. The practitioner/theorist of statistical inference was once supposed to think like the probability modeler, but the rise of robust/resistant techniques and robust/resistant theory presages the day when both practitioners and theorists of statistical inference will speak and act as if the truth were, hopefully, somewhere "not too far" from their models.
- Published
- 1979
- Full Text
- View/download PDF
36. Comment.
- Author
-
Fraser, D. A. S.
- Subjects
- *
NUMERICAL analysis , *BAYES' theorem , *INCONSISTENCY (Logic) , *MATHEMATICS , *STATISTICS , *PROBABILITY theory , *MATHEMATICAL analysis - Abstract
Statistical researcher M. Stone continues in his diligent search for flaws in the Bayesian theory of statistical inference. In the present paper he considers two examples in which Bayesian strong inconsistency can occur with a flat prior. Only relatively few statisticians believe that a single theory of inference can be the answer for all of statistics. The committed Bayesians, however, are prominent among such believers. There are, of course, substantial arguments against the Bayesian theory as a single theory of inference; a summary of these arguments may be found in D.A.S. Fraser. These arguments are not primarily concerned with the Bayesian method as a tool in the statistician's tool bag; rather, they are concerned with the catholic claim for Bayesian theory and with the meaning and consequences of the theory in scientific contexts. The two examples considered by Stone are somewhat remote from Bayesian theory in a scientific context; indeed, the theory will not rise or fall on the basis of the examples. Nevertheless, the examples are extremely interesting; they are concerned with implications of the theory, and they are presented with the attractive flair that we expect from Stone.
- Published
- 1976
- Full Text
- View/download PDF
37. Bound and collapse Bayesian reject inference for credit scoring.
- Author
-
Chen, G. G. and Åstebro, T.
- Subjects
CREDIT scoring systems ,BAYESIAN analysis ,PROBABILITY theory ,PERFORMANCE evaluation ,ECONOMETRIC models ,ELECTRONIC data processing - Abstract
Reject inference is a method for inferring how a rejected credit applicant would have behaved had credit been granted. Credit-quality data on rejected applicants are usually missing not at random (MNAR). To infer credit-quality data that are MNAR, we propose a flexible method to generate the probability of missingness within a model-based bound and collapse Bayesian technique. We tested the method's performance relative to traditional reject-inference methods using real data. Results show that our method improves the classification power of credit scoring models under MNAR conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
38. Prevalence, awareness, treatment, and control of hypertension: Rule of thirds in the Skaraborg project.
- Author
-
Lindblad, Ulf, Ek, Johanna, Eckner, Jenny, Larsson, Charlotte A., Shan, Guangliang, and Råstam, Lennart
- Subjects
CARDIOVASCULAR disease diagnosis ,HYPERTENSION ,HYPERTENSION & psychology ,THERAPEUTICS ,HYPERTENSION epidemiology ,CONFIDENCE intervals ,EPIDEMIOLOGY ,HEALTH outcome assessment ,PROBABILITY theory ,RESEARCH funding ,STATISTICAL sampling ,SELF-evaluation ,STATISTICS ,SURVEYS ,LOGISTIC regression analysis ,DATA analysis ,TREATMENT effectiveness ,CROSS-sectional method ,DATA analysis software ,DESCRIPTIVE statistics ,EVALUATION - Abstract
Objective. To describe the prevalence, awareness, and control of hypertension in a Swedish population during the early 2000s to address implications for care and prevention. Design. A cross-sectional population survey. Setting. Primary health care in Skaraborg, a rural part of western Sweden. Subjects. Participants (n = 2816) in a population survey of a random sample of men and women between 30 and 75 years of age in the municipalities of Vara (81% participation rate) and Skövde (70%), in western Sweden during 2001-2005. Main outcome measures. Anthropometric measures, blood pressure, leisure-time physical activity, current smoking, fasting glucose, and cholesterol. Hypertension was defined as ongoing treatment for hypertension, or three consecutive blood pressure readings ≥ 140 mmHg systolic and/or ≥ 90 mmHg diastolic. Hypertension was considered controlled when the blood pressure was < 140/90 mmHg (both). Results. The prevalence of hypertension was 20% in both men and women with a steep increase by age. Among hypertensive subjects, 33% were unaware, 36% aware but uncontrolled, and 31% aware and controlled, with no statistically significant differences between men and women. Patients with diabetes had a higher awareness (87% vs. 64%, p < 0.001), but the same control rate (56% vs. 44%, p = 0.133), when compared with those without diabetes. Conclusion. A large proportion of subjects with hypertension are still unaware of their condition, or aware but not controlled. It is important to emphasize population-based prevention to reduce the prevalence of hypertension, to perform screening to increase awareness, and to improve implementation of expert guidelines in clinical practice to improve control. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
39. Factors associated with undiagnosed obstructive sleep apnoea in hypertensive primary care patients.
- Author
-
Broström, Anders, Sunnergren, Ola, Årestedt, Kristofer, Johansson, Peter, Ulander, Martin, Riegel, Barbara, and Svanborg, Eva
- Subjects
ANALYSIS of variance ,CHI-squared test ,CONFIDENCE intervals ,STATISTICAL correlation ,MENTAL depression ,DIABETES ,EPIDEMIOLOGY ,HYPERTENSION ,PROBABILITY theory ,PSYCHOLOGICAL tests ,QUESTIONNAIRES ,RESEARCH funding ,SCALES (Weighing instruments) ,SLEEP apnea syndromes ,SOUND recordings ,STATISTICS ,T-test (Statistics) ,U-statistics ,LOGISTIC regression analysis ,DATA analysis ,CROSS-sectional method ,DATA analysis software ,DESCRIPTIVE statistics - Abstract
Objective. In hypertensive primary care patients below 65 years of age, (i) to describe the occurrence of undiagnosed obstructive sleep apnoea (OSA), and (ii) to identify the determinants of moderate/severe OSA. Design. Cross-sectional. Setting. Four primary care health centres in Sweden. Patients. 411 consecutive patients (52% women), mean age 57.9 years (SD 5.9 years), with diagnosed and treated hypertension (BP >140/90). Main outcome measures. Occurrence of OSA as measured by the apnoea hypopnoea index (AHI). Results. Mild (AHI 5-14.9/h) and moderate/severe (AHI > 15/h) OSA were seen among 29% and 30% of the patients, respectively. Comparing those without OSA with those with mild or moderate/severe OSA, no differences were found in blood pressure, pharmacological treatment (anti-hypertensive, anti-depressive, and hypnotics), sleep, insomnia symptoms, daytime sleepiness, or depressive symptoms. Obesity (BMI > 30 kg/m2) was seen in 30% and 68% of the patients with mild and moderate/severe OSA, respectively. Male gender, BMI > 30 kg/m2, snoring, witnessed apnoeas, and sleep duration >8 hours were determinants of obstructive sleep apnoea. Conclusion. Previously undiagnosed OSA is common among patients with hypertension in primary care. Obesity, snoring, witnessed apnoeas, long sleep duration, and male gender were the best predictors of OSA, even in the absence of daytime sleepiness and depressive symptoms. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
40. Aerobic performance and body composition changes during military service.
- Author
-
Mikkola, Ilona, Keinänen-Kiukaanniemi, Sirkka, Jokelainen, Jari, Peitso, Ari, Härkönen, Pirjo, Timonen, Markku, and Ikäheimo, Tiina
- Subjects
BODY composition ,AEROBIC exercises ,ANTHROPOMETRY ,STATISTICAL correlation ,PROBABILITY theory ,REGRESSION analysis ,RESEARCH funding ,RUNNING ,MILITARY personnel ,STATISTICS ,T-test (Statistics) ,DATA analysis ,DATA analysis software ,DESCRIPTIVE statistics - Abstract
Objective. To examine the association between aerobic performance and body composition changes by body mass index (BMI). Design. 6-12 months' follow-up during military service. Setting. Conscripts entering military service in 2005 in Sodankylä Jaeger Brigade (Finland). Subjects. 945 men (mean age 19 years, SD 1 year). Main outcome measures. Height, weight, waist circumference, BMI, and aerobic performance (Cooper test) were recorded. Body composition was measured by bioelectrical impedance analysis (BIA). The measured parameters were fat mass (FM), fat free mass (FFM), and visceral fat area (VFA). All the measurements were performed at the beginning and end of service. Results. On average, the military training period improved the running distance by 6.8% (169 m, p < 0.001) and the improvements were more pronounced in overweight (223.9 m/9.5%, p < 0.001) and obese (273.3 m/13.6%, p < 0.001) conscripts. A strong inverse correlation between aerobic performance and body composition changes was observed, especially for weight (r = -0.305, p < 0.001) and VFA (r = -0.465, p < 0.001). A significant association between aerobic performance and changes in weight (p < 0.001), waist circumference (p < 0.001), FM (p < 0.001), and VFA (p < 0.001) by BMI was detected. The associated decrease in weight, waist circumference, FM, and VFA with improved aerobic performance was more substantial between overweight and obese compared with normal-weight subjects. Conclusions. Favourable changes in body composition are associated with improved aerobic performance during a physical training period such as military service. These findings are pronounced among overweight and obese men and can be applied at the population level in reducing obesity and co-morbidities. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
41. The perceptions of a GP's work among fifth-year medical students in Helsinki, Finland.
- Author
-
Kuikka, L., Nevalainen, M.K., Sjöberg, L., Salokekkilä, P., Karppinen, H., Torppa, M., Liira, H., Eriksson, J., and Pitkälä, K.H.
- Subjects
GENERAL practitioners ,CHI-squared test ,JOB evaluation ,PSYCHOLOGY of medical students ,PHYSICIAN-patient relations ,PROBABILITY theory ,QUESTIONNAIRES ,STATISTICS ,SURVEYS ,DATA analysis ,CROSS-sectional method ,DATA analysis software ,DESCRIPTIVE statistics - Abstract
Objective. To explore medical students' potential interest in family medicine in the future and their perceptions of a GP's work. Design. A cross-sectional survey in 2008-2010. Setting and subjects. Fifth-year medical students prior to their main course in General Practice at the University of Helsinki. Main outcome measures. The students' opinions regarding the GP's work and their perceptions of the main aims of a GP's work. Results. 309/359 medical students (mean age 25.7 years, 64% females) responded to the survey. Among the students, 76% considered the most attractive feature in the GP's work to be that it is versatile and challenging. The least attractive features included: too hasty, pressing work, too lonely work, and too many non-medical problems. The majority of the students considered the main aim of a GP's work as to identify serious diseases/disorders in order to refer those patients for specialized care (82%). Treatment of chronic diseases is an important responsibility of a GP's work according to 63% of the students. Only 38% considered health promotion to be an important aim. Conclusions. Medical students may have perceptions of the GP's work that influence their career choices to specialize in other fields. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
42. No physician gender difference in prescription of sick-leave certification: A retrospective study of the Skaraborg Primary Care Database.
- Author
-
Starzmann, Karin, Hjerpe, Per, Dalemo, Sofia, Björkelund, Cecilia, and Boström, Kristina Bengtsson
- Subjects
SICK leave ,ANALYSIS of variance ,PHYSICIANS ,PROBABILITY theory ,REGRESSION analysis ,RESEARCH funding ,SEX distribution ,STATISTICS ,DATA analysis ,RETROSPECTIVE studies ,DATA analysis software ,DESCRIPTIVE statistics - Abstract
Objective. The primary objective was to investigate how physicians' gender and level of experience affects the rate and length of sick-leave certificate prescription. The secondary objective was to study the physicians' gender and professional experience in relation to the diagnoses on the certificates. Design. Retrospective, cross-sectional study of computerized medical records from 24 health care centres in 2005. Setting. Primary care in Sweden. Subjects. Primary care physicians (n = 589) and patients (n = 88 780) aged 18-64 years. Main outcome measures. Rate and duration of sick leave certified by different categories of physicians and for different diagnoses and gender of patients. Results. Sick leave was certified in 9.0% (musculoskeletal (3%) and psychiatric (2.3%) diagnoses were most common) of all contacts and the mean duration was 32.2 days. Overall there was no difference between male and female physicians in the sick-leave certification prescription rate (9.1% vs. 9.0%) or duration of sick leave (32.1 vs. 32.6 days). The duration of sick leave was associated with the physician's level of professional experience in general practice (GPs (Distriktläkare) 37, GP trainees (ST-läkare) 26, interns (AT-läkare) 20 and locums (vikarier) 19 days, p < 0.001). Conclusion. Contrary to earlier studies we found no difference in sick-leave certification prescription rate and length between male and female physicians. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
43. The feasibility of antibiotic dosing four times per day: A prospective observational study in primary health care.
- Author
-
Eide, Torunn Bjerve, Hippe, Veslemøy Cathrine, and Brekke, Mette
- Subjects
CLINICAL drug trials ,PRIMARY health care ,ANTIBIOTICS ,CONFIDENCE intervals ,FISHER exact test ,INTERVIEWING ,LONGITUDINAL method ,SCIENTIFIC observation ,PATIENT compliance ,PROBABILITY theory ,RESEARCH funding ,STATISTICS ,T-test (Statistics) ,DATA analysis ,DATA analysis software ,DESCRIPTIVE statistics - Abstract
Objective. To investigate whether the increase in the number of doses of penicillin V from three times daily to four times daily for common infections, as recommended in the new Norwegian guidelines for antibiotic treatment in primary health care, would lead to reduced patient compliance. Design. Prospective observational study. Setting and subjects. Six general practitioners included all patients who were prescribed systemic antibiotic treatment regardless of indication during a 10-month period. A total of 270 patients provided data for the study. Methods. Telephone interview focusing on omitted antibiotic doses. Results. Some 17% of patients had poor compliance, defined as failing to take 5% or more of total antibiotic doses. Neither level of poor compliance nor number of omitted doses differed significantly when the number of daily doses increased from three to four. There were significantly fewer omitted doses in the group given two doses per day when compared with three doses (p = 0.04) and four doses per day (p = 0.01). Conclusion. We found no difference in compliance or omitted doses between antibiotic regimens of three and four doses per day. The new Norwegian guidelines for antibiotic treatment in primary health care appear feasible with regard to patient compliance. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
44. Career choice and place of graduation among physicians in Norway.
- Author
-
Wesnes, Stian Langeland, Aasland, Olaf, and Baerheim, Anders
- Subjects
VOCATIONAL guidance ,CONFIDENCE intervals ,EPIDEMIOLOGY ,GENERAL practitioners ,PROBABILITY theory ,STATISTICS ,LOGISTIC regression analysis ,DATA analysis ,RESEARCH personnel ,HOSPITALISTS ,DATA analysis software ,DESCRIPTIVE statistics ,PSYCHOLOGY - Abstract
Objective. To investigate to what extent a physician's place of graduation is associated with the physician choosing a career as a general practitioner (GP), and identify factors in the curriculum that could predict a general practice career. Design. Cross-sectional study based on the membership database of the Norwegian Medical Association. Setting. Physicians working in Norway who graduated from four domestic medical schools, five other countries, and three groups of countries. Physicians were categorized according to their main professional activity as GPs, hospital physicians, and researchers. Subjects. A total of 2836 medical physicians who were working in Norway during 2010 and graduated from medical school between 2002 and 2005. Main outcome measures. Percentage and odds ratio for subjects working as a GP in Norway during 2010. Descriptive data for pre-graduate general practice education in Norwegian medical schools were also analysed. Results. Compared with the University of Oslo, there was a significantly higher proportion of GPs among physicians who had graduated from Denmark (OR 2.9, 95% CI 1.9-4.5), Poland (OR 2.0, 95% CI 1.4-2.9), Sweden (OR 1.8, 95% CI 1.0-3.1), and Trondheim (Norway) (OR 1.5, 95% CI 1.1-2.0). Across the four Norwegian medical schools, there were significant associations between choosing a general practice career and the sum of pre-graduate educational hours regarding general practice, general practice preceptorship, and the number of GP teachers. Conclusion. The physician's place of graduation appears to be associated with career choice. The universities' total contribution in pre-graduate general practice education may be associated with future GP career choice. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
45. Estimation for Conformance Proportions in a Normal Variance Components Model.
- Author
-
Hsin-I Lee and Chen-Tuo Liao
- Subjects
MULTILEVEL models ,PROBABILITY theory ,STATISTICAL bootstrapping ,STATISTICS ,MATHEMATICS - Abstract
Two approaches are proposed for constructing one- and two-sided confidence limits for conformance proportions in a normal variance components model. One approach is based on the concepts of a generalized pivotal quantity, and the other is developed using the modified large-sample method for estimating linear combinations of variance components. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the empirical coverage probabilities for both methods are close to the claimed values and hence their performance is judged to be satisfactory. Nonetheless, the modified large-sample-based method might be recommended in practical applications due to its slightly better performance and computational ease. The framework established in this article can be applied to conformance proportion questions arising in arbitrary balanced mixed linear-model situations. The methods are illustrated using three real datasets. Finally, a bootstrap calibration approach is adopted to have empirical coverage probabilities sufficiently close to the nominal level for the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
46. The medico-professional quality of GP consultations assessed by analysing patient records.
- Author
-
Kuusela, Maisa, Koivisto, Anna-Liisa, Vainiomäki, Paula, Vahlberg, Tero, and Rautava, Päivi
- Subjects
HYPERTENSION ,THERAPEUTICS ,RESPIRATORY infection treatment ,MEDICAL referrals ,ANALYSIS of variance ,MEDICAL quality control ,ELECTRONIC health records ,GENERAL practitioners ,PROBABILITY theory ,STATISTICS - Abstract
Objective. To assess the medico-professional quality of consultations by analysing textual data from patient records. Design. Qualitative analysis of textual data. Setting. Four primary health care centres using electronic patient records (EPR) in Finland. Subjects. EPR and paired questionnaires of 175 consultations filled in by GPs and their patients independently. Main outcome measures. Medico-professional quality of consultations, quality of care of acute respiratory infections, and hypertension. Results. The medico-professional quality of the consultations was quite good. However, 9% of the records could not be assessed at all because of missing or poor documentation, and 9% were assessed as poor. The treatment of acute respiratory infections and hypertension was not in line with current care guidelines. Smoking habits or other health behaviour or lifestyle factors were seldom recorded. Conclusions. The medico-professional quality of the consultations was quite good. Quality improvement is needed in the treatment of acute respiratory infections and hypertension. User-friendly EPR systems would improve the content of patient records. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
47. Consumer acceptance of yellow, provitamin A-biofortified maize in KwaZulu-Natal.
- Author
-
Pillay, K., Derera, J., Siwela, M., and Veldman, F. J.
- Subjects
ENRICHED foods ,ANALYSIS of variance ,CHI-squared test ,COLOR ,CUSTOMER satisfaction ,CORN ,FOCUS groups ,PROBABILITY theory ,REGRESSION analysis ,RESEARCH funding ,STATISTICAL sampling ,SCALES (Weighing instruments) ,STATISTICS ,TASTE ,VITAMIN A ,LOGISTIC regression analysis ,SAMPLE size (Statistics) ,DATA analysis ,MULTIPLE regression analysis ,CROSS-sectional method ,DATA analysis software - Abstract
Objectives: To assess the acceptance of popular maize food products (phutu, thin porridge and samp), prepared with yellow, provitamin A-biofortified maize varieties, in 212 subjects between the ages of three and 55 years, from rural KwaZulu-Natal. Design: A cross-sectional study. Method: Preschool and primary school subjects were randomly selected from two primary schools, and secondary school subjects from one secondary school, while adult subjects constituted a convenience sample. Pre- and primary school children completed a paired preference test. Secondary school and adult subjects completed a five-point facial hedonic and a preference ranking test. Focus group discussions were conducted with adult subjects. Results: Preschool children preferred yellow maize to white maize food products: phutu (81% vs. 19%; p-value < 0.001), thin porridge (75% vs. 25%; p-value < 0.001) and samp (73% vs. 27%; p-value < 0.001). There was no statistically significant difference in preference for white and yellow maize by primary school children. Secondary school and adult subjects preferred white maize to yellow maize. Focus group discussions confirmed the preference for white maize by the adults. Conclusion: The study findings suggest that yellow, provitamin A-biofortified maize has the potential to succeed as a new strategy of dealing with the serious problem of vitamin A deficiency, especially among children of preschool age. However, in older groups, this strategy is unlikely to be successful unless other strategies are implemented, including intensive nutrition education programmes on the nutritional benefits of the maize, targeting the market price at which yellow maize is sold, increasing its availability in local grocery stores, and improving its sensory properties through breeding. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
48. Sun protection advice mediated by the general practitioner: An effective way to achieve long-term change of behaviour and attitudes related to sun exposure?
- Author
-
Falk, Magnus and Magnusson, Henrik
- Subjects
SUNBURN ,ANALYSIS of covariance ,ANALYSIS of variance ,CONCEPTUAL structures ,HEALTH behavior ,PATIENT psychology ,PHYSICIAN-patient relations ,PROBABILITY theory ,RESEARCH funding ,STATISTICS ,T-test (Statistics) ,DATA analysis ,DATA analysis software ,PREVENTION - Abstract
Objective. To investigate, in primary health care, differentiated levels of prevention directed at skin cancer, and how the propensity of the patients to change sun habits/sun protection behaviour and attitudes towards sunbathing were affected, three years after intervention. Additionally, the impact of the performance of a phototest as a complementary tool for prevention was evaluated. Design. Randomized controlled study. Setting and subjects. During three weeks in February, all patients ≥ 18 years of age registering at a primary health care centre in southern Sweden were asked to fill in a questionnaire mapping sun exposure habits, attitudes towards sunbathing, and readiness to increase sun protection according to the Transtheoretical Model of Behaviour Change (TTM) (n = 316). They were randomized into three intervention groups, for which sun protection advice was given, in Group 1 by means of a letter, and in Groups 2 and 3 orally during a personal GP consultation. Group 3 also underwent a phototest to demonstrate individual skin UV sensitivity. Main outcome measures. Change of sun habits/sun protection behaviour and attitudes, measured by five-point Likert scale scores and readiness to increase sun protection according to the TTM, three years after intervention, by a repeated questionnaire. Results. In the letter group, almost no improvement in sun protection occurred. In the two doctor's consultation groups, significantly increased sun protection was demonstrated for several items, but the difference compared with the letter group was significant only for sunscreen use. The performance of a phototest did not appear to reinforce the impact of intervention. Conclusion. Sun protection advice, mediated personally by the GP during a doctor's consultation, can lead to improvement in sun protection over a prolonged time period. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
49. The relationship between HbA1c level, symptoms and self-rated health in type 2 diabetic patients.
- Author
-
Nielsen, Anni B. S., Gannik, Dorte, Siersma, Volkert, and de Fine Olivarius, Niels
- Subjects
ANALYSIS of variance ,CHI-squared test ,PEOPLE with diabetes ,GLYCOSYLATED hemoglobin ,HEALTH status indicators ,TYPE 2 diabetes ,PROBABILITY theory ,RESEARCH funding ,SELF-evaluation ,STATISTICS ,DATA analysis ,CROSS-sectional method ,DATA analysis software ,SYMPTOMS ,PSYCHOLOGY - Abstract
Objective. Improving glycaemic control is generally supposed to reduce symptoms experienced by type 2 diabetic patients, but the relationships between glycated haemoglobin (HbA1c), diabetes-related symptoms, and self-rated health (SRH) are unclarified. This study explored the relationships between these aspects of diabetes control. Design. A cross-sectional study one year after diagnosis of type 2 diabetes. Subjects. A population-based sample of 606 type 2 diabetic patients, median age 65.6 years at diagnosis, regularly reviewed in primary care. Main outcome measures. The relationships between HbA1c, diabetes-related symptoms, and SRH. Results. The patients' median HbA1c was 7.8% (reference interval at the time of the study: 5.4-7.4%). 270 (45.2%) reported diabetes-related symptoms within the past 14 days. SRH was associated with symptom score (γ = 0.30, p < 0.001) and HbA1c (γ = 0.17, p = 0.038) after correction for covariates. The relation between HbA1c and symptom score was explained by SRH together with other confounders, e.g. hypertension (γ = 0.02, p = 0.40). The relation between the symptom fatigue and SRH was not explained by symptom score and significantly modified the direct association between symptom score and SRH. Conclusions. Symptom relief may not occur even when the HbA1c level is at its lowest average level in the natural history of diabetes, and symptoms and SRH are closely linked. Monitoring symptoms in the clinical encounter to extend information on disease severity, as measured e.g. by HbA1c, may help general practitioners and patients to understand the possible impact of treatments and of disease manifestations in order to obtain optimum disease control. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
50. Effect of "motivational interviewing" on quality of care measures in screen detected type 2 diabetes patients: A one-year follow-up of an RCT, ADDITION Denmark.
- Author
-
Rubak, Sune, Sandbæk, Annelli, Lauritzen, Torsten, Borch-Johnsen, Knut, and Christensen, Bo
- Subjects
TYPE 2 diabetes treatment ,ANALYSIS of variance ,COMPUTER software ,GLYCOSYLATED hemoglobin ,MEDICAL quality control ,PATIENT compliance ,GENERAL practitioners ,PROBABILITY theory ,RESEARCH funding ,SELF-evaluation ,STATISTICS ,DATA analysis ,RANDOMIZED controlled trials ,MOTIVATIONAL interviewing ,INTER-observer reliability ,PSYCHOLOGY - Abstract
Objective. "Motivational interviewing" (MI) has been shown to be broadly applicable in the management of behavioural problems and diseases. Only a few studies have evaluated the effect of MI on type 2 diabetes treatment, and none has explored the effect of MI on target-driven intensive treatment. Methods. Patients were cluster-randomized by GPs, who were randomized to training in MI or not. Both groups received training in target-driven intensive treatment of type 2 diabetes. The intervention consisted of a 1½-day residential course in MI with half-day follow-up twice during the first year. Blood samples, case record forms, national registry files, and validated questionnaires from patients were obtained. Results. After one year, significantly improved metabolic status measured by HbA1c (p < 0.01) was achieved in both groups. There was no difference between groups. Medication adherence was close to 100% within both treatment groups. GPs in the intervention group did not use more than an average of 1.7 out of three possible MI consultations. Conclusion. The study found no effect of MI on metabolic status or on medication adherence in people with screen detected type 2 diabetes. However, there was a significantly improved metabolic status and excellent medication adherence after one year within both study groups. An explanation may be that GPs in the control group may have taken up core elements of MI, and that GPs trained in MI used fewer than two out of three planned MI consultations. The five-year follow-up of this study will reveal whether MI has an effect over a longer period. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF