6,922 results
Search Results
2. Tukey's Paper after 40 Years
- Author
-
Mallows, Colin
- Published
- 2006
- Full Text
- View/download PDF
3. Electronic Versus Paper and Pencil Survey Administration Mode Comparison: 2019 Youth Risk Behavior Survey*.
- Author
-
Bryan, Leah N., Smith‐Grant, Jennifer, Brener, Nancy, Kilmer, Greta, Lo, Annie, Queen, Barbara, and Underwood, J. Michael
- Subjects
RISK-taking behavior, CLUSTER sampling, STATISTICS, SUBSTANCE abuse, SAMPLE size (Statistics), TIME, HUMAN sexuality, NUTRITION, VIOLENCE, MENTAL health, SURVEYS, PHYSICAL activity, PSYCHOLOGY of high school students, QUESTIONNAIRES, SEX customs, ALCOHOL drinking, DESCRIPTIVE statistics, STATISTICAL sampling, DATA analysis software, PROBABILITY theory, ADOLESCENCE - Abstract
BACKGROUND: Since the inception of the Youth Risk Behavior Surveillance System in 1991, all surveys have been conducted in schools, using paper and pencil instruments (PAPI). For the 2019 YRBSS, sites were offered the opportunity to conduct their surveys using electronic data collection. This study aimed to determine whether differences in select metrics existed between students who completed the survey electronically versus using PAPI. METHODS: Thirty risk behaviors were examined in this study. Data completeness, response rates and bivariate comparisons of risk behavior prevalence between administration modes were examined. RESULTS: Twenty‐nine of 30 questions examined had more complete responses among students using electronic surveys. Small differences were found for student and school response rates between modes. Twenty‐five of 30 adolescent risk behaviors showed no mode effect. CONCLUSIONS: Seven of 44 states and DC participated electronically. Because survey data were more complete; school and student response rates were consistent; and minor differences existed in risk behaviors between modes, the acceptability of collecting data electronically was demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. The effects of uncoated paper on skin moisture and transepidermal water loss in bedridden patients.
- Author
-
Shin, Yong Soon, Kim, Hyun Jung, Moon, Nam-kyung, Ahn, Young Hee, and Kim, Kyoung-ok
- Subjects
BEDDING, BODY temperature, BOWEL & bladder training, COCCYX, COMA, COMPARATIVE studies, CONTACT dermatitis, STATISTICAL correlation, CROSSOVER trials, DIAPERS, LENGTH of stay in hospitals, HOSPITAL wards, HUMIDITY, LONGITUDINAL method, MEDICAL supplies, NURSING practice, NURSING specialties, PATIENT positioning, PROBABILITY theory, STATISTICAL sampling, SKIN care, SKIN physiology, STATISTICAL hypothesis testing, STATISTICS, TEMPERATURE, URINARY incontinence, WATER-electrolyte balance (Physiology), STATISTICAL power analysis, DATA analysis, BODY mass index, RANDOMIZED controlled trials, SEVERITY of illness index, DESCRIPTIVE statistics, HOSPITAL nursing staff, PREVENTION - Abstract
Aims and objectives. The aims of this study were to measure skin moisture and transepidermal water loss after application of uncoated paper and to compare skin moisture and transepidermal water loss after use of uncoated paper and disposable underpads. Study design. The study was a cross-over, prospective, open-labeled, randomized trial. Sample and setting. Bedridden patients aged ≥18 years at a medical center in Korea were included. Treatment order was randomly assigned using block randomization, with a block size of 4 and an assignment rate of one-by-one. Methods. Skin moisture was measured using a Corneometer 825 and transepidermal water loss was measured using a Tewameter 300. Results. Skin moisture after application of uncoated paper was significantly lower than that observed after application of a disposable underpad (mean 40·6, SD 13·1 vs. mean 64·6, SD 23·7; p < 0·001). Transepidermal water loss was also lower after use of uncoated paper (mean 11·1, SD 5·7 g/m2/hour) than after application of a disposable underpad (mean 23·2, SD 11·1 g/m2/hour; p < 0·001). There were no statistical between-group differences in room temperature, relative humidity, or body temperature. Conclusion. We found that uncoated paper was helpful in avoiding excessive moisture without adverse effects. Relevance to clinical practice. As indicated by the results of this study, uncoated paper can be applied to bedridden patients who require incontinence care. Nurses may consider using uncoated paper as one of the nursing methods for moisture control in the routine care of bedridden patients. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
5. Incongruence between test statistics and P values in medical papers.
- Author
-
García-Berthou, Emili and Alcaraz, Carles
- Subjects
MEDICAL statistics, STATISTICS, MEDICAL literature, UNIFORM distribution (Probability theory), PROBABILITY theory - Abstract
Background: Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409-412 of Nature (2001) and a random sample of 63 results from volumes 322-323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ respectively during 2001 were incongruent, probably mostly due to rounding, transcription, or type-setting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change one or more orders of magnitude. The frequencies of the last digit of statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that quality of papers should be more controlled and valued. [ABSTRACT FROM AUTHOR]
- Published
- 2004
6. METHODOLOGICAL ISSUES IN NURSING RESEARCH Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.
- Author
-
Williamson, Graham R.
- Subjects
NURSING research, STATISTICS, STATISTICAL sampling, PROBABILITY theory - Abstract
Williamson G.R. (2003) Journal of Advanced Nursing 44(3), 278–288 Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by the author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies.
If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
7. Detection of HBV-DNA in Dried Bloodstains on Filter Paper by Nested Polymerase Chain Reaction.
- Author
-
Jie Zhang, Ling Zhang, Manshu Song, and Wei Wang
- Subjects
BLOOD collection, HEPATITIS B, POLYMERASE chain reaction methodology, ENZYME-linked immunosorbent assay, RESEARCH methodology, PROBABILITY theory, RESEARCH funding, STATISTICS, DATA analysis, DIAGNOSIS - Abstract
Objective: To describe the technical performance of nested PCR for identifying hepatitis B viral (HBV) DNA. Methods: Hepatitis B viral DNA was extracted from a dried bloodstain on filter paper by a Chelex-100 method. Then the DNA fragment was amplified by nested polymerase chain reaction (PCR). The sensitivity and specificity of this method were also analyzed. The positive rate of the nested PCR-based method was compared with that of the enzyme-linked immunosorbent assay (ELISA) method. Results: The lowest detection limit of the test was 5 copies of HBV DNA per μL. McNemar's test showed that the difference between the positive rates of the 2 methods was not statistically significant (P=0.289, P>0.05). Hepatitis B viral test results showed good concordance between the 2 methods (kappa=0.727). Conclusion: Only a very small amount of the dried blood sample is required for detection, which overcomes a shortcoming of the conventional ELISA method, which requires a relatively large amount of fresh blood. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
8. Probability and Statistics: A Tale of Two Worlds?
- Author
-
Genest, Christian
- Published
- 1999
9. Statistical significance testing and p-values: Defending the indefensible? A discussion paper and position statement.
- Author
-
Griffiths, Peter and Needleman, Jack
- Subjects
AUTHORS, CONFIDENCE intervals, DISCUSSION, INTERNATIONAL agencies, LANGUAGE & languages, NURSING research, HEALTH outcome assessment, PROBABILITY theory, REPORT writing, SERIAL publications, STATISTICS, DATA analysis, PSYCHOLOGY of Research personnel, RESEARCH evaluation - Abstract
Much statistical teaching and many research reports focus on the 'null hypothesis significance test'. Yet the correct meaning and interpretation of statistical significance tests is elusive. Misinterpretations are both common and persistent, leading many to question whether significance tests should be used at all. While most take aim at the arbitrary declaration of p < 0.05 as a threshold for determining 'significance', others extend the critique to suggest the 'p-value' should be dispensed with entirely. P-values and significance tests are still widely used as if they give a measure of the size and importance of relationships, even though this misunderstanding has been observed and discussed for many years. We argue that p-values and significance tests are intrinsically misleading. Point estimates of relationships and confidence intervals give direct information about the effect and the uncertainty of the estimate without recourse to interpreting how a particular p-value might have arisen or indeed referring to them at all. In this paper we briefly outline some of the problems with significance testing, offer a number of examples selected from a recent issue of the International Journal of Nursing Studies and discuss some proposed responses to these problems. We conclude by offering some guidance to authors reporting statistical tests in journals and present a position statement that has been adopted by the International Journal of Nursing Studies to guide its authors in reporting the results of statistical analyses.
While stopping short of calling for an outright ban on reporting p-values and significance tests we urge authors (and journals) to place more emphasis on measures of effect and estimates of precision/uncertainty and, following the position of the American Statistical Association, emphasise that authors (and readers) should avoid using 0.05 or any other cut off for a p-value as the basis for a decision about the meaningfulness/importance of an effect. If point estimates and confidence intervals are used, then the p-value may be redundant and can be omitted from reports. When authors talk about 'significance' they need to be explicit when referring to statistical significance and we recommend authors adopt the language of 'importance' when talking about effect sizes to avoid any confusion. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
10. Hardness amplification within NP
- Author
-
O'Donnell, Ryan
- Subjects
ARITHMETIC mean, PAPER, STATISTICS, PROBABILITY theory - Abstract
In this paper we investigate the following question: if NP is slightly hard on average, is it very hard on average? We give a positive answer: if there is a function in NP which is infinitely often balanced and (1 − 1/poly(n))-hard for circuits of polynomial size, then there is a function in NP which is infinitely often (1/2 + n^(−1/2+ε))-hard for circuits of polynomial size. Our proof technique is to generalize the Yao XOR Lemma, allowing us to characterize nearly tightly the hardness of a composite function g(f(x_1), …, f(x_n)) in terms of: (i) the original hardness of f, and (ii) the expected bias of the function g when subjected to random restrictions. The computational result we prove essentially matches an information-theoretic bound. [Copyright © Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
11. A Conversation with Milton Sobel
- Author
-
Mukhopadhyay, Nitis and Sobel, Milton
- Published
- 2000
12. The scaling relationship between citation-based performance and coauthorship patterns in natural sciences.
- Author
-
Ronda‐Pupo, Guillermo Armando and Katz, J. Sylvan
- Subjects
AUTHORSHIP, INTERPROFESSIONAL relations, PROBABILITY theory, SCIENCE, STATISTICS, BIBLIOGRAPHIC databases, DATA analysis, CITATION analysis - Abstract
The aim of this paper is to extend our knowledge about the power-law relationship between citation-based performance and coauthorship patterns in papers in the natural sciences. We analyzed 829,924 articles that received 16,490,346 citations. The number of articles published through coauthorship accounts for 89%. The citation-based performance and coauthorship patterns exhibit a power-law correlation with a scaling exponent of 1.20 ± 0.07. Citations to a subfield's research articles tended to increase 2^1.20, or about 2.30, times each time it doubled the number of coauthored papers. The scaling exponent for the power-law relationship for single-authored papers was 0.85 ± 0.11. The citations to a subfield's single-authored research articles increased 2^0.85, or about 1.80, times each time the research area doubled the number of single-authored papers. The Matthew Effect is stronger for coauthored papers than for single-authored ones. In fact, with a scaling exponent < 1.0, the impact of single-authored papers exhibits a cumulative disadvantage or inverse Matthew Effect. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
13. COMPLETE CONVERGENCE AND COMPLETE MOMENT CONVERGENCE FOR WEIGHTED SUMS OF m–EXTENDED NEGATIVELY DEPENDENT RANDOM VARIABLES.
- Author
-
XIANG HUANG and YONGFENG WU
- Subjects
STOCHASTIC convergence, RANDOM variables, PROBABILITY theory, MATHEMATICS theorems, STATISTICS - Abstract
The authors study the complete convergence and complete moment convergence for weighted sums of m-extended negatively dependent (m-END) random variables. The results obtained in this paper extend and improve the corresponding results of Wu, Zhai and Peng [Y. F. Wu, M. Q. Zhai and J. Y. Peng, On the complete convergence for weighted sums of extended negatively dependent random variables, Journal of Mathematical Inequalities, 13 (1) (2019), 251–260] and Zarei and Jabbari [H. Zarei and H. Jabbari, Complete convergence of weighted sums under negative dependence, Statistical Papers, 52 (2011), 413–418]. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Preface.
- Author
-
Okrasa, Włodzimierz and Rozkrut, Dominik
- Subjects
STATISTICS, PROBABILITY theory
- Published
- 2023
15. RESPONSE TO THE DISCUSSION OF THE PAPER 'CURRENT RESEARCH DIRECTIONS IN THE DEVELOPMENT OF EXPERT SYSTEMS BASED ON BELIEF NETWORKS'.
- Author
-
Cooper, Gregory F.
- Subjects
EXPERT systems, COMPUTER systems, PROBABILITY theory, STATISTICS, TESTING, STOCHASTIC processes - Abstract
This article presents the author's reply to the discussion of the paper "Current Research Directions in the Development of Expert Systems Based on Belief Networks." The author says that he is grateful to the discussants for their many insightful comments on his paper. He says he wishes to emphasize that belief network research is a young field of study, and published empirical results are still few. Additional testing of expert systems based on belief networks is clearly needed and is taking place.
- Published
- 1989
- Full Text
- View/download PDF
16. Application of simple Bayesian statistics to a sample database for source correspondence
- Author
-
Kumar, Rajesh
- Subjects
BAYESIAN analysis, DATABASES, PAPER, BRAND name products, PROBABILITY theory, PAPER testing, STATISTICS, WRITING materials & instruments - Abstract
Abstract: Bayesian statistics was applied to a small sample database of the tensile properties of five different brands of writing paper which were perceptibly similar. The measured parameters in the database were found to overlap for the five brands. This posed a limitation to the application of the classical approach of "match" or "no match". It was found that, using Bayesian statistics for source correspondence, a mere 2–3 measurements corresponding to a particular brand raised the probabilities associated with that brand to as high as 72%, while eliminating a couple of brands. [Copyright © Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
17. Modified XLindley distribution: Properties, estimation, and applications.
- Author
-
Gemeay, Ahmed M., Beghriche, Abdelfateh, Sapkota, Laxmi Prasad, Zeghdoudi, Halim, Makumi, Nicholas, Bakr, M. E., and Balogun, Oluwafemi Samson
- Subjects
PROBABILITY theory, STOCHASTIC orders, INFERENTIAL statistics, DATA visualization, STATISTICS, HAZARD function (Statistics), GOODNESS-of-fit tests - Abstract
This article aims to introduce the inverse new XLindley distribution, a further extension of the new XLindley distribution. The article explores various properties of the proposed model, such as the quantile function, stochastic orders, entropies, fuzzy reliability, moments, and stress–strength estimation. The paper also compares different methods of estimating the parameters of the proposed model and evaluates their performance using a simulation study. Moreover, the paper demonstrates the usefulness of the proposed model by applying it to two real datasets. The article shows that the proposed model fits the data better than seven existing models based on model selection criteria, goodness-of-fit test statistics, and graphical visualizations. The paper concludes that the new model can be a valuable tool for modeling and analyzing hazard functions or survival data in various fields and contributing to probability theory and statistical inferences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Group-Based Trajectory Modeling (GBTM) of Citations in Scholarly Literature: Dynamic Qualities of "Transient" and "Sticky Knowledge Claims".
- Author
-
Baumgartner, Susanne E. and Leydesdorff, Loet
- Subjects
CHI-squared test, CONFIDENCE intervals, INFORMATION science, INTELLECT, PROBABILITY theory, SCIENCE, MULTIPLE regression analysis, CITATION analysis, DATA analysis software, STATISTICAL models - Abstract
Group-based trajectory modeling (GBTM) is applied to the citation curves of articles in six journals and to all citable items in a single field of science (virology, 24 journals) to distinguish among the developmental trajectories in subpopulations. Can citation patterns of highly-cited papers be distinguished in an early phase as "fast-breaking" papers? Can "late bloomers" or "sleeping beauties" be identified? Most interesting, we find differences between "sticky knowledge claims" that continue to be cited more than 10 years after publication and "transient knowledge claims" that show a decay pattern after reaching a peak within a few years. Only papers following the trajectory of a "sticky knowledge claim" can be expected to have a sustained impact. These findings raise questions about indicators of "excellence" that use aggregated citation rates after 2 or 3 years (e.g., impact factors). Because aggregated citation curves can also be composites of the two patterns, fifth-order polynomials (with four bending points) are needed to capture citation curves precisely. For the journals under study, the most frequently cited groups were furthermore much smaller than 10%. Although GBTM has proved a useful method for investigating differences among citation trajectories, the methodology does not allow us to define a percentage of highly cited papers inductively across different fields and journals. Using multinomial logistic regression, we conclude that predictor variables such as journal names, number of authors, etc., do not affect the stickiness of knowledge claims in terms of citations but only the levels of aggregated citations (which are field-specific). [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
19. Data Analyses When Sample Sizes Are Small: Modern Advances for Dealing With Outliers, Skewed Distributions, and Heteroscedasticity.
- Author
-
Wilcox, Rand, Peterson, Travis J., and McNitt-Gray, Jill L.
- Subjects
CONCEPTUAL structures, SCIENTIFIC observation, PROBABILITY theory, STATISTICS, SAMPLE size (Statistics), DATA analysis, MEASUREMENT errors - Abstract
The paper reviews advances and insights relevant to comparing groups when the sample sizes are small. There are conditions under which conventional, routinely used techniques are satisfactory. But major insights regarding outliers, skewed distributions, and unequal variances (heteroscedasticity) make it clear that under general conditions they provide poor control over the type I error probability and can have relatively poor power. In practical terms, important differences among groups can be missed and poorly characterized. Many new and improved methods have been derived that are aimed at dealing with the shortcomings of classic methods. To provide a conceptual basis for understanding the practical importance of modern methods, the paper reviews some modern insights related to why methods based on means can perform poorly. Then some strategies for dealing with nonnormal distributions and unequal variances are described. For brevity, the focus is on comparing 2 independent groups or 2 dependent groups based on the usual difference scores. The paper concludes with comments on issues to consider when choosing from among the methods reviewed in the paper. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. BELIEF NETWORKS AND ADVERSE DRUG REACTIONS: DISCUSSION OF PAPERS.
- Author
-
Spiegelhalter, David J.
- Subjects
PROBABILITY theory, STATISTICS, DISTRIBUTION (Probability theory), STOCHASTIC processes - Abstract
This article presents a comment on the papers by statisticians G.F. Cooper and D.A. Lane. The author says that these two excellent papers reflect the increasing interest in structured probabilistic analysis, in which the complex relationships between variables are developed for a particular problem, instead of using standard statistical models for analysis. Cooper's paper discusses the increasing interest in probabilistic analysis in expert systems. Lane's analysis emphasizes coherence in using evidence on a particular case, and provides admirable detail concerning the source of the subjective probability assessments.
- Published
- 1989
- Full Text
- View/download PDF
21. CONNECTING STATISTICS, PROBABILITY, ALGEBRA AND DISCRETE MATHEMATICS.
- Author
-
LÓPEZ-BLÁZQUEZ, F., NÚÑEZ-VALDÉS, J., RECACHA, S., and VILLAR-LIÑÁN, M. T.
- Subjects
DISCRETE mathematics, ALGEBRA, MARKOV processes, DIRECTED graphs, STATISTICS, PROBABILITY theory - Abstract
In this paper, we connect four different branches of Mathematics: Statistics, Probability, Algebra and Discrete Mathematics with the objective of introducing new results on Markov chains and evolution algebras obtained by following a relatively new line of research, already dealt with by several authors. It consists of the use of certain directed graphs to facilitate the study of Markov chains and evolution algebras, as well as to use each of the three objects to make easier the study of the other two. The results obtained can be useful, in turn, to link different scientific disciplines, such as Physics, Engineering and Mathematics, in which evolution algebras are considered very interesting tools. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. Integrating probability and big non-probability samples data to produce Official Statistics.
- Author
-
Golini, Natalia and Righi, Paolo
- Subjects
NONPROBABILITY sampling, PARAMETERS (Statistics), STATISTICS, SAMPLE size (Statistics), PROBABILITY theory, DATA integration - Abstract
This paper introduces the pseudo-calibration estimators, a novel method that integrates a large non-probability sample with a probability sample, assuming both samples contain relevant information for estimating the population parameter. The proposed estimators share a structural similarity with the adjusted projection estimators and the difference estimators, but they adopt a different inferential approach and informative setup. The pseudo-calibration estimators can be employed when the target variable is observed in the probability sample and, in the non-probability sample, it is observed correctly, observed with error, or predicted. This paper also introduces an original application of the jackknife-type method for variance estimation. A simulation study shows that the proposed estimators are robust and efficient compared to the regression data integration estimators that use the same informative setup. Finally, a further evaluation using real data is carried out. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Review of calculation of conditional power, predictive power and probability of success in clinical trials with continuous, binary and time-to-event endpoints.
- Author
-
Kundu, Madan G., Samanta, Sandipan, and Mondal, Shoubhik
- Subjects
STATISTICS, CLINICAL trials, MOBILE apps, TIME, TREATMENT effectiveness, DESCRIPTIVE statistics, DATA analysis, PROGRAMMING languages, PROBABILITY theory - Abstract
Assessment of study success using conditional power (CP), the predictive power of success (PPoS) and probability of success (PoS) is becoming increasingly common for resource optimization and adaption of trials in clinical investigation. Determination of these measures is often a non-trivial mathematical task. Further, the terminologies used across the literature are not consistent, and there is no consolidated presentation on this. We have made a structured presentation on these measures for both trial success and clinical success: first, we have summarized the expressions of CP, PPoS and PoS in a general setting and subsequently, expressions for these measures are obtained for continuous, binary, and time-to-event endpoints in single-arm and two-arm trial settings. Many of these expressions have been previously published; however, some of the expressions are very new, including the expressions for testing the median of a time-to-event endpoint in a single-arm trial. We have also shown that 1 / (no. of events) consistently underestimates the variance of log(median), and an alternative expression for the variance was derived. Examples are given along with a comparison of CP and PPoS. Expressions presented in this paper are implemented in the LongCART package in R and in the R Shiny app https://ppos.shinyapps.io/public/. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. The use of time-to-event methods in dental research: a comparison based on five dental journals over a 11-year period.
- Author
-
Vähänikkilä, Hannu, Miettunen, Jouko, Tjäderhane, Leo, Larmas, Markku, and Nieminen, Pentti
- Subjects
BIBLIOMETRICS, CHI-squared test, DENTAL research, PROBABILITY theory, SERIAL publications, STATISTICS, SURVEYS, SURVIVAL analysis (Biometry), PROPORTIONAL hazards models, DATA analysis software - Abstract
Vähänikkilä H, Miettunen J, Tjäderhane L, Larmas M, Nieminen P. The use of time-to-event methods in dental research: a comparison based on five dental journals over a 11-year period. Community Dent Oral Epidemiol 2012; 40 (Suppl. 1): 36-42. © 2012 John Wiley & Sons A/S Abstract - Objectives: Time-to-event methods are used in multivariate data analysis to describe the relationship between patient variables and the timing of an outcome event. The aims of this study were to evaluate the reporting of statistical techniques and results in dental research papers with special reference to time-to-event (TTE) methods and to create guidelines for the appropriate reporting of these methods. Methods: All the original research reports published in five dental journals in 1996, 2001, 2005, 2006, and 2007 were reviewed. The evaluation covered 1,985 articles that were based on the systematic collection and statistical analysis of research data. Differences between TTE approaches and others were assessed in terms of the justification for the number of cases, description of procedures, statistical references, software used, and statistical figures and tables provided. Results: Fifty-six papers (2.8% of the total) used time-to-event methods, the frequency of which increased slightly from 1996 to 2007 (P = 0.061). Statistical procedures were described more extensively in the papers that used TTE methods. Reporting of the statistical methodology in papers using other methods was in general inadequate. Conclusions: TTE methods are underused in dental research. Authors could well take heed of these results when designing their research, so as to make more use of such methods and to present the results in a manner that is in line with the policy and presentation of the leading dental journals. Authors could also improve their statistical reporting with the help of the guidelines presented here. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
25. The Transform-Transformer Approach: Unveiling the Odd Transmuted Rayleigh-X Family of Distributions.
- Author
-
Abdullahi, J., Gulumbe, S. U., Usman, U., and Garba, A. I.
- Subjects
DISTRIBUTION (Probability theory), RAYLEIGH model, CHARACTERISTIC functions, PROBABILITY theory, STATISTICS - Abstract
The paper presents a novel class (family) of statistical distributions termed Odd Transmuted Rayleigh-X (OTR-X), created through a transform-transformer (T-X) approach. The CDF and PDF of the OTR-X family were derived. The statistical literature reviewed earlier highlighted that almost all generalized distributions (in which one or more parameters were added) performed well and represented data better than their counterparts with fewer parameters. This motivated us to develop a new family that is capable of producing new distributions. The paper also presents clear mathematical formulas for several characteristics of the OTR-X family, such as the ordinary moments, the moment generating function, the quantile function, and the reliability function. To estimate the corresponding parameters of the OTR-X family, the technique of maximum likelihood is used. A new sub-model, the Odd Transmuted Rayleigh Inversed Exponential Distribution (OTRIED), was generated from the OTR-X class and its performance compared with the Transmuted Inversed Exponential Distribution (TIED), the Exponential Inversed Exponential Distribution (EIED), and the Inversed Exponential Distribution using two different datasets. The results show that the proposed distribution outperformed its competitors on two different real-world datasets. Furthermore, the proposed distribution can be applied to any skewed dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Statistics and Value Assessment of Individual Features in Facial Images.
- Author
-
黎智辉, 侯欣雨, and 谢兰迟
- Subjects
RACE ,IMAGE registration ,STATISTICS ,FORENSIC sciences ,PROBABILITY theory - Abstract
Copyright of Forensic Science & Technology is the property of Institute of Forensic Science, Ministry of Public Security and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2023
- Full Text
- View/download PDF
27. Self-measurement of upper extremity volume in women post-breast cancer: reliability and validity study.
- Author
-
Mori, Tal, Lustman, Alexander, and Katz-Leurer, Michal
- Subjects
DIAGNOSIS of edema ,LYMPHEDEMA diagnosis ,DISEASE relapse ,ANTHROPOMETRY ,BREAST tumors ,STATISTICAL correlation ,RESEARCH methodology ,PROBABILITY theory ,SELF-evaluation ,STATISTICS ,T-test (Statistics) ,SAMPLE size (Statistics) ,STATISTICAL reliability ,INTER-observer reliability ,CROSS-sectional method ,RESEARCH methodology evaluation ,ARM circumference ,DESCRIPTIVE statistics ,DISEASE complications ,DIAGNOSIS - Abstract
Background: Secondary lymphedema is a chronic swelling of the upper limb that may occur after treatment for breast cancer. During the acute phase, intensive treatment with a therapist is provided, while during the maintenance phase the patient needs to detect any re-swelling by self-examination. Objective: To assess the test-retest reliability and the concurrent validity of self-measurement of upper limb volume among women post-breast cancer. Design: A cross-sectional study of 17 women post-breast cancer who had previously undergone a period of intensive unilateral upper limb lymphedema treatment. Methods: On day 1 and day 10 at the clinic, the physiotherapist measured the volume of the upper limbs with the water displacement method (i.e. the 'gold standard' for volume measurement) as well as with the more common method of plastic tape. The participants performed self-measurement twice with the paper tape under the supervision of a physiotherapist in the clinic. After a week the participants performed self-measurement at home with the paper tape. Results: The intra-class correlation measures indicated excellent values for the self-measured tape measurements on the operated side (0.97-0.99) as well as on the opposite arm (0.96-0.99). The self-measurement revealed a moderate association with the criterion measure, the water displacement (rp = 0.59-0.68, p < 0.05), and strong concurrent validity with therapist tape measurements (rp = 0.88-0.95, p < 0.05). Conclusions: Women post-breast cancer can self-measure upper limb volume using a paper tape, which is both reliable and valid. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
28. Bayesian Methods for Information Borrowing in Basket Trials: An Overview.
- Author
-
Zhou, Tianjian and Ji, Yuan
- Subjects
TUMOR treatment ,STATISTICS ,EXPERIMENTAL design ,CLINICAL trials ,CLINICAL medicine research ,DATA analysis ,STATISTICAL models ,DRUG development ,PROBABILITY theory - Abstract
Simple Summary: This paper provides a review of statistical methods for tumor-agnostic clinical trials. In particular, the review focuses on basket trials and provides methodological insights into various Bayesian approaches. The key concept of borrowing information through Bayesian hierarchical models is emphasized, and some novel trial designs are introduced. The review is expected to provide oncology and biostatistics researchers with more exposure to powerful Bayesian methods for the design and analysis of tumor-agnostic clinical trials. Basket trials allow simultaneous evaluation of a single therapy across multiple cancer types or subtypes of the same cancer. Since the same treatment is tested across all baskets, it may be desirable to borrow information across them to improve the statistical precision and power in estimating and detecting the treatment effects in different baskets. We review recent developments in Bayesian methods for the design and analysis of basket trials, focusing on the mechanism of information borrowing. We explain the common components of these methods, such as a prior model for the treatment effects that embodies an assumption of exchangeability. We also discuss the distinct features of these methods that lead to different degrees of borrowing. Through simulation studies, we demonstrate the impact of information borrowing on the operating characteristics of these methods and discuss its broader implications for drug development. Examples of basket trials are presented in both phase I and phase II settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. WASP (Write a Scientific Paper): Probability - Poisson and binomial distributions.
- Author
-
Grech, Victor and Calleja, Neville
- Subjects
- *
BINOMIAL distribution , *POISSON distribution , *BIOMETRY , *PROBABILITY theory , *EXTRAPOLATION , *TYPE 2 diabetes , *APPROXIMATION theory , *MEDICAL writing , *STATISTICS - Abstract
This paper outlines Binomial and Poisson distributions which are both used to measure the occurrence of a number of random events within a certain period. [ABSTRACT FROM AUTHOR]
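As a quick illustration of the two distributions this abstract covers, here is a minimal standard-library Python sketch (the sample values are hypothetical, not taken from the paper):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P(K = k): probability of k successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """P(K = k): probability of k events in an interval when events
    occur independently at a mean rate of lam per interval."""
    return exp(-lam) * lam ** k / factorial(k)

# With rate lam = 1, zero events and one event are equally likely (both e^-1)
p0, p1 = poisson_pmf(0, 1.0), poisson_pmf(1, 1.0)

# For large n and small p, the Poisson with lam = n*p closely
# approximates the binomial
approx_gap = abs(binomial_pmf(3, 1000, 0.002) - poisson_pmf(3, 2.0))
```

The last line illustrates the classical Poisson approximation to the binomial, which is one reason the two distributions are usually taught together.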
- Published
- 2018
- Full Text
- View/download PDF
30. Evaluation of an e-prescribing pilot in an inpatient recovery unit.
- Author
-
Greef, Sarah, Macpherson, Rob, Calciu, Claudia, Boniface, Judith, Jackson, Rachel, Foy, Chris, and Garton, Charles
- Subjects
COMPARATIVE studies ,COMPUTER software ,CONVALESCENCE ,MEDICAL prescriptions ,MENTAL health services ,PROBABILITY theory ,STATISTICS ,TIME ,PILOT projects ,DATA analysis ,EVALUATION research ,EVALUATION of human services programs ,DATA analysis software ,ELECTRONIC health records - Abstract
Aim To evaluate an e-prescribing pilot project that took place in a recovery unit in Gloucestershire. Method Nursing and medical staff recorded the time it took to prescribe and administer medication electronically and in paper form. A structured questionnaire was used to assess staff experience and attitudes to e-prescribing after the pilot had ended. Findings It took longer to prescribe and administer medication electronically as compared to using paper prescriptions. Staff identified benefits of e-prescribing, but had a greater number of concerns. Conclusion For e-prescribing to be used safely and for the benefit of patients in the future, the system needs to evolve to meet prescribing needs. Staff will need support and education to switch to this new way of working. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
31. Sojourn times of Gaussian and related random fields.
- Author
-
Dȩbicki, Krzysztof, Hashorva, Enkelejd, Peng Liu, and Michna, Zbigniew
- Subjects
PROBABILITY theory ,GAUSSIAN distribution ,CHI-squared test ,QUEUING theory ,STATISTICS - Abstract
This paper is concerned with the asymptotic analysis of sojourn times of random fields with continuous sample paths. Under a very general framework we show that there is an interesting relationship between the tail asymptotics of sojourn times and that of the supremum. Moreover, we establish the uniform double-sum method to derive the tail asymptotics of sojourn times. In the literature, based on the pioneering research of S. Berman, sojourn times have been utilised to derive the tail asymptotics of the supremum of Gaussian processes. In this paper we show that the opposite direction is even more fruitful: namely, knowing the asymptotics of the supremum of random processes and fields (in particular Gaussian), it is possible to establish the asymptotics of their sojourn times. We illustrate our findings considering (i) two-dimensional Gaussian random fields, (ii) chi-processes generated by stationary Gaussian processes and (iii) stationary Gaussian queueing processes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. PROBABILISTIC APPROACH TO STATISTICAL INDICES.
- Author
-
URDALETOVA, Anarkul, KYDYRALİEV, Syrgak, and KYDYRALİEVA, Kanaiym
- Subjects
CAPITALISM ,PROBABILITY theory ,STATISTICS ,DATA analysis ,DECISION making - Abstract
The purpose of this paper is to illustrate the importance of incorporating probabilistic methods into teaching modern statistics courses. Knowledge of statistical methods in today's conditions implies the ability to predict, which unfortunately is not included in most statistics courses. Students are usually taught to work with data within the framework of absolute certainty, but in real life we often have to make decisions in conditions of uncertainty. The market economy requires the skills to not only process the available statistical information, but also to understand what conclusions can be drawn from the information received, and to know how to predict future events. In this regard, in modern statistics it is impossible to do without knowledge of probabilistic methods. This paper discusses an approach to calculating the main statistical indices using the concept of probability. [ABSTRACT FROM AUTHOR]
- Published
- 2020
33. Two-tailed significance tests for 2 × 2 contingency tables: What is the alternative?
- Author
-
Prescott, Robin J
- Subjects
STATISTICAL hypothesis testing ,CONTINGENCY tables ,FISHER exact test ,CHI-squared test ,NULL hypothesis ,COMPUTER simulation ,PROBABILITY theory ,STATISTICS ,DATA analysis - Abstract
Two-tailed significance testing for 2 × 2 contingency tables has remained controversial. Within the medical literature, different tests are used in different papers and that choice may decide whether findings are adjudged to be significant or nonsignificant; a state of affairs that is clearly undesirable. In this paper, it is argued that a part of the controversy is due to a failure to recognise that there are two possible alternative hypotheses to the Null. It is further argued that, while one alternative hypothesis can lead to tests with greater power, the other choice is more applicable in medical research. That leads to the recommendation that, within medical research, 2 × 2 tables should be tested using double the one-tailed exact probability from Fisher's exact test or, as an approximation, the chi-squared test with Yates' correction for continuity. [ABSTRACT FROM AUTHOR]
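The paper's recommendation (double the one-tailed Fisher exact probability, capped at 1) can be sketched in standard-library Python; the 2 × 2 table values below are hypothetical, chosen only to exercise the functions:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k) under the hypergeometric distribution with the
    2x2 table's margins fixed (N total, K in row 1, n in column 1)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def fisher_one_tailed(a, b, c, d):
    """Upper one-tailed Fisher exact p for the table [[a, b], [c, d]]: P(X >= a)."""
    N, K, n = a + b + c + d, a + b, a + c
    return sum(hypergeom_pmf(k, N, K, n) for k in range(a, min(K, n) + 1))

def doubled_one_tailed(a, b, c, d):
    """Two-tailed p as double the smaller one-tailed p, capped at 1."""
    p_upper = fisher_one_tailed(a, b, c, d)
    p_lower = fisher_one_tailed(b, a, d, c)  # swapping columns gives the lower tail
    return min(2.0 * min(p_upper, p_lower), 1.0)

# Hypothetical 2x2 table [[3, 1], [1, 3]]
p_two_tailed = doubled_one_tailed(3, 1, 1, 3)
```

The doubling convention corresponds to treating the two one-sided alternatives symmetrically, which is the choice the paper argues is more applicable in medical research.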
- Published
- 2019
- Full Text
- View/download PDF
34. Evidence-based radiology: how to quickly assess the validity and strength of publications in the diagnostic radiology literature.
- Author
-
Dodd, Jonathan D., MacEneaney, Peter M., and Malone, Dermot E.
- Subjects
RADIOLOGY ,EVIDENCE-based medicine ,BILE ducts ,CLINICAL medicine ,DIAGNOSIS ,MEDICAL care ,MASS media ,COMPUTER software ,CONFIDENCE intervals ,MEDICAL specialties & specialists ,PROBABILITY theory ,RESEARCH evaluation ,STATISTICS ,DATA analysis ,ROUTINE diagnostic tests ,STANDARDS - Abstract
The aim of this study was to show how evidence-based medicine (EBM) techniques can be applied to the appraisal of diagnostic radiology publications. A clinical scenario is described: a gastroenterologist has questioned the diagnostic performance of magnetic resonance cholangiopancreatography (MRCP) in a patient who may have common bile duct (CBD) stones. His opinion was based on an article on MRCP published in "Gut." The principles of EBM are described and then applied to the critical appraisal of this paper. Another paper on the same subject was obtained from the radiology literature and was also critically appraised using explicit EBM criteria. The principles for assessing the validity and strength of both studies are outlined. All statistical parameters were generated quickly using a spreadsheet in Excel format. The results of the EBM assessment of both papers are presented. The calculation and application of confidence intervals (CIs) and likelihood ratios (LRs) for both studies are described. These statistical results are applied to individual patient scenarios using graphs of conditional probability (GCPs). Basic EBM principles are described and additional points relevant to radiologists are discussed. Online resources for evidence-based radiology (EBR) practice are identified. The principles of EBM and their application to radiology are discussed. It is emphasized that sensitivity and specificity are point estimates of the "true" characteristics of a test in clinical practice. A spreadsheet can be used to quickly calculate CIs, LRs and GCPs. These give the radiologist a better understanding of the meaning of diagnostic test results in any patient or population of patients. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
35. Linear Algorithms for Robust and Scalable Nonparametric Multiclass Probability Estimation.
- Author
-
LIYUN ZENG and HAO HELEN ZHANG
- Subjects
NONPARAMETRIC estimation ,CONDITIONAL probability ,SUPPORT vector machines ,POLYNOMIAL time algorithms ,STATISTICS ,PROBABILITY theory ,COMPUTATIONAL complexity ,POLYNOMIAL chaos - Abstract
Multiclass probability estimation is the problem of estimating conditional probabilities of a data point belonging to a class given its covariate information. It has broad applications in statistical analysis and data science. Recently a class of weighted Support Vector Machines (wSVMs) has been developed to estimate class probabilities through ensemble learning for K-class problems (Wu et al., 2010; Wang et al., 2019), where K is the number of classes. The estimators are robust and achieve high accuracy for probability estimation, but their learning is implemented through pairwise coupling, which demands polynomial time in K. In this paper, we propose two new learning schemes, the baseline learning and the One-vs-All (OVA) learning, to further improve wSVMs in terms of computational efficiency and estimation accuracy. In particular, the baseline learning has optimal computational complexity in the sense that it is linear in K. Though not the most efficient in computation, the OVA is found to have the best estimation accuracy among all the procedures under comparison. The resulting estimators are distribution-free and shown to be consistent. We further conduct extensive numerical experiments to demonstrate their finite sample performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. Strong convergence of ρ˜-mixing random sequences.
- Author
-
Zhong, Pingping, Song, Yuesheng, and Yang, Weiguo
- Subjects
RANDOM variables ,PROBABILITY theory ,LIMIT theorems ,LAW of large numbers ,STATISTICS - Abstract
The strong limit theory is one of the most important problems in probability theory. Some results on the convergence of ρ̃-mixing random sequences have been presented. In this paper, we study the almost sure convergence of ρ̃-mixing random sequences. As a result, we generalize partial results of Wu and Jiang [Wu, Q. Y., and Y. Y. Jiang. 2008. Some strong limit theorems for ρ̃-mixing sequences of random variables. Statistics & Probability Letters 78:1017-23.]. We obtain the main results and some corresponding conclusions by using truncation methods and the generalized three-series theorem. A known result is generalized in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. Nurses' job burnout and its association with work environment, empowerment and psychological stress during COVID‐19 pandemic.
- Author
-
Al Sabei, Sulaiman Dawood, Al‐Rawajfah, Omar, AbuAlRub, Raeda, Labrague, Leodoro J., and Burney, Ikram Ali
- Subjects
PSYCHOLOGICAL burnout ,WORK environment ,WELL-being ,STATISTICS ,NURSES' attitudes ,ANALYSIS of variance ,CROSS-sectional method ,MULTIVARIATE analysis ,MENTAL health ,NURSING services administration ,REGRESSION analysis ,PSYCHOLOGY of nurses ,SELF-efficacy ,CRONBACH'S alpha ,QUESTIONNAIRES ,DESCRIPTIVE statistics ,SCALE analysis (Psychology) ,CHI-squared test ,RESEARCH funding ,STATISTICAL sampling ,DATA analysis software ,PSYCHOLOGICAL stress ,COVID-19 pandemic ,EMPLOYEE retention ,PROBABILITY theory - Abstract
Aim: The aim of this study was to assess the influence of perceived work environment, empowerment and psychological stress on job burnout among nurses working at the time of the COVID‐19 pandemic. Background: Nurses experienced high levels of job burnout during the pandemic, which impacted their mental health and well‐being. Studies investigating the influence of work environment, empowerment and stress on burnout during the time of COVID‐19 are limited. Design: The study utilized a cross‐sectional design. Methods: Data were collected from 351 nurses in Oman between January and March 2021. The Maslach Burnout Inventory, the Practice Environment Scale of the Nursing Work Index, the Conditions of Work Effectiveness Questionnaire and the Perceived Stress Scale were used to assess study variables. Results: About two‐thirds of the nurses (65.6%) reported high levels of job burnout. Nurse managers' ability, leadership and support; staffing and resources adequacy; and nurses' access to support were significant factors associated with a reduced level of burnout. Conclusion: Supporting nurses during the crisis, ensuring adequate staffing levels and providing sufficient resources are critical to lower job burnout. Creating a positive and empowered work environment is vital to enhance nurses' retention during the pandemic. Summary statement: What is already known about this topic? Nurses' perceived work environment and structural empowerment have been associated with job burnout; however, their impact during the COVID‐19 pandemic has not been examined. As nurses' psychological stress increases, their job burnout increases. What this paper adds? Nurses experienced high levels of job burnout during the COVID‐19 pandemic. Working in a favourable environment and having access to support can reduce nurses' job burnout during the crisis.
The implications of this paper: During pandemics, ensuring adequate staffing levels and providing sufficient resources are critical to lower job burnout and enhance nurse retention. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Probability cost function based weighted extreme learning machine.
- Author
-
Hyok, Ri Jong, Hyok, O Chung, and Hyok, Kim Chol
- Subjects
COST functions ,WEIGHT training ,PROBABILITY theory ,VECTOR data ,STATISTICS - Abstract
The standard extreme learning machine has good generalization performance and fast learning speed, but its performance degrades on imbalance learning. The weighted extreme learning machine (WELM) is a cost-sensitive learning method that significantly improves classification performance on imbalanced data by attaching an extra weight to each training sample. In this paper, we present a novel WELM defined by a probability cost function based on the probability that a given sample belongs to each class. We propose a learning network that maps the input training data to a vector consisting of the probability values of a given training sample belonging to each class. We define its cost functions to maximize the marginal distance between classes and the probability that each training sample will be accurately classified. We show empirically that our proposed algorithm in general obtains superior performance to several state-of-the-art imbalance learning approaches on 32 binary-class and 14 multiclass imbalanced datasets. To further assess the experimental results, we also provide statistical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Harnessing causal forests for epidemiologic research: key considerations.
- Author
-
Shiba, Koichiro and Inoue, Kosuke
- Subjects
- *
STATISTICAL models , *DATA analysis , *RESEARCH funding , *PROBABILITY theory , *MOTIVATION (Psychology) , *STATISTICS , *RESEARCH methodology , *TREATMENT effect heterogeneity , *ATTRIBUTION (Social psychology) , *EPIDEMIOLOGICAL research - Abstract
Assessing heterogeneous treatment effects (HTEs) is an essential task in epidemiology. The recent integration of machine learning into causal inference has provided a new, flexible tool for evaluating complex HTEs: causal forest. In a recent paper, Jawadekar et al (Am J Epidemiol. 2023;192(7):1155-1165) introduced this innovative approach and offered practical guidelines for applied users. Building on their work, this commentary provides additional insights and guidance to promote the understanding and application of causal forest in epidemiologic research. We start with conceptual clarifications, differentiating between honesty and cross-fitting, and exploring the interpretation of estimated conditional average treatment effects. We then delve into practical considerations not addressed by Jawadekar et al, including motivations for estimating HTEs, calibration approaches, and ways to leverage causal forest output with examples from simulated data. We conclude by outlining challenges to consider for future advancements and applications of causal forest in epidemiologic research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Comments on the paper: `Heuristic and Special Case Algorithms for Dispersion Problems' by S. S...
- Author
-
Tamir, Arie
- Subjects
HEURISTIC ,MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL statistics ,MATHEMATICS ,STATISTICS ,PROBABILITY theory - Abstract
This article presents comments by the author on a paper discussing heuristic and special case algorithms for dispersion problems. Problems discussed in this article are the subject of the paper in focus. The results presented in that paper include a simple heuristic for Max-Min Facility Dispersion (MMFD) which provides a performance guarantee of 1/2, and a similar heuristic for Max-Avg Facility Dispersion (MAFD) with a performance guarantee of 1/4. It is also proved there that obtaining a performance guarantee of more than 1/2 for MMFD is NP-hard. The paper also discussed the one-dimensional versions of MMFD and MAFD, where the vertex set V consists of a set of n points on the real line. Section 2 of the paper is devoted to the heuristic for MMFD. It should be noted that this heuristic was analyzed before; in fact, it is shown therein that the performance guarantee of 1/2 holds for a more general model where the selected set P is restricted to be in a compact subset of the network induced by the edge distances.
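The 1/2-guarantee MMFD heuristic discussed in this record is commonly formulated as a greedy procedure: seed with a farthest pair, then repeatedly add the point whose minimum distance to the chosen set is largest. A minimal Python sketch under that formulation (the `sites` instance and `dist` function are hypothetical inputs, not data from the paper):

```python
def max_min_dispersion(points, dist, p):
    """Greedy 1/2-approximation heuristic for Max-Min Facility Dispersion:
    choose p points so as to (approximately) maximize the minimum pairwise distance."""
    # Seed with a farthest pair of points
    a, b = max(((x, y) for x in points for y in points if x != y),
               key=lambda pair: dist(*pair))
    chosen = [a, b]
    # Greedily add the point farthest (in min-distance) from the chosen set
    while len(chosen) < p:
        nxt = max((x for x in points if x not in chosen),
                  key=lambda x: min(dist(x, c) for c in chosen))
        chosen.append(nxt)
    return chosen

# One-dimensional instance (hypothetical values), matching the setting
# where the vertex set is n points on the real line
sites = [0, 1, 2, 8, 9, 10]
picked = max_min_dispersion(sites, lambda x, y: abs(x - y), 3)
```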
- Published
- 1998
- Full Text
- View/download PDF
41. A method to improve the determination of ignition probability in buildings based on Bayesian network.
- Author
-
Hu, Jun, Shu, Xueming, Shen, Shifei, Yan, Jun, Tian, Fengshi, He, Sheng, and Ni, Xiaoyong
- Subjects
BAYESIAN analysis ,CONDITIONAL probability ,STATISTICS ,PROBABILITY theory - Abstract
Summary: Traditional research on building fire probability analysis comes either from statistics or from fire science. This paper combines the two methods and aims to improve the statistical method of determining building ignition probability according to the findings of fire science. The specific factors that affect the ignition probability are divided into three aspects: humans, ignition sources and combustibles, and environments. On this basis, the Bayesian network of building ignition probability is constructed, and the nodes and conditional probability tables in the Bayesian network are introduced in detail, from which the ignition probability of a building can be calculated quantitatively and objectively. Some typical buildings are then chosen as examples for the application of the method; the posterior probability is calculated by obtaining the relevant building information and substituting it into the Bayesian network. The ignition probability is dynamic, and comparison with statistical data on building fires also supports its rationality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Current status and influencing factors of fatigue in patients with rheumatoid arthritis: A cross‐sectional study in China.
- Author
-
Gao, Lei, Sun, Yao, Pan, Li, Li, Yafang, Yuan, Jiqing, Cui, Xuejun, and Shi, Baoxin
- Subjects
STATISTICS ,SOCIAL support ,ACADEMIC medical centers ,STATISTICAL reliability ,RESEARCH evaluation ,CROSS-sectional method ,ONE-way analysis of variance ,MULTIPLE regression analysis ,VISUAL analog scale ,PHYSICAL activity ,CRONBACH'S alpha ,T-test (Statistics) ,PEARSON correlation (Statistics) ,RHEUMATOID arthritis ,MENTAL depression ,QUESTIONNAIRES ,DESCRIPTIVE statistics ,FATIGUE (Physiology) ,ANXIETY ,STATISTICAL sampling ,DATA analysis software ,DATA analysis ,PROBABILITY theory - Abstract
Aim: This study aimed to explore the level and influencing factors of fatigue in patients with rheumatoid arthritis. Methods: This cross‐sectional study was conducted in 243 patients with rheumatoid arthritis from April 2016 to March 2017. The Bristol Rheumatoid Arthritis Fatigue Multi‐Dimensional Questionnaire, Arthritis Self‐Efficacy Scale‐8, Visual Analogue Scale for pain, physical function subscale of Short Form 36‐Item Health Survey, Hospital Anxiety and Depression Scale, Perceived Social Support Scale, Pittsburgh Sleep Quality Index and a self‐designed demographic and disease‐related information questionnaire were used to collect the data. Stepwise linear multiple regression was used to clarify the impact of statistically significant variables (P < 0.05) in the independent sample t test, one‐way ANOVA and correlation analysis on the level of fatigue. Results: Stepwise linear multiple regression analyses showed that disease activity, self‐efficacy, physical function, pain, depression, duration of morning stiffness and anxiety were major factors influencing fatigue in patients with rheumatoid arthritis, which explained 59.5% of the total variance. Conclusion: Our study demonstrated a moderate level of fatigue in Chinese patients with rheumatoid arthritis. In clinical practice, nurses should explore individualized intervention programmes based on related predictors of fatigue to help patients relieve fatigue. Summary statement: What is already known about this topic? Fatigue is a common problem in patients with rheumatoid arthritis (RA).Little research has investigated what factors are associated with fatigue in patients with RA.Nurses need to understand the factors that potentially influence fatigue in patients with RA, in order to improve the quality of life. What this paper adds? 
Fatigue is moderate in Chinese patients with RA, which is lower than the results from western countries. Disease activity, self‐efficacy, physical function, pain, depression, duration of morning stiffness and anxiety are important factors affecting fatigue in patients with RA. The implications of this paper: Special attention should be paid to RA patients with high levels of fatigue. The findings of this study highlight the need to develop effective strategies to alleviate fatigue and eventually improve the quality of life in patients with RA. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. What is the proper way to apply the multiple comparison test?
- Author
-
Sangseok Lee and Dong Kyu Lee
- Subjects
HYPOTHESIS ,ERROR rates ,PROBABILITY theory ,STATISTICAL power analysis ,BONFERRONI correction ,ANALYSIS of variance - Abstract
Multiple comparison tests (MCTs) are hypothesis tests performed on the means of several experimental conditions. When the overall null hypothesis is rejected, MCTs are performed to determine which experimental conditions have a statistically significant mean difference, or whether there is a specific pattern among the group means. A problem occurs because the error rate increases as multiple hypothesis tests are performed simultaneously. Consequently, in an MCT it is necessary to control the error rate at an appropriate level. In this paper, we discuss how to test multiple hypotheses simultaneously while limiting the type I error rate, which is caused by α inflation. To choose the appropriate test, we must maintain the balance between statistical power and the type I error rate. If the test is too conservative, a type I error is not likely to occur; however, the test may then have insufficient power, resulting in an increased probability of a type II error. Most researchers hope to find the best way of adjusting the type I error rate so as to discriminate real differences in the observed data without wasting too much statistical power. It is expected that this paper will help researchers understand the differences between MCTs and apply them appropriately. [ABSTRACT FROM AUTHOR]
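The family-wise error-rate control that this abstract discusses can be illustrated with two standard adjustments, Bonferroni and Holm (the latter listed in the subjects above); a minimal sketch with hypothetical p-values:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Bonferroni correction: reject H0_i only when p_i <= alpha / m,
    which keeps the family-wise error rate at or below alpha."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm_reject(p_values, alpha=0.05):
    """Holm's step-down procedure: uniformly more powerful than Bonferroni
    while still controlling the family-wise error rate."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] > alpha / (m - rank):
            break  # once one ordered test fails, all larger p-values fail too
        reject[i] = True
    return reject
```

On p-values [0.01, 0.02, 0.02], Bonferroni rejects only the first hypothesis while Holm rejects all three, illustrating the power/type I error trade-off the abstract describes.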
- Published
- 2018
- Full Text
- View/download PDF
44. Stochastically Transitive Models for Pairwise Comparisons: Statistical and Computational Issues.
- Author
-
Shah, Nihar B., Balakrishnan, Sivaraman, Guntuboyina, Adityanand, and Wainwright, Martin J.
- Subjects
PAIRED comparisons (Mathematics) ,THRESHOLDING algorithms ,STOCHASTIC analysis ,PARAMETER estimation ,PROBABILITY theory - Abstract
There are various parametric models for analyzing pairwise comparison data, including the Bradley–Terry–Luce (BTL) and Thurstone models, but their reliance on strong parametric assumptions is limiting. In this paper, we study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity. This class includes parametric models, including the BTL and Thurstone models as special cases, but is considerably more general. We provide various examples of models in this broader stochastically transitive class for which classical parametric models provide poor fits. Despite this greater flexibility, we show that the matrix of probabilities can be estimated at the same rate as in standard parametric models up to logarithmic terms. On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives. We show that a simple singular value thresholding algorithm is statistically consistent but does not achieve the minimax rate. We then propose and study algorithms that achieve the minimax rate over interesting sub-classes of the full stochastically transitive class. We complement our theoretical results with thorough numerical simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
45. Teacher Training College Student Performance in Statistics and Probability Exams in Rwanda.
- Author
-
Dushimimana, Jean Claude and Uworwabayeho, Alphonse
- Subjects
TEACHER training ,PRIMARY education ,PROBABILITY theory ,CURRICULUM - Abstract
Teacher Training Colleges (TTCs) were established in order to produce qualified primary teachers in Rwanda. However, reviews of primary teacher training have consistently highlighted serious shortcomings in the quality and relevance of the courses offered. They point to poor alignment of the teacher training curriculum with the school curriculum and a lack of teaching experience among tutors. In this paper, we analyse TTC students' performance in statistics and probability and compare their performance in these specific areas with other mathematics topic areas. This is done by analysing their success rate on questions related to statistics and probability in national examinations over the period 2014-2016. The Pearson coefficient reveals no relationship between students' performance in statistics and probability and in other topic areas. Furthermore, some students performed better in other areas of mathematics but failed the statistics and probability questions, and vice versa. Although students are trained to teach mathematics in primary schools, they still perform poorly in national examinations, plausibly leading to poor teaching of this subject in primary education. [ABSTRACT FROM AUTHOR]
- Published
- 2021
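The correlation analysis the abstract describes amounts to computing a Pearson coefficient between per-student success rates in the two groups of topics. The figures below are entirely hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical success rates (proportions of marks obtained) -- not real data
stats_prob   = np.array([0.35, 0.50, 0.42, 0.61, 0.28, 0.55, 0.47, 0.39])
other_topics = np.array([0.72, 0.40, 0.66, 0.58, 0.70, 0.45, 0.52, 0.63])

# Pearson correlation between the two sets of scores
r = np.corrcoef(stats_prob, other_topics)[0, 1]
```

A value of `r` near zero is the pattern the abstract reports: performance in statistics and probability bears no clear relationship to performance in other topic areas.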
46. Inverse Probability Weighting to Estimate Exposure Effects on the Burden of Recurrent Outcomes in the Presence of Competing Events.
- Author
-
Gaber, Charles E, Edwards, Jessie K, Lund, Jennifer L, Peery, Anne F, Richardson, David B, and Kinlaw, Alan C
- Subjects
NONPARAMETRIC statistics ,STATISTICS ,SCIENTIFIC observation ,SIMULATION methods in education ,TREATMENT effectiveness ,CONCEPTUAL structures ,ATTRIBUTION (Social psychology) ,DATA analysis ,PROBABILITY theory - Abstract
Recurrent events—outcomes that an individual can experience repeatedly over the course of follow-up—are common in epidemiologic and health services research. Studies involving recurrent events often focus on time to first occurrence or on event rates, which assume constant hazards over time. In this paper, we contextualize recurrent event parameters of interest using counterfactual theory in a causal inference framework and describe an approach for estimating a target parameter referred to as the mean cumulative count. This approach leverages inverse probability weights to control measured confounding with an existing (and underutilized) nonparametric estimator of recurrent event burden first proposed by Dong et al. in 2015. We use simulations to demonstrate the unbiased estimation of the mean cumulative count using the weighted Dong-Yasui estimator in a variety of scenarios. The weighted Dong-Yasui estimator for the mean cumulative count allows researchers to use observational data to flexibly estimate and contrast the expected number of cumulative events experienced per individual by a given time point under different exposure regimens. We provide code to ease application of this method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
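A minimal sketch of a weighted mean-cumulative-count estimator in the spirit of the Dong-Yasui approach described above: at each event time, the recurrent-event increment is weighted by the inverse probability weights and discounted by the weighted survival from the competing event. The function name, data layout, and two-subject example are assumptions for illustration, not the authors' published code.

```python
import numpy as np

def weighted_mcc(followup, died, event_times_list, weights, t_eval):
    """Weighted mean cumulative count at time t_eval (illustrative sketch).

    followup[i]        : end of follow-up (competing event or censoring)
    died[i]            : 1 if follow-up ended in the competing event
    event_times_list[i]: list of recurrent-event times for subject i
    weights[i]         : inverse probability weight for subject i
    """
    followup = np.asarray(followup, float)
    died = np.asarray(died)
    weights = np.asarray(weights, float)

    # Distinct times (up to t_eval) at which anything happens, in order
    times = sorted({t for ts in event_times_list for t in ts if t <= t_eval}
                   | {f for f in followup if f <= t_eval})
    surv, mcc = 1.0, 0.0
    for u in times:
        at_risk = weights[followup >= u].sum()
        if at_risk <= 0:
            break
        # weighted count of recurrent events occurring exactly at u
        d_event = sum(w * ts.count(u)
                      for w, ts in zip(weights, event_times_list))
        mcc += surv * d_event / at_risk
        # update weighted survival for the competing event at u
        d_death = weights[(followup == u) & (died == 1)].sum()
        surv *= 1.0 - d_death / at_risk
    return mcc

# Tiny example: two subjects, no competing events, unit weights.
# Subject 1 has events at t = 1 and t = 2; subject 2 has none.
mcc = weighted_mcc(followup=[3, 3], died=[0, 0],
                   event_times_list=[[1, 2], []],
                   weights=[1.0, 1.0], t_eval=3)
```

With unit weights and no competing events this reduces to the average number of events per subject by `t_eval` (here 1.0), which matches the "expected number of cumulative events experienced per individual" interpretation in the abstract.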
47. Epistemic Judgments are Insensitive to Probabilities.
- Author
-
Bricker, Adam Michael
- Subjects
STATISTICS ,PROBABILITY theory ,EPISTEMIC logic ,RESPECT - Abstract
Multiple epistemological programs make use of intuitive judgments pertaining to an individual's ability to gain knowledge from exclusively probabilistic/statistical information. This paper argues that these judgments likely form without deference to such information, instead being a function of the degree to which having knowledge is representative of an agent. Thus, these judgments fit the pattern of formation via a representativeness heuristic, like that famously described by Kahneman and Tversky to explain similar probabilistic judgments. Given this broad insensitivity to probabilistic/statistical information, it directly follows that these epistemic judgments are insensitive to a given agent's epistemic status. From this, the paper concludes that, breaking with common epistemological practice, we cannot assume that such judgments are reliable. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
48. Accurate Inference for the Mean of the Poisson-Exponential Distribution.
- Author
-
Wei Lin, Xiang Li, and Augustine Wong
- Subjects
PROBABILITY theory ,APPROXIMATION theory ,STATISTICS ,NUMERICAL analysis ,CORRECTION factors - Abstract
Although the random sum distribution has been well studied in probability theory, inference for the mean of such a distribution is very limited in the literature. In this paper, two approaches are proposed to obtain inference for the mean of the Poisson-Exponential distribution. Both proposed approaches require the log-likelihood function of the Poisson-Exponential distribution, but its exact form is not available; an approximate form is therefore derived by the saddlepoint method. Inference for the mean of the Poisson-Exponential distribution can then be obtained either from the modified signed likelihood root statistic or from the Bartlett corrected likelihood ratio statistic. The explicit form of the modified signed likelihood root statistic is derived in this paper, and a systematic method to numerically approximate the Bartlett correction factor, and hence the Bartlett corrected likelihood ratio statistic, is proposed. Simulation studies show that both methods are extremely accurate even when the sample size is small. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
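The random-sum structure behind the Poisson-Exponential distribution is easy to simulate, which makes the target of inference concrete: by Wald's identity the mean of S = X_1 + ... + X_N is λ·E[X] = λ/β. The parameter values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, beta = 3.0, 2.0   # Poisson rate and Exponential rate (illustrative)

def poisson_exponential_sample(size):
    """Draw random sums S = X_1 + ... + X_N with N ~ Poisson(lam) and
    X_i ~ Exponential(rate beta); S = 0 whenever N = 0."""
    counts = rng.poisson(lam, size)
    return np.array([rng.exponential(1.0 / beta, n).sum() for n in counts])

samples = poisson_exponential_sample(100_000)
mc_mean = samples.mean()   # should be near lam / beta = 1.5
```

The simulation only checks the moment identity; the paper's contribution is accurate small-sample likelihood inference for this mean, which a crude Monte Carlo average does not provide.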
49. Comparing the inversion statistic for distribution-biased and distribution-shifted permutations with the geometric and the GEM distributions.
- Author
-
Pinsky, Ross G.
- Subjects
STATISTICS ,GEOMETRIC distribution ,PERMUTATIONS ,PROBABILITY theory ,STOCHASTIC convergence - Abstract
Given a probability distribution p := {p_k}_{k=1}^∞ on the positive integers, there are two natural ways to construct a random permutation in S_n or a random permutation of ℕ from IID samples from p. One is called the p-biased construction and the other the p-shifted construction. In the first part of the paper we consider the case that the distribution p is the geometric distribution with parameter 1 − q ∈ (0, 1). In this case, the p-shifted random permutation has the Mallows distribution with parameter q. Let P_n^{b;Geo(1−q)} and P_n^{s;Geo(1−q)} denote the biased and the shifted distributions on S_n. The expected number of inversions of a permutation under P_n^{s;Geo(1−q)} is greater than under P_n^{b;Geo(1−q)}, and under either of these distributions a permutation tends to have many fewer inversions than it would have under the uniform distribution. For fixed n, both P_n^{b;Geo(1−q)} and P_n^{s;Geo(1−q)} converge weakly as q → 1 to the uniform distribution on S_n. We compare the biased and the shifted distributions by studying the inversion statistic under ... and ... for various rates of convergence of q_n to 1. In the second part of the paper we consider p-biased and p-shifted permutations for the case that the distribution p is itself random and distributed as a GEM(θ)-distribution. In particular, in both the GEM(θ)-biased and the GEM(θ)-shifted cases, the expected number of inversions behaves asymptotically as it does under the Geo(1 − q)-shifted distribution with θ = q/(1 − q). This allows one to consider the GEM(θ)-shifted case as the random counterpart of the Geo(q)-shifted case. We also consider another p-biased distribution with random p for which the expected number of inversions behaves asymptotically as it does under the Geo(1 − q)-biased case with θ and q as above, and with θ → ∞ and q → 1. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
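The Mallows distribution that arises from the Geo(1 − q)-shifted construction puts mass proportional to q^inv(σ) on each permutation σ. A standard sequential sampler for this distribution, together with a brute-force inversion count, is sketched below; the sampler is a textbook construction, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mallows_permutation(n, q):
    """Sample sigma in S_n with P(sigma) proportional to q**inversions(sigma):
    at each step, pick the j-th smallest remaining value with probability
    proportional to q**(j-1)."""
    remaining = list(range(n))
    perm = []
    for _ in range(n):
        m = len(remaining)
        probs = q ** np.arange(m)
        probs /= probs.sum()
        j = rng.choice(m, p=probs)
        perm.append(remaining.pop(j))
    return perm

def inversions(perm):
    """Brute-force count of pairs i < j with perm[i] > perm[j]."""
    return sum(perm[i] > perm[j]
               for i in range(len(perm)) for j in range(i + 1, len(perm)))

n_items, q = 10, 0.3
invs = [inversions(mallows_permutation(n_items, q)) for _ in range(200)]
mean_inv = sum(invs) / len(invs)
```

For q = 0.3 the average inversion count sits far below the uniform-distribution mean n(n − 1)/4 = 22.5, illustrating the abstract's remark that both the biased and the shifted distributions concentrate on permutations with few inversions.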
50. Jackknife empirical likelihood for the mean of a zero-and-one inflated population.
- Author
-
Tian, Weizhong, Liu, Tingting, and Ning, Wei
- Subjects
- *
ASYMPTOTIC distribution , *SKEWNESS (Probability theory) , *STATISTICS , *PROBABILITY theory - Abstract
In many statistical analyses, a finite population may contain a large proportion of zero-and-one values, making the population distribution severely skewed. Confidence intervals based on a normal approximation (NA) for such data may have low coverage probabilities. In this paper, we apply the methods of jackknife empirical likelihood (JEL) and adjusted jackknife empirical likelihood (AJEL) to construct confidence intervals for the mean of a zero-and-one inflated population. Asymptotic distributions of the likelihood-type statistics are studied. Simulations are conducted to compare coverage probabilities with other methods under different distributions. Real data are given to illustrate the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
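For the sample mean, the jackknife pseudo-values n·θ̂ − (n − 1)·θ̂₍₋ᵢ₎ reduce to the observations themselves, so the JEL statistic coincides with the ordinary empirical likelihood ratio for the mean. A minimal Newton-iteration sketch of that ratio follows; the solver details are illustrative, not the authors' implementation, and the zero-and-one inflated sample is hypothetical.

```python
import numpy as np

def el_log_ratio(x, mu, iters=50):
    """-2 log empirical likelihood ratio for the mean mu.

    Weights have the form w_i = 1 / (n * (1 + lam * (x_i - mu))); lam is
    found by Newton's method on the estimating equation
    sum (x_i - mu) / (1 + lam * (x_i - mu)) = 0."""
    x = np.asarray(x, float)
    z = x - mu
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)            # estimating equation at lam
        h = -np.sum(z**2 / denom**2)     # its derivative in lam
        lam -= g / h
    return 2.0 * np.sum(np.log(1.0 + lam * z))

# Hypothetical zero-and-one inflated sample (many exact 0s and 1s)
x = np.array([0, 0, 0, 0, 1, 1, 1, 0.5, 0.2, 0.8, 0.3, 0.6])
stat = el_log_ratio(x, mu=0.5)   # -2 log EL ratio at a candidate mean
```

The statistic is zero at the sample mean and grows as the candidate `mu` moves away from it; calibrating it against its asymptotic distribution yields the confidence intervals whose coverage the abstract compares with the normal approximation.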