11,556 results
Search Results
2. Hand hygiene monitoring: Comparison between app and paper forms for direct observation.
- Author
-
Libero, Giulia, Bordino, Valerio, Garlasco, Jacopo, Vicentini, Costanza, and Zotti, Carla Maria
- Subjects
- *STATISTICS, *SCIENTIFIC observation, *MOBILE apps, *TIME, *PATIENT monitoring, *CONTENT mining, *INFECTION control, *DESCRIPTIVE statistics, *HAND washing, *DATA analysis, *DATA analysis software - Abstract
Healthcare‐associated infections (HAIs) are a global public health threat. Italy is one of the countries with the highest prevalence of HAIs. Hand hygiene (HH) is a pillar of infection prevention and control. Monitoring HH is necessary to improve HH compliance, and direct observation is considered the gold standard. Transcription and analysis of data collected during direct observation of HH compliance with the WHO paper form are time‐consuming. During a 9‐day observation period, we collected HH opportunities and compliance both with a smartphone application (SpeedyAudit) and with the WHO paper form. We then investigated the difference in the time required for data transcription and analysis between the WHO paper form and the app. This difference was significant, with a mean time of 2 s using the app versus about 14–54 min/day using the paper form (p = .004), while no significant difference was found in measured compliance rates between the two data collection methods. HH monitoring with an app is time‐saving, and the app we used was easy to use. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Electronic Versus Paper and Pencil Survey Administration Mode Comparison: 2019 Youth Risk Behavior Survey*.
- Author
-
Bryan, Leah N., Smith‐Grant, Jennifer, Brener, Nancy, Kilmer, Greta, Lo, Annie, Queen, Barbara, and Underwood, J. Michael
- Subjects
- *RISK-taking behavior, *CLUSTER sampling, *STATISTICS, *SUBSTANCE abuse, *SAMPLE size (Statistics), *TIME, *HUMAN sexuality, *NUTRITION, *VIOLENCE, *MENTAL health, *SURVEYS, *PHYSICAL activity, *PSYCHOLOGY of high school students, *QUESTIONNAIRES, *SEX customs, *ALCOHOL drinking, *DESCRIPTIVE statistics, *STATISTICAL sampling, *DATA analysis software, *PROBABILITY theory, *ADOLESCENCE - Abstract
BACKGROUND: Since the inception of the Youth Risk Behavior Surveillance System in 1991, all surveys have been conducted in schools, using paper and pencil instruments (PAPI). For the 2019 YRBSS, sites were offered the opportunity to conduct their surveys using electronic data collection. This study aimed to determine whether differences in select metrics existed between students who completed the survey electronically versus using PAPI. METHODS: Thirty risk behaviors were examined in this study. Data completeness, response rates and bivariate comparisons of risk behavior prevalence between administration modes were examined. RESULTS: Twenty‐nine of 30 questions examined had more complete responses among students using electronic surveys. Small differences were found for student and school response rates between modes. Twenty‐five of 30 adolescent risk behaviors showed no mode effect. CONCLUSIONS: Seven of 44 states and DC participated electronically. Because survey data were more complete; school and student response rates were consistent; and minor differences existed in risk behaviors between modes, the acceptability of collecting data electronically was demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Forecasting Commercial Paper Rates.
- Author
-
Lackman, Conway, Carlson, William, and Varick, Celia
- Subjects
ECONOMIC forecasting, NEGOTIABLE instruments, COMMERCIAL paper issues, STATISTICS, FINANCE - Abstract
A model previously developed by Lackman (C. L. Lackman, Forecasting commercial paper rates. Journal of Business Finance and Accounting 15 (1988) 499-524) for the period 1960 to 1985 is updated to include the 1990s and to incorporate statistical techniques for stationarity testing that were not available in 1988. As in the previous model, the demand for commercial paper by each institution (Households (HH), Life Insurance Companies (LIC), Non-Financial Corporations (CRP) and Finance Corporations (FC)) and the total demand are simulated. Simulations of the commercial paper rate are also generated, first using just the demand equations (total supply exogenous) and then employing the entire model (supply endogenous) to determine the rate. Simulation periods run from 1960:2 to 2001:4 for all demand simulations. The dynamic simulation of the total demand for commercial paper performs well. The resulting root mean square error, 3.485, compares favourably with the Federal Reserve Board-Massachusetts Institute of Technology (FRB-MIT) estimate of the commercial paper rate (de Leeuw and Gramlich, 1968). [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
5. The weakening relationship between the impact factor and papers' citations in the digital age.
- Author
-
Lozano, George A., Larivière, Vincent, and Gingras, Yves
- Subjects
STATISTICAL correlation, INTERNET, SCIENCE, SERIAL publications, STATISTICS, ELECTRONIC publications, BIBLIOGRAPHIC databases, DATA analysis, CITATION analysis - Abstract
Historically, papers have been physically bound to the journal in which they were published; but in the digital age papers are available individually, no longer tied to their respective journals. Hence, papers now can be read and cited based on their own merits, independently of the journal's physical availability, reputation, or impact factor (IF). We compare the strength of the relationship between journals' IFs and the actual citations received by their respective papers from 1902 to 2009. Throughout most of the 20th century, papers' citation rates were increasingly linked to their respective journals' IFs. However, since 1990, the advent of the digital age, the relation between IFs and paper citations has been weakening. This began first in physics, a field that was quick to make the transition into the electronic domain. Furthermore, since 1990 the overall proportion of highly cited papers coming from highly cited journals has been decreasing and, of these highly cited papers, the proportion not coming from highly cited journals has been increasing. Should this pattern continue, it might bring an end to the use of the IF as a way to evaluate the quality of journals, papers, and researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
6. Is collaboration among scientists related to the citation impact of papers because their quality increases with collaboration? An analysis based on data from F1000Prime and normalized citation scores.
- Author
-
Bornmann, Lutz
- Subjects
INTERPROFESSIONAL relations, REGRESSION analysis, STATISTICS, T-test (Statistics), DATA analysis, PERIODICAL articles, CITATION analysis, IMPACT factor (Citation analysis) - Abstract
In recent years, the relationship between collaboration among scientists and the citation impact of papers has been frequently investigated. Most of the studies show that the two variables are closely related: an increasing collaboration activity (measured in terms of number of authors, number of affiliations, and number of countries) is associated with an increased citation impact. However, it is not clear whether the increased citation impact is based on the higher quality of papers that profit from more than one scientist giving expert input, or on other (citation-specific) factors. Thus, the current study addresses this question by using two comprehensive data sets with publications (in the biomedical area) including quality assessments by experts (F1000Prime member scores) and citation data for the publications. The study is based on more than 15,000 papers. Robust regression models are used to investigate the relationship between number of authors, number of affiliations, and number of countries, respectively, and citation impact, controlling for the papers' quality (measured by F1000Prime expert ratings). The results indicate that the effect of collaboration activities on impact is largely independent of the papers' quality. The citation advantage is apparently not quality related; citation-specific factors (e.g., self-citations) seem to be important here. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
7. In vitro burn model illustrating heat conduction patterns using compressed thermal papers.
- Author
-
Lee, Jun Yong, Jung, Sung-No, and Kwon, Ho
- Subjects
BURNS & scalds, HEAT, MATHEMATICAL models, STATISTICS, THEORY, INTER-observer reliability, IN vitro studies - Abstract
To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. It was composed of tightly compressed thermal papers held in compression frames. Heat flow through the model left a trace by changing the color of the thermal papers. These were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. We validated our model by applying contact burns of 5, 10, 15, 20, and 30 seconds to porcine skin and to our burn model using a heated brass comb, then comparing the burn wounds with the heat conduction traces. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846, p < 0.001) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93, p < 0.001) showed statistically significant high correlations between the porcine burn wound and our model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. [ABSTRACT FROM AUTHOR]
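For reference, the relations behind the phrase "Fourier's equation" are standard heat-transfer physics; the abstract does not state the exact form used, so the following is the textbook statement rather than the paper's own derivation:

```latex
% Fourier's law: heat flux q is proportional to the temperature gradient,
% with thermal conductivity k.
\[
  \mathbf{q} = -k\,\nabla T
\]
% Combined with energy conservation this yields the heat equation, which
% governs how far heat conducts into tissue over a given contact time:
\[
  \frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T,
  \qquad \alpha = \frac{k}{\rho c_{p}}
\]
```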
- Published
- 2015
- Full Text
- View/download PDF
8. Statistical validation of a global model for the distribution of the ultimate number of citations accrued by papers published in a scientific journal.
- Author
-
Stringer, Michael J., Sales-Pardo, Marta, and Amaral, Luís A. Nunes
- Subjects
- *BIBLIOMETRICS, *BIBLIOGRAPHICAL citations, *SCIENTIFIC literature, *BIBLIOGRAPHICAL citation searching, *LOGNORMAL distribution, *SCIENCE publishing, *INFORMATION science, *ONLINE databases, *ELECTRONIC information resource searching - Abstract
A central issue in evaluative bibliometrics is the characterization of the citation distribution of papers in the scientific literature. Here, we perform a large-scale empirical analysis of journals from every field in Thomson Reuters' Web of Science database. We find that only 30 of the 2,184 journals have citation distributions that are inconsistent with a discrete lognormal distribution at the rejection threshold that controls the false discovery rate at 0.05. We find that large, multidisciplinary journals are over-represented in this set of 30 journals, leading us to conclude that, within a discipline, citation distributions are lognormal. Our results strongly suggest that the discrete lognormal distribution is a globally accurate model for the distribution of “eventual impact” of scientific papers published in a single-discipline journal in a single year that is sufficiently removed from the present date. [ABSTRACT FROM AUTHOR]
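As a sketch of the model class being validated (the paper's exact parameterization may differ), a discrete lognormal assigns to an ultimate citation count n a probability proportional to a lognormal density evaluated at a shifted count, with journal-specific location and scale:

```latex
% Discrete lognormal model for ultimate citation counts n = 0, 1, 2, ...
% mu and sigma are fitted per journal; the +1 shift accommodates n = 0.
\[
  P(n) \;\propto\; \frac{1}{n+1}
  \exp\!\left( -\,\frac{\bigl(\ln(n+1) - \mu\bigr)^{2}}{2\sigma^{2}} \right)
\]
```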
- Published
- 2010
- Full Text
- View/download PDF
9. All that glisters... How to assess the ‘value’ of a scientific paper.
- Author
-
Pandit, J. J. and Yentis, S. M.
- Subjects
- *PHYSICIANS, *REPORT writing, *MEDICAL publishing, *STATISTICS, MEDICAL literature reviews, EDITORIALS - Abstract
The article discusses the lack of skills among doctors needed for evaluating and interpreting scientific papers. The purpose of a scientific publication, of which there are numerous types, is to communicate information. Editorials are summary or personal views, perhaps commenting on a specific paper. Reviews are longer, in-depth analyses of the literature. There are generally two aspects to excellence in an experimental study: first, the study's conduct, and second, its presentation. Particular attention should be given to the avoidance of surrogate measures, appropriate measurement tools, appropriate use of control groups, and appropriate application and interpretation of statistics.
- Published
- 2005
- Full Text
- View/download PDF
10. Guest editorial: the 2001 UK census: remarkable resource or bygone legacy of the ‘pencil and paper era’?
- Author
-
Boyle, Paul and Dorling, Danny
- Subjects
- *CENSUS, *HUMAN rights, *HUMAN rights violations, *DEMOGRAPHIC surveys, *POPULATION, *STATISTICS - Abstract
National censuses are expensive. They are conducted infrequently. They collect information that some feel infringes their human rights, and people are required by law to complete them. The outputs are not perfect, and in some situations may be misleading. Some suggest that censuses hark back to a period when regularly collected administrative data were not available. These are some of the views held about national censuses. Why, then, would others argue that they are an essential resource? In this paper, we consider some of the pros and cons of conducting national censuses, before introducing a series of papers that draw on early data available from the 2001 UK census. We argue that these papers, and the wealth of research that will be conducted in the future with 2001 census data, make a strong case for supporting the compulsory collection of personal information about the ‘entire’ population every ten years. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
11. METHODOLOGICAL ISSUES IN NURSING RESEARCH: Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.
- Author
-
Williamson, Graham R.
- Subjects
- *NURSING research, *STATISTICS, *STATISTICAL sampling, *PROBABILITY theory - Abstract
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The results of a systematic review of research papers published in the Journal of Advanced Nursing are then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by the author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples. [ABSTRACT FROM AUTHOR]
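The randomization tests proposed as an alternative can be illustrated with a short sketch; the scores and function name below are invented for illustration, not drawn from the review:

```python
import numpy as np

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sample permutation (randomization) test on the difference in means.

    The p-value is derived from random reassignments of group labels within
    the observed data, so it does not assume the data were randomly sampled
    from a wider population -- the assumption convenience samples break.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = np.mean(group_a) - np.mean(group_b)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_permutations

# Invented example scores from two convenience-sampled groups:
a = np.array([4.1, 5.3, 6.0, 5.5, 4.8])
b = np.array([3.9, 4.2, 5.1, 4.4, 4.0])
print(permutation_test(a, b))
```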
- Published
- 2003
- Full Text
- View/download PDF
12. Uncertainty propagation in matrix population models: Gaps, importance and guidelines.
- Author
-
Simmonds, Emily G. and Jones, Owen R.
- Subjects
TRANSIENTS (Dynamics), POPULATION forecasting, LIFE history theory, POPULATION dynamics, STATISTICS, POPULATION viability analysis - Abstract
Matrix population models (MPMs), which describe the demographic behaviour of a population based on age or stage through discrete time, are popular in ecology, evolutionary biology and conservation biology. MPMs provide a tool for guiding management decisions and can give insight into life history trade‐offs, patterns of senescence, transient dynamics and population trajectories. These models are parameterised with estimates of demographic rates (e.g. survival and reproduction) and can have multiple layers of underlying statistical analyses, all of which introduce uncertainty. For accurate and transparent results, this uncertainty should be propagated through to quantities derived from the MPMs, such as population growth rates (λ). However, full propagation is not always achieved, leading to omitted uncertainty and negative consequences for the reliability of inferences drawn. We summarised the contemporary standards regarding demographic rate uncertainty reporting and propagation, by reviewing papers using MPMs from 2010 to 2019. We then used reported uncertainties as the basis for a simulation study to explore the impact of uncertainty omission on inferences drawn from the analysis of MPMs. We simulated four scenarios of demographic rate propagation and evaluated their impact on population growth rate estimates. Although around 78% of MPM papers report some kind of uncertainty in their findings, only half of those report uncertainty in all aspects. Additionally, only 31% of papers fully propagate uncertainty through to derived quantities. Our simulations demonstrate that, even with moderate levels of uncertainty, incomplete propagation introduces bias. Omitting uncertainty may substantially alter conclusions, particularly for results showing small changes in population size. Biased conclusions were most common when uncertainty in the most influential demographic rates for population growth was omitted. We suggest comprehensive guidelines for reporting and propagating uncertainty in MPMs. Standardising methods and reporting will increase the reliability of MPMs and enhance the comparability of different models. These guidelines will improve the accuracy, transparency and reliability of population projections, increasing our confidence in results that can inform conservation efforts, ultimately contributing to biodiversity preservation. [ABSTRACT FROM AUTHOR]
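The propagation the authors call for can be made concrete with a minimal Monte Carlo sketch; the 2x2 life cycle, distributions and parameter values below are invented, not taken from the review:

```python
import numpy as np

def sample_lambda(n_draws=10_000, seed=1):
    """Propagate vital-rate uncertainty through to the population growth rate.

    Each draw samples juvenile survival, adult survival and fecundity from
    distributions standing in for estimation error, rebuilds the projection
    matrix, and recomputes lambda as its dominant eigenvalue.
    """
    rng = np.random.default_rng(seed)
    s_j = rng.beta(a=30, b=70, size=n_draws)                      # mean ~ 0.3
    s_a = rng.beta(a=80, b=20, size=n_draws)                      # mean ~ 0.8
    f = rng.lognormal(mean=np.log(1.5), sigma=0.2, size=n_draws)  # fecundity
    lambdas = np.empty(n_draws)
    for i in range(n_draws):
        A = np.array([[0.0,    f[i]],
                      [s_j[i], s_a[i]]])
        lambdas[i] = np.max(np.real(np.linalg.eigvals(A)))
    return lambdas

lam = sample_lambda()
# Report lambda as an uncertainty interval rather than a point estimate.
print(np.percentile(lam, [2.5, 50, 97.5]))
```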
- Published
- 2024
- Full Text
- View/download PDF
13. Power-law link strength distribution in paper cocitation networks.
- Author
-
Zhao, Star X. and Ye, Fred Y.
- Subjects
- *INFORMATION science, *RESEARCH funding, *STATISTICS, *DATA analysis, *CITATION analysis - Abstract
A network is constructed by nodes and links, thus the node degree and the link strength appear as underlying quantities in network analysis. While the power-law distribution of node degrees is verified as a basic feature of numerous real networks, we investigate whether the link strengths follow the power-law distribution in weighted networks. After testing 12 different paper cocitation networks with 2 methods, fitting in double-log scales and the Kolmogorov-Smirnov test (K-S test), we observe that, in most cases, the link strengths also follow the approximate power-law distribution. The results suggest that the power-law type distribution could emerge not only in nodes and informational entities, but also in links and informational connections. [ABSTRACT FROM AUTHOR]
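The first of the two methods mentioned, fitting in double-log scales, amounts to a linear fit of log-frequency against log-strength; here is a rough sketch with synthetic data (the Zipf draw is an assumption, and a rigorous analysis would add the K-S test as the authors do):

```python
import numpy as np

# Synthetic link strengths standing in for cocitation counts.
rng = np.random.default_rng(0)
strengths = rng.zipf(a=2.5, size=5000)

# Empirical frequency of each observed strength value.
values, counts = np.unique(strengths, return_counts=True)
freq = counts / counts.sum()

# A straight line relating log10(frequency) to log10(strength) indicates
# an approximate power law; the negated slope estimates the exponent.
slope, intercept = np.polyfit(np.log10(values), np.log10(freq), deg=1)
print(f"estimated power-law exponent: {-slope:.2f}")
```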
- Published
- 2013
- Full Text
- View/download PDF
14. The Ultimate Irrelevance Proposition in Finance?
- Author
-
Karolyi, G. Andrew
- Subjects
TRENDS, STATISTICS, ECONOMIC research, DEMOGRAPHIC surveys, SOCIAL participation, FINANCE departments, MEASURE theory - Abstract
I survey 457 published papers in top finance journals across two decades to assess whether these papers misuse tests of significance. More than 80% of published studies are diligent about distinguishing between statistical and economic significance and about quantifying and interpreting the economic magnitudes of the statistical relationships they measure. Yet, only 10% of these acknowledge limits to the power of their tests, and even fewer do anything about them. Recent demographic trends in publishing, such as larger co-author teams and increased participation by non-North American scholars, women, and those outside the top finance departments, are not associated with these outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
15. DISCUSSION ON THE PAPER BY STEELE.
- Author
-
HAYNES, MICHELE
- Subjects
- *SOCIAL surveys, *STATISTICS, *SOCIAL processes - Abstract
The article presents Michele Haynes' discussion of Fiona Steele's paper. It indicates how Steele presented a class of multilevel discrete-time event history models that address some of the issues peculiar to the analysis of social process data, and provided a thorough overview of how discrete-time event history models can be used to study the length of time before an individual experiences an event or a transition in state. It notes how Steele gave a comprehensive review of a range of multilevel, multiprocess models for analysing panel surveys.
- Published
- 2011
- Full Text
- View/download PDF
16. The uptake of Bayesian methods in biomedical meta‐analyses: A scoping review (2005–2016).
- Author
-
Grant, Robert L
- Subjects
BIOMEDICAL engineering, EVIDENCE-based medicine, MEDICAL periodicals, BAYESIAN analysis, META-analysis - Abstract
Aim: Bayesian statistical methods can allow for more complete and accurate incorporation of evidence in meta‐analyses. However, these methods remain under‐utilized. Methods: A scoping review was conducted to examine the proportion of biomedical meta‐analyses that used Bayesian methods in the period 2005–2016. The review also examined the reproducibility of the work, the cited sources, the reasons given for choosing Bayesian methods, their success or failure, the type of model and prior distributions, and whether a mixture of Bayesian and frequentist methods was employed. Results: We found that 1% of meta‐analyses are Bayesian and that the reporting and conduct of these were often poor. Data were published in 41% of analyses, and programs to run the analysis in 18%. Network meta‐analysis was the most common reason for adopting Bayesian methods and became increasingly popular in recent years. In the majority of papers, models and distributions were either not reported or explained in such brief and ambiguous terms as to be uninformative. Conclusions: More use needs to be made of Bayesian meta‐analysis, and reporting needs to be improved. Greater awareness of these methods and access to training in them is essential. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
17. The emergence of problem structuring methods, 1950s–1989: An atlas of the journal literature.
- Author
-
Georgiou, Ion and Heck, Joaquim
- Subjects
STATISTICS, STRATEGIC planning, BIBLIOGRAPHIC databases, SERIAL publications, MATHEMATICAL models, RESEARCH methodology, INFORMATION resources management, CITATION analysis, THEORY, SYSTEM analysis, DATA analysis, ELECTRONIC publications, LITERATURE - Abstract
Researchers need maps to effectively navigate increasingly voluminous literatures. This is no less the case in the field of problem structuring methods (PSMs). This paper offers an atlas of the journal literature on the theoretical development of what are currently acknowledged to be the four main PSMs, up to their consolidation in 1989. A thorough contextual appreciation of the structure and dynamics of this literature sets the stage for addressing some of its specific aspects, for which an atlas is especially effective as an orientation device. Substantiated suggestions for exploratory excursions, as well as potential pitfalls, are accentuated, the overall aim being to provide researchers with navigational support that may assist their research objectives. Based on evidence uncovered from the atlas, a number of issues current in the PSM field are discussed, including the use of the collective descriptor 'family', the extent to which PSMs find their origins, and belong, in the wider field of operational research, and the identification of sources that have hitherto received little or no acknowledgment but which merit attention as precursors and promising contributors to PSM research. The paper is accompanied by an electronic supplement containing the basic data of the atlas, from which additional maps may be designed and constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
18. How to analyze percentile citation impact data meaningfully in bibliometrics: The statistical analysis of distributions, percentile rank classes, and top-cited papers.
- Author
-
Bornmann, Lutz
- Subjects
- *BIBLIOMETRICS, *SERIAL publications, *STATISTICS, *LOGISTIC regression analysis, *DATA analysis, *CITATION analysis, *DATA analysis software - Abstract
According to current research in bibliometrics, percentiles (or percentile rank classes) are the most suitable method for normalizing the citation counts of individual publications in terms of the subject area, the document type, and the publication year. Up to now, bibliometric research has concerned itself primarily with the calculation of percentiles. This study suggests how percentiles (and percentile rank classes) can be analyzed meaningfully for an evaluation study. Publication sets from four universities are compared with each other to provide sample data. These suggestions take into account, on the one hand, the distribution of percentiles over the publications in the sets (universities here) and, on the other hand, concentrate on the range of publications with the highest citation impact, that is, the range that is usually of most interest in the evaluation of scientific performance. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
19. Allied health professional research engagement and impact on healthcare performance: A systematic review protocol.
- Author
-
Chalmers, Sophie, Hill, James, Connell, Louise, Ackerley, Suzanne J., Kulkarni, Amit Arun, and Roddam, Hazel
- Subjects
RESEARCH, MEDICAL quality control, PSYCHOLOGY information storage & retrieval systems, STATISTICS, MEDICAL information storage & retrieval systems, SYSTEMATIC reviews, JOB involvement, INTER-observer reliability, JOB performance, MEDLINE, THEMATIC analysis, STATISTICAL sampling, ALLIED health personnel - Abstract
Background: Existing evidence suggests that clinician and organization engagement in research can improve healthcare processes of care and outcomes. However, current evidence has considered the relationship across all healthcare professions collectively. With the increase in allied health clinical academic and research activity, it is imperative for healthcare organizations, leaders and managers to understand engagement in research within these specific clinical fields. This systematic review aims to identify the effect of engagement in research by allied health professionals (AHPs) and organizations on healthcare performance. Methods: This systematic review has a two‐stage search strategy. The first stage will be to screen a previous systematic review examining the effectiveness of engagement in research in health and social care to identify relevant papers published pre‐2012. The search strategy used in the previous review will then be rerun, but with a specific focus on allied health. This multi‐database search will identify publications from 2012 to date. Only studies that assessed the effectiveness of allied health engagement in research will be included. All stages of the review will be conducted by two reviewers independently, plus documented discussions with the wider research team when discrepancies occur. This systematic review protocol follows the EQUATOR reporting guidelines of the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses for Protocols (PRISMA‐P). Discussion: The findings of this review will make a significant contribution to the evidence base around the effect of allied health engagement in research on healthcare performance. It will provide insights for clinicians and managers looking to understand the consequences of developing AHP research capability and capacity. The findings of this review will also aim to make recommendations for future evaluation approaches for engagement in research interventions. Trial registration: This systematic review protocol has been registered with PROSPERO, registration number CRD42021253461. What this paper adds: What is already known on the subject: This study will provide valuable evidence for professionals and policymakers seeking to understand engagement in research in the allied health disciplines. Where supported by the data, there may be recommendations for future research regarding specific variables to be considered when planning and evaluating engagement in research in allied health practice. What this paper adds to existing knowledge: A previous systematic review identified a positive association between clinician and organization engagement in research and improved processes of care and health outcomes. The reviews' findings have been used as a justification for clinicians and organizations to increase research capacity. That review evaluated literature published before 2012 and the studies that were identified predominantly reported on engagement in research by medics and nurses. An updated review is now required to include research published since 2012. This review will specifically focus on the effect of engagement in research within allied health disciplines. What are the potential or actual clinical implications of this work?: Research activity among AHPs is gaining momentum. Given this growth in AHP research activity and the rise in dedicated clinical academic roles, a contemporary review to identify the specific effect of AHP engagement in research on healthcare performance is prudent. 
The findings will inform clinicians, clinical managers and leaders of the potential impact of research activities by AHP clinicians and organizations. This will support the planning and development of initiatives focused on research capacity, capability and culture within allied health. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. How much does the expected number of citations for a publication change if it contains the address of a specific scientific institute? A new approach for the analysis of citation data on the institutional level based on regression models.
- Author
-
Bornmann, Lutz
- Subjects
SCIENCE associations, CONFIDENCE intervals, REGRESSION analysis, STATISTICS, DATA analysis, PERIODICAL articles, CITATION analysis, IMPACT factor (Citation analysis), STATISTICAL models - Abstract
Citation data for institutes are generally provided as numbers of citations or as relative citation rates (as, for example, in the Leiden Ranking). These numbers can then be compared between the institutes. This study aims to present a new approach for the evaluation of citation data at the institutional level, based on regression models. As example data, the study includes all articles and reviews from the Web of Science for the publication year 2003 (n = 886,416 papers). The study is based on an in-house database of the Max Planck Society. The study investigates how much the expected number of citations for a publication changes if it contains the address of an institute. The calculation of the expected values allows, on the one hand, investigating how the citation impact of the papers of an institute appears in comparison with the total of all papers. On the other hand, the expected values for several institutes can be compared with one another or with a set of randomly selected publications. Besides the institutes, the regression models include factors which can be assumed to have a general influence on citation counts (e.g., the number of authors). [ABSTRACT FROM AUTHOR]
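Because citation counts are overdispersed count data, a regression of the kind described is often a negative binomial with an institute dummy; the following is a minimal sketch of that idea, not Bornmann's actual specification, and all variable names and values are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "institute": rng.integers(0, 2, size=n),   # 1 = paper carries the address
    "n_authors": rng.integers(1, 10, size=n),  # a general citation predictor
})
# Toy outcome generated with a known institute effect, for demonstration.
mu = np.exp(1.0 + 0.3 * df["institute"] + 0.1 * df["n_authors"])
df["citations"] = rng.poisson(mu)

model = smf.glm("citations ~ institute + n_authors", data=df,
                family=sm.families.NegativeBinomial()).fit()
# exp(coefficient) = multiplicative change in the expected citation count
# when a publication contains the institute's address.
print(np.exp(model.params["institute"]))
```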
- Published
- 2016
- Full Text
- View/download PDF
21. Inconsistencies of recently proposed citation impact indicators and how to avoid them.
- Author
-
Schreiber, Michael
- Subjects
SCIENTISTS, STATISTICS, DATA analysis, CITATION analysis - Abstract
It is shown that, under certain circumstances, in particular for small data sets, the recently proposed citation impact indicators I3(6PR) and R(6, k) behave inconsistently when additional papers or citations are taken into consideration. Three simple examples are presented, in which the indicators fluctuate strongly and the ranking of scientists in the evaluated group is sometimes completely mixed up by minor changes in the database. The erratic behavior is traced to the specific way in which weights are attributed to the six percentile rank classes, specifically for tied papers. For 100 percentile rank classes, the effects will be less serious. For the six classes, it is demonstrated that a different way of assigning weights avoids these problems, although the nonlinearity of the weights for the different percentile rank classes can still lead to (much less frequent) changes in the ranking. This behavior is not undesired, because it can be used to correct for differences in citation behavior in different fields. Remaining deviations from the theoretical value R(6, k) = 1.91 can be avoided by a new scoring rule: fractional scoring. Previously proposed consistency criteria are amended by another property, strict independence, at which a performance indicator should aim. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
22. Statistical recommendations for papers submitted to Developmental Medicine & Child Neurology.
- Author
-
RIGBY, ALAN S.
- Subjects
- *DIAGNOSIS, *STATISTICS, *PERIODICALS, *STATISTICIANS, *CLINICAL trials - Abstract
The use of statistics in medical diagnoses and biomedical research may affect whether an individual lives or dies, and whether their health is protected or jeopardized. Because society depends on sound statistical practice, all practitioners of statistics, whatever their training or occupation, have social obligations to perform their work in a professional, competent, and ethical manner. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
23. Determinants of Citations to the Agricultural and Applied Economics Association Journals.
- Author
-
Hilmer, Christiana E. and Lusk, Jayson L.
- Subjects
AGRICULTURAL economics, PERIODICALS, PUBLISHING, MARKETING, STATISTICS - Abstract
This paper investigates citations to articles published in the Review of Agricultural Economics and in the American Journal of Agricultural Economics to better understand the impact of articles published in these journals and to evaluate recent policy decisions made by the Agricultural and Applied Economics Association. The biggest factors affecting non-self citations are self-citations and whether the article received at least one citation in the year after publication, suggesting that “advertising” and “signaling” play important roles in the extent to which a paper is cited. Principal papers and comments/replies are associated with significantly fewer citations for both journals. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
24. INVITED DISCUSSION OF THE PAPER BY DOMIJAN, JORGENSEN & REID.
- Subjects
- *NONLINEAR statistical models, *CASE studies, *STATISTICS, *MATHEMATICAL models, *PARAMETERS (Statistics) - Abstract
The article presents a discussion of "Semi-Mechanistic Modelling in Nonlinear Regression: A Case Study," by Katarina Domijan, Murray Jorgensen and Jeff Reid. The study raises an issue concerning the place of statistics in applied research, as well as detailing more technical, statistical matters. The authors present approaches to detect and remedy parameter identifiability issues in realistic but very complex nonlinear models.
- Published
- 2006
- Full Text
- View/download PDF
25. Invited Paper: SUGGESTIONS FOR PRESENTING THE RESULTS OF DATA ANALYSES.
- Author
-
Anderson, David R., Link, William A., Johnson, Douglas H., and Burnham, Kenneth P.
- Subjects
- *RESEARCH, *BAYESIAN analysis, *STATISTICAL hypothesis testing, *STATISTICS - Abstract
Gives suggestions for the presentation of research results from frequentist, information-theoretic and Bayesian analysis paradigms. How the methods offer alternative approaches to data analysis and inference compared to traditionally used methods; Guidelines on presentation of results using the alternative procedures; Recommendations on the reporting of the results of statistical tests of null hypotheses.
- Published
- 2001
- Full Text
- View/download PDF
26. Obituary: Sir David Cox.
- Subjects
STATISTICS, PROPORTIONAL hazards models - Abstract
His name has been attached to the Cox process, a stochastic process model he developed in a 1955 paper, and, most prominently, to the Cox model, a semi-parametric regression framework for identifying factors that influence the time to an event occurring. This paper, published 9 years after his PhD, and after publishing 44 more specialised papers and a book, perhaps represents David Cox's first work on purely theoretical statistics. Sir David Cox, who died on January 18, 2022, was arguably the most influential statistician of the latter half of the 20th Century. [Extracted from the article]
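For context, the Cox model mentioned here has a well-known form (stated from the general literature, not from this obituary):

```latex
% Cox proportional hazards model: the hazard at time t for covariates x is
% an unspecified baseline hazard h_0(t) -- the semi-parametric part --
% scaled by a log-linear covariate effect.
\[
  h(t \mid x) = h_{0}(t)\,\exp\bigl(\beta^{\top} x\bigr)
\]
% beta is estimated from the partial likelihood without modelling h_0.
```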
- Published
- 2022
- Full Text
- View/download PDF
27. Early Treatment and IL1RN Single‐Nucleotide Polymorphisms Affect Response to Anakinra in Systemic Juvenile Idiopathic Arthritis.
- Author
-
Pardeo, Manuela, Rossi, Marianna Nicoletta, Pires Marafon, Denise, Sacco, Emanuela, Bracaglia, Claudia, Passarelli, Chiara, Caiello, Ivan, Marucci, Giulia, Insalaco, Antonella, Perrone, Chiara, Tulone, Anna, Prencipe, Giusi, and De Benedetti, Fabrizio
- Subjects
STATISTICS, SEQUENCE analysis, PAPER chromatography, SINGLE nucleotide polymorphisms, MULTIVARIATE analysis, JUVENILE idiopathic arthritis, ANTIRHEUMATIC agents, TREATMENT effectiveness, GENE expression, GENOTYPES, DESCRIPTIVE statistics, HAPLOTYPES, MESSENGER RNA, DISEASE duration, POLYMERASE chain reaction, EARLY medical intervention - Abstract
Objective: To evaluate the impact of early treatment and IL1RN genetic variants on the response to anakinra in systemic juvenile idiopathic arthritis (JIA). Methods: Response to anakinra was defined as achievement of clinically inactive disease (CID) at 6 months without glucocorticoid treatment. Demographic, clinical, and laboratory characteristics of 56 patients were evaluated in univariate and multivariate analyses as predictors of response to treatment. Six single‐nucleotide polymorphisms (SNPs) in the IL1RN gene, previously demonstrated to be associated with a poor response to anakinra, were genotyped by quantitative polymerase chain reaction (qPCR) or Sanger sequencing. Haplotype mapping was performed with Haploview software. IL1RN messenger RNA (mRNA) expression in whole blood from patients, prior to anakinra treatment initiation, was assessed by qPCR. Results: After 6 months of anakinra treatment, 73.2% of patients met the criteria for CID without receiving glucocorticoids. In the univariate analysis, the variable most strongly related to the response was disease duration from onset to initiation of anakinra treatment, with an optimal cutoff at 3 months (area under the curve 84.1%). Patients who started anakinra treatment ≥3 months after disease onset had an 8‐fold higher risk of nonresponse at 6 months of treatment. We confirmed that the 6 IL1RN SNPs were inherited as a common haplotype. We found that homozygosity for ≥1 high‐expression SNP correlated with higher IL1RN mRNA levels and was associated with a 6‐fold higher risk of nonresponse, independent of disease duration. Conclusion: Our findings on patients with systemic JIA confirm the important role of early interleukin‐1 inhibition and suggest that genetic IL1RN variants predict nonresponse to therapy with anakinra. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
28. Iranians' contribution to world literature on neuroscience.
- Author
-
Ashrafi, Farzad, Mohammadhassanzadeh, Hafez, Shokraneh, Farhad, Valinejadi, Ali, Johari, Karim, Saemi, Nazanin, Zali, Alireza, Mohaghegh, Niloofar, and Ashayeri, Hassan
- Subjects
AUTHORS, BIBLIOMETRICS, DATABASE searching, INTERPROFESSIONAL relations, MEDICAL librarians, MEDICAL research, NEUROSCIENCES, RESEARCH funding, SERIAL publications, STATISTICS, DATA analysis software - Abstract
Objective: The purpose of this study is to analyse Iranian scientific publications in the neuroscience subfields by librarians and neuroscientists, using Science Citation Index Expanded (SCIE) via Web of Science data over the period 2002-2008. Methods: Data were retrieved from the SCIE. Data were collected from the 'subject area' of the database and classified by neuroscience experts into 14 subfields. To identify the citation patterns, we applied the 'impact factor' and the 'number of publications'. Data were also analysed using HISTCITE, Excel 2007 and SPSS. Results: Seven hundred and thirty-four papers were published by Iranians between 2002 and 2008. Findings showed a growing trend of neuroscience papers in the last 3 years, with most papers (264) classified in the neuropharmacology subfield. There were fewer papers in neurohistory, psychopharmacology and artificial intelligence. International contributions of authors were mostly in the neurology subfield, and the 'Collaboration Coefficient' for the neuroscience subfields in Iran was 0.686, which is acceptable. Most international collaboration between Iranians and developed countries was with the USA. Eighty-seven percent of the published papers were in journals with an impact factor between 0 and 4; 25% of papers were published by researchers affiliated to Tehran University of Medical Sciences. Conclusion: Progress of neuroscience in Iran is mostly seen in the neuropharmacology and neurology subfields. Other subfields should also be considered as a research priority by health policymakers. As this study was carried out through the collaboration of librarians and neuroscientists, it has proved valuable for both librarians and policymakers. This study may be encouraging for librarians from other developing countries. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
29. Trust management of services (TMoS): Investigating the current mechanisms.
- Author
-
Hayyolalam, Vahideh, Pourghebleh, Behrouz, and Pourhaji Kazem, Ali Asghar
- Subjects
TRUST, INTERNET of things, STATISTICS, CLOUD computing - Abstract
Generally, service interactions in dynamic environments comprising cloud computing, the Internet of Things (IoT), and the like occur in unspecified situations, making trust management a significant concern. The widespread usage of services in today's vastly high‐tech world reveals the rising necessity of trust management for services. In dynamic environments, users cannot be confident of recognizing trustworthy service providers. Hence, evaluating and managing the trustworthiness of services is a vital challenge for enabling customers to select trustworthy resources in dynamic environments. To the extent of our knowledge, in spite of the vital role of Trust Management of Services (TMoS), there is no thorough and systematic work in this scope with a specific focus on services. Thus, this research investigates the current methods of trust management for services published up to February 2020. We identified 68 papers, which were reduced to the 22 most qualified papers through the article selection process. We also examined and compared the selected papers in terms of their merits and demerits, considering the important parameters in this field. The investigation results point out that security, scalability, and dynamicity are essential to almost all research. Furthermore, privacy and reliability are the most essential parameters in the field of TMoS. Moreover, the statistical results and information can contribute to future works; we have also outlined some open issues and future work suggestions that could be an effective roadmap for researchers in this area. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Letter to the Editor: Authors' Response to Pitiphat et al.
- Author
-
Jervøe-Storm, Pia-Merete, AlAhdab, Hazem, Koltzscher, Max, Fimmers, Rolf, Jepsen, Søren, Ioannidou, E, Malekzadeh, T, and Dongari-Bagtzoglou, A
- Subjects
MICROBIOLOGY, BACTERIA, POLYMERASE chain reaction, PERIODONTITIS, STATISTICS, PATIENTS - Abstract
Background: The outcome of microbiological diagnostics may depend on the sampling technique. It was the aim of the present study to compare two widely used sampling techniques for subgingival bacteria using quantitative real-time polymerase chain reaction. Methods: Twenty patients with chronic periodontitis were randomized into two groups. In group A, samples were taken first with a paper point and then with a curet at the same site (single-rooted teeth with probing depth >5 mm) before scaling and root planing and after 10 weeks. The sampling sequence was reversed in group B. The analysis enabled the quantification of Actinobacillus actinomycetemcomitans, Fusobacterium nucleatum, Porphyromonas gingivalis, Prevotella intermedia, Treponema denticola, and Tannerella forsythensis and total bacterial counts (TBCs). Statistical analysis included t test, kappa, and Spearman correlation. Results: A higher TBC was harvested with curets than with paper points (P = 0.008). The plaque composition with regard to total target pathogens was similar for both sampling techniques. A strong positive correlation was found between curet and paper point samples for TBC and single target bacteria. Conclusions: Overall, there was relatively good agreement between the results of paper point and curet sampling. Thus, both techniques seem to be suitable for microbiological diagnostics. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
31. P‐4.5: A video data compression algorithm based on bit plane.
- Author
-
Chengyu, Wang and Limin, Yan
- Subjects
VIDEO compression, DATA compression, ALGORITHMS, IMAGE compression, RANGE of motion of joints, DATA transmission systems, STATISTICS - Abstract
High resolution and high refresh rates increase the pressure on data transmission, and video compression is usually used to solve this problem. However, traditional video compression methods cannot be performed in the bit plane, which increases the cost of storage area. For this reason, this paper first introduces a bit just noticeable difference (BJND) model based on the bit plane. By analyzing the relationship between the bit plane, frequency and eccentricity, the bit just noticeable difference threshold of a viewpoint is calculated. Finally, the just noticeable difference model applied to the bit plane is obtained. This paper then proposes a video compression scheme based on a bit motion estimation algorithm, which optimizes the search range of motion estimation into two small rhombuses in the time dimension and the gray‐scale dimension. According to the human visual system and probability statistical analysis, supplementary matching blocks are added to replace residual data, so that the compression ratio remains constant. The experimental results show that the proposed scheme has the best comprehensive effect on the compression of the lower five‐bit plane. The compression ratio is 1.385, the data transmission is constant, and there is no obvious difference between the restored image and the original image, which conforms to the intuitive perception of the human eye. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
32. Response to 'remarks on the paper by a. De Visscher, 'what does the g-index really measure?' '.
- Author
-
De Visscher, Alex
- Subjects
- *AUTHORSHIP, *SCIENCE, *SCIENTISTS, *STATISTICS, *DATA analysis, *CITATION analysis - Abstract
A response by Alex De Visscher to a letter to the editor about his article "What Does the G-Index Really Measure?" in a 2011 issue is presented.
- Published
- 2013
- Full Text
- View/download PDF
33. Discussion of the papers of Professors Pyke and McLeish.
- Author
-
Kulperger, R. J.
- Subjects
- *STATISTICS, *STOCHASTIC convergence, *MATHEMATICAL functions, *RANDOM variables, *MARKOV processes - Abstract
The article examines papers and lectures on statistical applications of methods of weak convergence presented in a Statistical Services Centre (SSC) session. One lecturer discussed two kinds of data, which are d-dimensional random variables and random variables indexed by Z^d. It adds that the so-called Theorem 2 of another lecturer and its method of proof can be used in statistical applications related to aggregate data resulting from irreducible positive recurrent Markov chains.
- Published
- 1984
- Full Text
- View/download PDF
34. The Jeffrey S. Tanaka Occasional Papers in Quantitative Methods for Personality.
- Subjects
- *PERSONALITY, *STATISTICS, *METHODOLOGY - Abstract
The article announces the series of papers to be published in the journal using a statistical and methodological approach to personality, commemorating the scholar Jeffrey S. Tanaka, including an article within the issue on this topic.
- Published
- 2019
- Full Text
- View/download PDF
35. A Multidimensional Investigation of the Effects of Publication Retraction on Scholarly Impact.
- Author
-
Shuai, Xin, Rollins, Jason, Moulinier, Isabelle, Custis, Tonya, Edmunds, Mathilda, and Schilder, Frank
- Subjects
EXPERIMENTAL design, FRAUD in science, SCHOLARLY method, SCIENCE, STATISTICS, BIBLIOGRAPHIC databases, MEDICAL coding, MANN Whitney U Test - Abstract
During the past few decades, the rate of publication retractions has increased dramatically in academia. In this study, we investigate retractions from a quantitative perspective, aiming to answer two fundamental questions. One, how do retractions influence the scholarly impact of retracted papers, authors, and institutions? Two, does this influence propagate to the wider academic community through scholarly associations? Specifically, we analyzed a set of retracted articles indexed in Thomson Reuters Web of Science (WoS), and ran multiple experiments to compare changes in scholarly impact against a control set of nonretracted articles, authors, and institutions. We further applied the Granger Causality test to investigate whether different scientific topics are dynamically affected by retracted papers occurring within those topics. Our results show two key findings: first, the scholarly impact of retracted papers and authors significantly decreases after retraction, and the most severe impact decrease correlates with retractions based on proven, purposeful scientific misconduct; second, this retraction penalty does not seem to spread through the broader scholarly social graph, but instead has a limited and localized effect. Our findings may provide useful insights for scholars or science committees to evaluate the scholarly value of papers, authors, or institutions related to retractions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
36. Comments on the three papers by the FDA/CDER research team on the regulatory perspective of the missing data problem.
- Author
-
Shih, Weichung Joe
- Subjects
EXPERIMENTAL design, RESEARCH, STATISTICS, DATA analysis - Abstract
This communication comments on the three papers by the FDA/CDER research team on the regulatory perspective of the missing data problem. The focus is on two topics: causal estimands and sensitivity analysis. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
37. Interrater Reliability and Convergent Validity of F1000Prime Peer Review.
- Author
-
Bornmann, Lutz
- Subjects
CONFIDENCE intervals, DATABASES, MEDICAL literature, PROFESSIONAL peer review, RESEARCH evaluation, STATISTICS, LOGISTIC regression analysis, BIBLIOGRAPHIC databases, INTER-observer reliability, PERIODICAL articles, CITATION analysis, IMPACT factor (Citation analysis), DATA analysis software, STATISTICAL models, INTRACLASS correlation - Abstract
Peer review is the backbone of modern science. F1000Prime is a postpublication peer review system of the biomedical literature (papers from medical and biological journals). This study is concerned with the interrater reliability and convergent validity of the peer recommendations formulated in the F1000Prime peer review system. The study is based on about 100,000 papers with recommendations from faculty members. Even if intersubjectivity plays a fundamental role in science, the analyses of the reliability of the F1000Prime peer review system show a rather low level of agreement between faculty members. This result is in agreement with most other studies that have been published on the journal peer review system. Logistic regression models are used to investigate the convergent validity of the F1000Prime peer review system. As the results show, the proportion of highly cited papers among those selected by the faculty members is significantly higher than expected. In addition, better recommendation scores are also associated with higher performing papers. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
38. Student diversity and e‐exam acceptance in higher education.
- Author
-
Froehlich, Laura, Sassenberg, Kai, Jonkmann, Kathrin, Scheiter, Katharina, and Stürmer, Stefan
- Subjects
STATISTICS, ANALYSIS of variance, CONFIDENCE intervals, COMPUTER assisted testing (Education), AGE distribution, CROSS-sectional method, SELF-evaluation, TIME, CULTURAL pluralism, HEALTH outcome assessment, SEX distribution, PRE-tests & post-tests, ACADEMIC achievement, EDUCATIONAL technology, QUESTIONNAIRES, FACTOR analysis, DESCRIPTIVE statistics, STUDENT attitudes, PSYCHOLOGICAL adaptation, SOCIODEMOGRAPHIC factors, ANXIETY, DATA analysis software - Abstract
Background: The use of e‐exams in higher education is increasing. However, the role of student diversity in the acceptance of e‐exams is an under‐researched topic. In the current study, we considered student diversity in terms of three sociodemographic characteristics (age, gender, and second language) and three dispositional student characteristics (computer anxiety, test anxiety, and technology openness). Objectives: The main objective of this study was to investigate the relationship between student diversity and acceptance of e‐exams. Methods: Our research combined cross‐sectional analyses (N = 1639) with data from a natural experiment on the introduction of e‐exams versus the established paper‐pencil exams (N = 626) and used both self‐report and institutional data. Sociodemographic and dispositional characteristics were indirectly related to pre‐exam acceptance via expectancy variables from the Technology Acceptance Model framework. Results and Conclusions: Comparisons of post‐exam acceptance showed that practical experience with the e‐exam led to a significant increase in e‐exam acceptance, and that students with low openness toward technology particularly benefited from this effect. Students' exam performance (i.e., grades) was unrelated to the exam format or their pre‐exam acceptance of the e‐exam format, and this was true across students' sociodemographic and dispositional characteristics. Takeaway: Student diversity plays a role in e‐exam acceptance, but its influence is mitigated by first‐hand experience with e‐exams. The practical implications for higher education institutions aiming to implement e‐exams are discussed. Lay Description: What is already known about this topic: The use of e‐exams in higher education is increasing. The role of student diversity for e‐exam acceptance is unclear. Technology acceptance is predicted by expectancies towards a new system. What this paper adds: We investigated students' sociodemographic and dispositional diversity. Diversity predicted e‐exam acceptance via the expectancies. In a natural experiment, first‐hand experience increased e‐exam acceptance. No difference was found between performance in e‐exams and paper‐pencil exams. Implications of the study findings for practitioners: Higher education institutions implementing e‐exams should consider diversity. Support is needed for older students and students with low technology openness. No student groups were systematically disadvantaged by e‐exam implementation. Practice rooms can increase experience with the new system before the exam. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
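The entry above describes sociodemographic and dispositional characteristics relating to e-exam acceptance indirectly via Technology Acceptance Model expectancies. A hedged sketch of one common way to estimate such an indirect effect (the product-of-coefficients approach); all variable names are hypothetical, not the study's instruments.

```python
# Hedged sketch of a simple product-of-coefficients mediation estimate:
# predictor -> mediator (TAM expectancy) -> outcome (e-exam acceptance).
# Column names are illustrative, not the study's actual measures.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eexam_survey.csv")  # hypothetical file

# Path a: predictor -> mediator
m_a = smf.ols("performance_expectancy ~ computer_anxiety", data=df).fit()
# Path b (plus direct effect c'): mediator and predictor -> outcome
m_b = smf.ols("acceptance ~ performance_expectancy + computer_anxiety", data=df).fit()

indirect = m_a.params["computer_anxiety"] * m_b.params["performance_expectancy"]
print(f"indirect effect (a*b): {indirect:.3f}")
```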
39. Discussion of 'Is designed data collection still relevant in the Big Data era?'.
- Author
-
King, Caleb and Jones, Bradley
- Subjects
ACQUISITION of data ,BIG data ,SOCIAL processes ,EXPERIMENTAL design - Abstract
Given the popularity of Big Data (BD), there can be an impression that fields such as design of experiments (DOE) are now irrelevant. We would like to thank the authors for starting the conversation about the possible relationship between these two fields. A key contribution of this paper is in showing how DOE principles, summarized under the name designed data collection (DDC), can be applied throughout the BD process. The name is apt, demonstrating that these principles apply not just to designed experiments but to any form of data collection. This is especially important for situations where designed experiments are either impossible (e.g., assessing how a country's economy may affect certain responses) or unethical (e.g., certain sensitive types of medical studies). It shows that DOE is more than a particular choice of design type; it is rather a methodology for approaching data collection, one that seeks to extract the most relevant information from the data while also taking into account the various nuances and constraints of physical and social processes, which are ever present even in massive datasets. The paper divides BD efforts into three general phases: Before BD, During BD, and After BD. We have therefore grouped our discussion accordingly, with general comments on the suggested contributions of DDC in each phase. We then close with some additional thoughts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
40. Don't judge a book by its cover, don't judge a study by its abstract. Common statistical errors seen in medical papers.
- Author
-
Choi, S. W. and Cheung, C. W.
- Subjects
STATISTICAL errors ,RESEARCH ,DATA analysis ,STATISTICAL reliability ,T-test (Statistics) ,CHI-squared test ,ABSTRACTING & indexing services ,ANESTHESIOLOGY ,NEWSLETTERS ,STATISTICS - Abstract
The article discusses common statistical errors seen in published scientific studies and review papers. It highlights two of the myriad problems identified in science reporting: the misuse of statistics and the overinterpretation of data. The statistical tests most often seen in the medical literature are also explored, including the t-test and the chi-square test.
- Published
- 2016
- Full Text
- View/download PDF
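As a concrete illustration of the two tests the entry above singles out, here is a hedged sketch of their textbook use: an unpaired t-test for a continuous outcome and a chi-square test for a 2x2 contingency table. All numbers are invented.

```python
# Hedged sketch of the two tests most often seen in medical papers.
from scipy import stats

# t-test: compare a continuous outcome (e.g., blood pressure) between two groups
group_a = [118, 122, 130, 125, 121, 128]
group_b = [131, 135, 129, 140, 133, 138]
t_stat, p_t = stats.ttest_ind(group_a, group_b)   # assumes roughly normal data

# Chi-square: compare proportions in a 2x2 table (e.g., complication yes/no by arm)
table = [[12, 38],    # treatment: events, non-events
         [25, 25]]    # control:   events, non-events
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t-test p={p_t:.3f}, chi-square p={p_chi:.3f}")
```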
41. Why do statistics journals have low impact factors?
- Author
-
van Nierop, Erjen
- Subjects
SEPARATION (Technology) ,PROPERTIES of matter ,SOLUTION (Chemistry) ,SEMICONDUCTOR doping ,SOLID solutions ,STATISTICS ,MATHEMATICS - Abstract
In this paper, we answer the question of why statistics journals have lower impact factors than journals in other disciplines. We analyze diffusion patterns of papers in several journals across various academic fields. To gain insight into the diffusion of citation counts, the data are analyzed with the Bass model, yielding time-to-peak values that can be used to compare the speed of citation diffusion across disciplines. Estimation results show that papers in statistics journals take significantly more years to reach their citation peak. To investigate diffusion further, we also compute the percentage of a paper's total citations received after 2 or 3 years. Again, statistics journals show slower citation diffusion than journals in other disciplines. We conclude with some suggestions to reduce this disparity. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
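A hedged sketch of the diffusion analysis the entry above describes: fit the Bass model to a paper's cumulative citation curve and read off the time-to-peak, which in closed form is t* = ln(q/p)/(p+q). The citation counts below are invented, and this is a generic illustration rather than the authors' estimation code.

```python
# Hedged sketch: fit a Bass diffusion curve to cumulative citations of a paper
# and compute the time-to-peak t* = ln(q/p) / (p + q). Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, M, p, q):
    """Cumulative Bass adoptions: M * (1 - exp(-(p+q)t)) / (1 + (q/p) exp(-(p+q)t))."""
    e = np.exp(-(p + q) * t)
    return M * (1 - e) / (1 + (q / p) * e)

years = np.arange(1, 11)
cum_cites = np.array([2, 7, 16, 30, 46, 60, 70, 76, 80, 82])  # illustrative counts

(M, p, q), _ = curve_fit(bass_cumulative, years, cum_cites,
                         p0=[90, 0.02, 0.5], maxfev=10_000)
t_peak = np.log(q / p) / (p + q)
print(f"estimated time-to-peak: {t_peak:.1f} years")
```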
42. Reported methodological quality of split-mouth studies.
- Author
-
Lesaffre, Emmanuel, Garcia Zattera, Maria-José, Redmond, Carol, Huber, Heidi, and Needleman, Ian
- Subjects
SYSTEMATIC reviews ,EVIDENCE-based medicine ,STATISTICS ,PERIODONTICS ,DENTISTRY - Abstract
Background/Aim: Hujoel & Moulton previously questioned the reported quality of split-mouth studies. Since then, there has been little enquiry into the methodology of this study design. The aim was to conduct a systematic review of the reported methodology of clinical studies using a split-mouth design published in dental journals over a 1-year period (2004). Material and Methods: An extension of the CONSORT guidelines for cluster-randomized designs was used to evaluate quality. We evaluated the methods used and the quality of reporting in split-mouth studies. Results: Thirty-four studies were eligible for this review. Many papers lacked essential qualities of good reporting: only five of 34 papers gave the rationale for choosing a split-mouth design, 19 of 34 (56%) used appropriate analytical statistical methods, and only one of 34 presented an appropriate sample size calculation. Of the five studies that used survival analysis, none used a paired approach. Conclusions: Despite some progress in statistical analysis, if the reporting of studies represents the actual methodology of the trials, this review has identified important aspects of split-mouth study design and analysis that would benefit from development. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
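The review above finds that many split-mouth studies fail to use paired analyses even though both treatments sit in the same mouth and the two sides are therefore correlated. A hedged sketch of the paired approach on invented data:

```python
# Hedged sketch: in a split-mouth design each patient receives both treatments,
# so the analysis must be paired. All measurements below are invented.
from scipy import stats

# e.g., pocket-depth reduction (mm) per patient, test side vs. control side
test_side    = [1.9, 2.4, 1.5, 2.8, 2.1, 1.7, 2.6, 2.2]
control_side = [1.4, 2.0, 1.6, 2.1, 1.8, 1.3, 2.2, 1.9]

t_stat, p_paired = stats.ttest_rel(test_side, control_side)  # paired t-test
w_stat, p_wilcox = stats.wilcoxon(test_side, control_side)   # nonparametric analogue

print(f"paired t-test p={p_paired:.3f}, Wilcoxon p={p_wilcox:.3f}")
```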
43. (Disrupting) Continuities between Eugenics and Statistics: A Critical Study of Regression Analysis.
- Author
-
Dodson, Samuel and Bartley, Jane
- Subjects
- *
EUGENICS , *REGRESSION analysis , *INFORMATION science students , *LIBRARY school students - Abstract
This paper critically examines the intertwined history of statistics and eugenics through the work of Francis Galton, whose statistical inventions were guided by his problematic belief in eugenics. The paper highlights the historical development of regression analysis, arguing that acknowledging the discriminatory origins of this method is crucial for understanding historical and contemporary injustices in data‐driven decision‐making. The paper also considers the ethical implications of other statistical techniques, emphasizing the need for library and information science (LIS) students and practitioners to be aware of the societal implications of "objective" data analysis methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Community nestedness and the proper way to assess statistical significance by Monte-Carlo tests: some comments on Worthen and Rohde's (1996) paper.
- Author
-
Hugueny, B. and Guegan, J.-F.
- Subjects
- *
MATHEMATICAL analysis , *NITROGEN , *PARASITOLOGY , *STATISTICS , *ECOLOGY - Published
- 1997
- Full Text
- View/download PDF
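Entry 44 carries no abstract here, but its title concerns Monte-Carlo significance testing of community nestedness. As a generic, heavily hedged illustration of the approach (not the authors' specific null model or nestedness metric), one compares an observed statistic against a null distribution built by randomizing the presence-absence matrix:

```python
# Generic hedged sketch of a Monte-Carlo significance test: compare an observed
# matrix statistic to a null distribution from randomized matrices. The toy
# statistic and null model are deliberately simple, not the paper's choices.
import numpy as np

rng = np.random.default_rng(42)

def statistic(m):
    """Toy 'nestedness' score: mean pairwise species overlap between site rows."""
    n = m.shape[0]
    overlaps = [(m[i] & m[j]).sum() for i in range(n) for j in range(i + 1, n)]
    return np.mean(overlaps)

observed = rng.integers(0, 2, size=(10, 15))   # invented presence-absence matrix
obs_stat = statistic(observed)

# Null model: shuffle each row independently (preserves row totals only)
null_stats = []
for _ in range(999):
    null = np.array([rng.permutation(row) for row in observed])
    null_stats.append(statistic(null))

p_value = (1 + sum(s >= obs_stat for s in null_stats)) / (1 + len(null_stats))
print(f"Monte-Carlo p = {p_value:.3f}")
```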
45. The scaling relationship between citation-based performance and coauthorship patterns in natural sciences.
- Author
-
Ronda‐Pupo, Guillermo Armando and Katz, J. Sylvan
- Subjects
AUTHORSHIP ,INTERPROFESSIONAL relations ,PROBABILITY theory ,SCIENCE ,STATISTICS ,BIBLIOGRAPHIC databases ,DATA analysis ,CITATION analysis - Abstract
The aim of this paper is to extend our knowledge about the power-law relationship between citation-based performance and coauthorship patterns in papers in the natural sciences. We analyzed 829,924 articles that received 16,490,346 citations. Articles published through coauthorship account for 89%. The citation-based performance and coauthorship patterns exhibit a power-law correlation with a scaling exponent of 1.20 ± 0.07. Citations to a subfield's research articles tended to increase by a factor of 2^1.20 ≈ 2.30 each time the subfield doubled its number of coauthored papers. The scaling exponent for the power-law relationship for single-authored papers was 0.85 ± 0.11. Citations to a subfield's single-authored research articles increased by a factor of 2^0.85 ≈ 1.80 each time the research area doubled its number of single-authored papers. The Matthew Effect is stronger for coauthored papers than for single-authored ones; in fact, with a scaling exponent <1.0, the impact of single-authored papers exhibits a cumulative disadvantage, or inverse Matthew Effect. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
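A hedged sketch of estimating a scaling exponent like the 1.20 ± 0.07 reported above: a power law C = k·P^α becomes linear in log-log space, so ordinary least squares on log-transformed counts recovers α. The subfield counts below are invented.

```python
# Hedged sketch: estimate the scaling exponent alpha in citations ~ k * papers**alpha
# by ordinary least squares in log-log space. The counts are invented.
import numpy as np

papers    = np.array([120, 250, 480, 900, 1800, 3500, 7000])            # coauthored papers
citations = np.array([1.5e3, 3.4e3, 7.8e3, 1.6e4, 3.8e4, 8.5e4, 1.9e5])  # their citations

slope, intercept = np.polyfit(np.log(papers), np.log(citations), 1)
print(f"scaling exponent alpha = {slope:.2f}")
# Doubling the number of papers multiplies citations by 2**alpha:
print(f"citation multiplier per doubling = {2 ** slope:.2f}")
```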
46. Multidimensional Scaling of Varietal Data in Sedimentary Provenance Analysis.
- Author
-
Vermeesch, P., Lipp, A. G., Hatzenbühler, D., Caracciolo, L., and Chew, D.
- Subjects
MULTIDIMENSIONAL scaling ,PROVENANCE (Geology) ,COMPOSITION of sediments ,PRINCIPAL components analysis ,MINERAL properties ,SPHENE - Abstract
Varietal studies of sedimentary provenance use the properties of individual minerals or mineral groups. These are recorded as lists of numerical tables that can be difficult to interpret. Multidimensional Scaling (MDS) is a popular multivariate ordination technique for analyzing other types of provenance data based on, for example, detrital geochronology or petrography. Applying MDS to varietal data would allow them to be treated on an equal footing with those other provenance proxies. MDS requires a method to quantify the dissimilarity between two samples. This paper introduces three ways to do so. The first method ("treatment‐by‐row") turns lists of (compositional) data tables into lists of vectors, using principal component analysis. These lists of vectors can then be treated as "distributional" data and subjected to MDS analysis using dissimilarity measures such as the Kolmogorov‐Smirnov statistic. The second method ("treatment‐by‐column") turns lists of compositional data tables into multiple lists of vectors, each representing a single component of the varietal data. These multiple distributional data sets are subsequently subjected to Procrustes analysis or 3‐way MDS. The third method uses the Wasserstein‐2 distance to jointly compare the rows and columns of varietal data. This arguably makes the best use of the data but acts more like a "black box" than the other two methods. Applying the three methods to a detrital titanite data set from Colombia yields similar results. After converting varietal data to dissimilarity matrices, they can be combined with other types of provenance data, again using Procrustes analysis or 3‐way MDS. Plain Language Summary: The source of modern or ancient sediment can be determined by examining either the overall characteristics of the sediment or the chemical composition of individual sediment particles. With the help of recent analytical advancements, geologists can now analyze the complete chemical makeup of single grains of sand or silt. These types of data sets, known as "varietal" data sets, have the ability to uncover differences between sediments that are not visible through traditional methods. However, varietal data are incompatible with the statistical methods that geologists typically use to determine the origin of sediment. This paper addresses this issue by presenting three methods for quantifying the differences between varietal data sets, which is a crucial step in any further statistical analysis. Testing these methods on a varietal data set from Colombia shows similar outcomes. By using the techniques described in this paper, varietal data can now be combined with other conventional methods for determining sediment origin. Key Points: Varietal data are defined as lists of compositional tables. Given an appropriate dissimilarity measure, varietal data can be subjected to multidimensional scaling. This paper introduces three ways to quantify the pairwise dissimilarity of varietal data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
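A hedged sketch of the "treatment-by-row" idea from the entry above: reduce each sample to a single-variable distribution, build a pairwise Kolmogorov-Smirnov dissimilarity matrix, and ordinate it with metric MDS. This is a generic illustration on invented samples, not the authors' implementation.

```python
# Hedged sketch: pairwise Kolmogorov-Smirnov distances between per-sample
# distributions, ordinated with MDS. The samples are invented stand-ins for
# the per-grain values a varietal study would produce.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
samples = [rng.normal(loc=mu, scale=1.0, size=200) for mu in (0.0, 0.3, 1.5, 1.8)]

# Symmetric dissimilarity matrix of KS statistics
n = len(samples)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = ks_2samp(samples[i], samples[j]).statistic

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords)   # similar samples should plot close together
```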
47. Statistical Considerations in Otolaryngology Journals.
- Author
-
Blakley, Brian W. and Janzen, Bryan
- Abstract
Statistics can be intimidating for clinicians and reviewers. Statistics are often important and useful but can mislead. Elaborate statistics can support conclusions that contradict clinical experience. This article explores some statistically related insights. Statistical reasons for rejecting papers were collated, and the frequency and complexity of statistical tests in accepted, published papers in otolaryngology journals were then studied. Most statistical errors in papers are logical misinterpretations of information rather than a lack of understanding of statistics. Otolaryngology papers tend to employ relatively straightforward statistics that should be useful for clinicians. Although evidence-based medicine has changed medical publishing, clinical knowledge is more important than statistical knowledge for clinical applications of statistics. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
48. Impact of using interactive devices in Spanish early childhood education public schools.
- Author
-
Martín, Estefanía, Roldán‐Alvarez, David, Haya, Pablo A., Fernández‐Gaullés, Cristina, Guzmán, Cristian, and Quintanar, Hermelinda
- Subjects
ALTERNATIVE education evaluation ,ALTERNATIVE education ,COMPUTER assisted instruction ,CONCEPTUAL structures ,EDUCATIONAL technology ,INTELLECT ,SCIENTIFIC observation ,PORTABLE computers ,STATISTICS ,DATA analysis ,EMPIRICAL research ,TEACHING methods ,RANDOMIZED controlled trials ,PRE-tests & post-tests ,DESCRIPTIVE statistics - Abstract
The adoption of Information and Communication Technologies in early childhood education is crucial for adapting traditional classrooms to the digital era. Over time, young children are increasingly using touch screen technologies such as tablets at home and in early childhood settings. However, the literature shows a significant gap in knowledge about the use of this technology in early childhood education. Most researchers have focused on the pedagogical theory behind using touch screen devices, but there have been few empirical studies of how these technologies affect students' learning processes. This paper presents three learning experiences in which early childhood students perform educational activities using tablet computers, interactive whiteboards, and paper cards. The results show that students who used the technology were more motivated and achieved better results than those who used paper cards. Lay Description: What is already known about this topic: The use of ICT in childhood education offers new possibilities for teachers to provide new and more visual digital learning content to their students. Touch technologies seem suitable for young students: because their motor skills are not fully developed yet, interacting with computers through a mouse and a keyboard is more difficult than doing so through natural gestures. Many studies at higher education levels demonstrate that proper use of ICT in education can increase students' motivation and learning. What this paper adds: Three empirical studies in three different childhood education classrooms that shed light on how ICT can positively impact students' learning. A comparison of the benefits of interactive whiteboards and tablet computers relative to more traditional methodologies in childhood education. Insights into how hardware and software can be combined to provide students a suitable learning environment in childhood education classrooms. Implications for practice and/or policy: Teachers should consider how they will create the workgroups; it is highly advisable to create heterogeneous groups that take into account the students' skills, so that members of the same group can help each other, creating a richer learning scenario. Several teachers who participated in our studies said that they were unable to use ICT in their classrooms because they did not have sufficient digital competences; our studies and the results obtained led them to consider integrating technology in their classrooms, and the good results obtained by the students who worked with technology changed the teachers' perspective on the use of technology in the classroom. Students showed great interest in the use of tablet computers and interactive whiteboards, which translated into higher motivation compared with the students who solved the activities on paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
49. Counting methods, country rank changes, and counting inflation in the assessment of national research productivity and impact.
- Author
-
Huang, Mu-Hsuan, Lin, Chi-Shiou, and Chen, Dar-Zen
- Subjects
ANALYSIS of variance ,STATISTICAL correlation ,INTERPROFESSIONAL relations ,PHYSICS ,RESEARCH ,SERIAL publications ,STATISTICS ,DATA analysis ,CITATION analysis - Abstract
The counting of papers and citations is fundamental to the assessment of research productivity and impact. In an age of increasing scientific collaboration across national borders, the counting of papers produced by collaboration between multiple countries, and of citations to such papers, raises concerns in country-level research evaluation. In this study, we compared the number counts and country ranks resulting from five different counting methods, and we observed counting inflation that depended on the method used. Using the 1989 to 2008 physics papers indexed in ISI's Web of Science as our sample, we analyzed the counting results in terms of paper count (research productivity) as well as citation count and citation-paper ratio (CP ratio) based evaluation (research impact). The results show that at the country level, the selection of counting method had only a minor influence on the number counts and country rankings in each assessment. However, the influence of counting methods varied between paper count, citation count, and CP ratio based evaluation. The findings also suggest that the popular counting method (whole counting), which gives each collaborating country one full credit, may not be the best counting method. Straight counting, which credits only the first or the corresponding author, or fractional counting, which gives each collaborator partial, weighted credit, might be the better choice. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
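A hedged sketch of three of the counting methods compared above, applied to the country list of a single paper; the helper function and the author list are illustrative, not the study's code or data.

```python
# Hedged sketch of whole, straight, and fractional counting of country credit
# for one multi-country paper. The author list below is invented.
from collections import Counter

def count_paper(countries, method="whole"):
    """Credit per country for one paper. `countries` is ordered by author position."""
    credits = Counter()
    if method == "whole":          # every collaborating country gets full credit
        for c in set(countries):
            credits[c] += 1.0
    elif method == "straight":     # only the first (or corresponding) author's country
        credits[countries[0]] += 1.0
    elif method == "fractional":   # each author's country gets a weighted share
        for c in countries:
            credits[c] += 1.0 / len(countries)
    return credits

authors_countries = ["TW", "US", "TW", "DE"]   # invented 4-author paper
for m in ("whole", "straight", "fractional"):
    print(m, dict(count_paper(authors_countries, m)))
# whole -> TW, US, DE each 1.0; straight -> TW 1.0; fractional -> TW 0.5, US 0.25, DE 0.25
```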
50. Scientific research ability of specialist nurses in Guangxi Zhuang Autonomous Region, China: A cross‐sectional study.
- Author
-
Huang, Ziwei, Liu, Yuanfang, Lei, Yi, Wei, Yiping, Chen, Xiaomei, and Lan, Yuansong
- Subjects
NATIONAL competency-based educational tests ,STATISTICS ,CROSS-sectional method ,NURSING research ,CLINICAL competence ,QUESTIONNAIRES ,SCALE analysis (Psychology) ,DESCRIPTIVE statistics ,RESEARCH funding ,NURSE practitioners ,DATA analysis software ,LOGISTIC regression analysis - Abstract
Aim: This study investigated the scientific research ability of Chinese specialist nurses (SNs) in the Guangxi Zhuang Autonomous Region and its influencing factors. Design: A cross‐sectional design. Methods: A total of 652 SNs in the Guangxi Zhuang Autonomous Region were surveyed from March to October 2021. Nursing scientific research ability was measured using the Nursing Research Competence of Nurses Self‐evaluation Scale. Descriptive statistics, univariate analysis, and ordinal logistic regression analysis were used to evaluate factors affecting the scientific research ability of SNs. Results: The median scientific research ability score of SNs was 31 (interquartile range: 19–41). Approximately 74.8% of clinical specialist nurses had low scientific research ability. Educational background, working hospital level, being the first author of a published paper, and successful application for scientific research projects were identified as factors influencing the scientific research ability score. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
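A hedged sketch of the ordinal logistic regression step described above, using statsmodels' OrderedModel on invented data; the outcome coding and predictors are illustrative, not the study's variables.

```python
# Hedged sketch: ordinal logistic regression for an ordered outcome such as
# low / medium / high research ability. Data and codings are invented.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "ability": pd.Categorical(
        ["low", "low", "medium", "high", "medium",
         "low", "high", "medium", "medium", "high"],
        categories=["low", "medium", "high"], ordered=True),
    "masters_degree": [0, 0, 1, 1, 1, 0, 1, 0, 0, 1],   # hypothetical predictor
    "first_author":   [0, 0, 0, 1, 1, 0, 1, 1, 0, 1],   # hypothetical predictor
})

model = OrderedModel(df["ability"], df[["masters_degree", "first_author"]],
                     distr="logit").fit(method="bfgs", disp=False)
print(model.summary())   # positive coefficients indicate higher odds of higher ability
```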