16 results for "meta research"
Search Results
2. Reporting and misreporting of sex differences in the biological sciences
- Author
- Yesenia Garcia-Sifuentes and Donna L. Maney
- Subjects
Male, Female, Animals, Humans, sex differences, sex inclusion, Sex Characteristics, Sex Factors, Biomedical Research, Biological Science Disciplines, meta-research, methodological weakness, study design, statistics, Reproducibility of Results, National Institutes of Health (U.S.), Neuroscience
- Abstract
As part of an initiative to improve rigor and reproducibility in biomedical research, the U.S. National Institutes of Health now requires the consideration of sex as a biological variable in preclinical studies. This new policy has been interpreted by some as a call to compare males and females with each other. Researchers testing for sex differences may not be trained to do so, however, increasing risk for misinterpretation of results. Using a list of recently published articles curated by Woitowich et al. (eLife, 2020; 9:e56344), we examined reports of sex differences and non-differences across nine biological disciplines. Sex differences were claimed in the majority of the 147 articles we analyzed; however, statistical evidence supporting those differences was often missing. For example, when a sex-specific effect of a manipulation was claimed, authors usually had not tested statistically whether females and males responded differently. Thus, sex-specific effects may be over-reported. In contrast, we also encountered practices that could mask sex differences, such as pooling the sexes without first testing for a difference. Our findings support the need for continuing efforts to train researchers how to test for and report sex differences in order to promote rigor and reproducibility in biomedical research.
eLife digest: Biomedical research has a long history of including only men or male laboratory animals in studies. To address this disparity, the United States National Institutes of Health (NIH) rolled out a policy in 2016 called Sex as a Biological Variable (or SABV). The policy requires researchers funded by the NIH to include males and females in every experiment unless there is a strong justification not to, such as studies of ovarian cancer. Since then, the number of research papers including both sexes has continued to grow. Although the NIH does not require investigators to compare males and females, many researchers have interpreted the SABV policy as a call to do so. This has led to reports of sex differences that would otherwise have been unrecognized or ignored. However, researchers may not be trained on how best to test for sex differences in their data, and if the data are not analyzed appropriately this may lead to misleading interpretations. Here, Garcia-Sifuentes and Maney have examined the methods of 147 papers published in 2019 that included both males and females. They discovered that more than half of these studies had reported sex differences, but these claims were not always backed by statistical evidence. Indeed, in a large majority (more than 70%) of the papers describing differences in how males and females responded to a treatment, the impact of the treatment was not actually statistically compared between the sexes. This suggests that sex-specific effects may be over-reported. In contrast, Garcia-Sifuentes and Maney also encountered instances where an effect may have been masked due to data from males and females being pooled together without testing for a difference first. These findings reveal how easy it is to draw misleading conclusions from sex-based data. Garcia-Sifuentes and Maney hope their work raises awareness of this issue and encourages the development of more training materials for researchers.
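The central statistical point here lends itself to a short illustration. Below is a minimal sketch (ours, not the authors' code) of the correct test: a claim that a treatment affects one sex differently should rest on the sex-by-treatment interaction term of a two-way model, not on separate within-sex tests. The variable names and simulated data are assumptions for illustration only.

```python
# Hedged sketch: testing for a sex-specific treatment effect via the
# sex x treatment interaction, using simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # observations per sex x treatment cell
df = pd.DataFrame({
    "sex": np.repeat(["F", "M"], 2 * n),
    "treatment": np.tile(np.repeat(["control", "treated"], n), 2),
})
# Simulate a treatment effect that is, by construction, identical in both sexes
df["response"] = (df["treatment"] == "treated") * 1.0 + rng.normal(0, 1, len(df))

# Two-way ANOVA: the C(sex):C(treatment) row is the test that licenses
# any claim of a sex-specific effect.
model = smf.ols("response ~ C(sex) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Comparing separate within-sex p-values (the practice the authors flag) does not test whether the sexes respond differently; only the interaction term answers that question.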
- Published
- 2021
- Full Text
- View/download PDF
3. Sex difference analyses under scrutiny
- Author
- Colby J. Vorland
- Subjects
Male, Female, Humans, sex differences, sex inclusion, Sex Characteristics, Biomedical Research, meta-research, methodological weakness, study design, statistics, Research Design, Research Personnel, Neuroscience
- Published
- 2021
4. Weak evidence of country- and institution-related status bias in the peer review of abstracts
- Author
- Mathias Wullum Nielsen, Michael Bang Petersen, Jens Peter Andersen, Emer Brady, and Christine Friis Baker
- Subjects
Universities, Abstracting and Indexing, Surveys and Questionnaires, survey experiment, status bias, halo effect, Publication Bias, meta-research, Linear Models, Laboratory Personnel, Humans, Astronomy, Cardiology, Materials Science, Psychology, Public Health
- Abstract
Research suggests that scientists based at prestigious institutions receive more credit for their work than scientists based at less prestigious institutions, as do scientists working in certain countries. We examined the extent to which country- and institution-related status signals drive such differences in scientific recognition. In a preregistered survey experiment, we asked 4,147 scientists from six disciplines (astronomy, cardiology, materials science, political science, psychology and public health) to rate abstracts that varied on two factors: (i) author country (high status vs lower status in science); (ii) author institution (high status vs lower status university). We found only weak evidence of country- or institution-related status bias, and mixed regression models with discipline as a random-effect parameter indicated that any plausible bias not detected by our study must be small in size.
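For readers unfamiliar with the design, a hedged sketch of the analysis style the abstract names (a mixed regression with discipline as a random effect) might look like the following in Python's statsmodels. The column names, rating scale, and simulated data are our assumptions, not the authors' materials.

```python
# Hedged sketch: linear mixed model with a random intercept per discipline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
disciplines = ["astronomy", "cardiology", "materials science",
               "political science", "psychology", "public health"]
n = 600
df = pd.DataFrame({
    "discipline": rng.choice(disciplines, n),
    "high_status_country": rng.integers(0, 2, n),
    "high_status_institution": rng.integers(0, 2, n),
})
# Simulated abstract ratings: small discipline-level shifts, no true status effect
shift = dict(zip(disciplines, rng.normal(0, 0.3, len(disciplines))))
df["rating"] = 3.5 + df["discipline"].map(shift) + rng.normal(0, 1, n)

m = smf.mixedlm("rating ~ high_status_country + high_status_institution",
                data=df, groups=df["discipline"]).fit()
print(m.summary())  # fixed-effect estimates near zero, as simulated
```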
- Published
- 2021
- Full Text
- View/download PDF
5. A retrospective analysis of the peer review of more than 75,000 Marie Curie proposals between 2007 and 2018
- Author
- Ivan Buljan, Ana Marušić, Darko Hren, and David G. Pina
- Subjects
Marie Skłodowska-Curie Actions, research funding, grant evaluation, reviewer agreement, retrospective analysis, meta-research, European Union
- Abstract
Most funding agencies rely on peer review to evaluate grant applications and proposals, but research into the use of this process by funding agencies has been limited. Here we explore whether two changes to the organization of the peer review of proposals submitted to various funding actions of the European Union had an influence on the outcome of the peer review process. Based on an analysis of more than 75,000 applications to three actions of the Marie Curie programme over a period of 12 years, we find that the changes – a reduction in the number of evaluation criteria used by reviewers and a move from in-person to virtual meetings – had little impact on the outcome of the peer review process. Our results indicate that other factors, such as the type of grant or the area of research, have a larger impact on the outcome.
- Published
- 2021
6. Questionable research practices may have little effect on replicability
- Author
- Jeff Miller and Rolf Ulrich
- Subjects
replicability, false positives, p-hacking, base rate of true effects, mathematical modelling of research process, meta-research, Research Design, Biological Science Disciplines, Publications, Psychology
- Abstract
This article examines why many studies fail to replicate statistically significant published results. We address this issue within a general statistical framework that also allows us to include various questionable research practices (QRPs) that are thought to reduce replicability. The analyses indicate that the base rate of true effects is the major factor that determines the replication rate of scientific results. Specifically, for purely statistical reasons, replicability is low in research domains where true effects are rare (e.g., search for effective drugs in pharmacology). This point is under-appreciated in current scientific and media discussions of replicability, which often attribute poor replicability mainly to QRPs.
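The base-rate argument can be made concrete with a few lines of arithmetic. This back-of-the-envelope sketch (our illustration, with assumed values for alpha, power, and the base rates) uses Bayes' rule to compute the positive predictive value of a significant result and, from it, the expected success rate of an exact replication.

```python
# Expected replication rate as a function of the base rate of true effects.
alpha, power = 0.05, 0.80  # assumed significance threshold and study power

for base_rate in (0.01, 0.10, 0.50):
    # P(effect is real | original result was significant), via Bayes' rule
    ppv = (power * base_rate) / (power * base_rate + alpha * (1 - base_rate))
    # A replication is significant with prob. `power` if the effect is real
    # and prob. `alpha` if it is a false positive.
    p_replicate = ppv * power + (1 - ppv) * alpha
    print(f"base rate {base_rate:4.2f}: PPV = {ppv:.2f}, "
          f"expected replication rate = {p_replicate:.2f}")
```

With a base rate of 1%, the expected replication rate falls below 20% even with no questionable research practices at all, which is the authors' central point.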
- Published
- 2020
7. International authorship and collaboration across bioRxiv preprints
- Author
- Elizabeth M. Adamowicz, Ran Blekhman, and Richard J. Abdill
- Subjects
bioRxiv, preprints, Preprints as Topic, scientific publishing, scientific communication, bibliometrics, Authorship, Internationality, Biomedical Research, Research Personnel, meta-research, China, Computational and Systems Biology
- Abstract
As preprints become more integrated into the conventional avenues of scientific communication, it is critical to understand who is being included and who is not. However, little is known about which countries are participating in the phenomenon or how they collaborate with each other. Here, we present an analysis of 67,885 preprints posted to bioRxiv from 2013 through 2019 that includes the first comprehensive dataset of country-level affiliations for all preprint authors. We find the plurality of preprints (37%) come from the United States, more than three times as many as the next-most prolific country, the United Kingdom (10%). We find some countries are overrepresented on bioRxiv relative to their overall scientific output: The U.S. and U.K. are again at the top of the list, with other countries such as China, India and Russia showing much lower levels of bioRxiv adoption despite comparatively high numbers of scholarly publications. We describe a subset of “contributor countries” including Uganda, Croatia, Thailand, Greece and Kenya, which appear on preprints almost exclusively as part of international collaborations and seldom in the senior author position. Lastly, we find multiple journals that disproportionately favor preprints from some countries over others, a dynamic that almost always benefits manuscripts with a senior author affiliated with the United States.
- Published
- 2020
- Full Text
- View/download PDF
8. A 10-year follow-up study of sex inclusion in the biological sciences
- Author
- Nicole C. Woitowich, Annaliese K. Beery, and Teresa K. Woodruff
- Subjects
Male, Female, Humans, sex differences, sex bias, sex inclusion, Sex Distribution, meta-research, meta-analysis, bibliometrics, Biomedical Research, human biology, Publications, Follow-Up Studies
- Abstract
In 2016, to address the historical overrepresentation of male subjects in biomedical research, the US National Institutes of Health implemented a policy requiring investigators to consider sex as a biological variable. In order to assess the impact of this policy, we conducted a bibliometric analysis across nine biological disciplines for papers published in 34 journals in 2019, and compared our results with those of a similar study carried out by Beery and Zucker in 2009. There was a significant increase in the proportion of studies that included both sexes across all nine disciplines, but in eight of the disciplines there was no change in the proportion of studies that included data analyzed by sex. The majority of studies failed to provide a rationale for single-sex studies or the lack of sex-based analyses, and those that did relied on misconceptions surrounding the hormonal variability of females. Together, these data demonstrate that while sex-inclusive research practices are more commonplace, there are still gaps in the analysis and reporting of data by sex in many biological disciplines.
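As a footnote for readers who want to see the shape of such a comparison, here is a hedged sketch (with invented counts, not the study's data) of testing whether the proportion of studies including both sexes changed between two survey years.

```python
# Hypothetical 2x2 comparison of sex-inclusion proportions across years.
from scipy.stats import chi2_contingency

#        both sexes, single sex or unspecified
table = [[120, 180],   # made-up 2009 counts
         [200, 140]]   # made-up 2019 counts
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```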
- Published
- 2020
9. Dataset decay and the problem of sequential analyses on open datasets
- Author
- Jessey Wright, William Hedley Thompson, Russell A. Poldrack, and Patrick G. Bissett
- Subjects
open data, sequential testing, multiple comparisons, multiple comparison correction, statistical hypothesis testing, false positives, Datasets as Topic, Information Dissemination, meta-research, Computational and Systems Biology, Neuroscience
- Abstract
Open data allows researchers to explore pre-existing datasets in new ways. However, if many researchers reuse the same dataset, multiple statistical testing may increase false positives. Here we demonstrate that sequential hypothesis testing on the same dataset by multiple researchers can inflate error rates. We go on to discuss a number of correction procedures that can reduce the number of false positives, and the challenges associated with these correction procedures.
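The inflation the authors describe is easy to reproduce in simulation. The sketch below (our illustration, not the authors' code) has twenty "researchers" each test one new variable against the same outcome in a pure-noise open dataset; uncorrected, some test comes out significant in roughly two-thirds of datasets, while a Bonferroni-style correction across all sequential reuses restores the nominal 5% rate.

```python
# Simulating false-positive inflation from sequential reuse of one dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_datasets, n_tests, n = 2000, 20, 50
alpha = 0.05

uncorrected = corrected = 0
for _ in range(n_datasets):
    data = rng.normal(size=(n, n_tests + 1))  # pure-noise "open dataset"
    pvals = []
    for k in range(1, n_tests + 1):
        r = np.corrcoef(data[:, 0], data[:, k])[0, 1]
        t = r * np.sqrt((n - 2) / (1 - r ** 2))       # t statistic for r
        pvals.append(2 * stats.t.sf(abs(t), df=n - 2))
    uncorrected += min(pvals) < alpha            # any reuse reports a "finding"
    corrected += min(pvals) < alpha / n_tests    # Bonferroni over all reuses

print("family-wise false-positive rate, uncorrected:", uncorrected / n_datasets)
print("family-wise false-positive rate, Bonferroni:  ", corrected / n_datasets)
```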
- Published
- 2020
- Full Text
- View/download PDF
10. Reader engagement with medical content on Wikipedia
- Author
- Ryan M. Steinberg, John Willinsky, Tiziano Piccardi, and Lauren A. Maggio
- Subjects
Wikipedia, scholarly communication, bibliometrics, Internet, health professionals, students, meta-research, Humans
- Abstract
Articles on Wikipedia about health and medicine are maintained by WikiProject Medicine (WPM), and are widely used by health professionals, students and others. We have compared these articles, and reader engagement with them, to other articles on Wikipedia. We found that WPM articles are longer, possess a greater density of external links, and are visited more often than other articles on Wikipedia. Readers of WPM articles are more likely to hover over and view footnotes than other readers, but are less likely to visit the hyperlinked sources in these footnotes. Our findings suggest that WPM readers appear to use links to external sources to verify and authorize Wikipedia content, rather than to examine the sources themselves.
- Published
- 2020
11. A survey-based analysis of the academic job market
- Author
- Orsolya Symmons, Vikas Pejaver, Nafisa M. Jadavji, Natalie M. Niemi, Christopher T. Smith, Ariangela J. Kozik, Amanda Haage, Sarvenaz Sarabipour, Alex S. Holehouse, Jason D. Fernandes, and Alexandre W. Bisson Filho
- Subjects
careers in science, early-career researchers, research culture, tenure, job market, Faculty, Universities, Career Choice, Career Mobility, Job Application, Surveys and Questionnaires, scientific publishing, meta-research, Humans
- Abstract
Many postdoctoral researchers apply for faculty positions knowing relatively little about the hiring process or what is needed to secure a job offer. To address this lack of knowledge, we conducted a survey of applicants for faculty positions: the survey ran between May 2018 and May 2019, and received 317 responses. We analyzed the responses to explore the interplay between various scholarly metrics and hiring outcomes. We concluded that, above a certain threshold, the benchmarks traditionally used to measure research success – including funding, number of publications, and the journals published in – were unable to completely differentiate applicants with and without job offers. Respondents also reported that the hiring process was unnecessarily stressful, time-consuming, and lacking in feedback, irrespective of outcome. Our findings suggest that there is considerable scope to improve the transparency of the hiring process.
- Published
- 2019
12. A synthetic dataset primer for the biobehavioural sciences to promote reproducibility and hypothesis generation
- Author
- Daniel Quintana
- Subjects
Datasets as Topic, open data, data exploration, R package, statistics, Confidentiality, Disclosure, Information Dissemination, Biobehavioral Sciences, Biometry, meta-research
- Abstract
Open research data provide considerable scientific, societal, and economic benefits. However, disclosure risks can sometimes limit the sharing of open data, especially in datasets that include sensitive details or information from individuals with rare disorders. This article introduces the concept of synthetic datasets, which is an emerging method originally developed to permit the sharing of confidential census data. Synthetic datasets mimic real datasets by preserving their statistical properties and the relationships between variables. Importantly, this method also reduces disclosure risk to essentially nil as no record in the synthetic dataset represents a real individual. This practical guide with accompanying R script enables biobehavioural researchers to create synthetic datasets and assess their utility via the synthpop R package. By sharing synthetic datasets that mimic original datasets that could not otherwise be made open, researchers can ensure the reproducibility of their results and facilitate data exploration while maintaining participant privacy.
eLife digest: It is becoming increasingly common for scientists to share their data with other researchers. This makes it possible to independently verify reported results, which increases trust in research. Sometimes it is not possible to share certain datasets because they include sensitive information about individuals. In psychology and medicine, scientists have tried to remove identifying information from datasets before sharing them by, for example, adding minor artificial errors. But, even when researchers take these steps, it may still be possible to identify individuals, and the introduction of artificial errors can make it harder to verify the original results. One potential alternative to sharing sensitive data is to create ‘synthetic datasets’. Synthetic datasets mimic original datasets by maintaining the statistical properties of the data but without matching the original recorded values. Synthetic datasets are already being used, for example, to share confidential census data. However, this approach is rarely used in other areas of research. Now, Daniel S. Quintana demonstrates how synthetic datasets can be used in psychology and medicine. Three different datasets were studied to ensure that synthetic datasets performed well regardless of the type or size of the data. Quintana evaluated freely available software that could generate synthetic versions of these different datasets, which essentially removed any identifying information. The results obtained by analysing the synthetic datasets closely mimicked the original results. These tools could allow researchers to verify each other’s results more easily without jeopardizing the privacy of participants. This could encourage more collaboration, stimulate ideas for future research, and increase data sharing between research groups.
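The article's own tooling is the synthpop R package; purely as a language-neutral toy of the underlying idea (preserve the statistical properties, share no real records), the sketch below fits a mean vector and covariance matrix to a stand-in "sensitive" dataset and samples a synthetic replica. This parametric toy is far simpler than synthpop's sequential modelling, and the column names are invented.

```python
# Toy synthetic dataset: preserve means and covariances, share no real rows.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Stand-in for a sensitive dataset (columns are illustrative)
real = pd.DataFrame(
    rng.multivariate_normal(mean=[30, 120, 5],
                            cov=[[25, 10, 2], [10, 90, 5], [2, 5, 4]],
                            size=500),
    columns=["age", "systolic_bp", "symptom_score"],
)

mu, cov = real.mean().to_numpy(), real.cov().to_numpy()
synthetic = pd.DataFrame(rng.multivariate_normal(mu, cov, len(real)),
                         columns=real.columns)

# Summary statistics match closely; no synthetic row describes a real person.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```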
- Published
- 2019
13. Releasing a preprint is associated with more attention and citations for the peer-reviewed article
- Author
- Jacob J. Hughey and Darwin Y. Fu
- Subjects
preprints, Preprints as Topic, citations, scientific publishing, Impact Factor, Regression Analysis, Meta-Analysis as Topic, Periodicals as Topic, observational study, meta-research, Computational and Systems Biology
- Abstract
Preprints in biology are becoming more popular, but only a small fraction of the articles published in peer-reviewed journals have previously been released as preprints. To examine whether releasing a preprint on bioRxiv was associated with the attention and citations received by the corresponding peer-reviewed article, we assembled a dataset of 74,239 articles, 5,405 of which had a preprint, published in 39 journals. Using log-linear regression and random-effects meta-analysis, we found that articles with a preprint had, on average, a 49% higher Altmetric Attention Score and 36% more citations than articles without a preprint. These associations were independent of several other article- and author-level variables (such as scientific subfield and number of authors), and were unrelated to journal-level variables such as access model and Impact Factor. This observational study can help researchers and publishers make informed decisions about how to incorporate preprints into their work.
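A hedged sketch of the regression step the abstract names: regress log-transformed citations on a preprint indicator plus article-level covariates. The variable names and simulated effect are ours, not the authors' schema.

```python
# Log-linear regression of citations on a preprint indicator (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({
    "has_preprint": rng.integers(0, 2, n),
    "n_authors": rng.integers(1, 15, n),
})
# Simulate roughly a 36% citation advantage for preprinted articles
df["citations"] = rng.poisson(
    np.exp(1.0 + np.log(1.36) * df["has_preprint"] + 0.05 * df["n_authors"]))

m = smf.ols("np.log1p(citations) ~ has_preprint + n_authors", data=df).fit()
# exp(coefficient) - 1 approximates the percentage difference in citations
print(f"estimated citation boost: {np.exp(m.params['has_preprint']) - 1:.0%}")
```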
- Published
- 2019
14. Why we need to report more than 'Data were analyzed by t-tests or ANOVA'
- Author
- Stacey J. Winham, Tracey L. Weissgerber, Vesna D. Garovic, Natasa Milic, and Oscar Garcia-Valencia
- Subjects
t-test, Analysis of Variance, Statistics as Topic, statistical hypothesis testing, transparency, systematic review, Research Report, Research Design, meta-research, Humans
- Abstract
Transparent reporting is essential for the critical evaluation of studies. However, the reporting of statistical methods for studies in the biomedical sciences is often limited. This systematic review examines the quality of reporting for two statistical tests, t-tests and ANOVA, for papers published in a selection of physiology journals in June 2017. Of the 328 original research articles examined, 277 (84.5%) included an ANOVA or t-test or both. However, papers in our sample were routinely missing essential information about both types of tests: 213 papers (95% of the papers that used ANOVA) did not contain the information needed to determine what type of ANOVA was performed, and 26.7% of papers did not specify what post-hoc test was performed. Most papers also omitted the information needed to verify ANOVA results. Essential information about t-tests was also missing in many papers. We conclude by discussing measures that could be taken to improve the quality of reporting.
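To make the reporting standard concrete, here is a small sketch (our example, on simulated data) of the difference between "data were analyzed by t-tests" and a report that names the test variant, tails, statistic, degrees of freedom, p-value, and sample sizes.

```python
# Reporting a t-test completely rather than as "data were analyzed by t-tests".
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
control, treated = rng.normal(10, 2, 12), rng.normal(12, 2, 12)

# Welch's t-test (unequal variances): the variant used should be stated
res = stats.ttest_ind(treated, control, equal_var=False)
df = res.df  # Welch degrees of freedom (SciPy >= 1.10)

print(f"Two-sided Welch's t-test: t({df:.1f}) = {res.statistic:.2f}, "
      f"p = {res.pvalue:.3f}; n = {len(treated)} vs. {len(control)}")
```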
- Published
- 2018
15. Adequate statistical power in clinical trials is associated with the combination of a male first author and a female last author.
- Author
- Otte WM, Tijdink JK, Weerheim PL, Lamberink HJ, and Vinkers CH
- Subjects
- Female, Humans, Male, Physicians, Research Personnel, Research Report, Authorship, Clinical Trials as Topic, Publishing
- Abstract
Clinical trials have a vital role in ensuring the safety and efficacy of new treatments and interventions in medicine. A key characteristic of a clinical trial is its statistical power. Here we investigate whether the statistical power of a trial is related to the gender of first and last authors on the paper reporting the results of the trial. Based on an analysis of 31,873 clinical trials published between 1974 and 2017, we find that adequate statistical power was most often present in clinical trials with a male first author and a female last author (20.6%, 95% confidence interval 19.4-21.8%), and that this figure was significantly higher than the percentage for other gender combinations (12.5-13.5%; P<0.0001). The absolute number of female authors in clinical trials gradually increased over time, with the percentage of female last authors rising from 20.7% (1975-85) to 28.5% (after 2005). Our results demonstrate the importance of gender diversity in research collaborations and emphasize the need to increase the number of women in senior positions in medicine.
Competing interests: WO, JT, PW, HL, CV declare no competing interests. (© 2018, Otte et al.)
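For context on what "adequate statistical power" demands in practice, a short sketch using statsmodels' power utilities shows the per-arm sample size required for 80% power at alpha = 0.05; the effect sizes are the conventional small/medium/large benchmarks, not values from the paper.

```python
# Required sample size per arm for a two-sample t-test at 80% power.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # small / medium / large standardized effect sizes
    n = power_analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"Cohen's d = {d}: ~{n:.0f} participants per arm")
```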
- Published
- 2018
- Full Text
- View/download PDF
16. [Untitled]
- Subjects
sentiment analysis, language analysis, tone, morality, education, meta-research, Psychology
- Abstract
Peer review is often criticized for being flawed, subjective and biased, but research into peer review has been hindered by a lack of access to peer review reports. Here we report the results of a study in which text-analysis software was used to determine the linguistic characteristics of 472,449 peer review reports. A range of characteristics (including analytical tone, authenticity, clout, three measures of sentiment, and morality) was studied as a function of reviewer recommendation, area of research, type of peer review and reviewer gender. We found that reviewer recommendation had the biggest impact on the linguistic characteristics of reports, and that area of research, type of peer review and reviewer gender had little or no impact. The lack of influence of research area, review type or reviewer gender on these linguistic characteristics is a sign of the robustness of peer review.
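The study used dedicated text-analysis software on nearly half a million reports; purely as a toy of what lexicon-based scoring means (our sketch, with made-up word lists, not the study's instrument), the approach reduces to counting category hits per report.

```python
# Toy lexicon-based sentiment scoring of a review report.
POSITIVE = {"clear", "rigorous", "novel", "convincing", "strong"}
NEGATIVE = {"unclear", "flawed", "weak", "unconvincing", "incomplete"}

def sentiment_score(report: str) -> float:
    """Share of matched sentiment words that are positive, in [0, 1]."""
    words = [w.strip(".,;:!?") for w in report.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / (pos + neg) if (pos + neg) else 0.5

print(sentiment_score("The methods are rigorous, but the framing is unclear."))
```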