745 results for "metascience"
Search Results
2. Results reporting for clinical trials led by medical universities and university hospitals in the Nordic countries was often missing or delayed
- Author
-
Nilsonne, Gustav, Wieschowski, Susanne, DeVito, Nicholas J., Salholz-Hillel, Maia, Ahnström, Love, Bruckner, Till, Klas, Katarzyna, Suljic, Tarik, Yerunkar, Samruddhi, Olsson, Natasha, Cruz, Carolina, Strzebonska, Karolina, Småbrekke, Lars, Wasylewski, Mateusz T., Bengtsson, Johan, Ringsten, Martin, Schuster, Aminul, Krawczyk, Tomasz, Paraskevas, Themistoklis, Raittio, Eero, Herczeg, Luca, Hesselberg, Jan-Ole, Karlsson, Sofia, Borana, Ronak, Bruschettini, Matteo, Mulinari, Shai, Lizárraga, Karely, Siebert, Maximilian, Hildebrand, Nicole, Ramakrishnan, Shreya, Janiaud, Perrine, Zavalis, Emmanuel, Franzen, Delwen, Boesen, Kim, Hemkens, Lars G., Naudet, Florian, Possmark, Sofie, Willén, Rebecca M., Ioannidis, John P.A., Strech, Daniel, and Axfors, Cathrine
- Published
- 2025
- Full Text
- View/download PDF
3. Science of science: A multidisciplinary field studying science
- Author
-
Krauss, Alexander
- Published
- 2024
- Full Text
- View/download PDF
4. Does neuroscience research change behaviour? A scoping review and case study in obesity neuroscience
- Author
-
Wang, Joshua, Chehrehasa, Fatemeh, Moody, Hayley, and Beecher, Kate
- Published
- 2024
- Full Text
- View/download PDF
5. Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology.
- Author
-
Gould, Elliot, Fraser, Hannah, Parker, Timothy, Nakagawa, Shinichi, Griffith, Simon, Vesk, Peter, Fidler, Fiona, Hamilton, Daniel, Abbey-Lee, Robin, Abbott, Jessica, Aguirre, Luis, Alcaraz, Carles, Aloni, Irith, Altschul, Drew, Arekar, Kunal, Atkins, Jeff, Atkinson, Joe, Baker, Christopher, Barrett, Meghan, Bell, Kristian, Bello, Suleiman, Beltrán, Iván, Berauer, Bernd, Bertram, Michael, Billman, Peter, Blake, Charlie, Blake, Shannon, Bliard, Louis, Bonisoli-Alquati, Andrea, Bonnet, Timothée, Bordes, Camille, Bose, Aneesh, Botterill-James, Thomas, Boyd, Melissa, Boyle, Sarah, Bradfer-Lawrence, Tom, Bradham, Jennifer, Brand, Jack, Brengdahl, Martin, Bulla, Martin, Bussière, Luc, Camerlenghi, Ettore, Campbell, Sara, Campos, Leonardo, Caravaggi, Anthony, Cardoso, Pedro, Carroll, Charles, Catanach, Therese, Chen, Xuan, Chik, Heung, Choy, Emily, Christie, Alec, Chuang, Angela, Chunco, Amanda, Clark, Bethany, Contina, Andrea, Covernton, Garth, Cox, Murray, Cressman, Kimberly, Crotti, Marco, Crouch, Connor, DAmelio, Pietro, de Sousa, Alexandra, Döbert, Timm, Dobler, Ralph, Dobson, Adam, Doherty, Tim, Drobniak, Szymon, Duffy, Alexandra, Duncan, Alison, Dunn, Robert, Dunning, Jamie, Dutta, Trishna, Eberhart-Hertel, Luke, Elmore, Jared, Elsherif, Mahmoud, English, Holly, Ensminger, David, Ernst, Ulrich, Ferguson, Stephen, Fernandez-Juricic, Esteban, Ferreira-Arruda, Thalita, Fieberg, John, Finch, Elizabeth, Fiorenza, Evan, Fisher, David, Fontaine, Amélie, Forstmeier, Wolfgang, Fourcade, Yoan, Frank, Graham, Freund, Cathryn, Fuentes-Lillo, Eduardo, Gandy, Sara, Gannon, Dustin, García-Cervigón, Ana, Garretson, Alexis, Ge, Xuezhen, Geary, William, Géron, Charly, and Gilles, Marc
- Subjects
Analytical heterogeneity ,Many-analyst ,Metascience ,Replication crisis ,Reproducibility ,Ecology ,Biological Evolution ,Animals ,Passeriformes ,Eucalyptus - Abstract
Although variation in effect sizes and predicted values among studies of similar phenomena is inevitable, such variation far exceeds what might be produced by sampling error alone. One possible explanation for variation among results is differences among researchers in the decisions they make regarding statistical analyses. A growing array of studies has explored this analytical variability in different fields and has found substantial variability among results despite analysts having the same data and research question. Many of these studies have been in the social sciences, but one small many-analyst study found similar variability in ecology. We expanded the scope of this prior work by implementing a large-scale empirical exploration of the variation in effect sizes and model predictions generated by the analytical decisions of different researchers in ecology and evolutionary biology. We used two unpublished datasets, one from evolutionary ecology (blue tit, Cyanistes caeruleus, to compare sibling number and nestling growth) and one from conservation ecology (Eucalyptus, to compare grass cover and tree seedling recruitment). The project leaders recruited 174 analyst teams, comprising 246 analysts, to investigate the answers to prespecified research questions. Analyses conducted by these teams yielded 141 usable effects (compatible with our meta-analyses and with all necessary information provided) for the blue tit dataset, and 85 usable effects for the Eucalyptus dataset. We found substantial heterogeneity among results for both datasets, although the patterns of variation differed between them. For the blue tit analyses, the average effect was convincingly negative, with less growth for nestlings living with more siblings, but there was near-continuous variation in effect size from large negative effects to effects near zero, and even effects crossing the traditional threshold of statistical significance in the opposite direction.
In contrast, the average relationship between grass cover and Eucalyptus seedling number was only slightly negative and not convincingly different from zero, and most effects ranged from weakly negative to weakly positive, with about a third of effects crossing the traditional threshold of significance in one direction or the other. However, there were also several striking outliers in the Eucalyptus dataset, with effects far from zero. For both datasets, we found substantial variation in the variable selection and random effects structures among analyses, as well as in the ratings of the analytical methods by peer reviewers, but we found no strong relationship between any of these and deviation from the meta-analytic mean. In other words, analyses with results that were far from the mean were no more or less likely to have dissimilar variable sets, use random effects in their models, or receive poor peer reviews than those analyses that found results that were close to the mean. The existence of substantial variability among analysis outcomes raises important questions about how ecologists and evolutionary biologists should interpret published results, and how they should conduct analyses in the future.
- Published
- 2025
6. Benchmarking Scholarship in Consumer Research: The p-Index of Thought Leadership.
- Author
-
Pham, Michel Tuan, Wu, Alisa Yinghao, and Wang, Danqi
- Subjects
CONSUMER research ,SCHOLARS ,SCHOLARLY method ,SCIENTOMETRICS ,CITATION analysis ,ACADEMIC discourse ,LEADERSHIP - Abstract
The assessment of consumer scholarship must move beyond a mere counting of the number of "A"s on a researcher's CV to include at least some measure of impact. To facilitate a broader assessment of scholarship in consumer research, we provide detailed statistics on the productivity and citation impact of the field's 340 main gatekeepers: the editors, associate editors, and editorial board members of the Journal of Consumer Research and the Journal of Consumer Psychology. In addition, we introduce a new metric, called the p-index, which can be interpreted as an indicator of a researcher's propensity for thought leadership. Using this metric, we show that productivity and thought leadership do not necessarily go hand in hand in consumer research and that a combination of the two is a good predictor of the level of esteem that consumer scholars enjoy among their peers and of the receipt of major career awards. Our analyses provide greater transparency into how productivity, citation impact, and propensity for thought leadership are currently distributed among prominent consumer scholars. Furthermore, the detailed descriptive statistics reported can serve as useful benchmarks against which other consumer researchers' records may be meaningfully compared. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. How to Produce, Identify, and Motivate Robust Psychological Science: A Roadmap and a Response to Vize et al.
- Author
-
Klonsky, E. David
- Abstract
Some wish to mandate preregistration as a response to the replication crisis, while I and others caution that such mandates inadvertently cause harm and distract from more critical reforms. In this article, after briefly critiquing a recently published defense of preregistration mandates, I propose a three-part vision for cultivating a robust and cumulative psychological science. First, we must know how to produce robust rather than fragile findings. Key ingredients include sufficient sample sizes, valid measurement, and honesty/transparency. Second, we must know how to identify robust (and non-robust) findings. To this end, I reframe robustness checks broadly into four types: across analytic decisions, across measures, across samples, and across investigative teams. Third, we must be motivated to produce and care about robust science. This aim requires marshaling sociocultural forces to support, reward, and celebrate the production of robust findings, just as we once rewarded flashy but fragile findings. Critically, these sociocultural reinforcements must be tied as closely as possible to rigor and robustness themselves—rather than cosmetic indicators of rigor and robustness, as we have done in the past. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
8. The replication crisis is less of a "crisis" in Lakatos' philosophy of science than it is in Popper's.
- Author
-
Rubin, Mark
- Abstract
Popper's (1983, 2002) philosophy of science has enjoyed something of a renaissance in the wake of the replication crisis, offering a philosophical basis for the ensuing science reform movement. However, adherence to Popper's approach may also be at least partly responsible for the sense of "crisis" that has developed following multiple unexpected replication failures. In this article, I contrast Popper's approach with that of Lakatos (1978) as well as with a related but problematic approach called naïve methodological falsificationism (NMF; Lakatos, 1978). The Popperian approach is powerful because it is based on logical refutations, but its theories are noncausal and, therefore, potentially lacking in scientific value. In contrast, the Lakatosian approach considers causal theories, but it concedes that these theories are not logically refutable. Finally, NMF represents a hybrid approach that subjects Lakatosian causal theories to Popperian logical refutations. However, its tactic of temporarily accepting a ceteris paribus clause during theory testing may be viewed as scientifically inappropriate, epistemically inconsistent, and "completely redundant" (Lakatos, 1978, p. 40). I conclude that the replication "crisis" makes the most sense in the context of the Popperian and NMF approaches because it is only in these two approaches that the failure to replicate a previously corroborated theory represents a logical refutation of that theory. In contrast, such replication failures are less problematic in the Lakatosian approach because they do not logically refute theories. Indeed, in the Lakatosian approach, replication failures can be temporarily ignored or used to motivate theory development. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
9. Automating the practice of science: Opportunities, challenges, and implications.
- Author
-
Musslick, Sebastian, Bartlett, Laura K., Chandramouli, Suyog H., Dubova, Marina, Gobet, Fernand, Griffiths, Thomas L., Hullman, Jessica, King, Ross D., Kutz, J. Nathan, Lucas, Christopher G., Mahesh, Suhas, Pestilli, Franco, Sloman, Sabina J., and Holmes, William R.
- Subjects
- *
SCIENTIFIC method , *SCIENTIFIC discoveries , *RESEARCH personnel , *AUTOMATION , *ARTIFICIAL intelligence - Abstract
Automation has transformed various aspects of human civilization, revolutionizing industries and streamlining processes. In the domain of scientific inquiry, automated approaches have emerged as powerful tools, holding promise for accelerating discovery, enhancing reproducibility, and overcoming traditional impediments to scientific progress. This article evaluates the scope of automation within scientific practice and assesses recent approaches. Furthermore, it discusses different perspectives on the following questions: Where do the greatest opportunities lie for automation in scientific practice? What are the current bottlenecks of automating scientific practice? And what are the significant ethical and practical consequences of automating scientific practice? By discussing the motivations behind automated science, analyzing the hurdles encountered, and examining its implications, this article invites researchers, policymakers, and stakeholders to navigate the rapidly evolving frontier of automated scientific practice. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
10. Alternative models of funding curiosity-driven research.
- Author
-
Gigerenzer, Gerd, Allen, Colin, Gaillard, Stefan, Goldstone, Robert L., Haaf, Julia, Holmes, William R., Kashima, Yoshihisa, Motz, Benjamin, Musslick, Sebastian, and Stefan, Angelika
- Subjects
- *
TECHNOLOGICAL innovations , *REGIONAL disparities , *GENDER inequality , *TWENTIETH century , *RESEARCH funding - Abstract
Funding of curiosity-driven science is the lifeblood of scientific and technological innovation. Various models of funding allocation became institutionalized in the 20th century, shaping the present landscape of research funding. There are numerous reasons for scientists to be dissatisfied with current funding schemes, including the imbalance between funding for curiosity-driven and mission-directed research, regional and country disparities, path-dependency of who gets funded, gender and race disparities, low inter-reviewer reliability, and the trade-off between the effort and time spent on writing or reviewing proposals and doing research. We discuss possible alternative models for dealing with these issues. These alternatives include incremental changes such as placing more weight on the proposals or on the investigators and representative composition of panel members, along with deeper reforms such as distributed or concentrated funding and partial lotteries in response to low inter-reviewer reliability. We also consider radical alternatives to current funding schemes: the removal of political governance and the introduction of international competitive applications to a World Research Council alongside national funding sources. There is likely no single best way to fund curiosity-driven research; we examine arguments for and against the possibility of systematically evaluating alternative models empirically. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
11. Effect Size Magnification: No Variable Is as Important as the One You're Thinking About—While You're Thinking About It.
- Author
-
Gandhi, Linnea, Manning, Benjamin S., and Duckworth, Angela L.
- Subjects
- *
HUMAN behavior , *STATISTICAL power analysis , *PSYCHOLOGICAL research , *EXPLANATION , *HUMAN beings - Abstract
The goal of psychological science is to discover truths about human nature, and the typical form of empirical insights is a simple statement of the form x relates to y. We suggest that such "one-liners" imply much larger x-y relationships than those we typically study. Given the multitude of factors that compete and interact to influence any human outcome, small effect sizes should not surprise us. And yet they do—as evidenced by the persistent and systematic underpowering of research studies in psychological science. We suggest an explanation. Effect size magnification is the tendency to exaggerate the importance of the variable under investigation because of the momentary neglect of others. Although problematic, this attentional focus serves a purpose akin to that of the eye's fovea. We see a particular x-y relationship with greater acuity when it is the center of our attention. Debiasing remedies are not straightforward, but we recommend (a) recalibrating expectations about the effect sizes we study, (b) proactively exploring moderators and boundary conditions, and (c) periodically toggling our focus from the x variable we happen to study to the non-x variables we do not. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Reporting Bias, Not External Focus: A Robust Bayesian Meta-Analysis and Systematic Review of the External Focus of Attention Literature.
- Author
-
McKay, Brad, Corson, Abbey E., Seedu, Jeswende, De Faveri, Celeste S., Hasan, Hibaa, Arnold, Kristen, Adams, Faith C., and Carter, Michael J.
- Subjects
- *
MOTOR ability , *PUBLICATION bias , *RESEARCH personnel , *ELECTROMYOGRAPHY , *MOTOR learning , *HETEROGENEITY - Abstract
Evidence has ostensibly been accumulating over the past 2 decades suggesting that an external focus on the intended movement effect (e.g., on the golf club during a swing) is superior to an internal focus on body movements (e.g., on your arms during a swing) for skill acquisition. Seven previous meta-studies have all reported evidence of external focus superiority. The most comprehensive of these concluded that an external focus enhances motor skill retention, transfer, and performance and leads to reduced electromyographic activity during performance and that more distal external foci are superior to proximal external foci for performance. Here, we reanalyzed these data using robust Bayesian meta-analyses that included several plausible models of publication bias. We found moderate to strong evidence of publication bias for all analyses. After correcting for publication bias, estimated mean effects were negligible: g = 0.01 (performance), g = 0.15 (retention), g = 0.09 (transfer), g = 0.06 (electromyography), and g = −0.01 (distance effect). Bayes factors indicated data favored the null for each analysis, ranging from BF01 = 1.3 (retention) to 5.75 (performance). We found clear evidence of heterogeneity in each analysis, suggesting the impact of attentional focus depends on yet unknown contextual factors. Our results contradict the existing consensus that an external focus is always more effective than an internal focus. Instead, focus of attention appears to have a variety of effects that we cannot account for, and, on average, those effects are small to nil. These results parallel previous metascience suggesting publication bias has obfuscated the motor learning literature. Public Significance Statement: A robust Bayesian meta-analysis showed that directing learners to focus their attention on their intended movement effects—often called an external focus—may have little-to-no effect on motor performance and learning on average.
Although the consensus among researchers and practitioners has been that an external focus is superior to focusing on one's own body during practice, the present results suggest this may depend on unknown factors, and our current understanding has been distorted by publication bias. These results highlight that a more cautious approach is necessary when recommending the use of external foci in applied settings until a more reliable body of literature can be established using preregistration, Registered Reports, and well-powered designs through multisite collaborations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Studying Adherence to Reporting Standards in Kinesiology: A Post-publication Peer Review Brief Report.
- Author
-
WATSON, NIKKI M. and THOMAS, JAFRĀ D.
- Subjects
KINESIOLOGY ,SPORTS sciences ,STAKEHOLDERS ,REPRODUCIBLE research ,CONTENT analysis - Abstract
To demonstrate how post-publication peer reviews—using journal article reporting standards—could improve the design and write-up of kinesiology research, the authors performed a post-publication peer review on one systematic literature review published in 2020. Two raters (1st & 2nd authors) critically appraised the case article between April and May 2021. The latest Journal Article Reporting Standards by the American Psychological Association relevant to the review were used: i.e., Table 1 (quantitative research standards) and Table 9 (research synthesis standards). A standard fully met was deemed satisfactory. Per Krippendorff’s alpha-coefficient, inter-rater agreement was moderate for Table 1 (k-alpha = .57, raw-agreement = 72.2%) and poor for Table 9 (k-alpha = .09, raw-agreement = 53.6%). A 100% consensus was reached on all discrepancies. Results suggest the case article’s Abstract, Methods, and Discussion sections required clarification or more detail. Per Table 9 standards, four sections were largely incomplete: i.e., Abstract (100%-incomplete), Introduction (66%-incomplete), Methods (75%-incomplete), and Discussion (66%-incomplete). Case article strengths included a tabular summary of studies analyzed in the systematic review and a cautionary comment about the review’s generalizability. The article’s write-up gave detail to help the reader understand the scope of the study and decisions made by the authors. However, adequate detail was not provided to assess the credibility of all claims made in the article. This could affect readers’ ability to obtain a critical and nuanced understanding of the article’s topics. The results of this critique should encourage (continuing) education on journal article reporting standards for diverse stakeholders (e.g., authors, reviewers). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
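The agreement statistics this abstract reports (Krippendorff's alpha alongside raw agreement) can be sketched concretely. Below is a minimal implementation for nominal ratings with two raters and no missing values; the function name and example data are illustrative, and the published analysis may have used a variant handling weighted categories or missing data.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data, two raters per unit, no missing values.

    `units` is a list of (rating_a, rating_b) pairs, one pair per rated item.
    """
    # Coincidence matrix: each unit contributes both ordered pairs (a, b) and (b, a).
    coincidences = Counter()
    category_counts = Counter()  # total count of each category across all ratings
    for a, b in units:
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
        category_counts[a] += 1
        category_counts[b] += 1

    n = sum(category_counts.values())  # total pairable ratings (2 per unit)
    observed_mismatches = sum(v for (c, k), v in coincidences.items() if c != k)
    expected_mismatches = sum(category_counts[c] * category_counts[k]
                              for c, k in permutations(category_counts, 2))
    if expected_mismatches == 0:
        return 1.0  # only one category ever used: no disagreement is possible
    return 1.0 - (n - 1) * observed_mismatches / expected_mismatches

# Perfect agreement yields alpha = 1; systematic disagreement yields a negative alpha.
print(krippendorff_alpha_nominal([(1, 1), (0, 0), (1, 1), (0, 0)]))  # 1.0
print(krippendorff_alpha_nominal([(0, 1), (1, 0), (0, 1), (1, 0)]))  # -0.75
```

Because alpha corrects for agreement expected by chance, a moderate raw agreement (72.2%) can coexist with a more modest alpha (.57), as in the Table 1 ratings above.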
14. Same model, same data, but different outcomes: Evaluating the impact of method choices in structural equation modeling.
- Author
-
Sarstedt, Marko, Adler, Susanne J., Ringle, Christian M., Cho, Gyeongcheol, Diamantopoulos, Adamantios, Hwang, Heungsun, and Liengaard, Benjamin D.
- Subjects
STRUCTURAL equation modeling ,RESEARCH personnel ,DECISION making ,ORGANIZATIONAL change ,REPRODUCIBLE research - Abstract
Scientific research demands robust findings, yet variability in results persists due to researchers' decisions in data analysis. Despite strict adherence to state-of-the-art methodological norms, research results can vary when analyzing the same data. This article aims to explore this variability by examining the impact of researchers' analytical decisions when using different approaches to structural equation modeling (SEM), a widely used method in innovation management to estimate cause–effect relationships between constructs and their indicator variables. For this purpose, we invited SEM experts to estimate a model on absorptive capacity's impact on organizational innovation and performance using different SEM estimators. The results show considerable variability in effect sizes and significance levels, depending on the researchers' analytical choices. Our research underscores the necessity of transparent analytical decisions, urging researchers to acknowledge their results' uncertainty, to implement robustness checks, and to document the results from different analytical workflows. Based on our findings, we provide recommendations and guidelines on how to address results variability. Our findings, conclusions, and recommendations aim to enhance research validity and reproducibility in innovation management, providing actionable and valuable insights for improved future research practices that lead to solid practical recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. A libraries reproducibility hackathon: connecting students to University research and testing the longevity of published code [version 1; peer review: awaiting peer review]
- Author
-
Chasz Griego, Kristen Scotti, Elizabeth Terveen, Joseph Chan, Daisy Sheng, Alfredo González-Espinoza, and Christopher Warren
- Subjects
Case Study ,Articles ,Reproducibility ,Hackathon ,Academic Libraries ,Open Science ,Metascience ,Digital Humanities ,Computational Research ,Software ,Community Engagement - Abstract
Reproducibility is a basis of scientific integrity, yet it remains a significant challenge across disciplines in computational science. This reproducibility crisis is now being met with an Open Science movement, which has risen to prominence within the scientific community and academic libraries especially. To address the need for reproducible computational research and promote Open Science within the community, members of the Open Science and Data Collaborations Program at Carnegie Mellon University Libraries organized a single-day hackathon centered around reproducibility. Partnering with a faculty researcher in English and Digital Humanities, this event allowed several students an opportunity to interact with real research outputs, test the reproducibility of data analyses with code, and offer feedback for improvements. With Python code and data shared by the researcher in an open repository, we revealed that students could successfully reproduce most of the data visualizations, but they required completing some manual setup and modifications to address deprecated libraries to successfully rerun the code. During the event, we also investigated the option of using ChatGPT to debug and troubleshoot rerunning this code. By interacting with a ChatGPT API in the code, we found and addressed the same roadblocks and successfully reproduced the same figures as the participating students. Assessing a second option, we also collaborated with the researcher to publish a compute capsule in Code Ocean. This option presented an alternative to manual setup and modifications, an accessible option for more limited devices like tablets, and a simple solution for outside researchers to modify or build on existing research code.
- Published
- 2024
- Full Text
- View/download PDF
16. Heterogeneity in effect size estimates.
- Author
-
Holzmeister, Felix, Johannesson, Magnus, Böhm, Robert, Dreber, Anna, Huber, Jürgen, and Kirchler, Michael
- Subjects
- *
PATH analysis (Statistics) , *ERROR rates , *HETEROGENEITY , *EXPERIMENTAL design , *EMPIRICAL research - Abstract
A typical empirical study involves choosing a sample, a research design, and an analysis path. Variation in such choices across studies leads to heterogeneity in results that introduce an additional layer of uncertainty, limiting the generalizability of published scientific findings. We provide a framework for studying heterogeneity in the social sciences and divide heterogeneity into population, design, and analytical heterogeneity. Our framework suggests that after accounting for heterogeneity, the probability that the tested hypothesis is true for the average population, design, and analysis path can be much lower than implied by nominal error rates of statistically significant individual studies. We estimate each type's heterogeneity from 70 multilab replication studies, 11 prospective meta-analyses of studies employing different experimental designs, and 5 multianalyst studies. In our data, population heterogeneity tends to be relatively small, whereas design and analytical heterogeneity are large. Our results should, however, be interpreted cautiously due to the limited number of studies and the large uncertainty in the heterogeneity estimates. We discuss several ways to parse and account for heterogeneity in the context of different methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Journal self-citations trends in sport sciences: an analysis of disciplinary journals from 2013 to 2022.
- Author
-
Bennett, Hunter, Singh, Ben, and Slattery, Flynn
- Abstract
This study reports on the yearly rate of journal self-citation (JSC) in sport sciences, how it changes over time, and its association with journal impact factor (JIF). Citations made by all 87 journals in "sport sciences" from 2013 to 2022 were extracted, as was their 2022 JIF. JSC rates were calculated using a Poisson distribution method. A mixed-effects negative binomial regression examined changes in yearly JSC rates over time. The association between average JSC rates and JIF were compared using a negative binomial regression. The median JSC rate was 6.3 self-citations per 100 citations. JSC rates are increasing in sport sciences by ~10% per year (incidence rate ratio [IRR] = 1.1, 95% CI 1.1–1.2; trivial effect). There was a significant negative association between JSC rate and JIF (IRR = 0.9, 95% CI 0.9, 1.0; trivial effect). Contrary to observations made in prior literature examining broader disciplines, the increasing JSC rate in sport sciences may be attributed to the growing maturity of this novel discipline. As sport-science topic areas become more established and appear in discipline specific journals, more JSCs may occur due to an increasing body of literature in these journals. The negative association between JSC rate and JIF may be due to specialized and less visible journals having a naturally lower JIF, as their impact is confined to a narrower field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
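As a reading aid for the incidence rate ratio in this abstract: an IRR of 1.1 per year means the expected JSC rate is multiplied by 1.1 each year. A toy projection from the reported median rate (illustrative only; this is compound growth under a constant IRR, not the paper's fitted mixed-effects model):

```python
median_rate = 6.3  # self-citations per 100 citations (median reported in the abstract)
irr = 1.1          # incidence rate ratio per year, as reported

# Project the expected rate across the 2013-2022 window under a constant yearly IRR.
projected = {2013 + t: round(median_rate * irr ** t, 2) for t in range(10)}
print(projected[2013], projected[2022])  # a ~10%/year IRR roughly doubles the rate in a decade
```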
18. The evolving hierarchy of naturalized philosophy: A metaphilosophical sketch.
- Author
-
Rivelli, Luca
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *THEORY of knowledge , *NORMATIVITY (Ethics) , *SCHOLARS - Abstract
Some scholars claim that epistemology of science and machine learning are actually overlapping disciplines studying induction, respectively affected by Hume's problem of induction and its formal machine‐learning counterpart, the "no‐free‐lunch" (NFL) theorems, to which even advanced AI systems such as LLMs are not immune. Extending Kevin Korb's view, this paper envisions a hierarchy of disciplines where the lowermost is a basic science, and, recursively, the metascience at each level inductively learns which methods work best at the immediately lower level. Due to Hume's dictum and NFL theorems, no exact metanorms for the good performance of each object science can be obtained after just a finite number of levels up the hierarchy, and the progressive abstractness of each metadiscipline and consequent ill‐definability of its methods and objects makes science—as defined by a minimal standard of scientificity—cease to exist above a certain metalevel, allowing for a still rational style of inquiry into science that can be called "philosophical." Philosophical levels, transitively reflecting on science, peculiarly manifest a non–empirically learned urge to self‐reflection constituting the properly normative aspect of philosophy of science. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Selective reporting of placebo tests in top economics journals.
- Author
-
Dreber, Anna, Johannesson, Magnus, and Yang, Yifan
- Subjects
- *
PLACEBOS , *NULL hypothesis - Abstract
Placebo tests provide incentives to underreport statistically significant tests, a form of reversed p‐hacking. We test for such underreporting in 11 top economics journals between 2009 and 2021 based on a pre‐registered analysis plan. If the null hypothesis is true in all tests, 2.5% of them should be significant at the 5% level with an effect in the same direction as the main test (and 5% in total). The actual fraction of statistically significant placebo tests with an effect in the same direction is 1.29% (95% CI [0.83, 1.63]), and the overall fraction of statistically significant placebo tests is 3.10% (95% CI [2.2, 4.0]). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
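The 2.5%/5% benchmark in this abstract follows directly from two-sided testing under the null: 5% of placebo tests should be significant at the 5% level, half of them falling in each direction. A quick simulation (an illustration of the benchmark only; the sample size and seed are assumptions, not the paper's data) reproduces those rates:

```python
import random

random.seed(42)
N = 100_000  # simulated placebo tests where the null hypothesis is true
CRIT = 1.96  # two-sided 5% critical value for a z-statistic

z = [random.gauss(0.0, 1.0) for _ in range(N)]
# Significant AND in the same direction as the hypothetical main effect:
same_direction = sum(zi > CRIT for zi in z) / N
# Significant in either direction:
total_significant = sum(abs(zi) > CRIT for zi in z) / N

print(f"significant, same direction: {same_direction:.3%}")   # close to 2.5%
print(f"significant overall:         {total_significant:.3%}")  # close to 5%
```

The paper's observed 1.29% same-direction rate sits well below this 2.5% null benchmark, which is what identifies the underreporting ("reversed p-hacking") the authors describe.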
20. Cognition of Time and Thinking Beyond
- Author
-
Bi, Zedong, Crusio, Wim E., Series Editor, Dong, Haidong, Series Editor, Radeke, Heinfried H., Series Editor, Rezaei, Nima, Series Editor, Steinlein, Ortrud, Series Editor, Xiao, Junjie, Series Editor, Merchant, Hugo, editor, and de Lafuente, Victor, editor
- Published
- 2024
- Full Text
- View/download PDF
21. Advances in Methods and Practices in Psychological Science
- Subjects
psychology ,psychological science ,research methods ,replication ,metascience ,registered replication report ,Psychology ,BF1-990 - Published
- 2024
22. Replication of the natural selection of bad science
- Author
-
Kohrt, Florian, Smaldino, Paul E, McElreath, Richard, and Schönbrodt, Felix
- Subjects
Information and Computing Sciences ,Philosophy and Religious Studies ,History and Philosophy Of Specific Fields ,agent-based model ,replication ,metascience ,cultural evolution ,incentives - Abstract
This study reports an independent replication of the findings presented by Smaldino and McElreath (Smaldino, McElreath 2016 R. Soc. Open Sci. 3, 160384 (doi:10.1098/rsos.160384)). The replication was successful with one exception: we find that selection acting on scientists' propensity for replication frequency caused a brief period of exuberant replication that, owing to a coding error, was not observed in the original paper. This difference does not, however, change the authors' original conclusions. We call for more replication studies of simulations as unique contributions to scientific quality assurance.
- Published
- 2023
23. Can a Good Theory Be Built Using Bad Ingredients?
- Author
-
Field, Sarahanne M., Volz, Leonhard, Kaznatcheev, Artem, and van Dongen, Noah
- Published
- 2024
- Full Text
- View/download PDF
24. What Makes a Good Theory, and How Do We Make a Theory Good?
- Author
-
Guest, Olivia
- Published
- 2024
- Full Text
- View/download PDF
25. Americans harbor much less favorable explicit sentiments toward young adults than toward older adults.
- Author
-
Frandoli, Stéphane P., Shakeri, Angela, and North, Michael S.
- Subjects
- *
OLDER people , *YOUNG adults , *AGE discrimination , *SOCIAL scientists , *AGE groups - Abstract
Public and academic discourse on ageism focuses primarily on prejudices targeting older adults, implicitly assuming that this age group experiences the most age bias. We test this assumption in a large, preregistered study surveying Americans' explicit sentiments toward young, middle-aged, and older adults. Contrary to certain expectations about the scope and nature of ageism, responses from two crowdsourced online samples matched to the US adult population (N = 1,820) revealed that older adults garner the most favorable sentiments and young adults, the least favorable ones. This pattern held across a wide range of participant demographics and outcome variables, in both samples. Signaling derogation of young adults more than benign liking of older adults, participants high on social dominance orientation (SDO), a key antecedent of group prejudice, expressed even less favorable sentiments toward young adults and more favorable ones toward older adults. In two follow-up, preregistered forecasting surveys, lay participants (N = 500) were generally quite accurate at predicting these results; in contrast, social scientists (N = 241) underestimated how unfavorably respondents viewed young adults and how favorably they viewed older adults. In fact, the more expertise in ageism scientists had, the more biased their forecasts were. In a rapidly aging world with exacerbated concerns over older adults' welfare, young adults also face increasing economic, social, political, and ecological hardship. Our findings highlight the need for policymakers and social scientists to broaden their understanding of age biases and to develop theory and policies that address discrimination targeting all age groups. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Communicating Study Design Trade-offs in Software Engineering.
- Author
-
Robillard, Martin P., Arya, Deeksha M., Ernst, Neil A., Guo, Jin L. C., Lamothe, Maxime, Nassif, Mathieu, Novielli, Nicole, Serebrenik, Alexander, Steinmacher, Igor, and Stol, Klaas-Jan
- Subjects
DESIGN software ,EXPERIMENTAL design ,SOFTWARE architecture ,RESEARCH personnel ,OPPORTUNITY costs ,SOFTWARE engineering - Abstract
Reflecting on the limitations of a study is a crucial part of the research process. In software engineering studies, this reflection is typically conveyed through discussions of study limitations or threats to validity. In current practice, such discussions seldom provide sufficient insight to understand the rationale for decisions taken before and during the study, and their implications. We revisit the practice of discussing study limitations and threats to validity and identify its weaknesses. We propose to refocus this practice of self-reflection to a discussion centered on the notion of trade-offs. We argue that documenting trade-offs allows researchers to clarify how the benefits of their study design decisions outweigh the costs of possible alternatives. We present guidelines for reporting trade-offs in a way that promotes a fair and dispassionate assessment of researchers' work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Reducing Racial Bias in Scientific Communication: Journal Policies and Their Influence on Reporting Racial Demographics.
- Author
-
Auelua-Toomey, Sakaria Laisene, Mortenson, Elizabeth, and Roberts, Steven Othello
- Subjects
- *
PREVENTION of racism , *CLINICAL psychology , *JOB involvement , *GOVERNMENT policy , *DOCTORAL programs , *LOGISTIC regression analysis , *AUTHORSHIP , *DESCRIPTIVE statistics , *SCHOLARLY communication , *ODDS ratio , *PUBLISHING , *DATA analysis software , *CONFIDENCE intervals - Abstract
Research titles with White samples, compared to research titles with samples of color, have been less likely to include the racial identity of the sample. This unequal writing practice has serious ramifications for both the history and future of psychological science, as it solidifies in the permanent scientific record the false notion that research with White samples is more generalizable and valuable than research with samples of color. In the present research, we experimentally tested the extent to which PhD students (63% White students, 27% students of color) engaged in this unequal writing practice, as well as the extent to which this practice might be disrupted by journal policies. In Study 1, PhD students who read about research conducted with a White sample, compared to those who read about the exact same research conducted with a Black sample, were significantly less likely to mention the sample's racial identity when generating research titles, keywords, and summaries. In Study 2, PhD students instructed to mention the racial identity of their samples, and PhD students instructed to not mention the identity of their samples (though to a lesser extent), were less likely to write about the White versus Black samples unequally. Across both studies, we found that PhD students were overall supportive of a policy to make the racial demographics of samples more transparent, believing that it would help to reduce racial biases in the field. Public Significance Statement: We discovered that, when left to their own discretion, PhD students were less likely to specify the racial demographics of a research sample in their scientific writing when the sample was White compared to when it was Black. Such a White-centric bias could imply to readers that research with White samples is inherently more valuable and generalizable. 
However, our findings also indicate that a journal policy mandating the mention of racial demographics in research samples can mitigate this racial inequality in communication. This policy proved more effective than an alternative colorblind policy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Rhetoric of psychological measurement theory and practice.
- Author
-
Slaney, Kathleen L., Graham, Megan E., Dhillon, Ruby S., and Hohn, Richard E.
- Subjects
PSYCHOMETRICS ,THEORY-practice relationship ,PSYCHOLOGICAL literature ,RHETORIC ,SCIENTIFIC language - Abstract
Metascience scholars have long been concerned with tracking the use of rhetorical language in scientific discourse, oftentimes to analyze the legitimacy and validity of scientific claim-making. Psychology, however, has only recently become the explicit target of such metascientific scholarship, much of which has been in response to the recent crises surrounding replicability of quantitative research findings and questionable research practices. The focus of this paper is on the rhetoric of psychological measurement and validity scholarship, in both the theoretical and methodological and empirical literatures. We examine various discourse practices in published psychological measurement and validity literature, including: (a) clear instances of rhetoric (i.e., persuasion or performance); (b) common or rote expressions and tropes (e.g., perfunctory claims or declarations); (c) metaphors and other “literary” styles; and (d) ambiguous, confusing, or unjustifiable claims. The methodological approach we use is informed by a combination of conceptual analysis and exploratory grounded theory, the latter of which we used to identify relevant themes within the published psychological discourse. Examples of both constructive and useful or misleading and potentially harmful discourse practices will be given. Our objectives are both to contribute to the critical methodological literature on psychological measurement and connect metascience in psychology to broader interdisciplinary examinations of science discourse. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. BIBLIOMETRIC ANALYSIS OF PHD, RESIDENCY DISSERTATIONS AND MASTER'S THESES IN PUBLIC HEALTH DEPARTMENTS IN TÜRKİYE BETWEEN 1970-2022.
- Author
-
DENİZLİ, Yasemin, UÇAR, Abdullah, UÇAR, Mahmut Talha, and TUNCA, Muhammet Yunus
- Subjects
CROSS-sectional method ,PATIENT education ,DATA mining ,MEDICAL personnel ,DATA analysis ,INTERNSHIP programs ,UNIVERSITIES & colleges ,TRAVEL hygiene ,HYGIENE ,DESCRIPTIVE statistics ,ACADEMIC dissertations ,NON-communicable diseases ,DEPARTMENTS ,BIBLIOMETRICS ,RESEARCH methodology ,METADATA ,QUALITY of life ,STATISTICS ,PUBLIC health ,HEALTH facilities ,MASTERS programs (Higher education) ,HEALTH promotion ,STAKEHOLDER analysis ,COMPARATIVE studies ,DATA analysis software ,INDUSTRIAL safety ,PSYCHOSOCIAL factors - Abstract
Copyright of ESTUDAM Public Health Journal is the property of ESTUDAM Public Health Journal and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
30. A Revised and Expanded Taxonomy for Understanding Heterogeneity in Research and Reporting Practices.
- Author
-
Manapat, Patrick D., Anderson, Samantha F., and Edwards, Michael C.
- Abstract
Concerns about replication failures can be partially recast as concerns about excessive heterogeneity in research results. Although this heterogeneity is an inherent part of science (e.g., sampling variability; studying different conditions), not all heterogeneity results from unavoidable sources. In particular, the flexibility researchers have when designing studies and analyzing data adds additional heterogeneity. This flexibility has been the topic of considerable discussion in the last decade. Ideas, and corresponding phrases, have been introduced to help unpack researcher behaviors, including researcher degrees of freedom and questionable research practices. Using these concepts and phrases, methodological and substantive researchers have considered how researchers' choices impact statistical conclusions and reduce clarity in the research literature. While progress has been made, inconsistent, vague, and overlapping use of the terminology surrounding these choices has made it difficult to have clear conversations about the most pressing issues. Further refinement of the language conveying the underlying concepts can catalyze further progress. We propose a revised, expanded taxonomy for assessing research and reporting practices. In addition, we redefine several crucial terms in a way that reduces overlap and enhances conceptual clarity, with particular focus on distinguishing practices along two lines: research versus reporting practices and choices involving multiple empirically supported options versus choices known to be subpar. We illustrate the effectiveness of these changes using conceptual and simulated demonstrations, and we discuss how this taxonomy can be valuable to substantive researchers by helping to navigate this flexibility and to methodological researchers by motivating research toward areas of greatest need. 
When replicating a scientific study, it is not reasonable to expect identical results - there will be some degree of variability from one study to another. However, too much variability between replication studies can begin to distort the truth and/or make it difficult to interpret a series of research results. Methodological and statistical choices that researchers make have the potential to add unnecessary variability. The many subjective choices involved in designing a study or analyzing data are likely to alter results, which increases the level of variability across a series of studies. Although progress has been made in addressing the role of researcher choice around methodological issues, the inconsistent use of terminology (e.g., researcher degrees of freedom, questionable research practices) has made discussions confusing. In this article, we present a new taxonomy for assessing research and reporting practices that is meant to clarify important terms and enhance conceptual clarity. We illustrate the usefulness of our new taxonomy with conceptual and simulated demonstrations and discuss how this taxonomy can be valuable to both substantive and methodological researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. How can meta-research be used to evaluate and improve the quality of research in the field of traditional, complementary, and integrative medicine?
- Author
-
Jeremy Y. Ng, Myeong Soo Lee, Jian-ping Liu, Amie Steel, L. Susan Wieland, Claudia M. Witt, David Moher, and Holger Cramer
- Subjects
Complementary and integrative medicine ,Meta-research ,Metascience ,Research quality ,Traditional medicine ,Miscellaneous systems and treatments ,RZ409.7-999 - Abstract
The field of traditional, complementary, and integrative medicine (TCIM) has garnered increasing attention due to its holistic approach to health and well-being. While the quantity of published research about TCIM has increased exponentially, critics have argued that the field faces challenges related to methodological rigour, reproducibility, and overall quality. This article proposes meta-research as one approach to evaluating and improving the quality of TCIM research. Meta-research, also known as research about research, can be defined as “the study of research itself: its methods, reporting, reproducibility, evaluation, and incentives”. By systematically evaluating methodological rigour, identifying biases, and promoting transparency, meta-research can enhance the reliability and credibility of TCIM research. Specific topics of interest that are discussed in this article include the following: 1) study design and research methodology, 2) reporting of research, 3) research ethics, integrity, and misconduct, 4) replicability and reproducibility, 5) peer review and journal editorial practices, 6) research funding: grants and awards, and 7) hiring, promotion, and tenure. For each topic, we provide case examples to illustrate meta-research applications in TCIM. We argue that meta-research initiatives can contribute to maintaining public trust, safeguarding research integrity, and advancing evidence based TCIM practice, while challenges include navigating methodological complexities, biases, and disparities in funding and academic recognition. Future directions involve tailored research methodologies, interdisciplinary collaboration, policy implications, and capacity building in meta-research.
- Published
- 2024
- Full Text
- View/download PDF
32. Subjective evidence evaluation survey for many-analysts studies
- Author
-
Alexandra Sarafoglou, Suzanne Hoogeveen, Don van den Bergh, Balazs Aczel, Casper J. Albers, Tim Althoff, Rotem Botvinik-Nezer, Niko A. Busch, Andrea M. Cataldo, Berna Devezer, Noah N. N. van Dongen, Anna Dreber, Eiko I. Fried, Rink Hoekstra, Sabine Hoffman, Felix Holzmeister, Jürgen Huber, Nick Huntington-Klein, John Ioannidis, Magnus Johannesson, Michael Kirchler, Eric Loken, Jan-Francois Mangin, Dora Matzke, Albert J. Menkveld, Gustav Nilsonne, Don van Ravenzwaaij, Martin Schweinsberg, Hannah Schulz-Kuempel, David R. Shanks, Daniel J. Simons, Barbara A. Spellman, Andrea H. Stoevenbelt, Barnabas Szaszi, Darinka Trübutschek, Francis Tuerlinckx, Eric L. Uhlmann, Wolf Vanpaemel, Jelte Wicherts, and Eric-Jan Wagenmakers
- Subjects
open science ,team science ,scientific transparency ,metascience ,crowdsourcing analysis ,Science - Abstract
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.
- Published
- 2024
- Full Text
- View/download PDF
33. The ideal psychologist vs. a messy reality: using and misunderstanding effect sizes, confidence intervals and power
- Author
-
Collins, Elizabeth, Watt, Roger, and Caes, Line
- Subjects
power analysis ,effect size ,statistics ,psychology ,open science ,replication crisis ,metascience ,confidence intervals - Abstract
In the past two decades, there have been calls for statistical reform in psychology. Three key concepts within reform are effect sizes, confidence intervals and statistical power. The aim of this thesis was to examine the use and knowledge of these particular concepts, to examine whether researchers are suitably equipped to incorporate them into their research. This thesis consists of five studies. Study 1 reviewed author guidelines across 100 psychology journals, to look for any statistical recommendations. Study 2 (n = 247) and Study 3 (n = 56) examined the use and knowledge of effect sizes using a questionnaire and online experiment. Study 4 surveyed psychology researchers on their use and knowledge of confidence intervals (n = 206). Similarly, Study 5 surveyed psychology researchers on their use and knowledge of power analyses and statistical power (n = 214). Typically, psychology journals expect authors to report effect sizes in their work, although there are fewer expectations related to confidence intervals. Power analyses are also frequently encouraged for sample size justification. Self-reported use of effect sizes, confidence intervals and power analyses was high, while common barriers to use included a lack of knowledge, a lack of motivation, and the influence of academic peers. While knowledge of effect sizes was quite high, they appear to only be understood in relatively limited contexts. In contrast, both confidence intervals and statistical power appear to be frequently misunderstood, and many researchers find power analysis calculations difficult. Researchers would benefit from increased education and support to encourage them to confidently adopt an assortment of statistics in their work, and more effort must be made to prevent statistical changes from becoming a new series of tick-box exercises that do not improve the integrity of psychological research.
- Published
- 2022
34. Psychological Science in the Wake of COVID-19: Social, Methodological, and Metascientific Considerations.
- Author
-
Rosenfeld, Daniel L, Balcetis, Emily, Bastian, Brock, Berkman, Elliot T, Bosson, Jennifer K, Brannon, Tiffany N, Burrow, Anthony L, Cameron, C Daryl, Chen, Serena, Cook, Jonathan E, Crandall, Christian, Davidai, Shai, Dhont, Kristof, Eastwick, Paul W, Gaither, Sarah E, Gangestad, Steven W, Gilovich, Thomas, Gray, Kurt, Haines, Elizabeth L, Haselton, Martie G, Haslam, Nick, Hodson, Gordon, Hogg, Michael A, Hornsey, Matthew J, Huo, Yuen J, Joel, Samantha, Kachanoff, Frank J, Kraft-Todd, Gordon, Leary, Mark R, Ledgerwood, Alison, Lee, Randy T, Loughnan, Steve, MacInnis, Cara C, Mann, Traci, Murray, Damian R, Parkinson, Carolyn, Pérez, Efrén O, Pyszczynski, Tom, Ratner, Kaylin, Rothgerber, Hank, Rounds, James D, Schaller, Mark, Silver, Roxane Cohen, Spellman, Barbara A, Strohminger, Nina, Swim, Janet K, Thoemmes, Felix, Urganci, Betul, Vandello, Joseph A, Volz, Sarah, Zayas, Vivian, and Tomiyama, A Janet
- Subjects
Humans ,Pandemics ,COVID-19 ,SARS-CoV-2 ,large-scale collaboration ,metascience ,Mental Health ,Good Health and Well Being ,Psychology ,Cognitive Sciences ,Social Psychology - Abstract
The COVID-19 pandemic has extensively changed the state of psychological science from what research questions psychologists can ask to which methodologies psychologists can use to investigate them. In this article, we offer a perspective on how to optimize new research in the pandemic's wake. Because this pandemic is inherently a social phenomenon-an event that hinges on human-to-human contact-we focus on socially relevant subfields of psychology. We highlight specific psychological phenomena that have likely shifted as a result of the pandemic and discuss theoretical, methodological, and practical considerations of conducting research on these phenomena. After this discussion, we evaluate metascientific issues that have been amplified by the pandemic. We aim to demonstrate how theoretically grounded views on the COVID-19 pandemic can help make psychological science stronger-not weaker-in its wake.
- Published
- 2022
35. Excavating FAIR Data: the Case of the Multicenter Animal Spinal Cord Injury Study (MASCIS), Blood Pressure, and Neuro-Recovery.
- Author
-
Almeida, Carlos A, Torres-Espin, Abel, Huie, J Russell, Sun, Dongming, Noble-Haeusslein, Linda J, Young, Wise, Beattie, Michael S, Bresnahan, Jacqueline C, Nielson, Jessica L, and Ferguson, Adam R
- Subjects
Animals ,Rats ,Spinal Cord Injuries ,Reproducibility of Results ,Blood Pressure ,Autonomic ,Data science ,Hemodynamics ,Metascience ,Motor recovery ,Neurotrauma ,Reproducibility ,Spinal contusion ,Spinal Cord Injury ,Injury - Trauma - (Head and Spine) ,Neurodegenerative ,Rehabilitation ,Injury (total) Accidents/Adverse Effects ,Neurosciences ,Good Health and Well Being ,Biochemistry and Cell Biology ,Neurology & Neurosurgery - Abstract
Meta-analyses suggest that the published literature represents only a small minority of the total data collected in biomedical research, with most becoming 'dark data' unreported in the literature. Dark data is due to publication bias toward novel results that confirm investigator hypotheses and omission of data that do not. Publication bias contributes to scientific irreproducibility and failures in bench-to-bedside translation. Sharing dark data by making it Findable, Accessible, Interoperable, and Reusable (FAIR) may reduce the burden of irreproducible science by increasing transparency and supporting data-driven discoveries beyond the lifecycle of the original study. We illustrate feasibility of dark data sharing by recovering original raw data from the Multicenter Animal Spinal Cord Injury Study (MASCIS), an NIH-funded multi-site preclinical drug trial conducted in the 1990s that tested efficacy of several therapies after a spinal cord injury (SCI). The original drug treatments did not produce clear positive results and MASCIS data were stored in boxes for more than two decades. The goal of the present study was to independently confirm published machine learning findings that perioperative blood pressure is a major predictor of SCI neuromotor outcome (Nielson et al., 2015). We recovered, digitized, and curated the data from 1125 rats from MASCIS. Analyses indicated that high perioperative blood pressure at the time of SCI is associated with poorer health and worse neuromotor outcomes in more severe SCI, whereas low perioperative blood pressure is associated with poorer health and worse neuromotor outcome in moderate SCI. These findings confirm and expand prior results that a narrow window of blood-pressure control optimizes outcome, and demonstrate the value of recovering dark data for assessing reproducibility of findings, with implications for precision therapeutic approaches.
- Published
- 2022
36. Where next for partial randomisation of research funding? The feasibility of RCTs and alternatives [version 2; peer review: 2 approved, 1 approved with reservations]
- Author
-
Tom Stafford, Bilal Mateen, Dan Hind, Ines Rombach, Helen Buckley Woods, James Wilsdon, and Munya Dimario
- Subjects
metascience ,metaresearch ,review ,experiments ,lottery ,eng ,Medicine ,Science - Abstract
We outline essential considerations for any study of partial randomisation of research funding, and consider scenarios in which randomised controlled trials (RCTs) would be feasible and appropriate. We highlight the interdependence of target outcomes, sample availability and statistical power for determining the cost and feasibility of a trial. For many choices of target outcome, RCTs may be less practical and more expensive than they at first appear (in large part due to issues pertaining to sample size and statistical power). As such, we briefly discuss alternatives to RCTs. It is worth noting that many of the considerations relevant to experiments on partial randomisation may also apply to other potential experiments on funding processes (as described in The Experimental Research Funder’s Handbook, RoRI, June 2022).
- Published
- 2024
- Full Text
- View/download PDF
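The sample-size issue the abstract above raises can be made concrete with a standard power calculation. The sketch below is illustrative only: the outcome (share of funded projects judged "successful"), the baseline and target rates, and the function name are all hypothetical assumptions, not taken from the paper.

```python
import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float,
              alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-arm sample size for a two-sided z-test
    comparing two proportions (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical target outcome: a funding lottery raises the share of
# funded projects judged "successful" from 10% to 15%. Detecting even
# this fairly large shift needs several hundred projects per arm,
# which illustrates why RCTs on funders may be less practical than
# they first appear.
print(n_per_arm(0.10, 0.15))
```

Smaller plausible effects, or rarer outcomes, drive the required number of funded projects per arm into the thousands, which is exactly the feasibility constraint the authors highlight.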
37. Rhetoric of psychological measurement theory and practice
- Author
-
Kathleen L. Slaney, Megan E. Graham, Ruby S. Dhillon, and Richard E. Hohn
- Subjects
psychological measurement ,rhetoric ,rhetoric of science ,validation ,metascience ,methodological reform ,Psychology ,BF1-990 - Abstract
Metascience scholars have long been concerned with tracking the use of rhetorical language in scientific discourse, oftentimes to analyze the legitimacy and validity of scientific claim-making. Psychology, however, has only recently become the explicit target of such metascientific scholarship, much of which has been in response to the recent crises surrounding replicability of quantitative research findings and questionable research practices. The focus of this paper is on the rhetoric of psychological measurement and validity scholarship, in both the theoretical and methodological and empirical literatures. We examine various discourse practices in published psychological measurement and validity literature, including: (a) clear instances of rhetoric (i.e., persuasion or performance); (b) common or rote expressions and tropes (e.g., perfunctory claims or declarations); (c) metaphors and other “literary” styles; and (d) ambiguous, confusing, or unjustifiable claims. The methodological approach we use is informed by a combination of conceptual analysis and exploratory grounded theory, the latter of which we used to identify relevant themes within the published psychological discourse. Examples of both constructive and useful or misleading and potentially harmful discourse practices will be given. Our objectives are both to contribute to the critical methodological literature on psychological measurement and connect metascience in psychology to broader interdisciplinary examinations of science discourse.
- Published
- 2024
- Full Text
- View/download PDF
38. Meta-Science
- Author
-
Zwitter, Andrej and Dome, Takuo
- Subjects
Sustainable development ,Human flourishing ,Metascience ,Philosophy of science ,Global challenges ,Complex solutions - Abstract
Science has lost its ethical imperatives as it moved away from a science of ought to a science of is. Subsequently, it might have answers for how we can address global challenges, such as climate change and poverty, but not why we should. This supposedly neutral stance leaves it to politics and religions (in the sense of non-scientific fields of social engagement) to fill in the values. The problem is that through this concession, science implicitly acknowledges that it is not of universal relevance. Objective knowledge, as Karl Popper calls for, might be less easily attainable in the world of ideas and within the confines of scientific idealism. However, if ideas, values and meaning have equal claim to be drivers of change in the sense of causation, aspiring to identify objective knowledge about the world of ideas and of meaning is necessary. If the sciences and disciplines aim to give objectively valid reasons for our actions (and for how to address global challenges), we need to elevate the study of meaning beyond cultural, disciplinary and ideational delineations. We need to come to a meta-understanding of values and meaning equal to objective knowledge about the material world. But, unlike in the material world, this meta-understanding needs to incorporate individual and subjective experiences as cornerstones of objectivity on a meta-level. We need a science of meaning; one that can scientifically answer Kant’s third question of “what may we hope for”.
- Published
- 2023
- Full Text
- View/download PDF
39. Towards Diversifying Early Language Development Research: The First Truly Global International Summer/Winter School on Language Acquisition (/L+/) 2021.
- Author
-
Aravena-Bravo, Paulina, Cristia, Alejandrina, Garcia, Rowena, Kotera, Hiromasa, Nicolas, Ramona Kunene, Laranjo, Ronel, Arokoyo, Bolanle Elizabeth, Benavides-Varela, Silvia, Benders, Titia, Boll-Avetisyan, Natalie, Cychosz, Margaret, Ben, Rodrigo Dal, Diop, Yatma, Durán-Urzúa, Catalina, Havron, Naomi, Manalili, Marie, Narasimhan, Bhuvana, Omane, Paul Okyere, Rowland, Caroline, and Kolberg, Leticia Schiavon
- Subjects
- *
LANGUAGE acquisition , *LANGUAGE research , *LANGUAGE schools , *RESEARCH personnel , *RESEARCH & development - Abstract
With a long-term aim of empowering researchers everywhere to contribute to work on language development, we organized the First Truly Global /L+/ International Summer/Winter School on Language Acquisition, a free 5-day virtual school for early career researchers. In this paper, we describe the school, our experience organizing it, and lessons learned. The school had a diverse organizer team, composed of 26 researchers (17 from underrepresented areas: sub-Saharan Africa, South and Southeast Asia, and Central and South America); and a diverse volunteer team, with a total of 95 volunteers from 35 different countries, nearly half from underrepresented areas. This helped worldwide promotion of the school, leading to 958 registrations from 88 different countries, with 300 registrants (based in 63 countries, 80% from underrepresented areas) selected to participate in the synchronous aspects of the event. The school employed asynchronous. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Increasing Value and Reducing Waste of Research on Neurofeedback Effects in Post-traumatic Stress Disorder: A State-of-the-Art-Review.
- Author
-
Marcu, Gabriela Mariana, Dumbravă, Andrei, Băcilă, Ionuţ-Ciprian, Szekely-Copîndean, Raluca Diana, and Zăgrean, Ana-Maria
- Abstract
Post-Traumatic Stress Disorder (PTSD) is often considered challenging to treat due to factors that contribute to its complexity. In the last decade, more attention has been paid to non-pharmacological or non-psychological therapies for PTSD, including neurofeedback (NFB). NFB is a promising non-invasive technique targeting specific brainwave patterns associated with psychiatric symptomatology. By learning to regulate brain activity in a closed-loop paradigm, individuals can improve their functionality while reducing symptom severity. However, owing to its lax regulation and heterogeneous legal status across different countries, the degree to which it has scientific support as a psychiatric treatment remains controversial. In this state-of-the-art review, we searched PubMed, Cochrane Central, Web of Science, Scopus, and MEDLINE and identified meta-analyses and systematic reviews exploring the efficacy of NFB for PTSD. We included seven systematic reviews, of which three included meta-analyses (32 studies and 669 participants) that targeted NFB as an intervention while addressing a single condition (PTSD). We used the MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 and the criteria described by Cristea and Naudet (Behav Res Therapy 123:103479, 2019, https://doi.org/10.1016/j.brat.2019.103479) to identify sources of research waste and increasing value in biomedical research. The seven assessed reviews had extremely poor overall quality scores (five critically low, one low, one moderate, and none high) and multiple sources of waste, while opening opportunities for increasing value in the NFB literature. Our research shows that it remains unclear whether NFB training is significantly beneficial in treating PTSD. The quality of the investigated literature is low and maintains a persistent uncertainty over numerous points that are highly important for deciding whether an intervention has clinical efficacy.
Just as importantly, none of the reviews we appraised examined statistical power, referred to open data from the included studies, or adjusted their pooled effect sizes for publication bias and risk of bias. Based on the obtained results, we identified some recurrent sources of waste (such as research decisions not grounded in sound questions, or methodology not applied in a fully transparent, unbiased, and usable manner) and proposed some directions for increasing value (homogeneity and consensus) in designing and reporting research on NFB interventions in PTSD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. The use of scientific methods and models in the philosophy of science.
- Author
-
Ventura, Rafael
- Abstract
What is the relation between philosophy of science and the sciences? As Pradeu et al. (British Journal for the Philosophy of Science https://doi.org/10.1086/715518, 2021) and Khelfaoui et al. (Synthese 199:6219, 2021) recently show, part of this relation is constituted by "philosophy in science": the use of philosophical methods to address questions in the sciences. But another part is what one might call "science in philosophy": the use of methods drawn from the sciences to tackle philosophical questions. In this paper, we focus on one class of such methods and examine the role that model-based methods play within "science in philosophy". To do this, we first build a bibliographic coupling network with Web of Science records of all papers published in philosophy of science journals from 2000 to 2020 (N = 9,217). After detecting the most prominent communities of papers in the network, we use a supervised classifier to identify all papers that use model-based methods. Drawing on work in cultural evolution, we also propose a model to represent the evolution of methods in each one of these communities. Finally, we measure the strength of cultural selection for model-based methods during the given time period by integrating model and data. Results indicate not only that model-based methods have had a significant presence in philosophy of science over the last two decades, but also that there is considerable variation in their use across communities. Results further indicate that some communities have experienced strong selection for the use of model-based methods but that others have not; we validate this finding with a logistic regression of paper methodology on publication year. We conclude by discussing some implications of our findings and suggest that model-based methods play an increasingly important role within "science in philosophy" in some but not all areas of philosophy of science. [ABSTRACT FROM AUTHOR]
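The bibliographic coupling network at the heart of this abstract links two papers when their reference lists overlap. As a rough sketch of the idea (not the authors' actual pipeline; the paper and reference IDs below are invented, and the normalization by reference-list size is one common convention among several), coupling strength can be computed directly from cited-reference sets:

```python
from itertools import combinations

def coupling_strength(refs_a, refs_b):
    # Bibliographic coupling: two papers are linked when they cite
    # overlapping references; normalize by reference-list sizes so
    # heavily-citing papers are not automatically strongly coupled.
    shared = len(set(refs_a) & set(refs_b))
    denom = (len(set(refs_a)) * len(set(refs_b))) ** 0.5
    return shared / denom if denom else 0.0

# Toy corpus: paper -> cited reference IDs (invented for illustration).
papers = {
    "p1": ["r1", "r2", "r3"],
    "p2": ["r2", "r3", "r4"],
    "p3": ["r5"],
}
edges = {
    (a, b): coupling_strength(papers[a], papers[b])
    for a, b in combinations(sorted(papers), 2)
}
```

Community detection and the supervised classification step described in the abstract would then operate on the weighted graph defined by `edges`.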
- Published
- 2024
- Full Text
- View/download PDF
42. Sociodemographic Reporting and Sample Composition Over 3 Decades of Psychopathology Research: A Systematic Review and Quantitative Synthesis.
- Author
-
Wilson, Sylia
- Subjects
- *
PATHOLOGICAL psychology , *ABNORMAL psychology , *GENDER identity , *ALASKA Natives , *RACE - Abstract
Although researchers seek to understand psychological phenomena in a population, quantitative research studies are conducted in smaller samples meant to represent the larger population of interest. This systematic review and quantitative synthesis considers reporting of sociodemographic characteristics and sample composition in the Journal of Abnormal Psychology (now the Journal of Psychopathology and Clinical Science) over the past 3 decades. Across k = 1,244 empirical studies, there were high and increasing rates of reporting of participant age/developmental stage and sex/gender, low but increasing reporting of socioeconomic status/income, and moderate and stable reporting of educational attainment. Rates of reporting of sexual orientation remained low and reporting of gender identity was essentially nonexistent. There were low to moderate but increasing rates of reporting of participant race and ethnicity. Approximately three-quarters of participants in studies over the past 3 decades were White, while the proportion of participants who were Asian, Black or African American, American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander, or Hispanic/Latino was much lower. Approximately two-thirds of participants were female, with this proportion increasing over time. There were also notable differences in the proportion of study participants as a function of race and sex/gender for different forms of psychopathology. Basic science and theoretical psychopathology research must include sociodemographically diverse samples that are representative of and generalizable to the larger human population, while seeking to decrease stigma of psychopathology and increase mental health equity. Recommendations are made to increase sociodemographic diversity in psychopathology research and the scientific review/publication process. 
General Scientific Summary: Basic science and theoretical research on the etiology, development, symptomatology, and course of psychopathology must include sociodemographically diverse samples, while seeking to decrease the stigma of psychopathology and increase mental health equity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Guidelines to improve internationalization in the psychological sciences.
- Author
-
Puthillam, Arathy, Montilla Doble, Lysander James, Delos Santos, Junix Jerald I., Elsherif, Mahmoud Medhat, Steltenpohl, Crystal N., Moreau, David, Pownall, Madeleine, Silverstein, Priya, Anand-Vembar, Shaakya, and Kapoor, Hansika
- Subjects
- *
GLOBALIZATION , *SCIENCE fairs , *SCIENCE conferences , *HUMAN behavior , *RESEARCH personnel - Abstract
Conversations about the internationalization of psychological sciences have occurred over a few decades with very little progress. Previous work shows up to 95% of participants in the studies published in mainstream journals are from Western, Educated, Industrialized, Rich, Democratic nations. Similarly, a large proportion of authors are based in North America. This imbalance is well-documented across a range of subfields in psychology, yet the specific steps and best practices to bridge publication and data gaps across world regions are still unclear. To address this issue, we conducted a hackathon at the Society for the Improvement of Psychological Science 2021 conference to develop guidelines to improve international representation of authors and participants, adapted for various stakeholders in the production of psychological knowledge. Based on this hackathon, we discuss specific guidelines and practices that funding bodies, academic institutions, professional academic societies, journal editors and reviewers, and researchers should engage with to ensure psychology is the scientific discipline of human behavior and cognition across the world. These recommendations will help us develop a more valid and fairer science of human sociality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Comparative Cognition Needs Big Team Science: How Large-Scale Collaborations Will Unlock the Future of the Field.
- Author
-
Alessandroni, Nicolás, Altschul, Drew, Bazhydai, Marina, Byers-Heinlein, Krista, Elsherif, Mahmoud, Gjoneska, Biljana, Huber, Ludwig, Mazza, Valeria, Miller, Rachael, Nawroth, Christian, Pronizius, Ekaterina, Qadri, Muhammad A. J., Šlipogor, Vedrana, Soderstrom, Melanie, Stevens, Jeffrey R., Visser, Ingmar, Williams, Madison, Zettersten, Martin, and Prétôt, Laurent
- Subjects
- *
COGNITION research , *NUMBERS of species , *SAMPLE size (Statistics) , *COGNITION , *TEAMS - Abstract
Comparative cognition research has been largely constrained to isolated facilities, small teams, and a limited number of species. This has led to challenges such as conflicting conceptual definitions and underpowered designs. Here, we explore how Big Team Science (BTS) may remedy these issues. Specifically, we identify and describe four key BTS advantages: increasing sample size and diversity, enhancing task design, advancing theories, and improving welfare and conservation efforts. We conclude that BTS represents a transformative shift capable of advancing research in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Biomarker adoption in developmental science: A data‐driven modelling of trends from 90 biomarkers across 20 years.
- Author
-
Qian, Weiqiang, Zhang, Chao, Piersiak, Hannah A., Humphreys, Kathryn L., and Mitchell, Colter
- Subjects
- *
BIOMARKERS , *C-reactive protein , *GLYCOSYLATED hemoglobin , *INTERLEUKINS , *SOMATOMEDIN , *CHILD development , *DEVELOPMENTAL psychology , *MULTIPLE regression analysis , *SYSTOLIC blood pressure , *RANDOM forest algorithms , *REGRESSION analysis , *MAGNETIC resonance imaging , *DNA methylation , *BRAIN cortical thickness , *DIASTOLIC blood pressure , *RESEARCH funding , *DESCRIPTIVE statistics , *TUMOR necrosis factors , *WAIST circumference , *PREDICTION models , *PERIODICAL articles , *STATISTICAL models , *PEPTIDE hormones , *BLOOD cell count , *BODY mass index , *IMPACT factor (Citation analysis) , *CYSTATIN C , *CHOLESTEROL - Abstract
Developmental scientists have adopted numerous biomarkers in their research to better understand the biological underpinnings of development, environmental exposures, and variation in long‐term health. Yet, adoption patterns merit investigation given the substantial resources used to collect, analyse, and train to use biomarkers in research with infants and children. We document trends in use of 90 biomarkers between 2000 and 2020 from approximately 430,000 publications indexed by the Web of Science. We provide a tool for researchers to examine each of these biomarkers individually using a data‐driven approach to estimate the biomarker growth trajectory based on yearly publication number, publication growth rate, number of author affiliations, National Institutes of Health dedicated funding resources, journal impact factor, and years since the first publication. Results indicate that most biomarkers fit a "learning curve" trajectory (i.e., experience rapid growth followed by a plateau), though a small subset declines in use over time. [ABSTRACT FROM AUTHOR]
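The "learning curve" trajectory described above (rapid growth followed by a plateau) can be illustrated with a saturating logistic curve. The sketch below is purely illustrative: the parameter values, the three labels, and the plateau-detection rule are assumptions for demonstration, not the paper's estimation procedure:

```python
import math

def logistic(t, cap, rate, midpoint):
    # Saturating "learning curve": rapid growth followed by a plateau at cap.
    return cap / (1 + math.exp(-rate * (t - midpoint)))

def classify(counts, eps=0.05):
    # Label a yearly-count trajectory: "learning curve" once the latest
    # year-over-year change, relative to the peak, falls below eps;
    # a negative late change indicates declining use.
    late_change = counts[-1] - counts[-2]
    if late_change < 0:
        return "declining"
    peak = max(counts) or 1
    return "learning curve" if late_change / peak < eps else "growing"

years = range(2000, 2021)
counts = [logistic(t, cap=120, rate=0.6, midpoint=2008) for t in years]
print(classify(counts))  # -> learning curve
```

A biomarker still in its rapid-adoption phase (e.g. counts that double each year) would instead be labelled "growing" by the same rule.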
- Published
- 2024
- Full Text
- View/download PDF
46. Zooming in on what counts as core and auxiliary: A case study on recognition models of visual working memory
- Author
-
Robinson, Maria M., Williams, Jamal R., Wixted, John T., and Brady, Timothy F.
- Published
- 2024
- Full Text
- View/download PDF
47. Adversarial Collaboration: The Next Science Reform
- Author
-
Clark, Cory J., Tetlock, Philip E., Frisby, Craig L., editor, Redding, Richard E., editor, O'Donohue, William T., editor, and Lilienfeld, Scott O., editor
- Published
- 2023
- Full Text
- View/download PDF
48. Ethics and Games, Ethical Games and Ethics in Game
- Author
-
Carvalho, Luiz Paulo, Santoro, Flávia Maria, Oliveira, Jonice, Costa, Rosa Maria M., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Santos, Rodrigo Pereira dos, editor, and Hounsell, Marcelo da Silva, editor
- Published
- 2023
- Full Text
- View/download PDF
49. Is biomedical research self-correcting? Modelling insights on the persistence of spurious science
- Author
-
David Robert Grimes
- Subjects
metaresearch ,metascience ,publication bias ,publish or perish ,research integrity ,research waste ,Science - Abstract
The reality that volumes of published biomedical research are not reproducible is an increasingly recognized problem. Spurious results reduce the trustworthiness of reported science, increasing research waste. While science should be self-correcting from a philosophical perspective, that observation in isolation yields no information on the effort required to nullify suspect findings or on the factors shaping how quickly science may be corrected. There is also a paucity of information on how perverse incentives in the publishing ecosystem favouring novel positive findings over null results shape the ability of published science to self-correct. Knowledge of the factors shaping self-correction of science remains obscure, limiting our ability to mitigate harms. This modelling study introduces a simple model to capture dynamics of the publication ecosystem, exploring factors influencing research waste, trustworthiness, corrective effort and time to correction. Results from this work indicate that research waste and corrective effort are highly dependent on field-specific false positive rates and on how long spurious findings propagate before corrective results appear. The model also suggests conditions under which biomedical science is self-correcting and those under which publication of correctives alone cannot stem propagation of untrustworthy results. Finally, this work models a variety of potential mitigation strategies, including researcher- and publisher-driven interventions.
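A minimal sketch of such publication-ecosystem dynamics (a toy model, not the paper's actual model; the yearly publication volume, false positive rate, and per-year correction probability are illustrative assumptions) tracks how many spurious findings remain uncorrected over time:

```python
import random

def simulate(years=30, per_year=100, fpr=0.2, p_correct=0.1, seed=1):
    # Each year, per_year new findings are published and a fraction fpr of
    # them are spurious; every as-yet-uncorrected spurious finding is then
    # nullified with probability p_correct. Returns the number of
    # uncorrected spurious findings at the end of each year.
    random.seed(seed)
    spurious = 0
    trajectory = []
    for _ in range(years):
        spurious += sum(random.random() < fpr for _ in range(per_year))
        spurious -= sum(random.random() < p_correct for _ in range(spurious))
        trajectory.append(spurious)
    return trajectory

# Uncorrected spurious findings accumulate toward an equilibrium set by the
# ratio of the false-positive inflow to the per-year correction probability.
traj = simulate()
```

Under these assumptions the spurious stock stabilizes near inflow/p_correct rather than vanishing, which echoes the abstract's point that correctives alone may not stem propagation when correction is slow relative to the false-positive inflow.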
- Published
- 2024
- Full Text
- View/download PDF
50. Sorry we're open, come in we're closed: different profiles in the perceived applicability of open science practices to completed research projects
- Author
-
Jürgen Schneider
- Subjects
open science practices ,profiles ,applicability ,metascience ,Science - Abstract
Open science is an increasingly important topic for research, politics and funding agencies. However, the discourse on open science is heavily influenced by certain research fields and paradigms, leading to the risk of generalizing what counts as openness to other research fields, regardless of its applicability. In our paper, we provide evidence that researchers perceive different profiles in the potential to apply open science practices to their projects, making a one-size-fits-all approach unsuitable. In a pilot study, we first systematized the breadth of open science practices. The subsequent survey study examined the perceived applicability of 13 open science practices across completed research projects in a broad variety of research disciplines. We were able to identify four different profiles in the perceived applicability of open science practices. For researchers conducting qualitative-empirical research projects, comprehensively implementing the breadth of open science practices tends not to be feasible. Further, research projects from some disciplines tended to fit a profile with little opportunity for public participation. Yet, disciplines and research paradigms appear not to be the key factors in predicting the perceived applicability of open science practices. Our findings underscore the case for considering project-related conditions when implementing open science practices. This has implications for the establishment of policies, guidelines and standards concerning open science.
- Published
- 2024
- Full Text
- View/download PDF