723 results for "metascience"
Search Results
2. Science of science: A multidisciplinary field studying science
- Author
-
Krauss, Alexander
- Published
- 2024
- Full Text
- View/download PDF
3. Benchmarking Scholarship in Consumer Research: The p-Index of Thought Leadership.
- Author
-
Pham, Michel Tuan, Wu, Alisa Yinghao, and Wang, Danqi
- Subjects
CONSUMER research ,SCHOLARS ,SCHOLARLY method ,SCIENTOMETRICS ,CITATION analysis ,ACADEMIC discourse ,LEADERSHIP - Abstract
The assessment of consumer scholarship must move beyond a mere counting of the number of "A"s on a researcher's CV to include at least some measure of impact. To facilitate a broader assessment of scholarship in consumer research, we provide detailed statistics on the productivity and citation impact of the field's 340 main gatekeepers: the editors, associate editors, and editorial board members of the Journal of Consumer Research and the Journal of Consumer Psychology. In addition, we introduce a new metric, called the p-index, which can be interpreted as an indicator of a researcher's propensity for thought leadership. Using this metric, we show that productivity and thought leadership do not necessarily go hand in hand in consumer research and that a combination of the two is a good predictor of the level of esteem that consumer scholars enjoy among their peers and of the receipt of major career awards. Our analyses provide greater transparency into how productivity, citation impact, and propensity for thought leadership are currently distributed among prominent consumer scholars. Furthermore, the detailed descriptive statistics reported can serve as useful benchmarks against which other consumer researchers' records may be meaningfully compared. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Effect Size Magnification: No Variable Is as Important as the One You're Thinking About—While You're Thinking About It.
- Author
-
Gandhi, Linnea, Manning, Benjamin S., and Duckworth, Angela L.
- Subjects
- *
HUMAN behavior , *STATISTICAL power analysis , *PSYCHOLOGICAL research , *EXPLANATION , *HUMAN beings - Abstract
The goal of psychological science is to discover truths about human nature, and the typical form of empirical insights is a simple statement of the form x relates to y. We suggest that such "one-liners" imply much larger x-y relationships than those we typically study. Given the multitude of factors that compete and interact to influence any human outcome, small effect sizes should not surprise us. And yet they do—as evidenced by the persistent and systematic underpowering of research studies in psychological science. We suggest an explanation. Effect size magnification is the tendency to exaggerate the importance of the variable under investigation because of the momentary neglect of others. Although problematic, this attentional focus serves a purpose akin to that of the eye's fovea. We see a particular x-y relationship with greater acuity when it is the center of our attention. Debiasing remedies are not straightforward, but we recommend (a) recalibrating expectations about the effect sizes we study, (b) proactively exploring moderators and boundary conditions, and (c) periodically toggling our focus from the x variable we happen to study to the non-x variables we do not. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
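The underpowering this abstract describes is easy to quantify. As a rough illustration (not taken from the article), a normal-approximation power calculation shows how quickly the required sample size grows as effects shrink:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group N for a two-sample comparison of means
    (normal approximation to the two-sample t-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

for d in (0.5, 0.2, 0.1):
    print(f"d = {d}: n = {n_per_group(d)} per group")
```

An exact t-based calculation gives values a participant or two higher, but the order of magnitude is the point: detecting d = 0.1 with 80% power takes roughly 25 times the sample needed for d = 0.5.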
5. Reporting Bias, Not External Focus: A Robust Bayesian Meta-Analysis and Systematic Review of the External Focus of Attention Literature.
- Author
-
McKay, Brad, Corson, Abbey E., Seedu, Jeswende, De Faveri, Celeste S., Hasan, Hibaa, Arnold, Kristen, Adams, Faith C., and Carter, Michael J.
- Subjects
- *
MOTOR ability , *PUBLICATION bias , *RESEARCH personnel , *ELECTROMYOGRAPHY , *MOTOR learning , *HETEROGENEITY - Abstract
Evidence has ostensibly been accumulating over the past 2 decades suggesting that an external focus on the intended movement effect (e.g., on the golf club during a swing) is superior to an internal focus on body movements (e.g., on your arms during a swing) for skill acquisition. Seven previous meta-studies have all reported evidence of external focus superiority. The most comprehensive of these concluded that an external focus enhances motor skill retention, transfer, and performance and leads to reduced electromyographic activity during performance and that more distal external foci are superior to proximal external foci for performance. Here, we reanalyzed these data using robust Bayesian meta-analyses that included several plausible models of publication bias. We found moderate to strong evidence of publication bias for all analyses. After correcting for publication bias, estimated mean effects were negligible: g = 0.01 (performance), g = 0.15 (retention), g = 0.09 (transfer), g = 0.06 (electromyography), and g = −0.01 (distance effect). Bayes factors indicated data favored the null for each analysis, ranging from BF01 = 1.3 (retention) to 5.75 (performance). We found clear evidence of heterogeneity in each analysis, suggesting the impact of attentional focus depends on yet unknown contextual factors. Our results contradict the existing consensus that an external focus is always more effective than an internal focus. Instead, focus of attention appears to have a variety of effects that we cannot account for, and, on average, those effects are small to nil. These results parallel previous metascience suggesting publication bias has obfuscated the motor learning literature. Public Significance Statement: A robust Bayesian meta-analysis showed that directing learners to focus their attention on their intended movement effects—often called an external focus—may have little-to-no effect on motor performance and learning on average.
Although the consensus among researchers and practitioners has been that an external focus is superior to focusing on one's own body during practice, the present results suggest this may depend on unknown factors, and our current understanding has been distorted by publication bias. These results highlight that a more cautious approach is necessary when recommending the use of external foci in applied settings until a more reliable body of literature can be established using preregistration, Registered Reports, and well-powered designs through multisite collaborations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
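The abstract's bias-corrected estimates come from robust Bayesian model-averaged meta-analysis. As a much simpler illustration of the general idea—that correcting for publication bias can shrink a pooled effect toward zero—here is a PET-style sketch (regressing effects on their standard errors; the intercept estimates the effect of an infinitely precise study). All numbers are simulated; this is not the authors' data or method:

```python
import random
random.seed(1)

# Simulate a literature with a true effect of zero in which only
# "significant" results (z > 1.96) get published: small studies then need
# large observed effects to clear the bar, creating funnel-plot asymmetry.
published = []
while len(published) < 60:
    n = random.randint(20, 200)      # per-group sample size
    se = (2 / n) ** 0.5              # rough standard error of g
    g = random.gauss(0.0, se)        # sampling noise around a true effect of 0
    if g / se > 1.96:                # the publication filter
        published.append((g, se))

naive = sum(g for g, se in published) / len(published)

# PET: regress g on SE; the intercept estimates the effect as SE -> 0.
mean_se = sum(se for g, se in published) / len(published)
slope = (sum((se - mean_se) * (g - naive) for g, se in published)
         / sum((se - mean_se) ** 2 for g, se in published))
intercept = naive - slope * mean_se

print(f"naive pooled g: {naive:.2f}")
print(f"PET-corrected g (intercept): {intercept:.2f}")
```

The naive average is substantially positive even though every true effect is zero; the intercept lands near zero, which is the qualitative pattern the abstract reports for the attentional-focus literature.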
6. Studying Adherence to Reporting Standards in Kinesiology: A Post-publication Peer Review Brief Report.
- Author
-
WATSON, NIKKI M. and THOMAS, JAFRĀ D.
- Subjects
KINESIOLOGY ,SPORTS sciences ,STAKEHOLDERS ,REPRODUCIBLE research ,CONTENT analysis - Abstract
To demonstrate how post-publication peer reviews—using journal article reporting standards—could improve the design and write-up of kinesiology research, the authors performed a post-publication peer review on one systematic literature review published in 2020. Two raters (1st & 2nd authors) critically appraised the case article between April and May 2021. The latest Journal Article Reporting Standards by the American Psychological Association relevant to the review were used: i.e., Table 1 (quantitative research standards) and Table 9 (research synthesis standards). A standard fully met was deemed satisfactory. Per Krippendorff’s alpha-coefficient, inter-rater agreement was moderate for Table 1 (k-alpha = .57, raw-agreement = 72.2%) and poor for Table 9 (k-alpha = .09, raw-agreement = 53.6%). A 100% consensus was reached on all discrepancies. Results suggest the case article’s Abstract, Methods, and Discussion sections required clarification or more detail. Per Table 9 standards, four sections were largely incomplete: i.e., Abstract (100%- incomplete), Introduction (66%-incomplete), Methods (75%-incomplete), and Discussion (66%-incomplete). Case article strengths included tabular summary of studies analyzed in the systematic review and a cautionary comment about the review’s generalizability. The article’s write-up gave detail to help the reader understand the scope of the study and decisions made by the authors. However, adequate detail was not provided to assess the credibility of all claims made in the article. This could affect readers’ ability to obtain critical and nuanced understanding of the article’s topics. The results of this critique should encourage (continuing) education on journal article reporting standards for diverse stakeholders (e.g., authors, reviewers). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
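The inter-rater statistics in this abstract (Krippendorff's alpha alongside raw agreement) can be computed in a few lines for the two-rater, nominal-data, no-missing-values case. The "met / not met" codes below are hypothetical, not the study's ratings:

```python
from collections import Counter

def kripp_alpha(r1, r2):
    """Krippendorff's alpha for two raters, nominal data, no missing values."""
    n = 2 * len(r1)                       # total ratings in the coincidence matrix
    disagree = sum(a != b for a, b in zip(r1, r2))
    marginal = Counter(r1) + Counter(r2)  # how often each category was used overall
    d_obs = 2 * disagree / n
    d_exp = sum(marginal[c] * marginal[k]
                for c in marginal for k in marginal if c != k) / (n * (n - 1))
    return 1 - d_obs / d_exp

# Hypothetical met (1) / not-met (0) codes for 18 reporting standards.
r1 = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1]
r2 = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1]
raw = sum(a == b for a, b in zip(r1, r2)) / len(r1)
print(f"raw agreement = {raw:.1%}, Krippendorff's alpha = {kripp_alpha(r1, r2):.2f}")
```

Note how alpha sits well below raw agreement: it discounts the agreement expected by chance given how often each code is used, which is why the abstract can report 72.2% raw agreement yet only moderate alpha.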
7. Same model, same data, but different outcomes: Evaluating the impact of method choices in structural equation modeling.
- Author
-
Sarstedt, Marko, Adler, Susanne J., Ringle, Christian M., Cho, Gyeongcheol, Diamantopoulos, Adamantios, Hwang, Heungsun, and Liengaard, Benjamin D.
- Subjects
STRUCTURAL equation modeling ,RESEARCH personnel ,DECISION making ,ORGANIZATIONAL change ,REPRODUCIBLE research - Abstract
Scientific research demands robust findings, yet variability in results persists due to researchers' decisions in data analysis. Despite strict adherence to state-of-the-art methodological norms, research results can vary when analyzing the same data. This article aims to explore this variability by examining the impact of researchers' analytical decisions when using different approaches to structural equation modeling (SEM), a widely used method in innovation management to estimate cause–effect relationships between constructs and their indicator variables. For this purpose, we invited SEM experts to estimate a model on absorptive capacity's impact on organizational innovation and performance using different SEM estimators. The results show considerable variability in effect sizes and significance levels, depending on the researchers' analytical choices. Our research underscores the necessity of transparent analytical decisions, urging researchers to acknowledge their results' uncertainty, to implement robustness checks, and to document the results from different analytical workflows. Based on our findings, we provide recommendations and guidelines on how to address results variability. Our findings, conclusions, and recommendations aim to enhance research validity and reproducibility in innovation management, providing actionable and valuable insights for improved future research practices that lead to solid practical recommendations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
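The analytical variability described here is not unique to SEM. A deliberately minimal OLS sketch (hypothetical data, not the study's absorptive-capacity model) shows how two defensible analysis paths applied to identical data yield different effect estimates:

```python
import random
random.seed(7)

# Hypothetical data: predictor x and a correlated covariate z both drive y.
N = 300
z = [random.gauss(0, 1) for _ in range(N)]
x = [0.6 * zi + random.gauss(0, 0.8) for zi in z]
y = [0.25 * xi + 0.80 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def slope(u, v):
    """OLS slope of v on u (with intercept)."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

def residuals(v, w):
    """Residuals of v after regressing on w (with intercept)."""
    b = slope(w, v)
    mv, mw = sum(v) / len(v), sum(w) / len(w)
    return [vi - mv - b * (wi - mw) for vi, wi in zip(v, w)]

b_unadjusted = slope(x, y)              # path 1: ignore the covariate
b_adjusted = slope(residuals(x, z), y)  # path 2: control for z (Frisch-Waugh)
print(f"effect of x, path 1: {b_unadjusted:.2f}")
print(f"effect of x, path 2: {b_adjusted:.2f}")
```

Both paths are "same model, same data" in the sense of the title, yet the estimated effect of x differs substantially, which is the phenomenon the multi-estimator SEM exercise documents at scale.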
8. Heterogeneity in effect size estimates.
- Author
-
Holzmeister, Felix, Johannesson, Magnus, Böhm, Robert, Dreber, Anna, Huber, Jürgen, and Kirchler, Michael
- Subjects
- *
PATH analysis (Statistics) , *ERROR rates , *HETEROGENEITY , *EXPERIMENTAL design , *EMPIRICAL research - Abstract
A typical empirical study involves choosing a sample, a research design, and an analysis path. Variation in such choices across studies leads to heterogeneity in results that introduce an additional layer of uncertainty, limiting the generalizability of published scientific findings. We provide a framework for studying heterogeneity in the social sciences and divide heterogeneity into population, design, and analytical heterogeneity. Our framework suggests that after accounting for heterogeneity, the probability that the tested hypothesis is true for the average population, design, and analysis path can be much lower than implied by nominal error rates of statistically significant individual studies. We estimate each type's heterogeneity from 70 multilab replication studies, 11 prospective meta-analyses of studies employing different experimental designs, and 5 multianalyst studies. In our data, population heterogeneity tends to be relatively small, whereas design and analytical heterogeneity are large. Our results should, however, be interpreted cautiously due to the limited number of studies and the large uncertainty in the heterogeneity estimates. We discuss several ways to parse and account for heterogeneity in the context of different methodologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
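Heterogeneity of the kind the authors estimate is conventionally summarized with tau-squared (between-study variance) and I-squared. A self-contained sketch using hypothetical multilab estimates and the DerSimonian–Laird estimator (not the authors' data or their estimation approach):

```python
# Hypothetical effect estimates (and standard errors) from eight labs
# running nominally the same study.
effects = [0.10, 0.35, -0.05, 0.42, 0.18, 0.55, 0.02, 0.30]
ses     = [0.08, 0.10,  0.09, 0.12, 0.07, 0.15, 0.10, 0.09]

w = [1 / s ** 2 for s in ses]  # inverse-variance weights
mu = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
Q = sum(wi * (y - mu) ** 2 for wi, y in zip(w, effects))
k = len(effects)

# DerSimonian-Laird between-study variance and Higgins' I^2.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)
i2 = max(0.0, (Q - (k - 1)) / Q)
print(f"pooled effect = {mu:.2f}, tau^2 = {tau2:.3f}, I^2 = {i2:.0%}")
```

With these numbers, roughly 70% of the observed variation in estimates exceeds what sampling error alone would produce—the situation in which, as the abstract argues, the error rate of any single significant study understates the uncertainty about the average effect.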
9. Journal self-citations trends in sport sciences: an analysis of disciplinary journals from 2013 to 2022.
- Author
-
Bennett, Hunter, Singh, Ben, and Slattery, Flynn
- Abstract
This study reports on the yearly rate of journal self-citation (JSC) in sport sciences, how it changes over time, and its association with journal impact factor (JIF). Citations made by all 87 journals in "sport sciences" from 2013 to 2022 were extracted, as was their 2022 JIF. JSC rates were calculated using a Poisson distribution method. A mixed-effects negative binomial regression examined changes in yearly JSC rates over time. The association between average JSC rates and JIF were compared using a negative binomial regression. The median JSC rate was 6.3 self-citations per 100 citations. JSC rates are increasing in sport sciences by ~ 10% per year (incidence rate ratio [IRR] = 1.1, 95% CI 1.1–1.2; trivial effect). There was a significant negative association between JSC rate and JIF (IRR = 0.9, 95% CI 0.9, 1.0; trivial effect). Contrary to observations made in prior literature examining broader disciplines, the increasing JSC rate in sport sciences may be attributed to the growing maturity of this novel discipline. As sport-science topic areas become more established and appear in discipline specific journals, more JSCs may occur due to an increasing body of literature in these journals. The negative association between JSC rate and JIF may be due to specialized and less visible journals having a naturally lower JIF, as their impact is confined to a narrower field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
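A self-citation rate "per 100 citations" with a Poisson-based interval can be computed directly. The counts below are hypothetical, chosen only to match the reported median of 6.3, and the interval uses a normal approximation to the Poisson count rather than whatever exact method the authors applied:

```python
from statistics import NormalDist

def jsc_rate_ci(self_cites, total_cites, conf=0.95):
    """Self-citation rate per 100 citations, with a normal-approximation
    interval for the Poisson count (adequate when self_cites is large)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    rate = 100 * self_cites / total_cites
    half = 100 * z * self_cites ** 0.5 / total_cites
    return rate, rate - half, rate + half

rate, lo, hi = jsc_rate_ci(189, 3000)  # hypothetical: 189 self-cites of 3000
print(f"JSC rate = {rate:.1f} per 100 citations, 95% CI [{lo:.1f}, {hi:.1f}]")
```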
10. Can a Good Theory Be Built Using Bad Ingredients?
- Author
-
Field, Sarahanne M., Volz, Leonhard, Kaznatcheev, Artem, and van Dongen, Noah
- Published
- 2024
- Full Text
- View/download PDF
11. What Makes a Good Theory, and How Do We Make a Theory Good?
- Author
-
Guest, Olivia
- Published
- 2024
- Full Text
- View/download PDF
12. A libraries reproducibility hackathon: connecting students to University research and testing the longevity of published code [version 1; peer review: awaiting peer review]
- Author
-
Chasz Griego, Kristen Scotti, Elizabeth Terveen, Joseph Chan, Daisy Sheng, Alfredo González-Espinoza, and Christopher Warren
- Subjects
Case Study ,Articles ,Reproducibility ,Hackathon ,Academic Libraries ,Open Science ,Metascience ,Digital Humanities ,Computational Research ,Software ,Community Engagement - Abstract
Reproducibility is a basis of scientific integrity, yet it remains a significant challenge across disciplines in computational science. This reproducibility crisis is now being met with an Open Science movement, which has risen to prominence within the scientific community and academic libraries especially. To address the need for reproducible computational research and promote Open Science within the community, members of the Open Science and Data Collaborations Program at Carnegie Mellon University Libraries organized a single-day hackathon centered around reproducibility. Partnering with a faculty researcher in English and Digital Humanities, this event allowed several students an opportunity to interact with real research outputs, test the reproducibility of data analyses with code, and offer feedback for improvements. With Python code and data shared by the researcher in an open repository, we revealed that students could successfully reproduce most of the data visualizations, but they required completing some manual setup and modifications to address deprecated libraries to successfully rerun the code. During the event, we also investigated the option of using ChatGPT to debug and troubleshoot rerunning this code. By interacting with a ChatGPT API in the code, we found and addressed the same roadblocks and successfully reproduced the same figures as the participating students. Assessing a second option, we also collaborated with the researcher to publish a compute capsule in Code Ocean. This option presented an alternative to manual setup and modifications, an accessible option for more limited devices like tablets, and a simple solution for outside researchers to modify or build on existing research code.
- Published
- 2024
- Full Text
- View/download PDF
13. Replication of the natural selection of bad science
- Author
-
Kohrt, Florian, Smaldino, Paul E, McElreath, Richard, and Schönbrodt, Felix
- Subjects
Information and Computing Sciences ,Philosophy and Religious Studies ,History and Philosophy Of Specific Fields ,agent-based model ,replication ,metascience ,cultural evolution ,incentives - Abstract
This study reports an independent replication of the findings presented by Smaldino and McElreath (Smaldino, McElreath 2016 R. Soc. Open Sci. 3, 160384 (doi:10.1098/rsos.160384)). The replication was successful with one exception. We find that selection acting on scientists' propensity for replication frequency caused a brief period of exuberant replication not observed in the original paper due to a coding error. This difference does not, however, change the authors' original conclusions. We call for more replication studies for simulations as unique contributions to scientific quality assurance.
- Published
- 2023
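Agent-based models of this kind can be surprisingly compact. The following is a deliberately minimal caricature, not the Smaldino–McElreath model or its replication: labs that invest less methodological effort publish more, successful labs are imitated, and mean effort erodes under selection:

```python
import random
random.seed(42)

# Fifty labs, each with a methodological "effort" level in [0, 1].
labs = [{"effort": random.uniform(0.2, 0.8), "pubs": 0} for _ in range(50)]

def step(labs):
    # Publishing: lower effort -> a higher chance of a publishable result.
    for lab in labs:
        if random.random() < 0.9 - 0.6 * lab["effort"]:
            lab["pubs"] += 1
    # Selection: a random lab adopts the methods of the most-published lab
    # (with a little mutation) and starts its publication record over.
    best = max(labs, key=lambda lab: lab["pubs"])
    novice = random.choice(labs)
    novice["effort"] = min(1.0, max(0.0, best["effort"] + random.gauss(0, 0.02)))
    novice["pubs"] = 0

before = sum(lab["effort"] for lab in labs) / len(labs)
for _ in range(2000):
    step(labs)
after = sum(lab["effort"] for lab in labs) / len(labs)
print(f"mean effort: {before:.2f} -> {after:.2f}")
```

Even this toy version illustrates why replicating simulations matters: the qualitative conclusion (effort declines) is robust, but any quantitative claim depends on modeling choices and, as the replication above found, on coding errors.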
14. The evolving hierarchy of naturalized philosophy: A metaphilosophical sketch.
- Author
-
Rivelli, Luca
- Subjects
- *
MACHINE learning , *ARTIFICIAL intelligence , *THEORY of knowledge , *NORMATIVITY (Ethics) , *SCHOLARS - Abstract
Some scholars claim that epistemology of science and machine learning are actually overlapping disciplines studying induction, respectively affected by Hume's problem of induction and its formal machine‐learning counterpart, the "no‐free‐lunch" (NFL) theorems, to which even advanced AI systems such as LLMs are not immune. Extending Kevin Korb's view, this paper envisions a hierarchy of disciplines where the lowermost is a basic science, and, recursively, the metascience at each level inductively learns which methods work best at the immediately lower level. Due to Hume's dictum and NFL theorems, no exact metanorms for the good performance of each object science can be obtained after just a finite number of levels up the hierarchy, and the progressive abstractness of each metadiscipline and consequent ill‐definability of its methods and objects makes science—as defined by a minimal standard of scientificity—cease to exist above a certain metalevel, allowing for a still rational style of inquiry into science that can be called "philosophical." Philosophical levels, transitively reflecting on science, peculiarly manifest a non–empirically learned urge to self‐reflection constituting the properly normative aspect of philosophy of science. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Selective reporting of placebo tests in top economics journals.
- Author
-
Dreber, Anna, Johannesson, Magnus, and Yang, Yifan
- Subjects
- *
PLACEBOS , *NULL hypothesis - Abstract
Placebo tests provide incentives to underreport statistically significant tests, a form of reversed p‐hacking. We test for such underreporting in 11 top economics journals between 2009 and 2021 based on a pre‐registered analysis plan. If the null hypothesis is true in all tests, 2.5% of them should be significant at the 5% level with an effect in the same direction as the main test (and 5% in total). The actual fraction of statistically significant placebo tests with an effect in the same direction is 1.29% (95% CI [0.83, 1.63]), and the overall fraction of statistically significant placebo tests is 3.10% (95% CI [2.2, 4.0]). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
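The comparison at the heart of this abstract—an observed fraction of significant placebo tests against a 2.5% benchmark—is a binomial-proportion problem. With hypothetical counts chosen to give the reported 1.29% (the total number of tests is not stated in the abstract), a Wilson score interval looks like this; the authors' pre-registered method may differ:

```python
from statistics import NormalDist

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
    return centre - half, centre + half

# Hypothetical counts: 39 significant same-direction placebo tests of 3024.
lo, hi = wilson_ci(39, 3024)
print(f"observed {39 / 3024:.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
print(f"2.5% benchmark excluded: {hi < 0.025}")
```

An interval sitting wholly below 2.5% is what "reversed p-hacking" predicts: significant placebo tests are underreported relative to chance.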
16. Americans harbor much less favorable explicit sentiments toward young adults than toward older adults.
- Author
-
Francioli, Stéphane P., Shakeri, Angela, and North, Michael S.
- Subjects
- *
OLDER people , *YOUNG adults , *AGE discrimination , *SOCIAL scientists , *AGE groups - Abstract
Public and academic discourse on ageism focuses primarily on prejudices targeting older adults, implicitly assuming that this age group experiences the most age bias. We test this assumption in a large, preregistered study surveying Americans' explicit sentiments toward young, middle-aged, and older adults. Contrary to certain expectations about the scope and nature of ageism, responses from two crowdsourced online samples matched to the US adult population (N = 1,820) revealed that older adults garner the most favorable sentiments and young adults, the least favorable ones. This pattern held across a wide range of participant demographics and outcome variables, in both samples. Signaling derogation of young adults more than benign liking of older adults, participants high on SDO (social dominance orientation, a key antecedent of group prejudice) expressed even less favorable sentiments toward young adults--and more favorable ones toward older adults. In two follow-up, preregistered, forecasting surveys, lay participants (N = 500) were generally quite accurate at predicting these results; in contrast, social scientists (N = 241) underestimated how unfavorably respondents viewed young adults and how favorably they viewed older adults. In fact, the more expertise in ageism scientists had, the more biased their forecasts. In a rapidly aging world with exacerbated concerns over older adults' welfare, young adults also face increasing economic, social, political, and ecological hardship. Our findings highlight the need for policymakers and social scientists to broaden their understanding of age biases and develop theory and policies that address discrimination targeting all age groups. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Communicating Study Design Trade-offs in Software Engineering.
- Author
-
Robillard, Martin P., Arya, Deeksha M., Ernst, Neil A., Guo, Jin L. C., Lamothe, Maxime, Nassif, Mathieu, Novielli, Nicole, Serebrenik, Alexander, Steinmacher, Igor, and Stol, Klaas-Jan
- Subjects
DESIGN software ,EXPERIMENTAL design ,SOFTWARE architecture ,RESEARCH personnel ,OPPORTUNITY costs ,SOFTWARE engineering - Abstract
Reflecting on the limitations of a study is a crucial part of the research process. In software engineering studies, this reflection is typically conveyed through discussions of study limitations or threats to validity. In current practice, such discussions seldom provide sufficient insight to understand the rationale for decisions taken before and during the study, and their implications. We revisit the practice of discussing study limitations and threats to validity and identify its weaknesses. We propose to refocus this practice of self-reflection to a discussion centered on the notion of trade-offs. We argue that documenting trade-offs allows researchers to clarify how the benefits of their study design decisions outweigh the costs of possible alternatives. We present guidelines for reporting trade-offs in a way that promotes a fair and dispassionate assessment of researchers' work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Reducing Racial Bias in Scientific Communication: Journal Policies and Their Influence on Reporting Racial Demographics.
- Author
-
Auelua-Toomey, Sakaria Laisene, Mortenson, Elizabeth, and Roberts, Steven Othello
- Subjects
- *
PREVENTION of racism , *CLINICAL psychology , *JOB involvement , *GOVERNMENT policy , *DOCTORAL programs , *LOGISTIC regression analysis , *AUTHORSHIP , *DESCRIPTIVE statistics , *SCHOLARLY communication , *ODDS ratio , *PUBLISHING , *DATA analysis software , *CONFIDENCE intervals - Abstract
Research titles with White samples, compared to research titles with samples of color, have been less likely to include the racial identity of the sample. This unequal writing practice has serious ramifications for both the history and future of psychological science, as it solidifies in the permanent scientific record the false notion that research with White samples is more generalizable and valuable than research with samples of color. In the present research, we experimentally tested the extent to which PhD students (63% White students, 27% students of color) engaged in this unequal writing practice, as well as the extent to which this practice might be disrupted by journal policies. In Study 1, PhD students who read about research conducted with a White sample, compared to those who read about the exact same research conducted with a Black sample, were significantly less likely to mention the sample's racial identity when generating research titles, keywords, and summaries. In Study 2, PhD students instructed to mention the racial identity of their samples, and PhD students instructed to not mention the identity of their samples (though to a lesser extent), were less likely to write about the White versus Black samples unequally. Across both studies, we found that PhD students were overall supportive of a policy to make the racial demographics of samples more transparent, believing that it would help to reduce racial biases in the field. Public Significance Statement: We discovered that, when left to their own discretion, PhD students were less likely to specify the racial demographics of a research sample in their scientific writing when the sample was White compared to when it was Black. Such a White-centric bias could imply to readers that research with White samples is inherently more valuable and generalizable. 
However, our findings also indicate that a journal policy mandating the mention of racial demographics in research samples can mitigate this racial inequality in communication. This policy proved more effective than an alternative colorblind policy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. BIBLIOMETRIC ANALYSIS OF PHD, RESIDENCY DISSERTATIONS AND MASTER'S THESES IN PUBLIC HEALTH DEPARTMENTS IN TÜRKİYE BETWEEN 1970-2022.
- Author
-
DENİZLİ, Yasemin, UÇAR, Abdullah, UÇAR, Mahmut Talha, and TUNCA, Muhammet Yunus
- Subjects
CROSS-sectional method ,PATIENT education ,DATA mining ,MEDICAL personnel ,DATA analysis ,INTERNSHIP programs ,UNIVERSITIES & colleges ,TRAVEL hygiene ,HYGIENE ,DESCRIPTIVE statistics ,ACADEMIC dissertations ,NON-communicable diseases ,DEPARTMENTS ,BIBLIOMETRICS ,RESEARCH methodology ,METADATA ,QUALITY of life ,STATISTICS ,PUBLIC health ,HEALTH facilities ,MASTERS programs (Higher education) ,HEALTH promotion ,STAKEHOLDER analysis ,COMPARATIVE studies ,DATA analysis software ,INDUSTRIAL safety ,PSYCHOSOCIAL factors - Abstract
- Published
- 2024
- Full Text
- View/download PDF
20. Rhetoric of psychological measurement theory and practice.
- Author
-
Slaney, Kathleen L., Graham, Megan E., Dhillon, Ruby S., and Hohn, Richard E.
- Subjects
PSYCHOMETRICS ,THEORY-practice relationship ,PSYCHOLOGICAL literature ,RHETORIC ,SCIENTIFIC language - Abstract
Metascience scholars have long been concerned with tracking the use of rhetorical language in scientific discourse, oftentimes to analyze the legitimacy and validity of scientific claim-making. Psychology, however, has only recently become the explicit target of such metascientific scholarship, much of which has been in response to the recent crises surrounding replicability of quantitative research findings and questionable research practices. The focus of this paper is on the rhetoric of psychological measurement and validity scholarship, in both the theoretical and methodological and empirical literatures. We examine various discourse practices in published psychological measurement and validity literature, including: (a) clear instances of rhetoric (i.e., persuasion or performance); (b) common or rote expressions and tropes (e.g., perfunctory claims or declarations); (c) metaphors and other “literary” styles; and (d) ambiguous, confusing, or unjustifiable claims. The methodological approach we use is informed by a combination of conceptual analysis and exploratory grounded theory, the latter of which we used to identify relevant themes within the published psychological discourse. Examples of both constructive and useful or misleading and potentially harmful discourse practices will be given. Our objectives are both to contribute to the critical methodological literature on psychological measurement and connect metascience in psychology to broader interdisciplinary examinations of science discourse. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Cognition of Time and Thinking Beyond
- Author
-
Bi, Zedong, Crusio, Wim E., Series Editor, Dong, Haidong, Series Editor, Radeke, Heinfried H., Series Editor, Rezaei, Nima, Series Editor, Steinlein, Ortrud, Series Editor, Xiao, Junjie, Series Editor, Merchant, Hugo, editor, and de Lafuente, Victor, editor
- Published
- 2024
- Full Text
- View/download PDF
22. Advances in Methods and Practices in Psychological Science
- Subjects
psychology ,psychological science ,research methods ,replication ,metascience ,registered replication report ,Psychology ,BF1-990
- Published
- 2024
23. How can meta-research be used to evaluate and improve the quality of research in the field of traditional, complementary, and integrative medicine?
- Author
-
Jeremy Y. Ng, Myeong Soo Lee, Jian-ping Liu, Amie Steel, L. Susan Wieland, Claudia M. Witt, David Moher, and Holger Cramer
- Subjects
Complementary and integrative medicine ,Meta-research ,Metascience ,Research quality ,Traditional medicine ,Miscellaneous systems and treatments ,RZ409.7-999 - Abstract
The field of traditional, complementary, and integrative medicine (TCIM) has garnered increasing attention due to its holistic approach to health and well-being. While the quantity of published research about TCIM has increased exponentially, critics have argued that the field faces challenges related to methodological rigour, reproducibility, and overall quality. This article proposes meta-research as one approach to evaluating and improving the quality of TCIM research. Meta-research, also known as research about research, can be defined as “the study of research itself: its methods, reporting, reproducibility, evaluation, and incentives”. By systematically evaluating methodological rigour, identifying biases, and promoting transparency, meta-research can enhance the reliability and credibility of TCIM research. Specific topics of interest that are discussed in this article include the following: 1) study design and research methodology, 2) reporting of research, 3) research ethics, integrity, and misconduct, 4) replicability and reproducibility, 5) peer review and journal editorial practices, 6) research funding: grants and awards, and 7) hiring, promotion, and tenure. For each topic, we provide case examples to illustrate meta-research applications in TCIM. We argue that meta-research initiatives can contribute to maintaining public trust, safeguarding research integrity, and advancing evidence-based TCIM practice, while challenges include navigating methodological complexities, biases, and disparities in funding and academic recognition. Future directions involve tailored research methodologies, interdisciplinary collaboration, policy implications, and capacity building in meta-research.
- Published
- 2024
- Full Text
- View/download PDF
24. Zooming in on what counts as core and auxiliary: A case study on recognition models of visual working memory
- Author
-
Robinson, Maria M., Williams, Jamal R., Wixted, John T., and Brady, Timothy F.
- Published
- 2024
- Full Text
- View/download PDF
25. A Revised and Expanded Taxonomy for Understanding Heterogeneity in Research and Reporting Practices.
- Author
-
Manapat, Patrick D., Anderson, Samantha F., and Edwards, Michael C.
- Abstract
Concerns about replication failures can be partially recast as concerns about excessive heterogeneity in research results. Although this heterogeneity is an inherent part of science (e.g., sampling variability; studying different conditions), not all heterogeneity results from unavoidable sources. In particular, the flexibility researchers have when designing studies and analyzing data adds additional heterogeneity. This flexibility has been the topic of considerable discussion in the last decade. Ideas, and corresponding phrases, have been introduced to help unpack researcher behaviors, including researcher degrees of freedom and questionable research practices. Using these concepts and phrases, methodological and substantive researchers have considered how researchers' choices impact statistical conclusions and reduce clarity in the research literature. While progress has been made, inconsistent, vague, and overlapping use of the terminology surrounding these choices has made it difficult to have clear conversations about the most pressing issues. Further refinement of the language conveying the underlying concepts can catalyze further progress. We propose a revised, expanded taxonomy for assessing research and reporting practices. In addition, we redefine several crucial terms in a way that reduces overlap and enhances conceptual clarity, with particular focus on distinguishing practices along two lines: research versus reporting practices and choices involving multiple empirically supported options versus choices known to be subpar. We illustrate the effectiveness of these changes using conceptual and simulated demonstrations, and we discuss how this taxonomy can be valuable to substantive researchers by helping to navigate this flexibility and to methodological researchers by motivating research toward areas of greatest need. 
When replicating a scientific study, it is not reasonable to expect identical results - there will be some degree of variability from one study to another. However, too much variability between replication studies can begin to distort the truth and/or make it difficult to interpret a series of research results. Methodological and statistical choices that researchers make have the potential to add unnecessary variability. The many subjective choices involved in designing a study or analyzing data are likely to alter results, which increases the level of variability across a series of studies. Although progress has been made in addressing the role of researcher choice around methodological issues, the inconsistent use of terminology (e.g., researcher degrees of freedom, questionable research practices) has made discussions confusing. In this article, we present a new taxonomy for assessing research and reporting practices that is meant to clarify important terms and enhance conceptual clarity. We illustrate the usefulness of our new taxonomy with conceptual and simulated demonstrations and discuss how this taxonomy can be valuable to both substantive and methodological researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Increasing Value and Reducing Waste of Research on Neurofeedback Effects in Post-traumatic Stress Disorder: A State-of-the-Art-Review.
- Author
-
Marcu, Gabriela Mariana, Dumbravă, Andrei, Băcilă, Ionuţ-Ciprian, Szekely-Copîndean, Raluca Diana, and Zăgrean, Ana-Maria
- Abstract
Post-Traumatic Stress Disorder (PTSD) is often considered challenging to treat due to factors that contribute to its complexity. In the last decade, more attention has been paid to non-pharmacological or non-psychological therapies for PTSD, including neurofeedback (NFB). NFB is a promising non-invasive technique targeting specific brainwave patterns associated with psychiatric symptomatology. By learning to regulate brain activity in a closed-loop paradigm, individuals can improve their functionality while reducing symptom severity. However, owing to its lax regulation and heterogeneous legal status across different countries, the degree to which it has scientific support as a psychiatric treatment remains controversial. In this state-of-the-art review, we searched PubMed, Cochrane Central, Web of Science, Scopus, and MEDLINE and identified meta-analyses and systematic reviews exploring the efficacy of NFB for PTSD. We included seven systematic reviews, out of which three included meta-analyses (32 studies and 669 participants) that targeted NFB as an intervention while addressing a single condition—PTSD. We used the MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 and the criteria described by Cristea and Naudet (Behav Res Therapy 123:103479, 2019, https://doi.org/10.1016/j.brat.2019.103479) to identify sources of research waste and increasing value in biomedical research. The seven assessed reviews had an overall extremely poor quality score (five critically low, one low, one moderate, and none high) and multiple sources of waste while opening opportunities for increasing value in the NFB literature. Our research shows that it remains unclear whether NFB training is significantly beneficial in treating PTSD. The quality of the investigated literature is low and maintains a persistent uncertainty over numerous points, which are highly important for deciding whether an intervention has clinical efficacy. 
Just as importantly, none of the reviews we appraised explored the statistical power, referred to open data of the included studies, or adjusted their pooled effect sizes for publication bias and risk of bias. Based on the obtained results, we identified some recurrent sources of waste (such as a lack of research decisions based on sound questions or using an appropriate methodology in a fully transparent, unbiased, and useable manner) and proposed some directions for increasing value (homogeneity and consensus) in designing and reporting research on NFB interventions in PTSD. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Towards Diversifying Early Language Development Research: The First Truly Global International Summer/Winter School on Language Acquisition (/L+/) 2021.
- Author
-
Aravena-Bravo, Paulina, Cristia, Alejandrina, Garcia, Rowena, Kotera, Hiromasa, Nicolas, Ramona Kunene, Laranjo, Ronel, Arokoyo, Bolanle Elizabeth, Benavides-Varela, Silvia, Benders, Titia, Boll-Avetisyan, Natalie, Cychosz, Margaret, Ben, Rodrigo Dal, Diop, Yatma, Durán-Urzúa, Catalina, Havron, Naomi, Manalili, Marie, Narasimhan, Bhuvana, Omane, Paul Okyere, Rowland, Caroline, and Kolberg, Leticia Schiavon
- Subjects
- *
LANGUAGE acquisition , *LANGUAGE research , *LANGUAGE schools , *RESEARCH personnel , *RESEARCH & development - Abstract
With a long-term aim of empowering researchers everywhere to contribute to work on language development, we organized the First Truly Global /L+/ International Summer/Winter School on Language Acquisition, a free 5-day virtual school for early career researchers. In this paper, we describe the school, our experience organizing it, and lessons learned. The school had a diverse organizer team, composed of 26 researchers (17 from underrepresented areas: Subsaharan Africa, South and Southeast Asia, and Central and South America); and a diverse volunteer team, with a total of 95 volunteers from 35 different countries, nearly half from underrepresented areas. This helped worldwide promotion of the school, leading to 958 registrations from 88 different countries, with 300 registrants (based in 63 countries, 80% from underrepresented areas) selected to participate in the synchronous aspects of the event. The school employed asynchronous. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. The use of scientific methods and models in the philosophy of science.
- Author
-
Ventura, Rafael
- Abstract
What is the relation between philosophy of science and the sciences? As Pradeu et al. (British Journal for the Philosophy of Science https://doi.org/10.1086/715518, 2021) and Khelfaoui et al. (Synthese 199:6219, 2021) recently show, part of this relation is constituted by "philosophy in science": the use of philosophical methods to address questions in the sciences. But another part is what one might call "science in philosophy": the use of methods drawn from the sciences to tackle philosophical questions. In this paper, we focus on one class of such methods and examine the role that model-based methods play within "science in philosophy". To do this, we first build a bibliographic coupling network with Web of Science records of all papers published in philosophy of science journals from 2000 to 2020 (N = 9217). After detecting the most prominent communities of papers in the network, we use a supervised classifier to identify all papers that use model-based methods. Drawing on work in cultural evolution, we also propose a model to represent the evolution of methods in each one of these communities. Finally, we measure the strength of cultural selection for model-based methods during the given time period by integrating model and data. Results indicate not only that model-based methods have had a significant presence in philosophy of science over the last two decades, but also that there is considerable variation in their use across communities. Results further indicate that some communities have experienced strong selection for the use of model-based methods but that others have not; we validate this finding with a logistic regression of paper methodology on publication year. We conclude by discussing some implications of our findings and suggest that model-based methods play an increasingly important role within "science in philosophy" in some but not all areas of philosophy of science. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Subjective evidence evaluation survey for many-analysts studies
- Author
-
Alexandra Sarafoglou, Suzanne Hoogeveen, Don van den Bergh, Balazs Aczel, Casper J. Albers, Tim Althoff, Rotem Botvinik-Nezer, Niko A. Busch, Andrea M. Cataldo, Berna Devezer, Noah N. N. van Dongen, Anna Dreber, Eiko I. Fried, Rink Hoekstra, Sabine Hoffman, Felix Holzmeister, Jürgen Huber, Nick Huntington-Klein, John Ioannidis, Magnus Johannesson, Michael Kirchler, Eric Loken, Jan-Francois Mangin, Dora Matzke, Albert J. Menkveld, Gustav Nilsonne, Don van Ravenzwaaij, Martin Schweinsberg, Hannah Schulz-Kuempel, David R. Shanks, Daniel J. Simons, Barbara A. Spellman, Andrea H. Stoevenbelt, Barnabas Szaszi, Darinka Trübutschek, Francis Tuerlinckx, Eric L. Uhlmann, Wolf Vanpaemel, Jelte Wicherts, and Eric-Jan Wagenmakers
- Subjects
open science ,team science ,scientific transparency ,metascience ,crowdsourcing analysis ,Science - Abstract
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.
- Published
- 2024
- Full Text
- View/download PDF
30. Where next for partial randomisation of research funding? The feasibility of RCTs and alternatives [version 2; peer review: 2 approved, 1 approved with reservations]
- Author
-
Tom Stafford, Bilal Mateen, Dan Hind, Ines Rombach, Helen Buckley Woods, James Wilsdon, and Munya Dimario
- Subjects
metascience ,metaresearch ,review ,experiments ,lottery ,eng ,Medicine ,Science - Abstract
We outline essential considerations for any study of partial randomisation of research funding, and consider scenarios in which randomised controlled trials (RCTs) would be feasible and appropriate. We highlight the interdependence of target outcomes, sample availability and statistical power for determining the cost and feasibility of a trial. For many choices of target outcome, RCTs may be less practical and more expensive than they at first appear (in large part due to issues pertaining to sample size and statistical power). As such, we briefly discuss alternatives to RCTs. It is worth noting that many of the considerations relevant to experiments on partial randomisation may also apply to other potential experiments on funding processes (as described in The Experimental Research Funder’s Handbook. RoRI, June 2022).
- Published
- 2024
- Full Text
- View/download PDF
31. The ideal psychologist vs. a messy reality : using and misunderstanding effect sizes, confidence intervals and power
- Author
-
Collins, Elizabeth, Watt, Roger, and Caes, Line
- Subjects
power analysis ,effect size ,statistics ,psychology ,open science ,replication crisis ,metascience ,confidence intervals - Abstract
In the past two decades, there have been calls for statistical reform in psychology. Three key concepts within reform are effect sizes, confidence intervals and statistical power. The aim of this thesis was to examine the use and knowledge of these particular concepts, to examine whether researchers are suitably equipped to incorporate them into their research. This thesis consists of five studies. Study 1 reviewed author guidelines across 100 psychology journals, to look for any statistical recommendations. Study 2 (n = 247) and Study 3 (n = 56) examined the use and knowledge of effect sizes using a questionnaire and online experiment. Study 4 surveyed psychology researchers on their use and knowledge of confidence intervals (n = 206). Similarly, Study 5 surveyed psychology researchers on their use and knowledge of power analyses and statistical power (n = 214). Typically, psychology journals expect authors to report effect sizes in their work, although there are fewer expectations related to confidence intervals. Power analyses are also frequently encouraged for sample size justification. Self-reported use of effect sizes, confidence intervals and power analyses was high, while common barriers to use included a lack of knowledge, a lack of motivation, and the influence of academic peers. While knowledge of effect sizes was quite high, they appear to only be understood in relatively limited contexts. In contrast, both confidence intervals and statistical power appear to be frequently misunderstood, and many researchers find power analysis calculations difficult. Researchers would benefit from increased education and support to encourage them to confidently adopt an assortment of statistics in their work, and more effort must be made to prevent statistical changes from becoming a new series of tick-box exercises that do not improve the integrity of psychological research.
- Published
- 2022
32. Psychological Science in the Wake of COVID-19: Social, Methodological, and Metascientific Considerations.
- Author
-
Rosenfeld, Daniel L, Balcetis, Emily, Bastian, Brock, Berkman, Elliot T, Bosson, Jennifer K, Brannon, Tiffany N, Burrow, Anthony L, Cameron, C Daryl, Chen, Serena, Cook, Jonathan E, Crandall, Christian, Davidai, Shai, Dhont, Kristof, Eastwick, Paul W, Gaither, Sarah E, Gangestad, Steven W, Gilovich, Thomas, Gray, Kurt, Haines, Elizabeth L, Haselton, Martie G, Haslam, Nick, Hodson, Gordon, Hogg, Michael A, Hornsey, Matthew J, Huo, Yuen J, Joel, Samantha, Kachanoff, Frank J, Kraft-Todd, Gordon, Leary, Mark R, Ledgerwood, Alison, Lee, Randy T, Loughnan, Steve, MacInnis, Cara C, Mann, Traci, Murray, Damian R, Parkinson, Carolyn, Pérez, Efrén O, Pyszczynski, Tom, Ratner, Kaylin, Rothgerber, Hank, Rounds, James D, Schaller, Mark, Silver, Roxane Cohen, Spellman, Barbara A, Strohminger, Nina, Swim, Janet K, Thoemmes, Felix, Urganci, Betul, Vandello, Joseph A, Volz, Sarah, Zayas, Vivian, and Tomiyama, A Janet
- Subjects
Humans ,Pandemics ,COVID-19 ,SARS-CoV-2 ,large-scale collaboration ,metascience ,Mental Health ,Good Health and Well Being ,Psychology ,Cognitive Sciences ,Social Psychology - Abstract
The COVID-19 pandemic has extensively changed the state of psychological science from what research questions psychologists can ask to which methodologies psychologists can use to investigate them. In this article, we offer a perspective on how to optimize new research in the pandemic's wake. Because this pandemic is inherently a social phenomenon-an event that hinges on human-to-human contact-we focus on socially relevant subfields of psychology. We highlight specific psychological phenomena that have likely shifted as a result of the pandemic and discuss theoretical, methodological, and practical considerations of conducting research on these phenomena. After this discussion, we evaluate metascientific issues that have been amplified by the pandemic. We aim to demonstrate how theoretically grounded views on the COVID-19 pandemic can help make psychological science stronger-not weaker-in its wake.
- Published
- 2022
33. Excavating FAIR Data: the Case of the Multicenter Animal Spinal Cord Injury Study (MASCIS), Blood Pressure, and Neuro-Recovery.
- Author
-
Almeida, Carlos A, Torres-Espin, Abel, Huie, J Russell, Sun, Dongming, Noble-Haeusslein, Linda J, Young, Wise, Beattie, Michael S, Bresnahan, Jacqueline C, Nielson, Jessica L, and Ferguson, Adam R
- Subjects
Animals ,Rats ,Spinal Cord Injuries ,Reproducibility of Results ,Blood Pressure ,Autonomic ,Data science ,Hemodynamics ,Metascience ,Motor recovery ,Neurotrauma ,Reproducibility ,Spinal contusion ,Spinal Cord Injury ,Injury - Trauma - (Head and Spine) ,Neurodegenerative ,Rehabilitation ,Injury (total) Accidents/Adverse Effects ,Neurosciences ,Good Health and Well Being ,Biochemistry and Cell Biology ,Neurology & Neurosurgery - Abstract
Meta-analyses suggest that the published literature represents only a small minority of the total data collected in biomedical research, with most becoming 'dark data' unreported in the literature. Dark data is due to publication bias toward novel results that confirm investigator hypotheses and the omission of data that do not. Publication bias contributes to scientific irreproducibility and failures in bench-to-bedside translation. Sharing dark data by making it Findable, Accessible, Interoperable, and Reusable (FAIR) may reduce the burden of irreproducible science by increasing transparency and supporting data-driven discoveries beyond the lifecycle of the original study. We illustrate feasibility of dark data sharing by recovering original raw data from the Multicenter Animal Spinal Cord Injury Study (MASCIS), an NIH-funded multi-site preclinical drug trial conducted in the 1990s that tested efficacy of several therapies after a spinal cord injury (SCI). The original drug treatments did not produce clear positive results and MASCIS data were stored in boxes for more than two decades. The goal of the present study was to independently confirm published machine learning findings that perioperative blood pressure is a major predictor of SCI neuromotor outcome (Nielson et al., 2015). We recovered, digitized, and curated the data from 1125 rats from MASCIS. Analyses indicated that high perioperative blood pressure at the time of SCI is associated with poorer health and worse neuromotor outcomes in more severe SCI, whereas low perioperative blood pressure is associated with poorer health and worse neuromotor outcome in moderate SCI. These findings confirm and expand prior results that a narrow window of blood-pressure control optimizes outcome, and demonstrate the value of recovering dark data for assessing reproducibility of findings with implications for precision therapeutic approaches.
- Published
- 2022
34. The replication crisis is less of a “crisis” in Lakatos’ philosophy of science than it is in Popper’s
- Author
-
Rubin, Mark
- Published
- 2025
- Full Text
- View/download PDF
35. Rhetoric of psychological measurement theory and practice
- Author
-
Kathleen L. Slaney, Megan E. Graham, Ruby S. Dhillon, and Richard E. Hohn
- Subjects
psychological measurement ,rhetoric ,rhetoric of science ,validation ,metascience ,methodological reform ,Psychology ,BF1-990 - Abstract
Metascience scholars have long been concerned with tracking the use of rhetorical language in scientific discourse, oftentimes to analyze the legitimacy and validity of scientific claim-making. Psychology, however, has only recently become the explicit target of such metascientific scholarship, much of which has been in response to the recent crises surrounding replicability of quantitative research findings and questionable research practices. The focus of this paper is on the rhetoric of psychological measurement and validity scholarship, in both the theoretical and methodological and empirical literatures. We examine various discourse practices in published psychological measurement and validity literature, including: (a) clear instances of rhetoric (i.e., persuasion or performance); (b) common or rote expressions and tropes (e.g., perfunctory claims or declarations); (c) metaphors and other “literary” styles; and (d) ambiguous, confusing, or unjustifiable claims. The methodological approach we use is informed by a combination of conceptual analysis and exploratory grounded theory, the latter of which we used to identify relevant themes within the published psychological discourse. Examples of both constructive and useful or misleading and potentially harmful discourse practices will be given. Our objectives are both to contribute to the critical methodological literature on psychological measurement and connect metascience in psychology to broader interdisciplinary examinations of science discourse.
- Published
- 2024
- Full Text
- View/download PDF
36. Biomarker adoption in developmental science: A data‐driven modelling of trends from 90 biomarkers across 20 years.
- Author
-
Qian, Weiqiang, Zhang, Chao, Piersiak, Hannah A., Humphreys, Kathryn L., and Mitchell, Colter
- Subjects
- *
BIOMARKERS , *C-reactive protein , *GLYCOSYLATED hemoglobin , *INTERLEUKINS , *SOMATOMEDIN , *CHILD development , *DEVELOPMENTAL psychology , *MULTIPLE regression analysis , *SYSTOLIC blood pressure , *RANDOM forest algorithms , *REGRESSION analysis , *MAGNETIC resonance imaging , *DNA methylation , *BRAIN cortical thickness , *DIASTOLIC blood pressure , *RESEARCH funding , *DESCRIPTIVE statistics , *TUMOR necrosis factors , *WAIST circumference , *PREDICTION models , *PERIODICAL articles , *STATISTICAL models , *PEPTIDE hormones , *BLOOD cell count , *BODY mass index , *IMPACT factor (Citation analysis) , *CYSTATIN C , *CHOLESTEROL - Abstract
Developmental scientists have adopted numerous biomarkers in their research to better understand the biological underpinnings of development, environmental exposures, and variation in long‐term health. Yet, adoption patterns merit investigation given the substantial resources used to collect, analyse, and train to use biomarkers in research with infants and children. We document trends in use of 90 biomarkers between 2000 and 2020 from approximately 430,000 publications indexed by the Web of Science. We provide a tool for researchers to examine each of these biomarkers individually using a data‐driven approach to estimate the biomarker growth trajectory based on yearly publication number, publication growth rate, number of author affiliations, National Institutes of Health dedicated funding resources, journal impact factor, and years since the first publication. Results indicate that most biomarkers fit a "learning curve" trajectory (i.e., experience rapid growth followed by a plateau), though a small subset decline in use over time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Comparative Cognition Needs Big Team Science: How Large-Scale Collaborations Will Unlock the Future of the Field.
- Author
-
Alessandroni, Nicolás, Altschul, Drew, Bazhydai, Marina, Byers-Heinlein, Krista, Elsherif, Mahmoud, Gjoneska, Biljana, Huber, Ludwig, Mazza, Valeria, Miller, Rachael, Nawroth, Christian, Pronizius, Ekaterina, Qadri, Muhammad A. J., Šlipogor, Vedrana, Soderstrom, Melanie, Stevens, Jeffrey R., Visser, Ingmar, Williams, Madison, Zettersten, Martin, and Prétôt, Laurent
- Subjects
- *
COGNITION research , *NUMBERS of species , *SAMPLE size (Statistics) , *COGNITION , *TEAMS - Abstract
Comparative cognition research has been largely constrained to isolated facilities, small teams, and a limited number of species. This has led to challenges such as conflicting conceptual definitions and underpowered designs. Here, we explore how Big Team Science (BTS) may remedy these issues. Specifically, we identify and describe four key BTS advantages: increasing sample size and diversity, enhancing task design, advancing theories, and improving welfare and conservation efforts. We conclude that BTS represents a transformative shift capable of advancing research in the field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Guidelines to improve internationalization in the psychological sciences.
- Author
-
Puthillam, Arathy, Montilla Doble, Lysander James, Delos Santos, Junix Jerald I., Elsherif, Mahmoud Medhat, Steltenpohl, Crystal N., Moreau, David, Pownall, Madeleine, Silverstein, Priya, Anand-Vembar, Shaakya, and Kapoor, Hansika
- Subjects
- *
GLOBALIZATION , *SCIENCE fairs , *SCIENCE conferences , *HUMAN behavior , *RESEARCH personnel - Abstract
Conversations about the internationalization of psychological sciences have occurred over a few decades with very little progress. Previous work shows up to 95% of participants in the studies published in mainstream journals are from Western, Educated, Industrialized, Rich, Democratic nations. Similarly, a large proportion of authors are based in North America. This imbalance is well-documented across a range of subfields in psychology, yet the specific steps and best practices to bridge publication and data gaps across world regions are still unclear. To address this issue, we conducted a hackathon at the Society for the Improvement of Psychological Science 2021 conference to develop guidelines to improve international representation of authors and participants, adapted for various stakeholders in the production of psychological knowledge. Based on this hackathon, we discuss specific guidelines and practices that funding bodies, academic institutions, professional academic societies, journal editors and reviewers, and researchers should engage with to ensure psychology is the scientific discipline of human behavior and cognition across the world. These recommendations will help us develop a more valid and fairer science of human sociality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Sociodemographic Reporting and Sample Composition Over 3 Decades of Psychopathology Research: A Systematic Review and Quantitative Synthesis.
- Author
-
Wilson, Sylia
- Subjects
- *
PATHOLOGICAL psychology , *ABNORMAL psychology , *GENDER identity , *ALASKA Natives , *RACE - Abstract
Although researchers seek to understand psychological phenomena in a population, quantitative research studies are conducted in smaller samples meant to represent the larger population of interest. This systematic review and quantitative synthesis considers reporting of sociodemographic characteristics and sample composition in the Journal of Abnormal Psychology (now the Journal of Psychopathology and Clinical Science) over the past 3 decades. Across k = 1,244 empirical studies, there were high and increasing rates of reporting of participant age/developmental stage and sex/gender, low but increasing reporting of socioeconomic status/income, and moderate and stable reporting of educational attainment. Rates of reporting of sexual orientation remained low and reporting of gender identity was essentially nonexistent. There were low to moderate but increasing rates of reporting of participant race and ethnicity. Approximately three-quarters of participants in studies over the past 3 decades were White, while the proportion of participants who were Asian, Black or African American, American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander, or Hispanic/Latino was much lower. Approximately two-thirds of participants were female, with this proportion increasing over time. There were also notable differences in the proportion of study participants as a function of race and sex/gender for different forms of psychopathology. Basic science and theoretical psychopathology research must include sociodemographically diverse samples that are representative of and generalizable to the larger human population, while seeking to decrease stigma of psychopathology and increase mental health equity. Recommendations are made to increase sociodemographic diversity in psychopathology research and the scientific review/publication process. 
General Scientific Summary: Basic science and theoretical research on the etiology, development, symptomatology, and course of psychopathology must include sociodemographically diverse samples, while seeking to decrease the stigma of psychopathology and increase mental health equity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. FENS‐Kavli Network of Excellence: Postponed, non‐competitive peer review for research funding.
- Author
-
Dresler, Martin
- Subjects
- *
RESEARCH funding , *EDUCATORS , *MERGERS & acquisitions , *RESEARCH grants , *OPEN scholarship - Abstract
Receiving research grants is among the highlights of an academic career, affirming previous accomplishments and enabling new research endeavours. Much of the process of acquiring research funding, however, is among the least favourite duties of many researchers: It is time consuming, often stressful and, in the majority of cases, unsuccessful. This resentment towards funding acquisition is backed up by empirical research: The current system to distribute research funding, via competitive calls for extensive research applications that undergo peer review, has repeatedly been shown to fail in its task to reliably rank proposals according to their merit, while at the same time being highly inefficient. The simplest, fairest and broadly supported alternative would be to distribute funding more equally across researchers, for example, by an increase of universities' base funding, thereby saving considerable time that can be spent on research instead. Here, I propose how to combine such a 'funding flat rate' model—or other efficient distribution strategies—with quality control through postponed, non‐competitive peer review using open science practices. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
41. A Metascientific Review of the Evidential Value of Acceptance and Commitment Therapy for Depression.
- Author
-
Williams, Alexander J., Botanov, Yevgeny, Giovanetti, Annaleis K., Perko, Victoria L., Sutherland, Carrie L., Youngren, Westley, and Sakaluk, John K.
- Subjects
-
*ACCEPTANCE & commitment therapy, *MENTAL health services, *COGNITIVE therapy, *MENTAL depression, *CLINICAL health psychology - Abstract
• Considering the replication crisis, metascientific reviews of therapy are needed. • We metascientifically reviewed Acceptance and Commitment Therapy (ACT) for depression. • ACT is credibly better than weak control groups in depression treatment. • Some evidence comparing ACT with cognitive behavioral therapy (CBT) was ambiguous. • Other evidence comparing ACT with CBT credibly indicated CBT's superiority. In the past three-and-a-half decades, nearly 500 randomized controlled trials (RCTs) have examined Acceptance and Commitment Therapy (ACT) for a range of health problems, including depression. However, emerging concerns regarding the replicability of scientific findings across psychology and mental health treatment outcome research highlight a need to re-examine the strength of evidence for treatment efficacy. Therefore, we conducted a metascientific review of the evidential value of ACT in treating depression. Whereas reporting accuracy was generally high across all trials, we found important differences in evidential value metrics corresponding to the types of control conditions used. RCTs of ACT compared to weaker controls (e.g., no treatment, waitlist) were well-powered, with sample sizes appropriate for detecting plausible effect sizes. They typically yielded stronger Bayesian evidence for (and larger posterior estimates of) ACT efficacy, though there was some evidence of significance inflation among these effects. RCTs of ACT against stronger controls (e.g., other psychotherapies), meanwhile, were poorly powered, designed to detect implausibly large effect sizes, and yielded ambiguous, if not contradictory, Bayesian evidence and estimates of efficacy. 
Although our review supports a view of ACT as efficacious for treating depression compared to weaker controls, future RCTs must provide more transparent reporting with larger groups of participants to properly assess the difference between ACT and competitor treatments such as behavioral activation and other forms of cognitive behavioral therapy. Clinicians and health organizations should reassess the use of ACT for depression if costs and resources are higher than for other efficacious treatments. Clinical trials contributing effects to our synthesis can be found at https://osf.io/qky35. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
42. Adversarial Collaboration: The Next Science Reform
- Author
-
Clark, Cory J., Tetlock, Philip E., Frisby, Craig L., editor, Redding, Richard E., editor, O'Donohue, William T., editor, and Lilienfeld, Scott O., editor
- Published
- 2023
- Full Text
- View/download PDF
43. Ethics and Games, Ethical Games and Ethics in Game
- Author
-
Carvalho, Luiz Paulo, Santoro, Flávia Maria, Oliveira, Jonice, Costa, Rosa Maria M., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Santos, Rodrigo Pereira dos, editor, and Hounsell, Marcelo da Silva, editor
- Published
- 2023
- Full Text
- View/download PDF
44. Unrestricted Versus Regulated Open Data Governance: A Bibliometric Comparison of SARS-CoV-2 Nucleotide Sequence Databases
- Author
-
Nathanael Sheehan, Federico Botta, and Sabina Leonelli
- Subjects
covid-19 ,genomic data sharing ,data infrastructures ,data governance ,open science ,metascience ,Science (General) ,Q1-390 - Abstract
Two distinct modes of data governance have emerged in accessing and reusing viral data pertaining to COVID-19: an unrestricted model, espoused by data repositories that are part of the International Nucleotide Sequence Database Collaboration, and a regulated model promoted by the Global Initiative on Sharing All Influenza Data. In this paper, we focus on publications mentioning either infrastructure in the period between January 2020 and January 2023, thus capturing a period of acute response to the COVID-19 pandemic. Through a variety of bibliometric and network science methods, we compare the extent to which either data infrastructure facilitated collaboration from different countries around the globe, to understand how data reuse can enhance forms of diversity between institutions, countries, and funding groups. Our findings reveal disparities in representation and usage between the two data infrastructures. We conclude that both approaches offer useful lessons, with the unrestricted model providing insights into complex data linkage and the regulated model demonstrating the importance of global representation.
- Published
- 2024
- Full Text
- View/download PDF
45. Is biomedical research self-correcting? Modelling insights on the persistence of spurious science
- Author
-
David Robert Grimes
- Subjects
metaresearch ,metascience ,publication bias ,publish or perish ,research integrity ,research waste ,Science - Abstract
The reality that volumes of published biomedical research are not reproducible is an increasingly recognized problem. Spurious results reduce the trustworthiness of reported science, increasing research waste. While science should be self-correcting from a philosophical perspective, that in isolation yields no information on the effort required to nullify suspect findings or the factors shaping how quickly science may be corrected. There is also a paucity of information on how perverse incentives in the publishing ecosystem, favouring novel positive findings over null results, shape the ability of published science to self-correct. Knowledge of the factors shaping the self-correction of science remains obscure, limiting our ability to mitigate harms. This modelling study introduces a simple model to capture dynamics of the publication ecosystem, exploring factors influencing research waste, trustworthiness, corrective effort and time to correction. Results from this work indicate that research waste and corrective effort are highly dependent on field-specific false positive rates and on the time delay before correctives to spurious findings are published. The model also suggests conditions under which biomedical science is self-correcting and those under which publication of correctives alone cannot stem the propagation of untrustworthy results. Finally, this work models a variety of potential mitigation strategies, including researcher- and publisher-driven interventions.
- Published
- 2024
- Full Text
- View/download PDF
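The publication-ecosystem dynamics described in this abstract can be illustrated with a toy simulation. This is a hedged sketch only: the parameter names, per-step correction mechanism, and default rates below are assumptions for illustration, not the author's actual model.

```python
import random

def simulate(steps=200, papers_per_step=100, false_pos_rate=0.2,
             corrective_prob=0.05, seed=1):
    """Toy publication-ecosystem dynamics: spurious papers accumulate
    each step and are only retired when a corrective appears (with a
    fixed probability per paper per step). Returns (total spurious
    papers published, correctives issued, mean steps-to-correction)."""
    rng = random.Random(seed)
    open_spurious = []          # ages of still-uncorrected spurious findings
    total_spurious = correctives = 0
    correction_times = []
    for _ in range(steps):
        # New publications: each has some chance of being spurious.
        n_spurious = sum(rng.random() < false_pos_rate
                         for _ in range(papers_per_step))
        total_spurious += n_spurious
        open_spurious += [0] * n_spurious
        # Corrective effort: each open spurious finding may be corrected.
        still_open = []
        for age in open_spurious:
            if rng.random() < corrective_prob:
                correctives += 1
                correction_times.append(age)
            else:
                still_open.append(age + 1)
        open_spurious = still_open
    mean_delay = (sum(correction_times) / len(correction_times)
                  if correction_times else float("inf"))
    return total_spurious, correctives, mean_delay
```

Raising `false_pos_rate` or lowering `corrective_prob` in this sketch inflates both the stock of uncorrected findings and the corrective effort required, which is the qualitative dependence the abstract reports.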
46. Sorry we're open, come in we're closed: different profiles in the perceived applicability of open science practices to completed research projects
- Author
-
Jürgen Schneider
- Subjects
open science practices ,profiles ,applicability ,metascience ,Science - Abstract
Open science is an increasingly important topic for research, politics and funding agencies. However, the discourse on open science is heavily influenced by certain research fields and paradigms, leading to the risk of generalizing what counts as openness to other research fields, regardless of its applicability. In our paper, we provide evidence that researchers perceive different profiles in the potential to apply open science practices to their projects, making a one-size-fits-all approach unsuitable. In a pilot study, we first systematized the breadth of open science practices. The subsequent survey study examined the perceived applicability of 13 open science practices across completed research projects in a broad variety of research disciplines. We were able to identify four different profiles in the perceived applicability of open science practices. For researchers conducting qualitative-empirical research projects, comprehensively implementing the breadth of open science practices tends not to be feasible. Further, research projects from some disciplines tended to fit a profile with little opportunity for public participation. Yet, disciplines and research paradigms appear not to be the key factors in predicting the perceived applicability of open science practices. Our findings underscore the case for considering project-related conditions when implementing open science practices. This has implications for the establishment of policies, guidelines and standards concerning open science.
- Published
- 2024
- Full Text
- View/download PDF
47. Region of Attainable Redaction, an extension of Ellipse of Insignificance analysis for gauging impacts of data redaction in dichotomous outcome trials
- Author
-
David Robert Grimes
- Subjects
metascience ,replicability ,sustainability ,metaresearch ,tools ,statistics ,Medicine ,Science ,Biology (General) ,QH301-705.5 - Abstract
In biomedical science, it is a reality that many published results do not withstand deeper investigation, and there is growing concern over a replicability crisis in science. Recently, Ellipse of Insignificance (EOI) analysis was introduced as a tool to allow researchers to gauge the robustness of reported results in dichotomous outcome design trials, giving precise deterministic values for the degree of miscoding between events and non-events tolerable simultaneously in both control and experimental arms (Grimes, 2022). While this is useful for situations where potential miscoding might transpire, it does not account for situations where apparently significant findings might result from accidental or deliberate data redaction in either the control or experimental arms of an experiment, or from missing data or systematic redaction. To address these scenarios, we introduce Region of Attainable Redaction (ROAR), a tool that extends EOI analysis to account for situations of potential data redaction. This produces a bounded cubic curve rather than an ellipse, and we outline how this can be used to identify potential redaction through an approach analogous to EOI. Applications are illustrated, and source code, including a web-based implementation that performs EOI and ROAR analysis in tandem for dichotomous outcome trials, is provided.
- Published
- 2024
- Full Text
- View/download PDF
48. Prior beliefs and the interpretation of scientific results
- Author
-
Ami Eidels
- Subjects
metascience ,research methods ,credibility ,transparency ,replication ,real-time procedures ,Science - Abstract
How do prior beliefs affect the interpretation of scientific results? I discuss a hypothetical scenario where researchers publish results that could either support a theory they believe in, or refute that theory, and ask if the two instances carry the same weight. More colloquially, I ask if we should overweigh scientific results supporting a given theory when they are reported by a researcher, or a team, that initially did not support that theory. I illustrate the challenge using two examples from psychology: evidence accumulation models, and extrasensory perception.
- Published
- 2023
- Full Text
- View/download PDF
49. Eleven years of student replication projects provide evidence on the correlates of replicability in psychology
- Author
-
Veronica Boyce, Maya Mathur, and Michael C. Frank
- Subjects
metascience ,replication ,large-scale replication project ,pedagogical replication ,social psychology ,cognitive psychology ,Science - Abstract
Cumulative scientific progress requires empirical results that are robust enough to support theory construction and extension. Yet in psychology, some prominent findings have failed to replicate, and large-scale studies suggest replicability issues are widespread. The identification of predictors of replication success is limited by the difficulty of conducting large samples of independent replication experiments, however: most investigations reanalyse the same set of replications. We introduce a new dataset of 176 replications from students in a graduate-level methods course. Replication results were judged to be successful in 49% of replications; of the 136 where effect sizes could be numerically compared, 46% had point estimates within the prediction interval of the original outcome (versus the expected 95%). Larger original effect sizes and within-participants designs were especially related to replication success. Our results indicate that, consistent with prior reports, the robustness of the psychology literature is low enough to limit cumulative progress by student investigators.
- Published
- 2023
- Full Text
- View/download PDF
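The prediction-interval criterion mentioned in this abstract (a replication is consistent if its point estimate falls within the prediction interval of the original outcome) can be sketched as follows. The normal-approximation formula and the function names are assumptions for illustration, not the authors' code.

```python
import math

def prediction_interval(orig_effect, se_orig, se_rep, z=1.96):
    """Approximate 95% prediction interval for a replication estimate,
    given the original effect and the standard errors of both studies.
    The interval widens with the uncertainty of BOTH estimates."""
    half_width = z * math.sqrt(se_orig**2 + se_rep**2)
    return (orig_effect - half_width, orig_effect + half_width)

def replication_consistent(orig_effect, se_orig, rep_effect, se_rep):
    """True if the replication point estimate falls inside the
    prediction interval implied by the original study."""
    lo, hi = prediction_interval(orig_effect, se_orig, se_rep)
    return lo <= rep_effect <= hi
```

Under this criterion, roughly 95% of faithful replications of a true effect should land inside the interval, which is why the 46% figure reported in the abstract signals low robustness.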
50. Investigating Lay Perceptions of Psychological Measures: A Registered Report
- Author
-
Joseph Mason, Madeleine Pownall, Amy Palmer, and Flavio Azevedo
- Subjects
measurement crisis ,credibility crisis ,cognitive interview ,think aloud ,qualitative research ,metascience ,Psychology ,BF1-990 ,Social Sciences - Abstract
In recent years, the reliability and validity of psychology measurement practices have been called into question, as part of an ongoing reappraisal of the robustness, reproducibility, and transparency of psychological research. While useful progress has been made, to date, the majority of discussions surrounding psychology's measurement crisis have involved technical, quantitative investigations into the validity, reliability, and statistical robustness of psychological measures. This registered report offers a seldom-heard qualitative perspective on these ongoing debates, critically exploring the lay perceptions of members of the general public (i.e., non-experts) regarding widely used measures in psychology. Using a combination of cognitive interviews and a think aloud study protocol, participants (n = 23) completed one of three popular psychology measures. Participants reflected on each of the measures, discussed the contents, and provided perceptions of what the measures are designed to test. Coding of the think aloud protocols showed that participants across the measures had issues in interpreting and responding to items. Thematic analysis of the cognitive interviews identified three dominant themes that each relate to lay perceptions of psychology measurements. These were: (1) participants' grappling with attempting to 'capture their multiple selves' in the questionnaires, (2) participants perceiving the questionnaire method as generally 'missing nuance and richness' and (3) exposing the 'hidden labour of questionnaires'. These findings are discussed in the context of psychology's measurement reform.
- Published
- 2023
- Full Text
- View/download PDF