
Quantifying Error in Effect Size Estimates in Attention, Executive Function and Implicit Learning

Authors:
Kelly Garner
Christopher Raymond Nolan
Abbey S Nydam
Zoie Nott
Howard Bowman
Paul Edmund Dux
Publication Year:
2022
Publisher:
Center for Open Science, 2022.

Abstract

Accurate quantification of effect sizes has the power to motivate theory and to reduce misinvestment of scientific resources by informing power calculations during study planning. However, a combination of publication bias and small sample sizes ($N \approx 25$) hampers certainty in current effect size estimates. We sought to determine the extent to which sample sizes may produce error in effect size estimates for four commonly used paradigms assessing attention, executive function and implicit learning: the Attentional Blink (AB), Multitasking (MT), Contextual Cueing (CC), and the Serial Response Task (SRT). We combined a large dataset with a bootstrapping approach to simulate 1000 experiments across a range of $N$ (13-313). Beyond quantifying the effect size and statistical power that can be anticipated for each study design, we demonstrate that experiments with lower values of $N$ can potentially double or triple information loss. Furthermore, we identify the probability that sampling a similar study will provide a reasonable effect size estimate, and show that using such an approach for power calculations will lead to an imprecise estimate between 40% and 67% of the time, given commonly used sample sizes. We conclude with practical recommendations for researchers and demonstrate how our simulation approach can yield theoretical insights that are not readily achieved by other methods, such as identifying the information gained from rejecting the null hypothesis, and quantifying the contribution of individual variation to error in effect size estimates.
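The bootstrap logic described in the abstract can be illustrated in a short sketch: draw many simulated experiments of size $N$ by resampling participants with replacement from a large reference dataset, compute an effect size for each, and inspect how the spread of estimates shrinks as $N$ grows. This is a minimal illustration with synthetic data, not the authors' code; the function names, the choice of Cohen's d, and the simulated "population" are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(x, y):
    """Cohen's d for paired conditions: mean difference / SD of differences."""
    diff = x - y
    return diff.mean() / diff.std(ddof=1)

def bootstrap_effect_sizes(cond_a, cond_b, n, n_experiments=1000, rng=rng):
    """Simulate n_experiments experiments of sample size n by resampling
    participants (with replacement) from a large reference dataset."""
    n_total = len(cond_a)
    estimates = np.empty(n_experiments)
    for i in range(n_experiments):
        idx = rng.integers(0, n_total, size=n)  # one simulated sample of n participants
        estimates[i] = cohens_d(cond_a[idx], cond_b[idx])
    return estimates

# Hypothetical reference data: 313 participants with a true within-subject effect
pop_a = rng.normal(0.5, 1.0, size=313)
pop_b = rng.normal(0.0, 1.0, size=313)

for n in (13, 25, 100, 313):
    d_hat = bootstrap_effect_sizes(pop_a, pop_b, n)
    print(f"N={n:3d}: mean d = {d_hat.mean():.2f}, SD of estimates = {d_hat.std():.2f}")
```

The SD of the 1000 estimates at each $N$ is the quantity of interest: it shows directly how much an effect size from a single small-sample study can miss the large-sample value.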

Details

Database:
OpenAIRE
Accession number:
edsair.doi...........b5bf26f31d87805b9ac0af39d377a0d7