1. Biases in research evaluation: Inflated assessment, oversight, or error-type weighting?
- Author
- Reich, Darcy A., Green, Melanie C., Brock, Timothy C., and Tetlock, Philip E.
- Subjects
- MOTIVATION (Psychology), MANIPULATIVE behavior, PSYCHOLOGISTS, PSYCHOLOGY
- Abstract
Abstract: Reviewers of research are more lenient when evaluating studies on important topics [Wilson, T. D., DePaulo, B. M., Mook, D. G., & Klaaren, K. J. (1993). Scientists' evaluations of research: The biasing effects of the importance of the topic. Psychological Science, 4(5), 323–325]. Three experiments (N = 145, 36, and 91 psychologists) investigated different explanations of this leniency: inflation of assessments (applying a heuristic associating importance with quality), oversight (failing to detect flaws), and error-weighting (prioritizing Type II error avoidance). In Experiment 1, psychologists evaluated the publishability and rigor of studies in a 2 (topic importance) × 2 (accuracy motivation) × 2 (research domain) design. Experiment 2 featured an exact replication of Wilson et al. and suggested that report length moderated the effects of importance on perceived rigor, but not on publishability. In Experiment 3, a manipulation of error-weighting replaced the manipulation of domain (Experiment 1). Results favored error-weighting rather than inflation or oversight: perceived seriousness of Type II error (in Experiments 1 and 3) and the error-weighting manipulation (in Experiment 3) predicted study evaluations. [Copyright Elsevier]
- Published
- 2007