8 results for "Prager, Johannes"
Search Results
2. Metacognitive Myopia: A Major Obstacle on the Way to Rationality.
- Author
- Fiedler, Klaus, Prager, Johannes, and McCaughey, Linda
- Subjects
- MYOPIA; JUDGMENT (Psychology); DECISION making; QUALITY control; LEGAL judgments; METACOGNITIVE therapy
- Abstract
The notion of metacognitive myopia refers to a conspicuous weakness of the quality control of memory and reasoning processes. Although people are often remarkably sensitive even to complex samples of information when making evaluative judgments and decisions, their uncritical and naive tendency to take the validity of sampled information for granted constitutes a major obstacle to rational behavior. After illustrating this phenomenon with reference to prominent biases (base-rate neglect, misattribution, perseverance), we decompose metacognitive myopia into two distinct but intertwined functions, monitoring and control. We offer explanations for why effectively monitoring the biases resulting from information sampling in an uncertain world is so difficult and why the control function is severely restricted by the lack of volitional control over mental actions. Because of these and other difficulties, metacognitive myopia constitutes a major obstacle to rational judgment and decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Forming Impressions From Self-Truncated Samples of Traits – Interplay of Thurstonian and Brunswikian Sampling Effects.
- Author
- Prager, Johannes and Fiedler, Klaus
- Subjects
- SAMPLING theorem; IMPRESSION formation (Psychology); SOCIAL perception; SAMPLE size (Statistics); JUDGMENT (Psychology)
- Abstract
Consistent with sampling theories in judgment and decision research, impression judgments depend on the number of traits drawn randomly from a population of target person traits in distinct ways. When sample size is determined externally by the experimenter, the sensitivity of resulting impression judgments to the prevailing (positive or negative) valence increases with the number of traits. In contrast, sensitivity is negatively related to sample size (more extreme judgments for smaller samples) when sampling is self-truncated. Building on previous findings by Prager et al. (2018), two new experiments corroborate the judgment pattern for self-truncated sampling and elaborate on the distinction of Brunswikian sampling (of stimuli in the environment) and Thurstonian sampling (of states within the judge's mind). Thurstonian sampling effects were evident in depolarized (regressive) judgments by yoked control participants provided with exactly the same trait samples as original judges, who could truncate sampling when they felt ready for a judgment. Experiment 1 included two kinds of yoked controls, receiving trait samples truncated in a previous stage either by themselves or by other judges, distinguishing between temporal and interpersonal sources of Thurstonian sampling variance. As expected, self-yoking yielded less regressive shrinkage than other-yoking. Experiment 2 provided convergent results with yoked controls manipulated within participants, dealing with higher dispersion of impressions on self-truncated samples (Thurstone, 1927). Across both experiments, individual impression judgments were highly predictable from theoretically meaningful parameters: expected valence in the population, sampling error, sample size, and different indices of trait diagnosticity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. Towards a Deeper Understanding of Impression Formation – New Insights Gained From a Cognitive-Ecological Perspective.
- Author
- Prager, Johannes, Krueger, Joachim I., and Fiedler, Klaus
- Subjects
- IMPRESSION formation (Psychology); SOCIAL cognition theory (Communication); INTERPERSONAL relations; SOCIAL attitudes; EMPLOYEE selection; JUDGMENT (Psychology); PERSONALITY
- Abstract
Impression formation is a basic module of fundamental research in social cognition, with broad implications for applied research on interpersonal relations, social attitudes, employee selection, and person judgments in legal and political contexts. Drawing on a pool of 28 predominantly positive traits used in Solomon Asch's (1946) seminal impression studies, two research teams have investigated the impact of the number of person traits sampled randomly from the pool on the evaluative impression of the target person. Whereas Norton, Frost, and Ariely (2007) found a "less-is-more" effect, reflecting less positive impressions with increasing sample size n, Ullrich, Krueger, Brod, and Groschupf (2013) concluded that an n-independent averaging rule can account for the data patterns obtained in both labs. We address this issue by disentangling different influences of n on resulting impressions, namely varying baserates of positive and negative traits, different sampling procedures, and trait diagnosticity. Depending on specific task conditions, which can be derived on theoretical grounds, the strength of resulting impressions (in the direction of the more prevalent valence) (a) increases with increasing n for diagnostic traits, (b) is independent of n for nondiagnostic traits, or (c) decreases with n when self-truncated sampling produces a distinct primacy effect. This refined pattern, which holds for the great majority of individual participants, illustrates the importance of strong theorizing in cumulative science (Fiedler, 2017) built on established empirical laws and logically sound theorizing. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. The Regression Trap and Other Pitfalls of Replication Science—Illustrated by the Report of the Open Science Collaboration.
- Author
- Fiedler, Klaus and Prager, Johannes
- Subjects
- REPLICATION (Experimental design); REGRESSION analysis; PSYCHOLOGICAL research; STATISTICAL sampling; RESEARCH methodology
- Abstract
The Open Science Collaboration's 2015 report suggests that replication effect sizes in psychology are modest. However, closer inspection reveals serious problems. When replication effects are plotted against original effects, the regression trap is lurking: expecting replication effects to be as strong as original effects is logically unwarranted; they are inevitably subject to regressive shrinkage. To control for regression, the reliability of original and replication studies must be taken into account. Further problems arise from missing manipulation checks and sampling biases. Our critical comment highlights the need for replication science to live up to the same methodological scrutiny as other research. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
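The regressive shrinkage described in the abstract above can be illustrated with a short simulation (a hedged sketch under assumed parameters, not the paper's analysis): when original studies are measured with noise and selected for looking strong, their replications regress back toward the true effect even though nothing about the underlying effect has changed.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.3  # assumed common true effect size
NOISE_SD = 0.2     # assumed measurement noise per study (reliability < 1)

# Simulate original studies; keep only those whose observed effect looks
# large (mimicking selection for significance), then "replicate" each one.
originals, replications = [], []
for _ in range(100_000):
    original = TRUE_EFFECT + random.gauss(0, NOISE_SD)
    if original > 0.5:  # selected because it looked strong
        originals.append(original)
        replications.append(TRUE_EFFECT + random.gauss(0, NOISE_SD))

print(statistics.mean(originals))     # well above the true effect
print(statistics.mean(replications))  # shrinks back toward TRUE_EFFECT
```

The selected originals overestimate the true effect by construction, so the apparent "decline" in replication is a statistical necessity, not evidence of a weaker effect; this is the regression trap the comment warns about.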
6. Quo Vadis, Methodology? The Key Role of Manipulation Checks for Validity Control and Quality of Science.
- Author
- Fiedler, Klaus, McCaughey, Linda, and Prager, Johannes
- Subjects
- MEDICAL quality control; EXPERIMENTAL design; PERSONALITY disorders; RESEARCH methodology; PSYCHOLOGY; HYPOTHESIS; SCIENCE; MEDICAL research; RESEARCH evaluation
- Abstract
The current debate about how to improve the quality of psychological science revolves, almost exclusively, around the subordinate level of statistical significance testing. In contrast, research design and strict theorizing, which are superordinate to statistics in the methods hierarchy, are sorely neglected. The present article is devoted to the key role assigned to manipulation checks (MCs) for scientific quality control. MCs not only afford a critical test of the premises of hypothesis testing but also (a) prompt clever research design and validity control, (b) carry over to refined theorizing, and (c) have important implications for other facets of methodology, such as replication science. On the basis of an analysis of the reality of MCs reported in current issues of the Journal of Personality and Social Psychology, we propose a future methodology for the post-p < .05 era that replaces scrutiny in significance testing with refined validity control and diagnostic research designs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Getting lost in an infinite design space is no solution.
- Author
- Gollwitzer M and Prager J
- Abstract
Almaatouq et al. argue that an "integrative experiment design" approach can help generate cumulative empirical and theoretical knowledge. Here, we discuss the novelty of their approach and scrutinize its promises and pitfalls. We argue that setting up a "design space" may turn out to be theoretically uninformative, inefficient, and even impossible. Designing truly diagnostic experiments provides a better alternative.
- Published
- 2024
- Full Text
- View/download PDF
8. Speed-accuracy trade-offs in sample-based decisions.
- Author
- Fiedler K, McCaughey L, Prager J, Eichberger J, and Schnell K
- Subjects
- Humans, Reaction Time, Metacognition
- Abstract
Success on many tasks depends on a trade-off between speed and accuracy. In a novel variant, a speed-accuracy trade-off with sample-based decisions in which both speed and accuracy jointly depend on (self-truncated) sample size, we found strong accuracy biases. On every trial of a sequential investment game, participants chose between 2 investment funds based on binary samples of the funds' past outcomes. Participants could stop sampling and decide whenever they felt sufficiently informed. Total payoff was the product of choice accuracy and number of choices completed within the available time (speed). Participants' failure to understand the dominance of speed over accuracy-that speed decreases more than accuracy improves with increasing sample size-led to dramatic oversampling. Our research aimed to examine to what extent metacognitive functions of monitoring and control could correct for the accuracy bias. Experiments 1a through 1c demonstrated similarly strong accuracy biases and payoff losses in psychology and economics students and in depressed and control patients. In Experiments 2 through 4, the accuracy bias persisted despite several manipulations (feedback, sample limit, choice difficulty, payoff, sampling truncation as default) that underlined the speed advantage, reflecting a conspicuous metacognitive deficit. Even when participants faced no risk of losing on incorrect trials but could still win on correct trials (Experiment 3) and when sampling was contingent on the active solicitation of every new element (Experiment 4), participants continued to sample too much and failed to overcome the accuracy bias. The final discussion focuses on psychological reasons and possible remedies for the metacognitive deficit in trade-off regulation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
- Published
- 2021
- Full Text
- View/download PDF
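The dominance of speed over accuracy in the abstract above can be sketched numerically (an illustrative model with assumed parameters, not the paper's actual task): if total payoff is accuracy times the number of choices completed, and each extra sample draw raises accuracy only modestly while cutting the number of choices proportionally, small samples dominate.

```python
from math import comb

P_A, P_B = 0.6, 0.4  # assumed win rates of the better and worse fund
TOTAL_TIME = 600     # assumed time budget, measured in sample draws

def binom_pmf(n: int, p: float, k: int) -> float:
    """Probability of k successes in n Bernoulli(p) draws."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def accuracy(n: int) -> float:
    """P(choosing the better fund after n draws from each; ties split 50/50)."""
    acc = 0.0
    for a in range(n + 1):           # observed wins for the better fund
        for b in range(n + 1):       # observed wins for the worse fund
            pr = binom_pmf(n, P_A, a) * binom_pmf(n, P_B, b)
            if a > b:
                acc += pr
            elif a == b:
                acc += 0.5 * pr
    return acc

def expected_payoff(n: int) -> float:
    choices_completed = TOTAL_TIME / (2 * n)  # speed falls with sample size
    return accuracy(n) * choices_completed

for n in (1, 2, 5, 10, 20):
    print(n, round(accuracy(n), 3), round(expected_payoff(n), 1))
```

In this toy model accuracy rises slowly with n while throughput falls like 1/n, so the expected payoff is highest at the smallest sample size; oversampling is costly exactly as the abstract describes.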
Discovery Service for Jio Institute Digital Library