46 results for "subjective probability"
Search Results
2. Variations on a theme by Rachlin: Probability discounting.
- Author
- Killeen, Peter R.
- Subjects
- INTERTEMPORAL choice; DELAY discounting (Psychology); UTILITY theory; PROBABILITY theory
- Abstract
Rachlin and colleagues laid the groundwork for treating the discounting of probabilistic goods as a variant of the discounting of delayed goods. This approach was seminal for a large body of subsequent research. The present paper finds the original development problematic: In converting probability to delay, the authors incorrectly dropped trial duration. The subsumption of probability by delay is also empirically questionable, as those are different functions of variables such as magnitude of outcome and commodity versus money. A variant of Rachlin's theme treats human discounting studies as psychophysical matching experiments, in which one compound stimulus is adjusted to equal another. It is assumed that a function of amount (its utility) is multiplied by a function of probability (its weight). Conjoint measurement establishes the nature of these functions, yielding a logarithmic transform on amount, and a Prelec function on probability. This model provides a good and parsimonious account of probability discounting in diverse data sets. Variant representations of the data are explored. By inserting the probabilistically discounted utility into the additive utility theory of delay discounting, a general theory of probabilistic intertemporal choice is achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
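A minimal sketch of the matching model described in the entry above, assuming the standard two-parameter Prelec weighting function w(p) = exp(-beta * (-ln p)^alpha) and a logarithmic utility of amount; the parameter values and the $100 prize are illustrative, not Killeen's fitted estimates:

```python
import math

def prelec_weight(p: float, alpha: float = 0.65, beta: float = 1.0) -> float:
    """Two-parameter Prelec weighting function: w(p) = exp(-beta * (-ln p)^alpha)."""
    return math.exp(-beta * (-math.log(p)) ** alpha)

def discounted_value(amount: float, p: float) -> float:
    """Multiplicative model: utility of amount (here logarithmic) times probability weight."""
    return math.log(amount) * prelec_weight(p)

# Illustrative: how the weighted value of a $100 prize varies with probability.
for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"p={p:.2f}  w(p)={prelec_weight(p):.3f}  value={discounted_value(100.0, p):.3f}")
```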
3. A suggestion for the quantification of precise and bounded probability to quantify epistemic uncertainty in scientific assessments.
- Author
- Raices Cruz, Ivette, Troffaes, Matthias C. M., and Sahlin, Ullrika
- Subjects
- EPISTEMIC uncertainty; BAYESIAN analysis; PROBABILITY theory; DECISION making; FOOD safety
- Abstract
An honest communication of uncertainty about quantities of interest enhances transparency in scientific assessments. To support this communication, risk assessors should choose appropriate ways to evaluate and characterize epistemic uncertainty. A full treatment of uncertainty requires methods that distinguish aleatory from epistemic uncertainty. Quantitative expressions for epistemic uncertainty are advantageous in scientific assessments because they are nonambiguous and enable individual uncertainties to be characterized and combined in a systematic way. Since 2019, the European Food Safety Authority (EFSA) has recommended that assessors express epistemic uncertainty in the conclusions of scientific assessments quantitatively, as subjective probability. A subjective probability can be used to represent an expert judgment, which may or may not be updated using Bayes's rule to integrate evidence available for the assessment, and it can be either precise or approximate. Approximate (or bounded) probabilities may be enough for decision making and allow experts to reach agreement on certainty when they struggle to specify precise subjective probabilities. The difference between the lower and upper bound on a subjective probability can also be used to reflect someone's strength of knowledge. In this article, we demonstrate how to quantify uncertainty by bounded probability, and explicitly distinguish between epistemic and aleatory uncertainty, by means of robust Bayesian analysis, which includes standard Bayesian analysis through precise probability as a special case. For illustration, the two analyses are applied to an intake assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
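A minimal sketch of the bounded-probability idea in the entry above: sweep a set of priors, compute the posterior quantity under each, and report the lower and upper bounds. The Beta prior set and the binomial data are invented for illustration; precise probability is recovered when the prior set has a single member.

```python
from itertools import product

k, n = 3, 20  # hypothetical data: 3 exceedances observed in 20 intake samples

# Epistemic uncertainty about the prior: a set of Beta(a, b) priors.
prior_set = list(product([0.5, 1.0, 2.0], repeat=2))

# Conjugate update under each prior; bounds over the set express bounded probability.
post_means = [(a + k) / (a + b + n) for a, b in prior_set]
print(f"posterior mean of exceedance rate in [{min(post_means):.3f}, {max(post_means):.3f}]")
```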
4. Robust Decision Analysis under Severe Uncertainty and Ambiguous Tradeoffs: An Invasive Species Case Study.
- Author
- Sahlin, Ullrika, Troffaes, Matthias C. M., and Edsman, Lennart
- Subjects
- DECISION making; BAYESIAN analysis; INTRODUCED species; DECISION theory; UTILITY functions
- Abstract
Bayesian decision analysis is a useful method for risk management decisions, but it is limited in its ability to handle severe uncertainty in knowledge and value ambiguity in management objectives. We study the use of robust Bayesian decision analysis to handle problems where one or both of these issues arise. The robust Bayesian approach models severe uncertainty through bounds on probability distributions, and value ambiguity through bounds on utility functions. To incorporate data, standard Bayesian updating is applied to the entire set of distributions. To elicit our expert's utility representing the value of different management objectives, we use a modified version of the swing weighting procedure that can cope with severe value ambiguity. We demonstrate these methods on an environmental management problem: eradicating the alien invasive marmorkrebs (marbled crayfish) recently discovered in Sweden, which demanded a rapid response despite substantial knowledge gaps about whether the species was still present (i.e., severe uncertainty) and despite difficult tradeoffs among competing interests (i.e., value ambiguity). We find that the decision alternatives to drain the system and remove individuals in combination with dredging and sieving, with or without a degradable biocide, or increasing pH, are consistently bad under the entire range of probability and utility bounds. This case study shows how robust Bayesian decision analysis provides a transparent methodology for integrating information in risk management problems where little data are available and/or where the tradeoffs are ambiguous. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
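A minimal sketch of the decision rule implied by the entry above: compute expected-utility intervals over bounds on probabilities and on utilities, and flag alternatives whose whole interval is dominated. The actions, probability set, and utility candidates are hypothetical, not the paper's elicited values.

```python
from itertools import product

p_present = [0.3, 0.5, 0.8]  # set of plausible values for P(species still present)

# Candidate (utility if present, utility if absent) pairs per action (value ambiguity).
utilities = {
    "drain_and_remove": [(0.2, 0.9), (0.3, 0.9)],
    "do_nothing":       [(0.0, 1.0), (0.1, 1.0)],
}

for action, candidates in utilities.items():
    eus = [p * u_p + (1 - p) * u_a for p, (u_p, u_a) in product(p_present, candidates)]
    print(f"{action}: expected utility in [{min(eus):.2f}, {max(eus):.2f}]")
# An action whose upper bound lies below another action's lower bound is
# "consistently bad" across the entire range of probability and utility bounds.
```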
5. Bayesian modeling of the mind: From norms to neurons.
- Author
- Rescorla, Michael
- Subjects
- DECISION theory; PERCEIVED control (Psychology); COGNITIVE science; NEURONS; DECISION making
- Abstract
Bayesian decision theory is a mathematical framework that models reasoning and decision-making under uncertain conditions. The past few decades have witnessed an explosion of Bayesian modeling within cognitive science. Bayesian models are explanatorily successful for an array of psychological domains. This article gives an opinionated survey of foundational issues raised by Bayesian cognitive science, focusing primarily on Bayesian modeling of perception and motor control. Issues discussed include the normative basis of Bayesian decision theory; explanatory achievements of Bayesian cognitive science; intractability of Bayesian computation; realist versus instrumentalist interpretation of Bayesian models; and neural implementation of Bayesian inference. This article is categorized under: Philosophy > Foundations of Cognitive Science. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. "This Is What We Don't Know": Treating Epistemic Uncertainty in Bayesian Networks for Risk Assessment.
- Author
- Sahlin, Ullrika, Helle, Inari, and Perepolkin, Dmytro
- Subjects
- EPISTEMIC uncertainty; ENVIRONMENTAL risk assessment; UNCERTAINTY; RISK assessment; ENVIRONMENTAL toxicology; EPISTEMIC logic; KNOWLEDGE base
- Abstract
Failing to communicate current knowledge limitations, that is, epistemic uncertainty, in environmental risk assessment (ERA) may have severe consequences for decision making. Bayesian networks (BNs) have gained popularity in ERA, primarily because they can combine variables from different models and integrate data and expert judgment. This paper highlights potential gaps in the treatment of uncertainty when using BNs for ERA and proposes a consistent framework (and a set of methods) for treating epistemic uncertainty to help close these gaps. The proposed framework describes the treatment of epistemic uncertainty about the model structure, parameters, expert judgment, data, management scenarios, and the assessment's output. We identify issues related to the differentiation between aleatory and epistemic uncertainty and the importance of communicating both uncertainties associated with the assessment predictions (direct uncertainty) and the strength of knowledge supporting the assessment (indirect uncertainty). Probabilities, intervals, or scenarios are expressions of direct epistemic uncertainty. The type of BN determines the treatment of parameter uncertainty: epistemic, aleatory, or predictive. Epistemic BNs are useful for probabilistic reasoning about states of the world in light of evidence. Aleatory BNs are the most relevant for ERA, but they are not sufficient to treat epistemic uncertainty alone because they do not explicitly express parameter uncertainty. For uncertainty analysis, we recommend embedding an aleatory BN into a model for parameter uncertainty. Bayesian networks do not contain information about uncertainty in the model structure, which requires several models. Statistical models (e.g., hierarchical modeling outside the BNs) are required to consider uncertainties and variability associated with data. We highlight the importance of being open about things one does not know and carefully choosing a method to precisely communicate both direct and indirect uncertainty in ERA. Integr Environ Assess Manag 2021;17:221-232. © 2020 The Authors.
KEY POINTS:
- We propose a framework for treating epistemic uncertainty that can guide assessors in communicating uncertainty due to limitations in knowledge when using Bayesian networks (BNs) for risk assessment.
- A BN is by itself not enough to characterize uncertainty in an assessment, and uncertainty associated with model structure, expert judgments, data, and management scenarios may require modeling external to a BN.
- There are several ways to characterize direct and indirect epistemic uncertainty, such as a subjective probability, an interval, an uncertainty scenario, or a list of caveats, to be combined with a BN.
- The users of BNs for environmental risk assessment (ERA) should distinguish between aleatory and epistemic BNs and apply expressions and methods for treating uncertainty appropriate for the given type of BN and knowledge bases of the assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Supersize My Chances: Promotional Lotteries Impact Product Size Choices.
- Author
- Taylor, Nükhet, Noseworthy, Theodore J., Pancer, Ethan, Mukhopadhyay, Anirban, and Raghubir, Priya
- Subjects
- SALES promotion; CONSUMER behavior; LOTTERIES; CONSUMER psychology; PRODUCT advertising
- Abstract
Promotional lotteries offer consumers a chance to win one of many prizes along with their purchase. Critically, as is often the case, these campaigns include not only an assortment of prizes but also an assortment of offerings that one can buy to enter the lottery, such as a small or an extra-large coffee. While companies regularly advertise that the objective odds of winning do not vary with the size of the product offering, recent anecdotal evidence suggests that consumers behave as if they do. The net result is that consumers seem to be supersizing during promotional lotteries, and thus purchasing larger sized items. Eight studies (four core and four supplementary in Supporting information) and a single-paper meta-analysis confirm that the supersizing phenomenon is indeed real and provide evidence that this behavior is the manifestation of consumers elevating their sense of control. Specifically, supersizing serves to gain psychological control over the pursuit of a desirable, but seemingly unobtainable, outcome. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
8. Setting Air Quality Standards for PM2.5: A Role for Subjective Uncertainty in NAAQS Quantitative Risk Assessments?
- Author
- Smith, Anne E.
- Subjects
- CLEAN Air Act (U.S.); AIR quality standards; RISK assessment; UNCERTAINTY
- Abstract
The U.S. Clean Air Act (CAA) requires the Administrator of the U.S. Environmental Protection Agency (EPA) to set and periodically review national ambient air quality standards (NAAQS) for criteria pollutants. Because NAAQS must be set without balancing health risks against cost, Administrators look for where health risk tapers off. For some pollutants, however, no evidence exists of such a diminishment. The Administrator must instead evaluate how the strength of evidence for the scientific validity of risk estimates weakens for exposure levels below the central mass of observations indicating a pollutant–health risk relationship. Such an evaluation requires judgments about uncertainties that are inherently subjective. The risk assessments the Agency prepares during NAAQS reviews provide a natural platform for quantitatively characterizing these subjective uncertainty judgments, but the Agency is no longer making use of this opportunity. This article describes EPA's early development of methods to quantitatively characterize subjective uncertainty in NAAQS risk assessments, then traces the progressive elimination of such uncertainty analysis in the risk assessments for the three past NAAQS reviews for fine particulate matter (PM2.5), even while judgments about this uncertainty were becoming increasingly central to Administrators' NAAQS decisions. As a result, the risk assessments now lack relevance to NAAQS decision making. To reestablish a meaningful decision‐support role for NAAQS risk assessments, this article suggests alterations to the process of preparing them. Taking no position on the scientific or legal appropriateness of past NAAQS decisions, the suggested process is intended to better synthesize the scientific evidence to better inform (without constraining) the Administrator's policy decisions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
9. Integrated Uncertainty Analysis for Ambient Pollutant Health Risk Assessment: A Case Study of Ozone Mortality Risk.
- Author
- Smith, Anne E. and Glasgow, Garrett
- Subjects
- AIR quality standards; HEALTH risk assessment; AIR pollutants; RESPIRATORY diseases; MORTALITY
- Abstract
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically based risk estimates based on a single statistical model selected from the scientific literature, called the 'core' model. The uncertainty presented for 'core' risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically based risk estimates are also subject to 'model uncertainty,' which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academy of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
10. Framing of Decisions: Effect on Active and Passive Risk Avoidance.
- Author
- Huber, Oswald, Huber, Odilo W., and Bär, Arlette S.
- Subjects
- AVOIDANCE (Psychology); DECISION making; LEGAL judgments; PSYCHOLOGY; RISK
- Abstract
Decision makers intending to avoid risk in a decision situation can choose a less risky alternative (passive risk avoidance) or intervene actively in an alternative by applying a risk-defusing action (active risk avoidance). In Experiment 1 (64 participants), we compared active and passive risk defusing in two framing conditions. In the negative frame, the uncertain alternative could change for the worse; in the positive frame, it could change for the better. Each participant decided in both framing conditions. As expected, active risk avoidance was more likely when it prevented a negative outcome (i.e., in the negative frame) than when it promoted a positive one (i.e., in the positive frame). If decision makers did not or could not actively defuse the risk, they chose in accordance with the classical pattern: risk avoidance in the positive frame and risk seeking in the negative one. We replicated the latter result in a second experiment (32 participants). The classical framing pattern in passive risk avoidance in both experiments is remarkable, because participants were neither presented with nor searched for exact probabilities. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
11. A personal history of Bayesian statistics.
- Author
- Leonard, Thomas Hoskyns
- Subjects
- WEB development; BAYESIAN analysis; AXIOMS; ARM'S length transactions; BUSINESS ethics
- Abstract
The history of Bayesian statistics is traced, from a personal perspective, through various strands and via its re-genesis during the 1960s to the current day. Emphasis is placed on broad-sense Bayesian methodology that can be used to meaningfully analyze observed datasets. Over 750 people in science, medicine, and socioeconomics who have influenced the evolution of the Bayesian approach into the powerful paradigm that it is today are highlighted. The frequentist/Bayesian controversy is addressed, together with the ways in which many Bayesians combine the two ideologies as a Bayes/non-Bayes compromise, e.g., when drawing inferences about unknown parameters or when investigating the choice of sampling model in relation to its real-life background. A number of fundamental issues are discussed and critically examined, and some elementary explanations for nontechnical readers and some personal reminiscences are included. Some of the Bayesian contributions of the 21st century are subjected to more detailed critique, so that readers may learn more about the quality and relevance of the ongoing research. A recent resolution of Lindley's paradox by Baskurt and Evans is reported. The axioms of subjective probability are reassessed, some state-of-the-art alternatives to Leonard Savage's axioms of utility are discussed, and Deborah Mayo and Michael Evans's refutation of Allan Birnbaum's 1962 justification of the likelihood principle in terms of the sufficiency and conditionality principles is addressed. WIREs Comput Stat 2014, 6:80-115. doi: 10.1002/wics.1293. Conflict of interest: The author has declared no conflicts of interest for this article. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
12. Expected Uncertain Utility Theory.
- Author
- Gul, Faruk and Pesendorfer, Wolfgang
- Subjects
- RISK aversion; MATHEMATICAL models of consumption; BERNOULLI hypothesis (Risk); ECONOMIC demand; ECONOMETRICS
- Abstract
We introduce and analyze expected uncertain utility (EUU) theory. A prior and an interval utility characterize an EUU decision maker. The decision maker transforms each uncertain prospect into an interval-valued prospect that assigns an interval of prizes to each state. She then ranks prospects according to their expected interval utilities. We define uncertainty aversion for EUU, use the EUU model to address the Ellsberg Paradox and other ambiguity evidence, and relate EUU theory to existing models. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
13. The Measurement of Subjective Probability: Evaluating the Sensitivity and Accuracy of Various Scales.
- Author
- Haase, Niels, Renkewitz, Frank, and Betsch, Cornelia
- Subjects
- PROBABILITY theory; SENSITIVITY analysis; COMPARATIVE studies; VISUAL analog scale; GOODNESS-of-fit tests; INFORMATION theory
- Abstract
The risk of an event generally relates to its expected severity and the perceived probability of its occurrence. In risk research, however, there is no standard measure for subjective probability estimates. In this study, we compared five commonly used measurement formats (two rating scales, a visual analog scale, and two numeric measures) in terms of their ability to assess subjective probability judgments when objective probabilities are available. We varied the probabilities (low vs. moderate) and severity (low vs. high) of the events to be judged as well as the presentation mode of objective probabilities (sequential presentation of singular events vs. graphical presentation of aggregated information). We employed two complementary goodness-of-fit criteria: the correlation between objective and subjective probabilities (sensitivity), and the root mean square deviations of subjective probabilities from objective values (accuracy). The numeric formats generally outperformed all other measures. The severity of events had no effect on performance. Generally, a rise in probability led to decreases in performance. This effect, however, depended on how the objective probabilities were encoded: pictographs ensured perfect information, which improved goodness of fit for all formats and diminished this negative effect on performance. Differences in performance between scales are thus caused only in part by characteristics of the scales themselves; they also depend on the process of encoding. Consequently, researchers should take the source of probability information into account before selecting a measure. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
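The two goodness-of-fit criteria named in the entry above are straightforward to compute; a sketch with invented response data (the study's own data are not reproduced here):

```python
import math

def pearson(xs, ys):
    """Sensitivity criterion: correlation between objective and subjective probabilities."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmsd(xs, ys):
    """Accuracy criterion: root mean square deviation of subjective from objective values."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

objective  = [0.05, 0.10, 0.20, 0.40]   # probabilities of the judged events
subjective = [0.10, 0.15, 0.25, 0.35]   # one participant's estimates (invented)

print(f"sensitivity (r) = {pearson(objective, subjective):.3f}")
print(f"accuracy (RMSD) = {rmsd(objective, subjective):.3f}")
```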
14. Combining Experts' Judgments: Comparison of Algorithmic Methods Using Synthetic Data.
- Author
- Hammitt, James K. and Zhang, Yifan
- Subjects
- SPECIALISTS; ALGORITHMS; PROBABILITY theory; COMPARATIVE studies; UNCERTAINTY (Information theory); DATA analysis; INFORMATION theory; LEGAL judgments
- Abstract
Expert judgment (or expert elicitation) is a formal process for eliciting judgments from subject-matter experts about the value of a decision-relevant quantity. Judgments in the form of subjective probability distributions are obtained from several experts, raising the question of how best to combine information from multiple experts. A number of algorithmic approaches have been proposed, of which the most commonly employed is the equal-weight combination (the average of the experts' distributions). We evaluate the properties of five combination methods (equal-weight, best-expert, performance, frequentist, and copula) using simulated expert-judgment data for which we know the process generating the experts' distributions. We examine cases in which two well-calibrated experts are of equal or unequal quality and their judgments are independent, or positively or negatively dependent. In this setting, the copula, frequentist, and best-expert approaches perform better, and the equal-weight combination method performs worse, than the alternative approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
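Of the five combination methods compared above, only the equal-weight rule is fully specified by the abstract (the average of the experts' distributions); a sketch with invented, discretized expert distributions:

```python
# Equal-weight combination: the combined distribution is the pointwise average of
# the experts' probability distributions. The discretized distributions below
# (over a common grid of values) are invented for illustration.

expert_a = [0.00, 0.02, 0.08, 0.15, 0.25, 0.25, 0.15, 0.08, 0.02, 0.00, 0.00]
expert_b = [0.00, 0.00, 0.02, 0.08, 0.15, 0.25, 0.25, 0.15, 0.08, 0.02, 0.00]

combined = [(a + b) / 2 for a, b in zip(expert_a, expert_b)]
print("combined mass sums to", round(sum(combined), 6))  # still a valid distribution
```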
15. Quantifying Experts' Uncertainty About the Future Cost of Exotic Diseases.
- Author
- Gosling, John Paul, Hart, Andy, Mouat, David C., Sabirovic, Mirzet, Scanlan, Simon, and Simmons, Alick
- Subjects
- COMMUNICABLE diseases; LIVESTOCK diseases; FOOT & mouth disease; ECONOMISTS; VETERINARIANS
- Abstract
Since the foot-and-mouth disease outbreak of 2001 in the United Kingdom, there has been debate about how government and industry should share both the costs of livestock disease outbreaks and the responsibility for the decisions that give rise to them. As part of a consultation into the formation of a new body to manage livestock diseases, government veterinarians and economists produced estimates of the average annual costs for a number of exotic infectious diseases. In this article, we demonstrate how the government experts were helped to quantify their uncertainties about the cost estimates using formal expert elicitation techniques. This has enabled the decision makers to have a greater appreciation of the government experts' uncertainty in this policy area. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
16. Aggregating conclusive and inconclusive information: Data and a model based on the assessment of threat.
- Author
- Baranski, Joseph V. and Petrusic, William M.
- Subjects
- NAVAL officers; JUDGMENT (Psychology); THREAT (Psychology); PSYCHOLOGY of military personnel; RADAR
- Abstract
This study examined the process of combining conclusive and inconclusive information using a naval threat assessment simulation. On each of 36 trials, participants interrogated 10 pieces of information (e.g., speed, direction, bearing) about 'targets' in a simulated radar space. The number of hostile, peaceful, and inconclusive cues was factorially varied across targets. Three models were developed to understand how inconclusive information is used in the judgment of threat. According to one model, inconclusive information is ignored and the judgment of threat is based only on the conclusive information. According to a second model, the amount of dominant conclusive information is normalized by all of the available information. Finally, according to a third model, inconclusive information is partitioned under the assumption that it equally represents both dominant and non-dominant evidence. In Experiment 1, the data of novices (i.e., civilians) were best described by a model that assumes a partitioning of inconclusive evidence. This result was replicated in a second experiment involving variation of the global threat context. In a third experiment involving experts (i.e., Canadian Navy officers), the data of half of the participants were best described by the partitioning model and the data of the other half by the normalizing model. In Experiments 1 and 2, the presence of inconclusive information produced a 'dilution effect', whereby hostile (peaceful) targets were judged as less hostile (peaceful) than the partitioning model predicts. The dilution effect was not evident in the judgments of the Navy officers. Copyright © 2009 Crown in the right of Canada. Published by John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
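One algebraic reading of the three models described in the entry above, for h hostile, p peaceful, and i inconclusive cues; these formulas are an interpretation of the verbal descriptions, not the authors' exact equations:

```python
# h = hostile cues, p = peaceful cues, i = inconclusive cues (h + p + i = 10 here).

def ignore(h, p, i):
    """Model 1: inconclusive information is ignored."""
    return h / (h + p)

def normalize(h, p, i):
    """Model 2: dominant conclusive information normalized by all available information."""
    return h / (h + p + i)

def partition(h, p, i):
    """Model 3: inconclusive cues split equally between dominant and non-dominant evidence."""
    return (h + i / 2) / (h + p + i)

h, p, i = 5, 2, 3
for model in (ignore, normalize, partition):
    print(f"{model.__name__}: judged hostility = {model(h, p, i):.2f}")
```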
17. Canonical interpretation of propositions as events.
- Author
- Zimper, Alexander
- Subjects
- PROPOSITION (Logic); PROBABILITY theory; CONFIDENCE; REASONING; INTERPRETATION (Philosophy)
- Abstract
This paper establishes conditions under which Savage's (1954) informal interpretation of subjective probabilities as measures of confidence in the truth of propositions can be formally justified. For this purpose we construct, for any given propositional language, a canonical state space such that each proposition a of the language is associated with a unique event A defined on this state space. As our main result we establish a one-to-one onto correspondence between the canonical state space and the set of all truth conditions for the propositional logic, such that proposition a is true at exactly those truth conditions that correspond to states in A. According to our approach, an agent's degree of confidence in the truth of a proposition can therefore be interpreted as his or her subjective probability that some truth condition holds at which the proposition is true. Such an interpretation, however, is only valid for agents with unlimited powers of logical reasoning. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
18. Ignorance Is Not Probability.
- Author
- Huber, William A.
- Subjects
- IGNORANCE (Theory of knowledge); PROBABILITY theory; PARAMETERS (Statistics); ARBITRARY constants; RISK assessment; DECISION making
- Abstract
The distinction between ignorance about a parameter and knowing only a probability distribution for that parameter is of fundamental importance in risk assessment. Brief dialogs between a hypothetical decisionmaker and a risk assessor illustrate this point, showing that the distinction has real consequences. These dialogs are followed by a short exposition that places risk analysis in a decision-theoretic framework, describes the important elements of that framework, and uses these to shed light on Terje Aven's criticism of nonprobabilistic purely “objective” methods. Suggestions are offered concerning a more effective approach to evaluating those methods. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
19. Exploring the conjunction fallacy within a category learning framework.
- Author
- Nilsson, Håkan
- Subjects
- HEURISTIC; PROBABILITY theory; HEURISTIC programming; METHODOLOGY; LOGICAL fallacies
- Abstract
The literature presents two major theories on the cause of the conjunction fallacy. The first attributes the conjunction fallacy to the representativeness heuristic. The second suggests that the conjunction fallacy is caused by people combining p(A) and p(B) into p(A&B) in an inappropriate manner. These two theories were contrasted in two category-learning experiments. As predicted by the latter theory, the data showed that participants who could assess p(A&B) directly made fewer conjunction fallacies than participants who had to compute p(A) and p(B) separately and then combine them into p(A&B). The fewest conjunction fallacies were observed in the cases where the representativeness heuristic was applicable. Overall, the data showed that an inability to appropriately combine probabilities is one of the key cognitive mechanisms behind the conjunction fallacy. Copyright © 2008 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
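To see why an inappropriate combination rule produces the fallacy: any weighted average of p(A) and p(B) can exceed min(p(A), p(B)), the upper bound every conjunction must respect. A sketch with illustrative numbers (averaging is one example of an inappropriate combination rule, not necessarily the one the paper tested):

```python
p_a, p_b = 0.9, 0.2

product_rule = p_a * p_b    # admissible if A and B are independent
averaged = (p_a + p_b) / 2  # an inappropriate combination rule

print(f"upper bound min(pA, pB) = {min(p_a, p_b):.2f}")
print(f"product: {product_rule:.2f} (respects the bound)")
print(f"average: {averaged:.2f} (violates the bound -> conjunction fallacy)")
```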
20. Bayesian Reanalysis of the Challenger O-Ring Data.
- Author
- Maranzano, Coire J. and Krzysztofowicz, Roman
- Subjects
- BAYESIAN analysis; PROBABILITY theory; LOGISTIC regression analysis; O-rings; RISK assessment; EXTRAPOLATION; SPACE shuttles; DECISION making
- Abstract
A Bayesian forecasting model is developed to quantify uncertainty about the postflight state of a field-joint primary O-ring (not damaged or damaged), given the O-ring temperature at the time of launch of the space shuttle Challenger in 1986. The crux of this problem is the enormous extrapolation that must be performed: 23 previous shuttle flights were launched at temperatures between 53 °F and 81 °F, but the next launch is planned at 31 °F. The fundamental advantage of the Bayesian model is its theoretic structure, which remains correct over the entire sample space of the predictor and affords flexibility of implementation. A novel approach to extrapolating the input elements based on expert judgment is presented; it recognizes that extrapolation is equivalent to changing the conditioning of the model elements. The prior probability of O-ring damage can be assessed subjectively by experts following a nominal-interacting process in a group setting. The Bayesian model can output several posterior probabilities of O-ring damage, each conditional on the given temperature and on a different strength of the temperature-effect hypothesis. A lower bound on, or a value of, the posterior probability can be selected for decision making consistently with expert judgment, which encapsulates engineering information, knowledge, and experience. The Bayesian forecasting model is posed as a replacement for the logistic regression and the nonparametric approach advocated in earlier analyses of the Challenger O-ring data. A comparison demonstrates the inherent deficiency of the generalized linear models for risk analyses that require (1) forecasting an event conditional on a predictor value outside the sampling interval, and (2) combining empirical evidence with expert judgment. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
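A toy illustration of the extrapolation problem described in the entry above, using a grid-approximation Bayesian logistic model. The flight records below are invented stand-ins, not the real Challenger data, and the flat grid prior is a placeholder for the expert-judgment prior the paper develops:

```python
import math
from itertools import product

# Invented flight records (temperature F, O-ring damage 0/1); NOT the real data.
temps  = [53, 57, 58, 63, 66, 67, 70, 75, 76, 81]
damage = [1,  1,  1,  0,  1,  0,  0,  0,  0,  0]

def loglik(b0, b1):
    """Log-likelihood of a logistic model P(damage | t) = 1 / (1 + exp(-(b0 + b1 t)))."""
    ll = 0.0
    for t, y in zip(temps, damage):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * t)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# Flat prior over a coarse parameter grid (a placeholder for an elicited prior).
grid = list(product([0.5 * i for i in range(41)],        # b0 in [0, 20]
                    [-0.01 * i for i in range(41)]))     # b1 in [-0.4, 0]
weights = [math.exp(loglik(b0, b1)) for b0, b1 in grid]
z = sum(weights)

# Posterior-predictive probability of damage at the 31 F launch temperature:
# parameter uncertainty is retained instead of plugging in a single point estimate.
p31 = sum(w / z / (1.0 + math.exp(-(b0 + b1 * 31))) for (b0, b1), w in zip(grid, weights))
print(f"posterior predictive P(damage at 31 F) ~ {p31:.2f}")
```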
21. Exemplars in the mist: The cognitive substrate of the representativeness heuristic.
- Author
- Nilsson, Håkan, Juslin, Peter, and Olsson, Henrik
- Subjects
- HEURISTIC; PROBABILITY theory; HYPOTHESIS; DATA analysis; NUMERICAL analysis; JUDGMENT (Psychology); MATHEMATICAL models; PSYCHOLOGY; THOUGHT & thinking
- Abstract
The idea that people often make probability judgments by a heuristic short-cut, the representativeness heuristic, has been widely influential, but also criticized for being vague. The empirical trademark of the heuristic is characteristic deviations between normative probabilities and judgments (e.g., the conjunction fallacy, base-rate neglect). In this article the authors contrast two hypotheses concerning the cognitive substrate of the representativeness heuristic, the prototype hypothesis (Kahneman & Frederick, 2002) and the exemplar hypothesis (Juslin & Persson, 2002), in a task especially designed to elicit representativeness effects. Computational modelling and an experiment reveal that representativeness effects are evident early in training and persist longer in a more complex task environment, and that the data are best accounted for by a model implementing the exemplar hypothesis. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
22. Subjective and objective probability effects on P300 amplitude revisited.
- Author
- Rosenfeld, J. Peter, Biroschak, Julianne R., Kleschen, Melissa J., and Smith, Kyle M.
- Subjects
- EVOKED potentials (Electrophysiology); ELECTROENCEPHALOGRAPHY; ELECTROPHYSIOLOGY; ACTION potentials; CONDITIONED response
- Abstract
Does objective probability affect P300 size independently of, and in addition to, subjective probability? The latter was manipulated by the number of stimuli presented and the classification task. Five groups saw target and frequent stimuli. Two saw these with p = .2 or .067, with two different button presses. Three groups saw two additional nontarget stimuli, each with p = .067. One group pressed a different button for each stimulus. A second group pressed one button for the three oddballs and another for the frequent. A third, critical, group pressed one button for the target and another for all other stimuli. In this group, P300 was larger for targets versus nontargets, and larger for nontargets versus frequents. Although nontargets were classified with frequents, their actual low probability distinguished them from frequents, and their subjective probability distinguished them from targets. Therefore, actual and subjective probability effects were independently found. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
23. Expressing Economic Risk—Review and Presentation of a Unifying Approach.
- Author
- Aven, Terje, Nilsen, Espen Fyhn, and Nilsen, Thomas
- Subjects
- RISK; BAYESIAN analysis; PROJECT management; PRODUCTION engineering; SAFETY
- Abstract
Risk related to economic values is treated by many disciplines, including safety and production engineering, business, and project management. Within each of these and across these disciplines different nomenclature and principles are adopted for describing and communicating risk. The situation is rather confusing. In this article, we review various approaches and concepts that are used to express risk. We present and discuss a unifying approach for dealing with economic risk, with uncertainty being the key risk concept. The approach represents a rethinking on how to implement the Bayesian paradigm in practice to support decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
24. The conjunction fallacy: a misunderstanding about conjunction?
- Author
- Tentori, Katya, Bonini, Nicolao, and Osherson, Daniel
- Subjects
- CONJUNCTIONS (Grammar); LEARNING; COGNITIVE development; PROBABILITY theory
- Abstract
It is easy to construct pairs of sentences X, Y that lead many people to ascribe higher probability to the conjunction X-and-Y than to the conjuncts X, Y. Whether an error is thereby committed depends on reasoners' interpretation of the expressions "probability" and "and." We report two experiments designed to clarify the normative status of typical responses to conjunction problems. [Copyright © Elsevier]
- Published
- 2004
- Full Text
- View/download PDF
25. Effects of choice and relative frequency elicitation on overconfidence: further tests of an exemplar-retrieval model.
- Author
- Sieck, Winston R.
- Subjects
- CONFIDENCE; SUBJECTIVITY; PROBABILITY theory; CALIBRATION; PREDICTION (Psychology)
- Abstract
An experiment is reported in which participants rendered judgments regarding the disease states of hypothetical patients. Participants either reported likelihoods that patients had the target disease (no choice), or classified patients into disease categories and then reported likelihoods that their classifications were correct (choice included). Also, participants' likelihood judgments were made in response to either a probability probe question, or a relative frequency probe. Two distinct exemplar-memory models were compared on their ability to predict overconfidence under these procedures. Both propose that people learn and judge by storing and retrieving examples. The exemplar retrieval model (ERM) proposes that amount of retrieval drives choice inclusion and likelihood probe effects. The alternative model assumes that response error mediates choice inclusion effects. Choice inclusion and the relative frequency probe reduced overconfidence, but the combined effects were subadditive. Only the ERM predicted this pattern, and it further provided good quantitative fits to these results. Copyright © 2003 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
26. What Number is “Fifty–Fifty”?: Redistributing Excessive 50% Responses in Elicited Probabilities.
- Author
- Bruine de Bruin, Wändi, Fischbeck, Paul S., Stiber, Neil A., and Fischhoff, Baruch
- Subjects
- RISK perception; RACE; DEMOGRAPHY; AIR pollution; ENVIRONMENTAL risk assessment
- Abstract
Studies using open-ended response modes to elicit probabilistic beliefs have sometimes found an elevated frequency (or blip) at 50 in their response distributions. Our previous research (1-3) suggests that this is caused by intrusion of the phrase "fifty-fifty," which represents epistemic uncertainty, rather than a true numeric probability of 50%. Such inappropriate responses pose a problem for decision analysts and others relying on probabilistic judgments. Using an explicit numeric probability scale (ranging from 0-100%) reduces thinking about uncertain events in verbal terms like "fifty-fifty" and, with it, exaggerated use of the 50 response (1,2). Here, we present two procedures for adjusting response distributions for data already collected with open-ended response modes and hence vulnerable to an exaggerated presence of 50%. Each procedure infers the prevalence of 50s had a numeric probability scale been used, then redistributes the excess. The two procedures are validated on some of our own existing data and then applied to judgments elicited from experts in groundwater pollution and bioremediation. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
27. Judging the Accuracy of a Likelihood Judgment: The Case of Smoking Risk.
- Author
- Windschitl, Paul D.
- Subjects
- SMOKING; PROBABILITY theory; PERSONS; JUDGMENT (Psychology); REASONING; THEORY of knowledge
- Abstract
A standard method for assessing whether people have appropriate internal representations of an event's likelihood is to check whether their subjective probability or frequency estimates for the event correspond with the assumed objective value for that event. When a person's estimate for the event exceeds its assumed objective probability or frequency, the person's expectancy for the event is concluded to be greater than warranted. This paper describes three lines of reasoning as to why conclusions of this sort can be problematic. Recently published findings as well as data from two new experiments are described to support this main thesis. The case of smoking risk is used to illustrate the more general problem, and issues that must be considered to avoid or contend with the problem are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
28. Subjective Probabilities on Subjectively Unambiguous Events.
- Author
- Epstein, Larry G. and Zhang, Jiankang
- Subjects
- PROBABILITY measures; PROBABILITY theory; AMBIGUITY; AXIOMS
- Abstract
This paper suggests a behavioral definition of (subjective) ambiguity in an abstract setting where objects of choice are Savage‐style acts. Then axioms are described that deliver probabilistic sophistication of preference on the set of unambiguous acts. In particular, both the domain and the values of the decision‐maker's probability measure are derived from preference. It is argued that the noted result also provides a decision‐theoretic foundation for the Knightian distinction between risk and ambiguity. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
29. The Waltzing Oddball.
- Author
- Verleger, Rolf and Berg, Patrick
- Subjects
- RELEVANCE; PROBABILITY theory; AMPLITUDE modulation; FREQUENCY (Linguistics)
- Abstract
We investigated whether task relevance and probability interact to influence P3 amplitude. High and low tones were presented in random order with equal probability. In the control condition (standard oddball), every high tone had to be counted. In the waltz condition, high tones had to be counted only if they were preceded by two other high tones. It was predicted that the P3s evoked by targets in the waltz condition would be larger than the P3s evoked by the same sequence of targets in the oddball condition. That is, the frequency of occurrence of the targets should have an effect on P3, in addition to effects of the frequency of stimulus occurrence and stimulus task relevance (target/ nontarget). This prediction was upheld. However, the largest P3s were evoked by nontargets following two high tones in the waltz condition. These P3s had a more anterior topographic maximum than usual. We contend that these anterior P3s reflect the interruption of an ongoing task and cannot be easily fit into the framework of the two concepts of task relevance and probability. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
30. "...'twas ten to one; And yet we ventured...": P300 and Decision Making.
- Author
- Karis, Demetrios, Chesney, Gregory L., and Donchin, Emanuel
- Subjects
- DECISION making; PREDICTION (Psychology); LINEAR systems; MATHEMATICAL combinations; RISK
- Abstract
In some situations subjects' predictions of future events do not accurately reflect the subjective probability associated with these events. We set up such a situation by manipulating the payoff structure in a prediction paradigm, and found that P300 provides an index of the processes responsible for subjective probability, or expectancy, not obtainable from overt predictions. Sixteen subjects were required to predict, on each trial, whether a 1, 2, or 3 would appear on a display. The numbers appeared randomly with probabilities .45, .10, and .45, respectively. In one condition subjects were given bonuses according to an all-or-none payoff function in which they received one cent if they predicted correctly, and nothing if they were incorrect. In a second condition bonuses were determined by a linear payoff function in which subjects were paid one cent if they predicted correctly, and one-half cent if they were off by one (e.g., predict 1 and 2 appears). After each condition subjects estimated the actual number of stimuli presented. These estimates were the same for both conditions, although predictions differed radically, with 2 predicted much more frequently in the linear condition. P300 area was largest for the rare event (2), and the relationship between P300 and probability was unaffected by payoffs. Our design did introduce differences between conditions in the overall "riskiness" of predictions, and the strategies adopted by most subjects also resulted in differences in the salience, or task relevance, of the feedback stimuli. These differences resulted in an overall increase in P300 in the all-or-none condition. A relationship also emerged between the subjects' strategies and their ERPs. Subjects who adopted more effective strategies responded differentially to feedback from high and low risk predictions, whereas the others did not. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
31. Probability Learning and the P3 Component of the Visual Evoked Potential in Man.
- Author
- Johnston, Victor S. and Holcomb, Phillip J.
- Subjects
- EVOKED potentials (Electrophysiology); AVERSIVE stimuli; ELECTROPHYSIOLOGY; CONDITIONED response; ACTION potentials; PSYCHOPHYSIOLOGY
- Abstract
Evoked potentials were recorded to stimuli (S1) that were predictive of stimuli (S2) worth different monetary values, to determine whether these waveforms reflected the probability and/or the value of the predicted event as the subject learned the relationship between S1 and S2. Both the monetary value of S2 and its conditional probability following S1 were systematically manipulated over a wide range of values. Subjects were required to use the conditional probability information (0.5, 0.75, 1.0) in order to make a correct behavioral response and receive the monetary payoff ($0, $1, $2). The results indicate that the amplitude of the P3 component of the average evoked response to S1 increases as subjects learn the relationship between S1 and S2, when S2 is a high-value event. [ABSTRACT FROM AUTHOR]
- Published
- 1980
- Full Text
- View/download PDF
32. P300 and Stimulus Categorization: Two Plus One is not so Different from One Plus One.
- Author
- Johnson Jr, Ray and Donchin, Emanuel
- Subjects
- EVOKED potentials (Electrophysiology); ELECTROPHYSIOLOGY; PROBABILITY learning; STIMULUS generalization
- Abstract
Event-related brain potentials (ERPs) were recorded from subjects who were instructed to count one of three equally probable tones presented in a random sequence. In another condition, the subjects had to count one of two stimuli, one of which was presented with a probability of .33. The data support the view that the pattern of variation of P300 amplitude with the sequential structure of the series depends on the category to which events are assigned, rather than on the individual stimuli eliciting the P300. Furthermore, the data support the idea that the amplitude of P300 elicited by task-relevant stimuli is determined by the subjective probability associated with the eliciting event. [ABSTRACT FROM AUTHOR]
- Published
- 1980
- Full Text
- View/download PDF
33. An eigenvalue method of obtaining subjective probabilities.
- Author
- Yager, Ronald R.
- Abstract
This article deals with decision making by individual persons, living systems at the organism level. It presents a methodology for extracting a set of subjective probabilities from such a decision maker. The method involves obtaining an N × N matrix of compared probabilities and then finding the maximum eigenvalue of this matrix. The unit eigenvector associated with this maximum eigenvalue contains the desired subjective probabilities. The results of an experiment which compared this method with other methods of eliciting subjective probabilities are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1979
- Full Text
- View/download PDF
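A sketch of the eigenvalue method as described in the abstract above: build the N x N matrix of compared probabilities, take the eigenvector of its maximum eigenvalue, and normalize it to sum to one. The ratio judgments below are invented and perfectly consistent; elicited matrices generally are not:

```python
import numpy as np

# Elicited pairwise ratio judgments R[i][j] ~ p_i / p_j for three events
# (reciprocal by construction; values are illustrative).
R = np.array([
    [1.00, 2.00, 4.00],
    [0.50, 1.00, 2.00],
    [0.25, 0.50, 1.00],
])

eigvals, eigvecs = np.linalg.eig(R)
principal = eigvecs[:, np.argmax(eigvals.real)].real  # eigenvector of max eigenvalue
probs = principal / principal.sum()                   # normalize to a unit (probability) vector
print(np.round(probs, 3))  # ~ [0.571, 0.286, 0.143]
```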
34. SUBJECTIVE PROBABILITY AND CAUSALITY ASSESSMENT.
- Author
- Lane, David A.
- Subjects
- PROBABILITY theory; BAYESIAN analysis; DRUG side effects; CLINICAL trials; DRUG efficacy; STOCHASTIC processes
- Abstract
Serious adverse reactions to a drug usually occur too rarely to be identified in the clinical trials required to demonstrate efficacy before the drug can be put on the market. Instead, they are generally first encountered in the uncontrolled world of everyday clinical practice, and the industrial and national regulatory agencies that are responsible for drug safety must rely for their first indications of a possible drug-adverse event connection on case reports submitted to them by practitioners who observe an occurrence of the event in one of their patients taking the drug. A typical situation in the work of these agencies finds a group of experts assembled around a table reviewing a small series of case reports (perhaps as small as one!) that link a particular drug with a particular type of adverse event. The experts want to determine whether, for each case in their series, the available evidence indicates that the drug caused the adverse event to occur. How are they to proceed? This paper outlines an approach to this problem, based on the use of subjective probability. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
35. The Enhancement Effect in Probability Judgment.
- Author
- Koehler, Derek J., Brenner, Lyle A., and Tversky, Amos
- Subjects
- PROBABILITY theory; RESEARCH; DECISION making; BEHAVIORAL assessment; SOCIAL sciences
- Abstract
Research has shown that the judged probability of an event depends on the specificity with which the focal and alternative hypotheses are described. In particular, unpacking the components of the focal hypothesis generally increases the judged probability of the focal hypothesis, while unpacking the components of the alternative hypothesis decreases the judged probability of the focal hypothesis. As a consequence, the judged probability of the union of disjoint events is generally less than the sum of their judged probabilities. This article shows that the total judged probability of a set of mutually exclusive and exhaustive hypotheses increases with the degree to which the evidence is compatible with these hypotheses. This phenomenon, which we refer to as the enhancement effect, is consistent with a descriptive account of subjective probability called support theory. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
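A sketch of the support-theory mechanics behind the unpacking effects mentioned above, assuming the basic form P(A, B) = s(A) / (s(A) + s(B)) from Tversky and Koehler's support theory; the support values are invented:

```python
def judged_prob(s_focal: float, s_alt: float) -> float:
    """Support theory: judged probability of focal hypothesis A against alternative B."""
    return s_focal / (s_focal + s_alt)

s_packed = 2.0                 # support for a packed hypothesis (illustrative)
s_unpacked = [1.2, 0.8, 0.6]   # supports for its disjoint components (illustrative)
s_alt = 1.5                    # support for the alternative hypothesis

print(f"packed hypothesis judged at {judged_prob(s_packed, s_alt):.3f}")
print(f"component judgments sum to "
      f"{sum(judged_prob(s, s_alt) for s in s_unpacked):.3f}  (subadditivity)")
```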
36. Evaluating and Combining Subjective Probability Estimates.
- Author
- Wallsten, Thomas S., Budescu, David V., Erev, Ido, and Diederich, Adele
- Subjects
- PROBABILITY theory; JUDGMENT (Psychology); SUBJECTIVITY; DECISION making; MATHEMATICAL combinations
- Abstract
This paper concerns the evaluation and combination of subjective probability estimates for categorical events. We argue that the appropriate criterion for evaluating individual and combined estimates depends on the type of uncertainty the decision maker seeks to represent, which in turn depends on his or her model of the event space. Decision makers require accurate estimates in the presence of aleatory uncertainty about exchangeable events, diagnostic estimates given epistemic uncertainty about unique events, and some combination of the two when the events are not necessarily unique, but the best equivalence class definition for exchangeable events is not apparent. Following a brief review of the mathematical and empirical literature on combining judgments, we present an approach to the topic that derives from (1) a weak cognitive model of the individual that assumes subjective estimates are a function of underlying judgment perturbed by random error and (2) a classification of judgment contexts in terms of the underlying information structure. In support of our developments, we present new analyses of two sets of subjective probability estimates, one of exchangeable and the other of unique events. As predicted, mean estimates were more accurate than the individual values in the first case and more diagnostic in the second. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
37. Confidence Judgments by Actors and Observers.
- Author
- Koehler, Derek J. and Harvey, Nigel
- Subjects
- JUDGMENT (Psychology); CONFIDENCE; PROBABILITY theory; OBSERVABILITY (Control theory); ACTORS; DECISION making
- Abstract
We report three experiments comparing confidence judgments made by actors and by observers. In Experiment 1, actors generated qualitative answers (countries of the world) in a country-identification task; in Experiment 2, actors generated quantitative answers (years) in a historical event-dating task. Both actors and observers indicated their confidence in the actors' answers. Actors were significantly less confident in their answers than were observers in the first experiment. This effect was substantially reduced in the second experiment, whether confidence was measured by judged probability or by credible interval width. Experiment 3 used a control task in which actors attempted to bring an outcome variable into a desired range. In contrast to the first two experiments, actors in the control task were more confident than observers. Because subjects were generally overconfident in all three experiments, the present results demonstrate that the use of observers can reduce or exacerbate overconfidence depending on the kind of task and the nature of the event or possibility under evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
38. Patterns of Preference for Numerical and Verbal Probabilities.
- Author
- Olson, Michael J. and Budescu, David V.
- Subjects
- UNCERTAINTY; INFORMATION theory; COMMUNICATION models; PROBABILITY theory; EXPECTED utility
- Abstract
We report results of an experiment designed to test a principle formulated by Budescu and Wallsten (1993): that, when communicating uncertainty information, mode choices are sensitive to sources and degrees of vagueness. In addition, we examined subjects' efficacy in using such uncertainty information as a function of communication mode, source, and vagueness. In phase one of the experiment, subjects in a dyad used precise (numerical) or imprecise (verbal) expressions to communicate to a remote partner precise or vague uncertainty about the likelihoods of events. Spinner outcomes were used to generate precise uncertainty, while answers to almanac questions were used to elicit vague uncertainty. In phase two, subjects saw the events paired with their partners' estimates of similar events, and were asked to gamble on one event from each pair. Communication mode preferences were measured as the relative frequency with which subjects chose the numerical mode to either express or receive uncertainty information regarding the events. Efficacy was measured as the relative frequency with which subjects chose from each pair the event associated with the objectively more probable uncertainty expression. Underlying uncertainty interacted with direction of communication to affect preferences for modes of expression of the probabilities. Subjects preferred precise (numerical) information, especially for precise events (spinners). For vague events (questions), their preference for precise (numerical) information was stronger when receiving than when communicating information. Similar preferences were reflected in the efficiency of subsequent gamble decisions based on the probability estimates. Specifically, decisions were more efficacious (i.e., consistent with Expected Utility) when degrees of precision in events and estimates matched. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
39. Linda versus World Cup: Conjunctive Probabilities in Three-event Fictional and Real-life Predictions.
- Author
- Teigen, Karl Halvor, Martinussen, Monica, and Lund, Thorleif
- Subjects
- ERRORS; LEGAL judgments; PROBABILITY theory; FIFA World Cup; SOCCER tournaments; UNCERTAINTY
- Abstract
Conjunction errors in probability judgments have been explained in terms of representativeness, non-normative combination procedures, and linguistic, conversational, or conceptual misunderstandings. In two studies, a three-event variant of the classical Linda scenario (Tversky and Kahneman, 1983) was contrasted with estimates of Norway's chances in three coming World Cup soccer matches. Conjunction errors occurred even in the latter, real-life prediction task, but much less frequently than in the fictional Linda case. Magnitude of the conjunction effect was found to be dependent upon type of probability (fictional versus dispositional), unequal versus equal probabilities of constituent events, predictions of positive versus negative outcomes, and, for real-life predictions only, number of constituent events. Fictional probability ratings were close to but lower than representativeness ratings, giving evidence for a representativeness and adjustment-for-uncertainty strategy, whereas probabilities of real-life events were given a causal model interpretation. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
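The conjunction errors counted in entry 39 have a purely formal definition: a conjunction can never be more probable than its least probable constituent. A minimal sketch of that check for a three-event prediction, with made-up judged probabilities:

```python
# Detect a conjunction error: P(A & B & C) must not exceed the smallest
# constituent probability. The judged probabilities below are hypothetical.

p_a, p_b, p_c = 0.7, 0.5, 0.6  # judged probabilities of the single events
p_abc = 0.55                   # judged probability of the three-event conjunction

constituent_min = min(p_a, p_b, p_c)
if p_abc > constituent_min:
    print(f"conjunction error: {p_abc} > min constituent {constituent_min}")
else:
    print("judgment respects the conjunction rule")
```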
40. An Evaluation of the Reliability of Probability Judgments Across Response Modes and Over Time.
- Author
-
Whitcomb, Kathleen M., Önkal, Dilek, Benson, P. George, and Curley, Shawn P.
- Subjects
DECISION making ,NUMERICAL analysis ,PROBABILITY measures ,CHARTS, diagrams, etc. - Abstract
Despite the importance of probability assessment methods in behavioral decision theory and decision analysis, little attention has been directed at evaluating their reliability and validity. In fact, no comprehensive study of reliability has been undertaken. Since reliability is a necessary condition for validity, this oversight is significant. The present study was motivated by that oversight. We investigated the reliability of probability measures derived from three response modes: numerical probabilities, pie diagrams, and odds. Unlike previous studies, the experiment was designed to distinguish systematic deviations in probability judgments, such as those due to experience or practice, from random deviations. It was found that subjects assessed probabilities reliably for all three assessment methods regardless of the reliability measures employed. However, a small but statistically significant decrease over time in the magnitudes of assessed probabilities was observed. This effect was linked to a decrease in subjects' overconfidence during the course of the experiment. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
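Comparing judgments across the three response modes in entry 40 requires mapping each onto a common probability scale: odds of o:1, for instance, correspond to p = o / (1 + o), and a pie slice to its fraction of the circle. A minimal sketch of that conversion plus a crude test-retest reliability index, with hypothetical repeated assessments (not the study's data or exact analysis):

```python
from statistics import correlation  # Python 3.10+

def odds_to_prob(odds: float) -> float:
    """Map odds of o:1 in favor of an event to a probability."""
    return odds / (1.0 + odds)

def pie_to_prob(degrees: float) -> float:
    """Map a pie-diagram slice (in degrees) to a probability."""
    return degrees / 360.0

# Hypothetical assessments of the same items in two sessions.
session1 = [0.80, 0.65, odds_to_prob(3.0), pie_to_prob(90.0), 0.55]
session2 = [0.75, 0.70, odds_to_prob(2.5), pie_to_prob(108.0), 0.50]

# Pearson correlation across sessions as a simple reliability measure.
print(f"test-retest reliability r = {correlation(session1, session2):.3f}")
```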
41. Judging the Strength of Designated Evidence.
- Author
-
Briggs, Laura K. and Krantz, David H.
- Subjects
EVIDENCE ,PROBABILITY theory ,MATHEMATICS ,CONJOINT analysis ,TRUTH - Abstract
Judgments of evidence strength were obtained in a series of vignettes, each containing one or two items of evidence for each of two logically independent hypotheses. Each judgment concerned a designated subset of the presented evidence. Different groups of subjects encountered different but overlapping sets of evidence. We compared groups that judged the same designated evidence for the same hypothesis, but differed in exposure to surrounding (nondesignated) evidence. Results showed clear separation of relevant from irrelevant evidence and of designated from surrounding relevant evidence. This was particularly clear in the second experiment, where judgments of designated evidence remained invariant whether the surrounding evidence supported or contradicted the specified hypothesis. The conjunction fallacy was observed for judgments of evidence strength, but was substantially reduced by a simple instruction. Because subjects can separate evidence, it becomes possible to construct cardinally scaled standards of evidence strength using conjoint-measurement methods. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
42. Confidence Depends on Level of Aggregation.
- Author
-
Sniezek, Janet A. and Buckley, Timothy
- Subjects
PROBABILITY theory ,JUDGMENT (Psychology) ,CONFIDENCE - Abstract
The credible intervals that people set around their point estimates are typically too narrow (cf. Lichtenstein, Fischhoff, & Phillips, 1982). That is, a set of many such intervals does not contain the actual values of the criterion variables as often as it should given the probability assigned to this event for each estimate. The typical interpretation of such data is that people are overconfident about the accuracy of their judgments. This paper presents data from two studies showing the typical levels of overconfidence for individual estimates of unknown quantities. However, data from the same subjects on a different measure of confidence for the same items, their own global assessment for the set of multiple estimates as a whole, showed significantly lower levels of confidence and overconfidence than their average individual assessment for items in the set. It is argued that the event and global assessments of judgment quality are fundamentally different and are affected by unique psychological processes. Finally, we discuss the implications of a difference between confidence in single and multiple estimates for confidence research and theory. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
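The overconfidence result in entry 42 rests on a simple comparison: the hit rate of a set of credible intervals against their stated coverage probability. A minimal sketch with hypothetical 90% intervals:

```python
# Hypothetical 90% credible intervals (low, high) and the true values.
intervals = [(10, 20), (5, 8), (100, 150), (0, 3), (40, 60)]
truths = [15, 9, 160, 2, 55]
stated_coverage = 0.90

hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
hit_rate = hits / len(intervals)

print(f"hit rate = {hit_rate:.2f} vs stated coverage = {stated_coverage}")
if hit_rate < stated_coverage:
    print("intervals too narrow: the classic overconfidence pattern")
```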
43. Psychological Conceptions of Randomness.
- Author
-
Ayton, Peter, Hunt, Anne J., and Wright, George
- Subjects
PSYCHOLOGY ,HUMAN biology ,PSYCHOLOGICAL research ,SOCIAL science research ,PREJUDICES - Abstract
This article presents a critique of the concept of randomness as it occurs in the psychological literature. The first section of our article outlines the significance of a concept of randomness to the process of induction; we need to distinguish random and non-random events in order to perceive lawful regularities and formulate theories concerning events in the world. Next we evaluate the psychological research that has suggested that human concepts of randomness are not normative. We argue that, because the tasks set to experimental subjects are logically problematic, observed biases may be an artifact of the experimental situation and that, even if such biases do generalise, they may not have pejorative implications for induction in the real world. Thirdly, we investigate the statistical methodology utilised in tests for randomness and find it riddled with paradox. In a fourth section we find various branches of scientific endeavour that are stymied by the problems posed by randomness. Finally, we briefly mention the social significance of randomness and conclude by arguing that such a fundamental concept merits and requires more serious consideration. [ABSTRACT FROM AUTHOR]
- Published
- 1989
- Full Text
- View/download PDF
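A concrete example of the statistical machinery entry 43 critiques is the Wald-Wolfowitz runs test, which compares the observed number of runs in a binary sequence with its expectation under independence. A minimal sketch (z approximation; the sequence is illustrative, and the test is named here only as one common instance of the methodology the article discusses):

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test z statistic for a binary (0/1) sequence."""
    n1, n2 = seq.count(1), seq.count(0)
    n = n1 + n2
    runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))
    mu = 1 + 2 * n1 * n2 / n                                  # expected runs
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))  # variance of runs
    return (runs - mu) / math.sqrt(var)

# A strictly alternating sequence has far more runs than chance predicts.
z = runs_test_z([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
print(f"z = {z:.2f}")  # large positive z: 'too regular' to pass as random
```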
44. The Effectiveness of Imprecise Probability Forecasts.
- Author
-
Benson, P. George and Whitcomb, Kathleen M.
- Subjects
FORECASTING ,DECISION making ,ALGORITHMS ,PROBABILITY theory ,DISTRIBUTION (Probability theory) ,FEASIBILITY studies - Abstract
In this paper we investigate the feasibility of algorithmically deriving precise probability forecasts from imprecise forecasts. We provide an empirical evaluation of precise probabilities that have been derived from two types of imprecise probability forecasts: probability intervals and probability intervals with second-order probability distributions. The minimum cross-entropy (MCE) principle is applied to the former to derive precise (i.e. additive) probabilities; expectation (EX) is used to derive precise probabilities in the latter case. Probability intervals that were constructed without second-order probabilities tended to be narrower than, and contained in, those that were amplified by second-order probabilities. Evidence that this narrowness is due to motivational bias is presented. Analysis of forecasters' mean Probability Scores for the derived precise probabilities indicates that it is possible to derive precise forecasts whose external correspondence is as good as that of directly assessed precise probability forecasts. The forecasts of the EX method, however, are more like the directly assessed precise forecasts than those of the MCE method. [ABSTRACT FROM AUTHOR]
- Published
- 1993
- Full Text
- View/download PDF
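The two derivation rules in entry 44 are easy to illustrate for a single binary event. Under EX, the precise probability is the mean of the second-order distribution; under MCE with a uniform reference, the relative entropy (KL divergence) to the reference is convex in p, so the minimizer over the interval is the point of the interval nearest the reference. A minimal sketch under those stated assumptions, not the paper's exact implementation:

```python
def mce_binary(low: float, high: float, ref: float = 0.5) -> float:
    """MCE for one binary event against a uniform reference: the KL divergence
    to ref is convex in p, so the minimizer over [low, high] is the point of
    the interval nearest ref (clipping)."""
    return min(max(ref, low), high)

def ex_method(support, weights):
    """EX: expectation of a (discretized) second-order distribution."""
    total = sum(weights)
    return sum(p * w for p, w in zip(support, weights)) / total

print(mce_binary(0.6, 0.8))                    # 0.6: interval sits above the reference
print(ex_method([0.6, 0.7, 0.8], [1, 2, 1]))   # 0.7: weighted mean over the interval grid
```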
45. Effects of Difficulty on Judgemental Probability Forecasting of Control Response Efficacy.
- Author
-
Harvey, N.
- Subjects
ESTIMATION theory ,FORECASTING ,DECISION making ,PROBLEM solving ,DECISION theory ,MATHEMATICAL statistics - Abstract
A judgemental control task was framed as a problem of medical decision making. The control parameter of a recursive system (i.e. a patient) was initially set so that output (i.e. a diagnostic index) fell outside a designated criterion range (corresponding to health). Subjects were told to bring the system's output into the designated range by resetting this control parameter (by specifying the dose of a drug). After each of these control responses, they made a probabilistic forecast that it would have the desired effect. It was found that these forecasts were more overconfident when the control task was more difficult, but that the reason for this varied. When difficulty was manipulated across subjects, there was little evidence that lower control performance was associated with any lowering of the probabilistic forecasts. When difficulty was manipulated within subjects, they did lower their forecasts for more difficult task variants but did so insufficiently. In fact, the relation between probabilistic forecasts of control response efficacy and the proportion of those responses that were actually effective was linear with a slope of 0.44. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
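The slope of 0.44 reported in entry 45 is a calibration regression: the proportion of responses that actually worked, regressed on the forecast probability, where perfect calibration would have slope 1.0. A minimal sketch with fabricated forecast/outcome pairs (chosen so the fitted slope comes out at 0.44):

```python
from statistics import linear_regression  # Python 3.10+

# Hypothetical (forecast probability, proportion actually effective) pairs.
forecasts = [0.5, 0.6, 0.7, 0.8, 0.9]
effective = [0.48, 0.52, 0.55, 0.60, 0.66]

slope, intercept = linear_regression(forecasts, effective)
print(f"calibration slope = {slope:.2f}")  # 0.44: well below the ideal 1.0
```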
46. Subjective Confidence in Forecasts: A Response to Fischhoff and MacGregor.
- Author
-
Wright, George and Ayton, Peter
- Subjects
STATISTICAL correlation ,ECONOMIC forecasting ,FORECASTING ,PROBABILITY theory ,PSYCHOLOGY ,UNCERTAINTY - Abstract
Here we evaluate the generalizability of calibration studies which have used general knowledge questions, and argue that on conceptual, methodological and empirical grounds the results have limited applicability to judgemental forecasting. We also review evidence which suggests that judgemental forecast probabilities are influenced by variables such as the desirability, imminence, time period and perceived controllability of the event to be forecast. As these variables do not apply to judgement in the domain of general knowledge, a need for research recognizing and exploring the psychological processes underlying uncertainty about the future is apparent. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF