Society For Risk Analysis Annual Meeting 2017

Session Schedule & Abstracts


* Disclaimer: All presentations represent the views of the authors, and not the organizations that support their research. Please apply the standard disclaimer that any opinions, findings, and conclusions or recommendations in abstracts, posters, and presentations at the meeting are those of the authors and do not necessarily reflect the views of any other organization or agency. Meeting attendees and authors should be aware that this disclaimer is intended to apply to all abstracts contained in this document. Authors who wish to emphasize this disclaimer should do so in their presentation or poster. In an effort to make the abstracts as concise as possible and easy for meeting participants to read, the abstracts have been formatted such that they exclude references to papers, affiliations, and/or funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.

T3-C
Symposium: Advances in Probability Assessment for Risk Analysis

Room: Salon C   1:30 pm–3:00 pm

Chair(s): Richard John   richardj@usc.edu

Sponsored by Decision Analysis and Risk Specialty Group

Risk analysis often depends on judgments of uncertainty elicited from subject matter experts. This symposium explores a wide range of topics related to probability assessment for risk analysis. The first study proposes a methodology to attenuate overconfidence bias and presents validation evidence from a behavioral experiment. A second study proposes a methodology for using verbal descriptions of uncertainty in place of numbers, accompanied by a behavioral validation study comparing verbal and numeric assessments of uncertainty. A third study reports a comprehensive meta-analysis of multiple performance measures for probability assessment and addresses whether variables such as expertise moderate performance. The final study reports a behavioral experiment on how hazard risk assessments are influenced by the separate and interactive effects of contingency (or covariation) evidence and causal mechanism evidence.



T3-C.1  1:30 pm  Quantifying the Accuracy of Subjective Probability Estimates: A Meta-Analysis. Baucum M*, Nguyen K; University of Southern California   baucum@usc.edu

Abstract: Accurate subjective probability estimates play an important role in many risk assessment contexts. Although there is a substantial body of scholarly work on probability assessment, there has been no quantitative synthesis of this literature's findings. The current meta-analysis attempts to close this gap by systematically reviewing studies on probability assessments. In particular, this research addresses whether subjective probability estimates are better than random guesses, and it explores the effects of various methodological and substantive moderators on the accuracy of subjective probability estimates (including the effects of assessment context, response mode, expertise, and demographic variables). A comprehensive search for empirical studies on probability assessments published between January 1, 1950 and January 1, 2017 in 133 different databases returned 466 records. Two independent coders screened all of the records and determined that 84 of the 466 records met the study's inclusion criteria. The following effect sizes were coded independently: the Brier score, Murphy's calibration index, Murphy's discrimination index, and Yates' bias (confidence) index. A total of 10 different moderators were also coded from the primary studies. Preliminary analyses suggest that subjective probability judgments were more accurate than judgments made by chance alone, and that judgments made by experts were significantly more accurate than judgments made by non-experts. Still, there is substantial heterogeneity in the effect sizes, suggesting a role for moderators beyond expertise. We discuss the roles of these moderator variables and implications for the practice of probabilistic forecasting.
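For readers unfamiliar with the effect size measures named above, the following is a minimal sketch (not the authors' code) of how the Brier score and Murphy's calibration and discrimination indices are computed for binary-outcome forecasts; all names and the example data are illustrative.

import numpy as np

def brier_score(p, y):
    # Mean squared difference between forecast probabilities and 0/1 outcomes.
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    return np.mean((p - y) ** 2)

def murphy_decomposition(p, y):
    # Murphy's decomposition: Brier = uncertainty + calibration - discrimination,
    # grouping forecasts by their stated probability value.
    p, y = np.asarray(p, dtype=float), np.asarray(y, dtype=float)
    base_rate = y.mean()
    uncertainty = base_rate * (1.0 - base_rate)
    calibration = discrimination = 0.0
    for value in np.unique(p):
        mask = p == value
        freq = y[mask].mean()   # observed frequency within this forecast bin
        weight = mask.mean()    # share of forecasts falling in this bin
        calibration += weight * (value - freq) ** 2
        discrimination += weight * (freq - base_rate) ** 2
    return uncertainty, calibration, discrimination

# Illustrative use: four forecasts of binary events.
p, y = [0.9, 0.9, 0.6, 0.1], [1, 1, 0, 0]
u, cal, disc = murphy_decomposition(p, y)
assert abs(brier_score(p, y) - (u + cal - disc)) < 1e-12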

T3-C.2  1:50 pm  Comparing Verbal and Numeric Forecasts: New Findings and Implications. Nguyen KD*, John RJ; University of Southern California   hoangdun@usc.edu

Abstract: While the process of risk assessment requires probability estimates in numerical form, many organizations prefer to use verbal measures of uncertainty to characterize and quantify potential risks. This research proposes a measure from Signal Detection Theory for evaluating such forecasts and demonstrates a method, based on Savage's conceptualization of subjective probabilities, for quantifying verbal expressions of uncertainty. A sample of 118 NFL football experts was recruited to participate in the study. The experts were randomized into one of two experimental conditions that differed in response scale. Experts in the NUMBER condition were asked to predict various possible outcomes of the NFL 2016-2017 regular season using a numerical scale, whereas experts in the VERBAL condition were asked to make the same predictions using a verbal scale of 11 different probability words. The experts in the VERBAL condition were later invited to participate in a separate study about "preference for gambles." Using an iterative procedure, we quantified the numeric values corresponding to the verbal expressions in the verbal response scale of the main study; these quantified values were then used to transform the VERBAL experts' responses into numeric values. Results from the follow-up study showed that the numeric values of positive expressions such as "probable" varied considerably more than the values of negative expressions such as "improbable." Results from the main experiment showed that verbal forecasts did not differ significantly from numerical predictions in terms of the Brier score. However, numerical judgments were more resolute, or discriminatory, than verbal judgments. Experts in both conditions showed an overall tendency toward underconfidence, although the degree of underconfidence was much less extreme among VERBAL experts.
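As a reading aid only: once numeric equivalents for each verbal phrase have been elicited (the study derives them from gamble preferences; the mapping below is invented for illustration and is not the study's result), transforming and scoring verbal forecasts on a common metric is straightforward. A minimal sketch:

import numpy as np

# Placeholder numeric equivalents for verbal probability phrases.
VERBAL_TO_NUMERIC = {
    "improbable": 0.15, "unlikely": 0.30, "toss-up": 0.50,
    "likely": 0.70, "probable": 0.80, "almost certain": 0.95,
}

def verbal_brier(phrases, outcomes, mapping=VERBAL_TO_NUMERIC):
    # Convert verbal forecasts to numbers via the elicited mapping, then
    # score with the Brier score so the two response scales are comparable.
    p = np.array([mapping[w] for w in phrases], dtype=float)
    y = np.asarray(outcomes, dtype=float)
    return np.mean((p - y) ** 2)

print(verbal_brier(["likely", "improbable"], [1, 0]))  # 0.05625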

T3-C.3  2:10 pm  How to Debias Overprecision in Probability Elicitations? Ferretti V, Guney S, Montibeller G*, von Winterfeldt D; Loughborough University   g.montibeller@lboro.ac.uk

Abstract: The appraisal of complex policies often involves alternatives with uncertain impacts, such as in health, counter-terrorism, or urban planning. Many of these impacts are hard to estimate because of a lack of conclusive data, few reliable predictive models, or conflicting evidence. In these cases, decision analysts often use expert judgment to quantify uncertain impacts. One of the most pervasive cognitive biases in those judgments is overconfidence, which leads to overprecision in the estimates provided by experts. In this paper we report our findings on the effectiveness of best practices for debiasing overconfidence in probabilistic estimation of impacts. We tested the use of counterfactuals, hypothetical bets, and automatic stretching of ranges in three experiments in which subjects provided estimates for general knowledge questions. Our findings confirmed results from previous research showing the pervasiveness and stickiness of this bias, but they also indicated that more intrusive treatments, such as automatic stretching, are more effective than those merely requiring introspection (e.g., counterfactuals).
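To make the "automatic stretching of ranges" treatment concrete: the idea is to mechanically widen an expert's stated credible interval rather than ask the expert to reconsider it. The sketch below is an illustration under an assumed symmetric stretch rule and factor; the abstract does not commit to this exact formula.

def stretch_interval(lo, hi, factor=1.5):
    # Widen an elicited interval symmetrically about its midpoint.
    # The factor of 1.5 is an assumption chosen for illustration.
    mid = (lo + hi) / 2.0
    half = (hi - lo) / 2.0 * factor
    return mid - half, mid + half

# An expert's 90% interval of (40, 60) becomes (35, 65).
print(stretch_interval(40, 60))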

T3-C.4  2:30 pm  Contingency, Causality, and Risk. John RS*, Baucum M; University of Southern California   richardj@usc.edu

Abstract: The way people evaluate risks and hazards heavily depends on how they attribute causality; controversial issues in risk perception, ranging from genetically modified foods and health defects to immigration policies and terrorism, are largely governed by how people infer causal relationships. Psychologists have long distinguished between two types of evidence involved in causal inferences: contingency evidence, which highlights the covariation between a cause and a supposed effect (e.g., drunk drivers often get into accidents), and mechanism evidence, which focuses on the process by which a cause is capable of producing an effect (e.g., intoxication impairs perception, which can cause accidents). Yet research comparing the relative effects of these two evidence types on causal inferences has not been extended to the risk perception domain. Thus, in two experiments (n=257, 172), we presented MTurk participants with varying degrees of contingency and mechanism evidence for 1) a drug’s ability to cause health problems, and 2) the ability of increased security guard presence in U.S. cities to prevent terror attacks. Both studies found that, on average, participants treated contingency and mechanism evidence equally when inferring causality, and evidence against the presence of a causal mechanism or statistical covariation did not hinder the effect of evidence supporting the other. In fact, causality judgments based on both evidence types were well-predicted by an additive, non-interacting regression model based on participants’ responses to each evidence type individually (R-squared=0.76), contrary to the normative conception of each evidence type being a necessary precondition for causality. There was also substantial inter-individual variability in the weight placed on the two evidence types. We discuss implications of the group- and individual-level results for risk communication, and highlight the important role of evidence type in characterizing hazards.
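The "additive, non-interacting regression model" reported above can be illustrated as follows; variable names and data layout are assumptions, not the authors' materials. Comparing this fit against a model that adds a contingency-by-mechanism interaction term is how one would test for the interaction the abstract rules out.

import numpy as np

def additive_r_squared(contingency, mechanism, causality):
    # OLS fit of causality ~ b0 + b1*contingency + b2*mechanism,
    # deliberately omitting an interaction term.
    c = np.asarray(contingency, dtype=float)
    m = np.asarray(mechanism, dtype=float)
    y = np.asarray(causality, dtype=float)
    X = np.column_stack([np.ones_like(c), c, m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2  # coefficients and R-squared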


