# Society For Risk Analysis Annual Meeting 2017

### Session Schedule & Abstracts

**Disclaimer:** All presentations represent the views of the authors and not the organizations that support their research. Any opinions, findings, and conclusions or recommendations in abstracts, posters, and presentations at the meeting are those of the authors and do not necessarily reflect the views of any other organization or agency. This disclaimer applies to all abstracts contained in this document; authors who wish to emphasize it should do so in their presentation or poster. To keep the abstracts concise and easy for meeting participants to read, they have been formatted to exclude references to papers, affiliations, and/or funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.

**Common abbreviations**

## M4-G

Chair(s): Jon T. Selvik jon.t.selvik@uis.no
Sponsored by the Foundational Issues in Risk Analysis Specialty Group

This symposium will contribute to strengthening the foundations of risk analysis. It addresses issues that are critical for risk analysis as a field and relevant to broad categories of applications. Its scope covers the study, investigation, development, and scrutiny/clarification of basic and general concepts, theories, principles, and methods for the purpose of understanding, assessing, describing, managing, governing, and/or communicating risk.

M4-G.1 3:30 pm Data Analytics, Risk Analysis, and Uncertainty. Guikema SD*, Flage R; University of Michigan sguikema@umich.edu
Data analytics has become increasingly popular within many areas of research, including risk analysis. The approach promises much, including better leveraging of data for predictive and inferential modeling. This applies in risk analysis as well, where data analytic methods have been used for a variety of predictive modeling applications. While these approaches make a strong contribution, many share a common weakness: how they handle uncertainty. This talk first gives an overview of predictive data analytic methods. It then summarizes how these methods are often used in risk analysis, focusing on the types of predictions given and whether or not they include uncertainty in these predictions. The talk then gives an overview of a recently developed approach for predictive modeling in which uncertainty is represented in the predictions. Finally, the talk discusses implications and a path forward for better representing uncertainty in predictive analytics methods for risk analysis.

M4-G.2 3:50 pm How to address uncertainty in security risk management. Jore S.H.*; University of Stavanger, Norway sissel.h.jore@uis.no
Probabilities and uncertainties are commonly mentioned aspects of safety risk management but are often excluded from definitions of security risk. Standards and guidelines for conducting security risk analysis acknowledge uncertainty as a major aspect of the process and the need to address it. However, what uncertainty means in a security context, and how to actually address and express uncertainties, is generally not well accounted for. Assessing security risks, e.g. terrorist attacks, differs from assessing safety risks. First, the risk is dynamic and characterized by low-frequency, high-consequence attacks committed by strategically thinking human beings who adapt and alter targets and modus operandi to changing realities. Second, little historical data is available, and what exists is often neither reliable nor representative. Third, there are not enough resources to eliminate all risks. Fourth, there is a large number of possible attack scenarios, making it difficult to assess the complete effects of mitigating and protective measures. In recent years, several perspectives on risk have been developed that replace probability with uncertainty in their definition, claiming that more weight should be given to the knowledge dimension, the unforeseen, and potential surprises. In accordance with these perspectives, measures for low, medium, and high uncertainty have been suggested based on the strength and level of simplification of the assumptions made, the degree to which data are reliable and representative, and the level of consensus among experts. In line with this perspective, we outline a framework for expressing uncertainty at a more detailed level. The proposed framework also incorporates the uncertainty dimensions of likelihoods, threats, values, vulnerabilities, and consequences, since uncertainty is present in all of these factors in the risk analysis.
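The three grading factors named in the abstract (strength of assumptions, reliability of data, expert consensus) could be combined into a low/medium/high classification along the following lines. This is a hypothetical sketch of one such scheme, not the authors' framework; the function name, inputs, and cut-offs are illustrative assumptions.

```python
# Hypothetical sketch of a qualitative uncertainty grading, loosely based on
# the three factors named in the abstract. The cut-offs are illustrative,
# not the authors' proposed framework.

def uncertainty_level(strong_assumptions: bool,
                      reliable_data: bool,
                      expert_consensus: bool) -> str:
    """Grade uncertainty as 'low', 'medium', or 'high'.

    Each True answer indicates a condition that reduces uncertainty:
    - strong_assumptions: assumptions are well justified, not oversimplified
    - reliable_data: reliable, representative data are available
    - expert_consensus: experts broadly agree on the assessment
    """
    score = sum([strong_assumptions, reliable_data, expert_consensus])
    if score == 3:
        return "low"
    if score >= 1:
        return "medium"
    return "high"

# Example: a terrorism risk assessment with weak data but expert agreement.
print(uncertainty_level(strong_assumptions=False,
                        reliable_data=False,
                        expert_consensus=True))  # "medium"
```

A real framework would likely grade each factor on a finer scale and apply such a grading separately to each uncertainty dimension (threats, values, vulnerabilities, consequences) rather than once overall.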

M4-G.3 4:10 pm Risk assessment assumptions – Uncertainty and bias. Flage R*; University of Stavanger roger.flage@uis.no
Making assumptions is inevitable in any type of risk assessment, and in modelling in general. Assumptions are thus a key, generic risk assessment concept. The results of a risk assessment are valid conditional on its assumptions. In practice, the specification of an assumption can have a more or less strong justification, and there may be a greater or lesser degree of uncertainty about whether the assumption will hold true. Moreover, unless specified to reflect the analyst's “best judgement”, an assumption can have either a conservative or an optimistic bias. This talk will review different ways to frame and define assumptions in a risk assessment context, discuss the concepts of uncertainty and bias in relation to assumptions, and present and discuss some recently proposed methods for handling uncertain assumptions in risk assessment.

M4-G.4 4:30 pm Quick Bayes Offers Performance Guarantees and Easy Risk Communication. Ferson S*, O'Rawe J; University of Liverpool and Applied Biomathematics sandp8@gmail.com
Quick Bayes is a variant of robust Bayesian analysis that is especially convenient for risk analysts because it does not require them to choose a prior distribution when no prior information is available (the noninformative case). In repeated use, the quantitative results from Quick Bayes exhibit frequentist coverage properties consistent with Neyman confidence intervals at arbitrary confidence levels, which conventional Bayesian analyses generally lack. These coverage properties mean that results from Quick Bayes exhibit guaranteed statistical performance that is especially attractive to engineers and policymakers. The numerical results from Quick Bayes can be matched to Gigerenzer's natural frequencies for easy and intuitive communication to decision makers and the lay public. We illustrate the application of the Quick Bayes approach in the context of fault tree analysis in which we can characterize an event probability estimated from an imperfectly specified fault tree with a terse, natural-language expression of the form “k out of n”, where 0 ≤ k ≤ n. These natural frequencies condense both the probability and analyst's epistemic uncertainty about the probability into a form that psychometric research suggests will be intelligible to humans. Preliminary evidence collected via crowd-sourced science shows that humans natively understand the implications of these natural frequencies, including what the size of n says about the reliability of the probability estimate.
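The abstract does not spell out the Quick Bayes computations, but the frequentist coverage property it claims can be illustrated with a standard construction for the same "k out of n" setting: the exact (Clopper-Pearson) confidence interval for a probability estimated from k successes in n trials, whose coverage is guaranteed to be at least the nominal level. The sketch below is ours, not the authors' method; it uses only the Python standard library, and the function names and parameters are assumptions.

```python
import math
import random

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (conservative) two-sided confidence interval for a binomial
    proportion after observing k successes in n trials."""
    def bisect(pred, lo, hi):
        # Find the threshold where pred flips from False to True.
        for _ in range(60):
            mid = (lo + hi) / 2
            if pred(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    # Lower bound: p where P(X >= k | p) rises to alpha/2 (0 if k == 0).
    lower = 0.0 if k == 0 else bisect(
        lambda p: 1 - binom_cdf(k - 1, n, p) >= alpha / 2, 0.0, 1.0)
    # Upper bound: p where P(X <= k | p) falls to alpha/2 (1 if k == n).
    upper = 1.0 if k == n else bisect(
        lambda p: binom_cdf(k, n, p) < alpha / 2, 0.0, 1.0)
    return lower, upper

# Monte Carlo check of the coverage guarantee: across repeated experiments
# the interval should contain the true p at least 95% of the time.
random.seed(0)
true_p, n, trials = 0.3, 20, 2000
hits = 0
for _ in range(trials):
    k = sum(random.random() < true_p for _ in range(n))
    lo, hi = clopper_pearson(k, n)
    hits += lo <= true_p <= hi
print("empirical coverage:", hits / trials)  # conservative: at least ~0.95
```

Quick Bayes, as described, generalizes this kind of guarantee beyond the simple binomial case and condenses the resulting interval into a natural-frequency phrase; the exact interval above is only the simplest instance of a coverage-guaranteed procedure.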

M4-G.5 4:50 pm Taking the Reins: How Decision-Makers Can Stop being Hijacked by Uncertainty. Finkel AM, Gray GM*; Univ. of Pennsylvania/Univ. of Michigan (A.F.); George Washington Univ. (G.G.) afinkel@law.upenn.edu
Several decades after the mechanics of quantitative uncertainty analysis (QUA) for risk assessment and regulatory cost analysis were developed and refined, QUA still rarely reaches the minds of decision-makers. The most common justification for this situation is that “decision-makers want a number, not a set of statistical distributions.” This may be an accurate assessment of their druthers, but one obvious though perhaps impractical retort is to say that if decision-makers insist on misleading point estimates, then we need new and better decision-makers. This presentation offers a way out of this dilemma. Decision-makers do not have to understand (or even receive) all the information contained in a complete QUA, but they do have to drive the QUA. They need to instruct analysts which phenomena to analyze (parameter uncertainty, model uncertainty, interindividual variability, offsetting effects, and the value of future uncertainty reductions); they need to insist that uncertainties in cost be treated as exactly as important as uncertainties in risk; and, even more importantly, they need to instruct analysts which estimator(s) to seek, report, and explain. Here we offer 12 detailed principles to guide decision-makers into a new relationship with risk and cost analysts: 12 observations about how “eyes wide open” point estimates can vastly outperform point estimates handed to the decision-maker without context, justification, or honesty about the value judgments they enforce upon the decision. A decision-maker who explains “I chose Option A because its benefits of 2.345 exceed its costs of 1.234” can be replaced by a dollar-store calculator.
We need decision-makers who can say “I chose Option A because the spectrum of benefits it likely offers, to these citizens, considering the range of costs it likely imposes, makes it a superior choice to any other.” QUA, performed carefully and following clear policy instructions, can empower decision-makers to earn their influential roles.
