Society For Risk Analysis Annual Meeting 2013

Session Schedule & Abstracts


* Disclaimer: All presentations represent the views of the authors, not the organizations that support their research. Any opinions, findings, and conclusions or recommendations in abstracts, posters, and presentations at the meeting are those of the authors and do not necessarily reflect the views of any other organization or agency. This disclaimer applies to all abstracts contained in this document; authors who wish to emphasize it should do so in their presentation or poster. To keep the abstracts concise and easy for meeting participants to read, they have been formatted to exclude references to papers, affiliations, and funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.


W2-H
Improving Risk Models for Security and Defense

Room: Johnson A    10:30 AM - 12:00 PM

Chair(s): Jun Zhuang   jzhuang@buffalo.edu



W2-H.1  10:30  A vector approach to measuring deterrence in adversary-informed, scenario-based risk analyses. Munns J*; Schafer Corporation   munnsjc@gmail.com

Abstract: When a scenario-based risk model involves an adversarial actor that behaves to some extent outside of the system being modeled, deterrence is generally a critical consideration. It is often difficult to quantify, and therefore describe, the value of any deterrent effects realized by defensive measures tested in the risk model. This presentation proposes a novel method for measuring deterrence in a relative sense, with the goal of improving the decisions informed by a risk analysis. The approach relies on the likelihood vector generated by the risk scenario tree: each scenario has a likelihood, and the collection of likelihoods can be considered a vector of n dimensions, where n is the number of scenarios being evaluated. If the components of that likelihood vector are determined, or at least influenced, by an adversary, and if we assume that there is deterrent value in forcing an adversary to change its plans, then any change to the likelihood vector can be measured and assigned a corresponding deterrent value. The change in the likelihood vector can be measured as the angle between the old and new vectors, as the magnitude of the difference between the old and new vectors, or with reference to some other point in the decision space. This approach could also be applied to a utility vector if an adversary calculates a utility for every scenario or group of scenarios. This presentation considers several basic examples of the proposed vector-based approach, suggests which forms of risk analysis the method suits best, and offers several suggestions for how these metrics are best communicated and visually represented. Additional consideration is given to the case where scenario likelihoods are uncertain and are represented by probability distributions.
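
A minimal sketch of the two vector metrics mentioned in the abstract (the angle between the old and new likelihood vectors, and the magnitude of their difference). The scenario likelihoods below are hypothetical and the implementation is illustrative, not the author's.

```python
# Illustrative sketch: comparing pre- and post-defense scenario likelihood vectors.
import numpy as np

def deterrence_metrics(old_likelihoods, new_likelihoods):
    """Return two candidate deterrence measures: the angle (radians)
    between the old and new likelihood vectors, and the magnitude of
    their difference."""
    old = np.asarray(old_likelihoods, dtype=float)
    new = np.asarray(new_likelihoods, dtype=float)
    cos_theta = np.dot(old, new) / (np.linalg.norm(old) * np.linalg.norm(new))
    angle = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    shift = np.linalg.norm(new - old)
    return angle, shift

# Hypothetical example: three attack scenarios whose likelihoods shift
# after a defensive measure is introduced.
before = [0.50, 0.30, 0.20]
after = [0.20, 0.35, 0.45]
angle, shift = deterrence_metrics(before, after)
print(f"angle between vectors: {angle:.3f} rad, magnitude of change: {shift:.3f}")
```

Either metric summarizes how far the adversary was pushed from its original plan; which one best communicates deterrent value is part of what the presentation addresses.
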

W2-H.2  10:50  Robust Screening Policy: Balancing Congestion and Security in the Presence of Strategic Applicants with Private Information. Xu J, Song C, Zhuang J*; University at Buffalo, SUNY   jzhuang@buffalo.edu

Abstract: Concerns about security and congestion arise in security screening, which is used to identify and deter potential threats (e.g., attackers, terrorists, smugglers, spies) among normal applicants wishing to enter an organization, location, or facility. Generally, in-depth screening reduces the risk of being attacked, but creates delays that may deter normal applicants and thus decrease the welfare of the approver (authority, manager, screener). In this research, we develop a model to determine the optimal screening policy to maximize the reward from admitting normal applicants net of the penalty from admitting bad applicants. We use an M/M/n queueing system to capture the impact of security screening policies on system congestion and use game theory to model strategic behavior, in which potential applicants with private information can decide whether to apply based on the observed approver's screening policy and the submission behavior of other potential applicants. We provide analytical solutions for the optimal non-discriminatory screening policy and numerical illustrations for both the discriminatory and non-discriminatory policies. In addition, we discuss more complex scenarios including robust screening, imperfect screening, abandonment behavior, and complex server networks.
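
The congestion side of such a model can be illustrated with a standard M/M/n wait-time calculation. The sketch below uses the Erlang C formula with hypothetical arrival and screening rates; it does not reproduce the authors' game-theoretic application decision or reward structure.

```python
# Illustrative sketch: expected waiting time in an M/M/n screening queue.
from math import factorial

def erlang_c(arrival_rate, service_rate, n_servers):
    """Probability an arriving applicant must wait (Erlang C formula)."""
    a = arrival_rate / service_rate          # offered load
    rho = a / n_servers                      # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("Queue is unstable: utilization must be below 1.")
    num = a**n_servers / factorial(n_servers) / (1 - rho)
    den = sum(a**k / factorial(k) for k in range(n_servers)) + num
    return num / den

def expected_wait(arrival_rate, service_rate, n_servers):
    """Mean time an applicant waits before screening begins."""
    pw = erlang_c(arrival_rate, service_rate, n_servers)
    return pw / (n_servers * service_rate - arrival_rate)

# Hypothetical numbers: 8 applicants/hour, each screening takes 12 minutes
# (service rate 5/hour per screener), 2 screeners.
wq = expected_wait(arrival_rate=8.0, service_rate=5.0, n_servers=2)
print(f"expected wait: {wq * 60:.1f} minutes")
```

In the full model, this waiting time feeds the applicants' decision of whether to apply at all, which in turn shapes the approver's optimal screening intensity.
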

W2-H.3  11:10  Conquering the Iron Triangle of SME Elicitation. Nilsen M*, Hawkins B, Cox J, Gooding R, Whitmire M; Battelle Memorial Institute and the Department of Homeland Security Chemical Security Analysis Center   nilsenm@battelle.org

Abstract: The Chemical Terrorism Risk Assessment (CTRA) is a DHS CSAC-funded program that estimates the risk among chemical terrorism attack scenarios and assists in prioritizing mitigation strategies. The CTRA relies on Subject Matter Expert (SME) elicitation of the Law Enforcement and Intelligence Communities to quantify the threat and vulnerability parameters associated with various attack scenarios, as well as the capabilities of terrorist organizations. The term “Iron Triangle” is frequently used to describe situations with three interconnected constraints in which one constraint cannot be changed without changing at least one of the others. One canonical example of this concept is the Iron Triangle of project management: Scope, Schedule, and Budget. The challenges of SME elicitation can also be characterized as an Iron Triangle of Level of Detail, Data Quality, and SME Fatigue. Level of Detail in this context is the resolution of the elicitation results with respect to capturing intricacies and dependencies among the components of attack scenarios. Data Quality is the degree to which the results accurately capture the analysts’ opinions. SME Fatigue refers to the level of effort, measured in both hours and number of in-person meetings, required for an SME to participate. Innovative techniques, such as interactive tools, utility models, and YouTube-style video clips, are discussed as methods to conquer the Iron Triangle of elicitation and improve both the level of detail and the quality of data while reducing SME fatigue. The impact of these techniques is highlighted by comparing past CTRA elicitation procedures with more recent CTRA elicitation techniques.

W2-H.4  11:30  Probabilistic Coherence Weighting for Increasing Accuracy of Expert Judgment. Olson KC*, Karvetski CW; George Mason University   kolson8@gmu.edu

Abstract: In this presentation, we provide a new way to generate accurate probability estimates from a group of diverse judges. The experiments we report involve elicitation of probability estimates that belong to sets in which only one probability is of primary interest; the other estimates serve to measure each judge's coherence within the set. These experiments extend previous efforts to increase accuracy of aggregate estimates by weighting probabilistic forecasts according to their coherence. Our method shows that asking for only two additional judgments can achieve significant increases in accuracy over a simple average. In the aggregation, we adjust the judgments of each participant to be coherent and calculate weights for the judgments based on the original incoherence across all participants. More generally, our findings provide insight into the trade-off between methods that aim to increase accuracy by improving the coherence of judgments through experimental manipulations and methods that leverage the incoherence of judgments to increase accuracy during aggregation. Our two studies show that concurrent judgments of related probabilities produce more accurate equal-weight averages but exhibit less incoherence on which our coherence weighting can operate. In contrast, independent judgments of related probabilities produce less accurate linear averages but exhibit more incoherence on which our coherence weighting can capitalize.
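
A minimal sketch of coherence-weighted aggregation in the spirit described above, assuming each judge estimates P(A), P(B), and P(A or B) for mutually exclusive events A and B. The elicitation structure, the projection, and the weighting rule here are illustrative stand-ins, not the authors' exact method.

```python
# Illustrative sketch: weight judges by the coherence of their probability triples.
import numpy as np

def coherence_weighted_estimate(judgments, epsilon=1e-6):
    """Aggregate estimates of P(A), weighting judges by coherence.

    judgments: array of shape (n_judges, 3), columns [P(A), P(B), P(A or B)]
    for mutually exclusive A and B. Coherence requires P(A or B) = P(A) + P(B);
    each triple is projected (least squares) onto that constraint, and the
    adjusted P(A) values are averaged with weights inversely proportional to
    the judge's original incoherence."""
    j = np.asarray(judgments, dtype=float)
    # Signed violation of the additivity constraint for each judge.
    violation = j[:, 2] - j[:, 0] - j[:, 1]
    # Least-squares projection onto the plane P(A or B) = P(A) + P(B).
    coherent = j.copy()
    coherent[:, 0] += violation / 3.0
    coherent[:, 1] += violation / 3.0
    coherent[:, 2] -= violation / 3.0
    # Weight each judge inversely to how incoherent the original triple was.
    weights = 1.0 / (np.abs(violation) + epsilon)
    weights /= weights.sum()
    return float(np.dot(weights, coherent[:, 0]))

# Three hypothetical judges estimating P(A); the second judge is nearly
# coherent and therefore receives most of the weight.
estimates = [[0.60, 0.20, 0.90],
             [0.55, 0.25, 0.81],
             [0.40, 0.30, 0.60]]
print(f"coherence-weighted P(A): {coherence_weighted_estimate(estimates):.3f}")
```

The key design choice is that incoherence measured at elicitation time carries information about judge reliability, so it drives the weights even after each judge's estimates have been made internally consistent.
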


