Society For Risk Analysis Annual Meeting 2017

Session Schedule & Abstracts


* Disclaimer: All presentations represent the views of the authors, and not the organizations that support their research. Please apply the standard disclaimer that any opinions, findings, and conclusions or recommendations in abstracts, posters, and presentations at the meeting are those of the authors and do not necessarily reflect the views of any other organization or agency. Meeting attendees and authors should be aware that this disclaimer is intended to apply to all abstracts contained in this document. Authors who wish to emphasize this disclaimer should do so in their presentation or poster. In an effort to make the abstracts as concise as possible and easy for meeting participants to read, the abstracts have been formatted such that they exclude references to papers, affiliations, and/or funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.


W1-H
Miscellaneous - Foundations

Room: Salon J   8:30 am–10:10 am

Chair(s): Myriam Merad   myriam.merad@unice.fr

Sponsored by Foundational Issues in Risk Analysis Specialty Group



W1-H.1  8:30 am  Normal Chaos in Managing Risks – Dealing with Complex Processes. Lauder M., Marynissen H., Summers T.*; Antwerp Management School and University of Maryland   hugo.marynissen@ams.ac.be

Abstract: There is wide recognition that the world we live in is complex. This can be seen in the many academic papers within the field of social science that explore the plethora of issues this complexity throws up. The aim of this presentation is to offer the ‘normal chaos’ paradigm as a basic premise for research into the management of complex processes. The idea of normal chaos emerged from research into the ways society learns from public inquiries. That research demonstrates that those conducting inquiries have a clear view of how the world works. While not stated explicitly, their views come across very clearly in the comments they make and the recommendations they offer. This worldview has been labelled the ‘perfect world’ paradigm. The construct of normal chaos at first provided an alternative lens through which to theorise about resilient ways of organising and managing stressful situations; subsequently, it has been seen to have much wider application. Normal chaos offers a direct challenge to work based on the premise that increasing control and coordination is the way to manage complex situations. The new proposition suggests that control and coordination mechanisms are limited and that the pursuit of ‘more of the same’ soon reaches a point of diminishing returns. The construct suggests that we cope with, rather than control, such situations. This initial research investigates how the boundary between order and chaos can be recognised and managed. It identifies two states: normal chaos, where events generally work as planned, and abnormal chaos, where events are recognised as being dangerously out of control. The construct is underpinned by individuals’ limited mental capacity and the mental shortcuts they use, which often lead to an illusion of control. From a methodological perspective, this presentation offers an initial catalytic framework that stimulates thinking about these complex situations.

W1-H.2  8:50 am  Is hazard identification a scientific process? Recent evaluations of glyphosate suggest room for interpretation. De Roos AJ*; Dornsife School of Public Health at Drexel University   aderoos@drexel.edu

Abstract: Hazard identification, posed as the first step of risk assessment in the National Research Council’s 1983 ‘Red Book’, seeks to identify adverse health effects caused by a chemical, by conducting a “weight-of-evidence” evaluation of various types of scientific data. The hazard identification process was recently undertaken for consideration of glyphosate as a potential carcinogen by several agencies, with some divergent conclusions. Glyphosate was classified as a probable carcinogen by the International Agency for Research on Cancer (IARC) in 2015, based on sufficient evidence in experimental animals and limited evidence in human studies, with supportive mechanistic data indicating genotoxicity and induction of oxidative stress. In contrast, later assessments from both the European Food Safety Authority (EFSA) and the European Chemicals Agency (ECHA) concluded that glyphosate is not carcinogenic. Contradictory statements from these agencies have led to sensationalist news reports and confusion among the public. How did the evaluations arrive at such different conclusions? The review procedures of the groups differ, as do their memberships, and not all data were available for all reviews. Nevertheless, the different conclusions seem to stem, to a large extent, from differing interpretation of the same studies. Furthermore, even with interpretations that are similar on the surface (e.g., “limited” evidence of carcinogenicity from epidemiology studies), certain types of information were weighted differently between the review bodies. In my presentation, I will discuss where the evaluations differ in their procedures and conclusions, and I will illustrate differences in interpretation by highlighting specific data from animal, human, and mechanistic studies. Hazard identification for glyphosate will be used as an example of inconsistencies introduced by the inherent need for subjective opinion in weighting scientific evidence.

W1-H.4  9:10 am  The Deepwater Horizon disaster: data and causality from the investigation reports revisited through ontologies. Eude T*, Gangemi A, Travadel S, Guarnieri F; MINES ParisTech, PSL Research University, France and Université Paris Nord, France and ISTC-CNR, Italy   thibaut.eude@mines-paristech.fr

Abstract: Adopting an expert-based approach to study data on the Deepwater Horizon accident, we show that research on disasters and risk analyses might be jeopardized by a lack of critical scrutiny of data sources, and that crucial social mechanisms that led to the accident might be obscured by the complexity of data and their contradictory analyses. Our study pinpoints two epistemological issues illustrating the social construction of knowledge: the need to elucidate expressions of causality in the investigation reports, and the management of massive datasets from which discrepancies eventually surface. Concerning causality, we acknowledge its diversity, ranging from “logical” to “counterfactual” or “historical” causation; each carries specific consequences in terms of knowledge and inferred preventive actions. Also, although accident investigations are conducted according to investigative methodologies, the conclusions depend largely on the authors’ assumptions about the physical and social world. This calls into question the findings of industrial accident studies built on a limited number of uncharted sources. To tackle these issues, we propose a methodology to record and assess the available knowledge on a given accident in the form of a knowledge graph. This approach takes into account the assumptions supporting the findings of investigation reports. The technological framework is based on an extension of the DOLCE DnS UL semantic web ontology, modified to fit research on disasters and populated by applying the formal knowledge extraction tool FRED to the investigation reports. We illustrate our methodology with an analysis of the official conclusions on the Deepwater Horizon case, an accident analyzed in more than 32 investigation reports. Our aim is to provide the risk analysis community with a tool to manage the integrity, robustness and reliability of data sources, as well as to critically assess what is known and what needs further understanding regarding the accident’s causation.
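
The sketch below is a minimal illustration of the general idea of recording a reported causal claim, with its provenance and its notion of causality made explicit, as triples in a knowledge graph; it is not the authors’ pipeline. The namespace, event, report, and property names are hypothetical placeholders, standing in for the DOLCE DnS UL extension and the FRED output described in the abstract.

    # Minimal sketch, not the authors' implementation: one reported causal claim
    # recorded as knowledge-graph triples with explicit provenance and causation type.
    # All names below (namespace, events, report, properties) are hypothetical.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/dwh/")  # hypothetical namespace

    g = Graph()
    g.bind("ex", EX)

    # "The cement barrier failure caused the hydrocarbon influx", as asserted by
    # one (hypothetical) investigation report, under counterfactual causation.
    g.add((EX.CementBarrierFailure, RDF.type, EX.Event))
    g.add((EX.HydrocarbonInflux, RDF.type, EX.Event))
    g.add((EX.CementBarrierFailure, EX.causes, EX.HydrocarbonInflux))
    g.add((EX.CementBarrierFailure, EX.assertedBy, EX.InvestigationReport_A))
    g.add((EX.CementBarrierFailure, EX.causationType, Literal("counterfactual")))

    # Discrepancies between reports can then be surfaced by querying the graph.
    for cause, effect in g.subject_objects(EX.causes):
        print(cause, "-> causes ->", effect)

Keeping the asserting report and the type of causation on each claim is what allows contradictory analyses from different reports to coexist in the same graph and be compared rather than silently merged.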

W1-H.5  9:30 am  Differences between experts and laypeople: Risk prioritization in the food domain using deliberative and survey methods. Siegrist M*, Hübner P, Hartmann C; ETH Zurich   msiegrist@ethz.ch

Abstract: One aim of the study was to examine differences between experts and laypeople in risk prioritization in the domain of food and everyday items. The second aim was to examine whether a deliberative method results in a different ranking compared with a survey method. We examined how laypeople (N = 92) and experts (N = 14) prioritized 28 hazards related to food and everyday items. Participants received detailed descriptions of the hazards, enabling them to make deliberative decisions. The participants prioritized the hazards before and after a group discussion (approximately 15 persons per group), in which the group’s average prioritization was discussed. The rankings of the hazards before and after the group discussion were highly correlated. However, laypeople and experts differed significantly in their rankings for 18 of the 28 hazards. To test the influence of the deliberative method (i.e., providing detailed information about each hazard), data from a second group of laypeople (N = 118) were collected with a no-information survey; this group did not receive specific information about the hazards. The risk rankings of the laypeople who received information were highly correlated with those of the laypeople who did not. Overall, the results suggest that deliberative risk-ranking methods and survey methods providing no information about the hazards yield similar results among laypeople. The conclusion is that government agencies should base their risk prioritization not only on evidence from risk assessments but also on laypeople’s hazard rankings. This procedure may result in an efficient and publicly accepted risk management strategy.
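
As a purely illustrative aside (with invented hazard ranks, not the study’s data), the comparisons of prioritizations described above can be expressed as rank correlations; the sketch below computes a Spearman correlation between a hypothetical lay and expert ranking.

    # Illustrative sketch only, with invented data: comparing two hazard
    # prioritizations (e.g., lay vs. expert, or pre- vs. post-discussion)
    # via Spearman rank correlation.
    from scipy.stats import spearmanr

    lay_ranks = [2, 1, 4, 3, 5]     # hypothetical lay prioritization (1 = highest risk)
    expert_ranks = [1, 4, 5, 2, 3]  # hypothetical expert prioritization

    rho, p_value = spearmanr(lay_ranks, expert_ranks)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")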


