Society For Risk Analysis Annual Meeting 2017

Session Schedule & Abstracts


* Disclaimer: Any opinions, findings, and conclusions or recommendations in the abstracts, posters, and presentations at this meeting are those of the authors and do not necessarily reflect the views of any other organization or agency, including the organizations that support the authors' research. This disclaimer applies to all abstracts contained in this document; authors who wish to emphasize it should do so in their presentation or poster. To keep the abstracts concise and easy for meeting participants to read, they have been formatted to exclude references to papers, affiliations, and funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.

Common abbreviations

M4-J
Poster Platform: Applications of Automation, Computational, and Informatic Tools to Operationalize Human Health Risk Assessments at EPA – the Genius Studio

Room: Salon 1   3:30 pm–5:00 pm

Chair(s): Ingrid Druwe, J. Allen Davis   druwe.ingrid@epa.gov

Sponsored by Dose-Response, Exposure Assessment, Decision Analysis and Risk, and Ecological Risk Assessment Specialty Groups

The National Center for Environmental Assessment’s (NCEA) Integrated Risk Information System (IRIS) program has recently laid out a framework that incorporates numerous software tools into the assessment workflow to increase the efficiency of assessment production and make decision-making more transparent to stakeholders. The goal of this electronic poster platform is to present these tools to the public through a unique Genius Studio format with live demonstrations of the tools’ uses and applications. Advanced tools for systematic review that leverage text mining and machine learning methodologies (SWIFT-Review, SWIFT-Active Screener, and DoCTER) will be highlighted. These tools accelerate the assessment process through automated literature ranking and active-learning models, standardize systematic review methods, and increase the transparency of Agency literature screening decisions. Web-based tools for database management and for data extraction and visualization (Health Assessment Workspace Collaborative, HAWC; DRAGON) will also be presented. These tools allow risk assessors to screen data from literature searches, perform risk-of-bias evaluations, and extract data from toxicological and epidemiologic studies for data integration, visualization, and dose-response analysis. Additional literature management tools (Health and Environmental Research Online, HERO) will be covered, highlighting how NCEA identifies, compiles, characterizes, and prioritizes the studies used in its various human health risk assessments. Lastly, presentations on computational toxicology methods and novel dose-response analysis methods (e.g., RapidTox, Bayesian meta-regression, model averaging) will be provided, highlighting NCEA’s and IRIS’ ongoing efforts to leverage cutting-edge quantitative methods in Agency risk assessments.

Disclaimer: The views expressed in this abstract are those of the authors and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency.



M4-J.1  3:30 pm  SWIFT-Review: A Text-Mining Workbench for Systematic Review. Howard BE*, Tandon A, Phillips J, Shah R; Sciome, LLC   itsbehoward@hotmail.com

Abstract: Here, we introduce “SWIFT-Review” (SWIFT is an acronym for “Sciome Workbench for Interactive computer-Facilitated Text-mining”), a freely available, interactive workbench that provides numerous tools to assist with problem formulation and literature prioritization. SWIFT-Review can be used to search, categorize, and visualize patterns in literature search results. The software utilizes statistical modeling and machine learning methods that allow users to identify over-represented topics within the literature corpus and to rank-order titles and abstracts for manual screening. We have tested the automated document prioritization feature on 20 previously conducted systematic review datasets, and the results presented clearly suggest that using machine learning to triage documents for screening has the potential to save, on average, more than 50% of the screening effort ordinarily required when using un-ordered document lists. In addition, the tagging and annotation capabilities of SWIFT-Review can be used to produce “scoping reports” or “scoping studies,” a type of knowledge synthesis undertaken to guide the direction of future research priorities or to help with problem formulation when conducting a systematic review. As a result, users can more quickly assess the extent of available evidence, prioritize health outcomes and chemical exposures for systematic review, and understand the degree of evidence integration that may be required. In addition, the resulting visualizations can help to identify topics that have been extensively studied as well as emerging areas of research. SWIFT-Review integrates seamlessly with other text-mining platforms including Active-Screener and HAWC. The software remains under active development with several new features planned.
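
The document-prioritization idea described above can be illustrated with a minimal sketch: score each title/abstract by the TF-IDF weight of a set of seed terms and place the highest-scoring documents at the front of the screening queue. The function names and toy data below are hypothetical, and SWIFT-Review's actual statistical and machine learning models are considerably more sophisticated; this only shows the general ranking concept.

```python
# Minimal sketch of literature triage: rank titles/abstracts by the
# TF-IDF weight of user-supplied seed terms (illustration only, not
# SWIFT-Review's algorithm).
import math
from collections import Counter

def tokenize(text):
    return [w for w in text.lower().split() if w.isalpha()]

def rank_documents(docs, seed_terms):
    """Return document indices ordered from most to least relevant,
    scoring each document by the summed TF-IDF weight of the seed terms."""
    n = len(docs)
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for i, toks in enumerate(tokenized):
        tf = Counter(toks)
        score = sum(
            (tf[t] / len(toks)) * math.log((1 + n) / (1 + df[t]))
            for t in seed_terms if t in tf
        )
        scores.append((score, i))
    # highest-scoring documents first, for manual screening triage
    return [i for _, i in sorted(scores, reverse=True)]

docs = [
    "arsenic exposure and bladder cancer risk in drinking water",
    "a survey of museum attendance in rural areas",
    "dose response modeling of arsenic cancer endpoints",
]
order = rank_documents(docs, ["arsenic", "cancer"])  # irrelevant doc ranks last
```

Screening the list in `order` front-to-back is what produces the reported savings: most relevant documents are encountered early, so screening can stop well before the end of the list.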

M4-J.2  3:30 pm  SWIFT-Active Screener: Reducing Literature Screening Effort Through Machine Learning for Systematic Review. Howard BE, Miller K, Phillips J, Tandon A, Phadke D, Mav D, Shah R*; Sciome, LLC   brian.howard@sciome.com

Abstract: Evidence-based toxicology is an emerging discipline in which researchers within government, industry, and non-profit research organizations increasingly employ systematic review to rigorously investigate, analyze, and integrate the evidence available in peer-reviewed publications. A critical and time-consuming step in this process is screening the available body of literature to select relevant articles. To address this problem, we introduce SWIFT-Active Screener, a web application that uses novel statistical and computational methods to prioritize relevant articles for inclusion while offering guidance on when additional screening will no longer yield additional relevant articles. We tested Active Screener on 20 diverse systematic review studies in which human reviewers had previously screened, in total, more than 115,000 titles and abstracts. Compared to a traditional screening procedure, this method resulted in substantial savings (50-75% for large projects) in the total number of articles screened. While these results are very promising, machine-learning prioritization approaches such as this can only be deployed confidently if users can be assured that no critical article will be missed in the process. Accordingly, Active Screener also employs a novel algorithm to estimate recall while users work, thus providing a statistical basis for decisions about when to stop screening. In Active Screener, these unique methodological advancements are implemented as a user-friendly web application that allows users to manage their review, track its progress, and provide conflict resolution. Together, these tools will enable researchers to perform literature screening faster, cheaper, and in a more reproducible manner.
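
The stopping-rule concept can be sketched with a simple heuristic: estimate recall as the number of relevant articles already found divided by that count plus the model's expected number of relevant articles remaining in the unscreened pool. This is an illustration of the general idea under stated assumptions, not Active Screener's actual statistical estimator; all numbers below are hypothetical.

```python
# Hedged sketch of a recall-based stopping rule for active screening.
# Not Active Screener's algorithm; a toy heuristic showing the concept.

def estimated_recall(found_relevant, unscreened_probs):
    """found_relevant: relevant articles identified so far.
    unscreened_probs: model-predicted P(relevant) for each unscreened article.
    Expected remaining relevant articles = sum of those probabilities."""
    expected_remaining = sum(unscreened_probs)
    total = found_relevant + expected_remaining
    return found_relevant / total if total else 1.0

# Toy example: 95 relevant articles found; the model expects ~4.5 more
# among the unscreened documents.
probs = [0.9, 0.8, 0.7, 0.6, 0.5, 0.3, 0.2] + [0.01] * 50
r = estimated_recall(95, probs)
# Stop screening once estimated recall exceeds a target, e.g. 95%.
stop = r >= 0.95
```

In practice the per-document probabilities come from the learned relevance model and are re-estimated as screening proceeds, so the recall estimate tightens as more human decisions accumulate.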

M4-J.3  3:30 pm  HAWC (Health Assessment Workspace Collaborative): A modular, web-based interface to facilitate development of human health assessments of chemicals. Shapiro AJ*, Addington JA, Rooney AA, Boyd WA; US National Toxicology Program   andy.shapiro@nih.gov

Abstract: Regulatory and scientific research institutions frequently conduct literature-based assessments of the potential for chemicals to pose a threat to human health. Such assessments typically consist of a critical review of a literature corpus to identify adverse health effects, and to characterize exposure-response relationships from literature. In addition to extraction of exposure-response data, systematic review of potential bias in literature, as well as documentation of the literature search strategy, are important steps in these reviews. A clear and detailed presentation of analysis and outputs, as well as intermediate decisions, are critical to ensure transparency of the process. We address these challenges by creating a modular, web-based content-management system to synthesize multiple data sources into overall human health assessments of chemicals. This free, open-source web-application, HAWC (Health Assessment Workspace Collaborative, https://hawcproject.org/), integrates and documents the overall workflow from literature search and review, to data extraction, dose-response analysis using benchmark dose modeling software (BMDS), customizable visualizations of evidence and risk of bias, and data exports. User access is assessment-specific; project-managers can create public or private assessments, and can share with their team during development and ultimately release publicly as supplemental information to final reports (e.g., the US National Toxicology Program (NTP) monograph of immunotoxicity associated with PFOA/PFOS exposure, or NTP’s systematic evidence mapping of paraquat and Parkinson’s disease). Crucial benefits of such a system include improved integrity of the data and analysis results, greater transparency, standardization and consistency in data collection and presentation. 
To date, nearly 400 assessments have been created by users, and HAWC has been adopted for use by groups such as the NTP, the US EPA NCEA, and the WHO IARC monographs program.

M4-J.4  3:30 pm  HERO: Tools for Systematic Review to Support U.S. EPA Science Assessments. Jones RM*, Thacker S; United States Environmental Protection Agency   jones.ryan@epa.gov

Abstract: The Health and Environmental Research Online (HERO) database is a module-based database system constructed to support assessments by helping researchers systematically identify, compile, organize, manage, characterize, and prioritize newly published relevant research, as well as extract and report on data, for use in the U.S. Environmental Protection Agency’s development of science assessments. HERO currently stores metadata for two million citations and has been used in over three hundred assessment documents, both drafts and final versions. HERO serves as a central repository for results from literature searches and fosters team collaboration, allowing teams to transparently share their citation-screening process from the beginning of the process to the final selection of relevant material. HERO has introduced multiple tools to expedite the process of systematic review, such as a software modification that links the bibliography of an assessment back to HERO, allowing reviewers to evaluate each draft of an assessment more efficiently. Another example is a system that automatically categorizes citations by academic discipline, making the efforts of multidisciplinary teams more efficient. A third example is ‘citation mapping,’ a keyword-free search technique that aggregates bibliographic coupling relationships to present a set of search results that is self-ranked by probability of relevance. Used in coordination with traditional screening techniques, these tools improve the efficiency of scientific experts by focusing their resources on the most relevant material. HERO is working with the providers of third-party tools to incorporate efficient processes for performing screening tasks on HERO citations. Disclaimer: The views expressed in this abstract are those of the authors and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency.
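
The bibliographic-coupling idea behind citation mapping can be sketched simply: two articles are coupled when they cite the same references, so candidate articles can be ranked by how many references they share with a seed set of known-relevant articles. The identifiers and data below are hypothetical; HERO's production implementation aggregates these relationships at a much larger scale.

```python
# Illustrative sketch of keyword-free citation mapping via bibliographic
# coupling (toy data; not HERO's implementation).

def coupling_rank(candidates, seeds):
    """candidates/seeds: dicts mapping article id -> set of cited reference ids.
    Returns candidate ids ranked by total references shared with the seed set."""
    seed_refs = set().union(*seeds.values())
    scores = {
        art: len(refs & seed_refs)   # shared references = coupling strength
        for art, refs in candidates.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

seeds = {"seed1": {"r1", "r2", "r3"}, "seed2": {"r2", "r4"}}
candidates = {
    "a": {"r1", "r2", "r9"},   # shares r1, r2 with the seeds
    "b": {"r9", "r10"},        # shares nothing
    "c": {"r3", "r4", "r5"},   # shares r3, r4 with the seeds
}
ranking = coupling_rank(candidates, seeds)  # "b" ranks last
```

Because the ranking depends only on shared references, it requires no keywords at all, which is what lets it surface relevant articles that use unfamiliar terminology.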

M4-J.5  3:30 pm  EPA's Benchmark Dose Software and Related Dose-Response Models and Methods. Davis JA*, Gift J; US Environmental Protection Agency   davis.allen@epa.gov

Abstract: The EPA developed the Benchmark Dose Software (BMDS) as a tool to help Agency risk assessors apply benchmark dose (BMD) methods in EPA’s human health risk assessment (HHRA) documents. The application of BMD methods overcomes many well-known limitations of the more traditional NOAEL/LOAEL approach, and its use has expanded internationally to include thousands of users worldwide. The current version of BMDS allows users to model a number of different types of data, including quantal data, continuous data, and clustered developmental toxicity data. Advanced models that incorporate time as a variable into the modeling scheme are also offered, including models for time-to-tumor analysis, repeated-response data, and concentration × time data. To stay current with the state of the science in this field, EPA has continued to research and implement new dose-response methods for inclusion in BMDS or development as stand-alone products. Recently, EPA released the new user interface for its Categorical Regression software, facilitating the analysis of severity data and the use of meta-analytical methods. Other current projects include the development of frequentist and Bayesian model averaging approaches to address model uncertainty, Bayesian meta-regression methods for modeling data from multiple epidemiological studies, and the use of BMD modeling to assess the toxicological similarity of chemical mixtures. Other research projects include the implementation of the hybrid approach for defining risk for continuous endpoints in a dichotomous fashion, implementation of log-normal distributions for continuous data, and further development of probabilistic dose-response methods. Disclaimer: The views expressed in this abstract are those of the authors and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency.
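
A small worked example may clarify the BMD concept for quantal data. Assuming an already-fitted log-logistic model with hypothetical parameters, the BMD is the dose at which extra risk (response above background, rescaled by the non-responding fraction) equals the benchmark response (BMR); for this model the inversion has a closed form. This illustrates only the arithmetic of the BMD definition; EPA's BMDS performs the actual model fitting, model selection, and confidence-limit (BMDL) calculations.

```python
# Worked BMD example for a quantal log-logistic model with hypothetical
# (assumed, not fitted) parameters. Illustrates the math only, not BMDS.
import math

def log_logistic(dose, g, a, b):
    """P(response) = g + (1 - g) / (1 + exp(-a - b*ln(dose))); g = background."""
    if dose <= 0:
        return g
    return g + (1 - g) / (1 + math.exp(-a - b * math.log(dose)))

def bmd_extra_risk(a, b, bmr=0.10):
    """Dose where extra risk (P(d) - P(0)) / (1 - P(0)) equals the BMR.
    For the log-logistic model, extra risk = 1/(1 + exp(-a - b*ln d)),
    which inverts in closed form."""
    return math.exp((-a - math.log(1.0 / bmr - 1.0)) / b)

# Hypothetical fitted parameters: background g, intercept a, slope b
g, a, b = 0.05, -4.0, 1.5
bmd = bmd_extra_risk(a, b, bmr=0.10)
# Sanity check: extra risk evaluated at the BMD should recover the BMR
er = (log_logistic(bmd, g, a, b) - g) / (1 - g)
```

In an actual assessment the BMDL (the lower confidence limit on the BMD), not the BMD itself, typically serves as the point of departure.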

M4-J.6  3:30 pm  The EPA Comptox Chemistry Dashboard: a web-based data integration hub and its applications to supporting risk assessment. Williams AJ*, Shah I, Patlewicz G, Wambaugh J, Grulke C, Edwards J, Richard A, Judson R; Environmental Protection Agency   williams.antony@epa.gov

Abstract: The U.S. Environmental Protection Agency (EPA) Computational Toxicology (CompTox) Program helps prioritize chemicals for research based on potential human health risks by integrating advances in biology, chemistry, and computer science. Over the past decade, EPA CompTox data have been made publicly accessible through a series of software applications. Recent efforts have focused on developing a software architecture that assembles the data into a single platform. The new CompTox Chemistry Dashboard web application provides access to data associated with ~750,000 chemicals, including substances represented as discrete chemical structures as well as UVCB substances (Unknown or Variable Composition, Complex Reaction Products and Biological Materials). Associated data include physicochemical properties, toxicity values, bioassay screening values, and exposure data. The dashboard supports searching by chemical name, synonym, and CAS Registry Number, either as single entries or as batch searches for multiple chemicals simultaneously. Data streams and functionality have been delivered to support risk assessment through the RapidTox application. Toxicity data are collected from multiple resources including the Integrated Risk Information System (IRIS), Provisional Peer-Reviewed Toxicity Values (PPRTV), and the European COSMOS database for cosmetics. The CompTox Chemistry Dashboard remains under constant curation and expansion with new data. This presentation will provide an overview of the dashboard, especially with regard to its use in rapid risk assessments. The presentation will highlight available data, the flexible search functionality, and the integration with other public resources. This abstract does not reflect U.S. EPA policy.

M4-J.7  3:30 pm  DRAGON ONLINE: Tool for Systematic Literature Review. Bornstein K*, Williams A, Hobbie K, Cawley M, Feiler T, Henning C, Turley A; ICF   audrey.turley@icf.com

Abstract: ICF’s DRAGON ONLINE enables scientists and researchers to work collaboratively on literature screening, data extraction, and study quality evaluations for systematic literature reviews. Documenting data and decisions supporting the review is critical to a systematic approach, increases transparency, and preserves institutional knowledge. What strengthens evidence-based scientific conclusions? More evidence? Higher quality evidence? Evidence from more than one data stream or study type? The answer varies based on the decision context (e.g., regulatory, scoping) and the decision framework (e.g., risk assessment paradigm, meta-analysis), but having well-organized, well-annotated data facilitates evaluation and development of conclusions. DRAGON ONLINE enables the organization, evaluation, and annotation of scientific data to support assessments. By standardizing the data elements evaluated across scientific studies, we can reach conclusions more readily because understanding data across studies and even evidence streams is easier. DRAGON ONLINE supports:
– Literature categorization based on user-defined key words
– Data extraction from a variety of scientific studies
– Study quality evaluation
– Data visualization
– Overall assessment management

M4-J.8  3:30 pm  DoCTER: Text Analytics to Prioritize Literature Search Results for Review. Hobbie K*, Cawley M, Turley A, Varghese A; ICF   audrey.turley@icf.com

Abstract: Comprehensive literature searches conducted for systematic reviews may return tens of thousands of results. Distinguishing relevant literature from background noise is time- and labor-intensive. ICF’s tool, DoCTER, uses text analytics to move rapidly from literature search to risk analysis. Among DoCTER’s analytics capabilities are topic extraction, supervised clustering, smart clustering, and machine learning. Topic extraction requires no a priori knowledge. It assigns each reference to a single cluster and generates a topic signature (a set of keywords) for each cluster. Subject matter experts review the keywords and assign priority levels to each cluster. Supervised clustering requires a set of known relevant studies to add to the search results when clustering; these are the “seeds.” Clustering with seeds takes the guesswork out of determining which clusters to prioritize and generates unbiased forecasts of retrieval accuracy. Smart clustering uses unsupervised semantic similarity algorithms and user-specified keywords to create a set of words and phrases that best define the topic of interest; it then ranks the relevance of each document in terms of these keywords. Smart clustering is an improvement over traditional clustering in some contexts because the process of cluster formation is directed and focused on the user’s objectives. Machine learning potentially delivers the greatest accuracy of these methods but requires a time investment to develop a training dataset, after which relevance scores are generated for each study. The DoCTER pipeline combines these elements to prioritize review of only the most relevant references and is flexible across user contexts, including situations in which no training data are available. In simulations, the DoCTER pipeline has demonstrated a five-fold increase in efficiency compared to a purely manual approach at an equivalent level of accuracy.
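
The topic-signature idea can be sketched as follows: once references are grouped into clusters, the signature for each cluster is the set of terms most over-represented in that cluster relative to the whole corpus. The scoring here is a deliberately simple frequency ratio on toy data; DoCTER's actual analytics are more sophisticated.

```python
# Hedged sketch of topic-signature generation for clustered references
# (toy data and a simple over-representation ratio; not DoCTER's method).
from collections import Counter

def topic_signatures(clusters, top_k=2):
    """clusters: dict cluster_id -> list of token lists (one per document).
    Returns cluster_id -> top_k terms ranked by the ratio of in-cluster
    term frequency to corpus-wide term frequency."""
    global_counts = Counter()
    per_cluster = {}
    for cid, docs in clusters.items():
        c = Counter()
        for toks in docs:
            c.update(toks)
        per_cluster[cid] = c
        global_counts.update(c)
    g_total = sum(global_counts.values())
    sigs = {}
    for cid, c in per_cluster.items():
        c_total = sum(c.values())
        # terms common everywhere (like "study") score low; terms
        # concentrated in one cluster score high
        score = {t: (c[t] / c_total) / (global_counts[t] / g_total) for t in c}
        sigs[cid] = sorted(score, key=score.get, reverse=True)[:top_k]
    return sigs

clusters = {
    "tox": [["arsenic", "liver", "dose"], ["liver", "dose", "study"]],
    "epi": [["cohort", "cancer", "study"], ["cohort", "study", "exposure"]],
}
sigs = topic_signatures(clusters)  # "study" appears in no signature
```

Subject matter experts would then review each signature and assign a screening priority to its cluster, as the abstract describes.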

M4-J.9  3:30 pm  Systematic Review Automation Technologies: Available Tools and Best Practices. O'Blenis PA*, Stefanison I; Evidence Partners Inc   poblenis@evidencepartners.com

Abstract: The field of systematic review automation is evolving rapidly and is bringing with it the ability to produce and maintain higher volumes of evidence without increasing research workloads. These tools are also dramatically improving the auditability, reproducibility, and transparency of the work being produced, enabling more reliable evidence-based decision making. This poster will provide a snapshot of the current state of the art in systematic review automation, presented through practical examples of how diverse organizations are leveraging these tools today. The poster will also review emerging innovations, such as natural language processing and AI, that are expected to further change the way evidence-based research is conducted and managed. Finally, the poster will provide an overview of generally applicable best practices gleaned through work with a diverse community of approximately 250 organizations conducting reviews across a broad spectrum of domains. The information will be presented in a practical format, with examples, to give the audience a set of tools that may assist them in the conduct of their systematic reviews.


