Society For Risk Analysis Annual Meeting 2012

Advancing Analysis

Session Schedule & Abstracts

* Disclaimer: All opinions, findings, and conclusions or recommendations in the abstracts, posters, and presentations at this meeting are those of the authors and do not necessarily reflect the views of any other organization or agency, including the organizations that support the authors' research. This disclaimer applies to all abstracts contained in this document; authors who wish to emphasize it should do so in their presentation or poster. To keep the abstracts concise and easy for meeting participants to read, they have been formatted to exclude references to papers, affiliations, and funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.

Quantitative Models: the Chemical Risk

Room: Pacific Concourse L   1:30 - 3 PM

Chair(s): Kan Shao, George Woodall

W3-H.1  13:30  A quantitative role for zebrafish in the assessment of human developmental toxicity. Fleming CR*, Lambert JC; 1: ORISE Fellow at U.S. EPA, Cincinnati, OH; 2: U.S. EPA, Cincinnati, OH

Abstract: Among alternatives to mammalian toxicity testing, zebrafish provide a unique option in that developmental toxicity testing can be performed in vitro in a whole vertebrate. Zebrafish embryos have been shown to be a predictive qualitative screening tool for human developmental toxicity. Here we assess the utility of zebrafish as a quantitative model of human developmental toxicity by comparing rodent and zebrafish effect levels. Methods: A literature search was conducted to identify zebrafish developmental toxicity studies from which an estimate of internal dose could be obtained. Effect levels (NOAEL/LOAEL) were identified for developmental endpoints. A second literature search was then conducted for rodent studies identifying effect levels for endpoints similar to those examined in the zebrafish studies. Rodent:zebrafish effect level ratios were then determined. Results: Suitable zebrafish studies were identified for five chemicals: dioxin (TCDD), ethanol (EtOH), domoic acid (DA), all-trans retinoic acid (tRA), and caffeine (CA). The rodent:zebrafish ratios of the LOAELs for various endpoints were as follows: TCDD, 2.9-3.8; EtOH, 1.0-2.3; DA, 1.5-2.7; tRA, 1.88; CA, 1.5. Zebrafish were typically more sensitive to developmental effects than rodents. Conclusions and Future Directions: These results suggest that zebrafish embryos may be a suitable surrogate for rodents in the quantitative assessment of developmental toxicity. However, this analysis is limited by the small number of chemicals assessed and the variations in experimental study design. Validation of these results should include uniform experimental methods assessing a variety of endpoints in a broader range of chemicals and estimating internal dose in the zebrafish. This could result in a final standardized methodology for assessment of developmental toxicity of new chemicals.
The views expressed in this abstract are those of the authors and do not necessarily reflect the views or policies of the US EPA.
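The comparison described above reduces to simple arithmetic on matched effect levels. A minimal sketch, using hypothetical LOAEL values for illustration (the abstract reports only the resulting ratio ranges, not the underlying effect levels):

```python
def loael_ratios(rodent_loaels, zebrafish_loaels):
    """Rodent:zebrafish LOAEL ratio for each matched endpoint;
    a ratio > 1 means zebrafish detected the effect at a lower dose."""
    return [r / z for r, z in zip(rodent_loaels, zebrafish_loaels)]

# Hypothetical effect levels for three matched endpoints (not from the study):
ratios = loael_ratios([6.0, 4.5, 10.0], [2.0, 3.0, 4.0])
print(min(ratios), max(ratios))  # ratio range across endpoints: 1.5 3.0
```

A per-chemical range like "TCDD, 2.9-3.8" in the abstract is the min and max of such ratios taken over the endpoints examined for that chemical.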

W3-H.2  13:50  Bayesian non-parametric methods in operational risk modeling. Rivera-Mancia ME*; McGill University

Abstract: In this study, an analysis of financial institutions' internal loss data is performed using a Bayesian non-parametric approach. We recall the idea of the point process approach, in which the times of, and exceedances over, a threshold are assumed to follow a non-homogeneous Poisson process. The proposed model is based on a Dirichlet process mixture model for the intensity of the Poisson process. One of the main challenges in this setting is the choice of the mixture kernel to capture the tail behavior of the data. Here, the threshold can be treated as another model parameter, and the proposed model considers the form of the distribution both below and above the threshold. Estimation is carried out using Markov chain Monte Carlo (MCMC) methods, in particular a blocked Gibbs sampler, to obtain samples from the full posterior distribution and estimates of the minimum capital requirement.
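A blocked Gibbs sampler for a Dirichlet process mixture operates on a truncated stick-breaking representation of the mixing weights. A minimal sketch of that construction, assuming a truncation level K and concentration alpha chosen for illustration (not values from the study):

```python
import random

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking construction for a Dirichlet process:
    v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).  Setting
    v_K = 1 closes the truncation so the weights sum to exactly 1,
    which is the finite representation a blocked Gibbs sampler updates."""
    v = [rng.betavariate(1.0, alpha) for _ in range(K - 1)] + [1.0]
    weights, leftover = [], 1.0
    for vk in v:
        weights.append(vk * leftover)   # mass broken off the remaining stick
        leftover *= 1.0 - vk
    return weights

rng = random.Random(42)
w = stick_breaking_weights(alpha=2.0, K=50, rng=rng)
print(round(sum(w), 6))  # 1.0
```

Larger alpha spreads mass over more components; the mixture kernel attached to each component (the tail-behavior choice highlighted in the abstract) is a separate modeling decision not shown here.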

W3-H.4  14:10  Is the assumption of normality or lognormality for continuous response data critical for benchmark dose estimation? Shao K*, Gift JS, Setzer RW; National Center for Environmental Assessment, U.S. EPA

Abstract: One important assumption used in benchmark dose (BMD) estimation from continuous data (e.g., body weight, relative liver weight) is whether responses are normally or lognormally distributed. Crump (1984) advocated that the assumption of a normal distribution was theoretically and pragmatically reasonable, while other experts indicate that a lognormal distribution is more appropriate from a toxicological perspective (Gaylor and Slikker 1990; Hattis et al 1999; Slob 2002). In addition, if lognormality is assumed and only summarized response data (i.e., mean ± standard deviation, the format typically reported in the peer-reviewed literature) are available, the BMD can only be approximated. In this study, we evaluate a variety of toxicity data reported on an individual-animal basis obtained from NTP's database and investigate: (1) whether summarized data can provide precise BMD estimates; (2) whether the assumption of a normal or lognormal distribution has a significant impact on BMD estimates; and (3) what possible factors contribute to observed differences. Preliminary results indicate that a majority of BMD estimates approximated from summarized data in the study differ by less than 5% from the counterparts directly estimated from individual data, regardless of the distribution assumption. Three out of ten datasets examined in the study show that the assumption of a normal/lognormal distribution might have considerable influence (as much as three-fold) on BMD estimates, depending on the form of the dose-response model and the attributes of the data. Possible factors that result in the differences will be explored in the presentation.
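The reason summarized data only approximate a lognormal BMD is that the reported arithmetic mean and SD must be converted to lognormal parameters by moment matching rather than estimated from the log-responses directly. A sketch of that standard conversion (the general technique, not necessarily the authors' specific procedure):

```python
import math

def lognormal_params_from_summary(mean, sd):
    """Moment-matched lognormal parameters (mu, sigma) from a reported
    arithmetic mean +/- SD.  With individual-animal data, mu and sigma
    would instead be estimated from the log-responses themselves, which
    is why summarized data yield only an approximate lognormal BMD."""
    cv2 = (sd / mean) ** 2          # squared coefficient of variation
    sigma2 = math.log(1.0 + cv2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

mu, sigma = lognormal_params_from_summary(mean=10.0, sd=2.0)
# Round trip: the implied arithmetic mean is exp(mu + sigma^2 / 2).
print(round(math.exp(mu + sigma ** 2 / 2), 6))  # 10.0
```

The moment match is exact for the first two moments, so discrepancies in the resulting BMD come from higher-moment features of the dose-group distributions that the summary statistics cannot carry.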

W3-H.5  14:30  Benchmark calculation using categorical regression for multiple end-point responses. Chen CC*; NHRI

Abstract: Benchmark dose (BMD) calculation for dichotomous or continuous responses is well established and has advantages over the no-observed-adverse-effect level (NOAEL) approach for risk assessment: consistency among different studies, independence from sample size, and consideration of uncertainty. However, neither the BMD nor the NOAEL approach is capable of incorporating severity levels of response with ordinal categories such as no effect, adverse effect, and severe effect. In contrast, although categorical regression can estimate the probabilities of the different severity categories over the continuum of exposure, inconsistent extra-risk estimates may hinder adoption of the approach for reference dose (RfD) derivation. By expressing additional risk relative to background exposure as a weighted combination of the risks of the different severity levels, we derive the corresponding BMD for categorical regression and compare it to the BMD for a dichotomous response. Markov chain Monte Carlo simulations are employed for parameter estimation, with Bayesian model averaging for model uncertainty. A meta-analysis method is applied to integrate results from multiple endpoints. Toxicity data on aldicarb are reanalyzed as an illustrative example.
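The weighted-combination idea can be sketched with a cumulative-logit (proportional odds) model for three severity categories. All parameter values below are hypothetical illustrations, not fitted estimates from the aldicarb reanalysis:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def severity_probs(dose, cutpoints, slope):
    """P(no effect), P(adverse), P(severe) at a given dose under a
    cumulative-logit model: P(severity >= k) = sigmoid(slope*dose - c_k)."""
    cum = [sigmoid(slope * dose - c) for c in cutpoints]  # c_1 < c_2
    return [1.0 - cum[0], cum[0] - cum[1], cum[1]]

def weighted_extra_risk(dose, cutpoints, slope, weights):
    """Extra risk relative to background, with the adverse and severe
    category probabilities combined by severity weights as described above."""
    def risk(d):
        probs = severity_probs(d, cutpoints, slope)[1:]  # skip "no effect"
        return sum(w * p for w, p in zip(weights, probs))
    r0, rd = risk(0.0), risk(dose)
    return (rd - r0) / (1.0 - r0)

# Hypothetical parameters: severe effects weighted twice as much as adverse.
er = weighted_extra_risk(dose=2.0, cutpoints=[1.0, 3.0], slope=1.0,
                         weights=[0.5, 1.0])
print(0.0 < er < 1.0)  # True
```

A BMD would then be the dose at which this weighted extra risk equals a chosen benchmark response (e.g., 10%), found by inverting the curve; setting the weight of the adverse category to 0 recovers the ordinary dichotomous extra risk for the severe response.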
