Society For Risk Analysis Annual Meeting 2013
Session Schedule & Abstracts
* Disclaimer: All presentations represent the views of the authors, and not the organizations that support their research. Please apply the standard disclaimer that any opinions, findings, and conclusions or recommendations in abstracts, posters, and presentations at the meeting are those of the authors and do not necessarily reflect the views of any other organization or agency. Meeting attendees and authors should be aware that this disclaimer is intended to apply to all abstracts contained in this document. Authors who wish to emphasize this disclaimer should do so in their presentation or poster. In an effort to make the abstracts as concise as possible and easy for meeting participants to read, the abstracts have been formatted such that they exclude references to papers, affiliations, and/or funding sources. Authors who wish to provide attendees with this information should do so in their presentation or poster.
Chair(s): Aamir Fazil email@example.com
Sponsored by MRASG
M3-D.1 13:30 The Influence of Dosing Schedule on Rabbit Responses to Aerosols of Bacillus anthracis. Bartrand TA*, Marks HM, Coleman ME, Donahue D, Hines SA, Comer JE, Taft SC; Tetra Tech firstname.lastname@example.org
Abstract: Traditional microbial dose-response analysis and survival analysis were used to model time of death of New Zealand white rabbits exposed to low aerosol doses of Bacillus anthracis spores. Two sets of experimental data were analyzed. The first set included the times to death of hosts exposed to single doses of B. anthracis spores. The second set provided the times to death for rabbits exposed to multiple daily doses (excluding weekends) of B. anthracis spores. A model that predicts times to death from an exponential microbial dose-response assessment, superimposed with an empirically derived incubation function obtained through survival analysis methods, was fitted to the two data sets. Several additional time-to-death models for aerosols of B. anthracis were also assessed for comparison, including survival models in which the hazard function varies over time, survival models with different underlying dose-response functions, and a published mechanistic model. None of these models provided a statistically significant improvement in fit over the exponential-based model, in which there is no time-dependent effect on the hazard function. The model therefore suggests that, for the dosing schedule used in this study, the long-term response of the hosts depends only on the net accumulated dose an animal received before dying. This finding may be due to the small size of the data sets and the small number of animals that died. Further research with alternative dosing schedules, collection of immune system data (particularly on the innate immune response), and alternative pathogen-host pairings is needed to clarify the relationship between time to death and dosing schedule.
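The exponential dose-response model the abstract refers to can be sketched as follows. This is an illustrative implementation, not the authors' code; the rate parameter k and the dose values are hypothetical, and the key property shown is the one the abstract reports: response depends only on the net accumulated dose, not on the dosing schedule.

```python
import math

def exponential_dose_response(accumulated_dose, k):
    """Probability of host response under the exponential model,
    P(d) = 1 - exp(-k * d), where k is a hypothetical pathogen-specific
    rate parameter (illustrative, not a fitted value)."""
    return 1.0 - math.exp(-k * accumulated_dose)

# Under this model, five daily doses of 100 spores yield the same
# response probability as a single dose of 500 spores.
daily_doses = [100, 100, 100, 100, 100]  # spores/day, illustrative values
p_multi = exponential_dose_response(sum(daily_doses), k=1e-4)
p_single = exponential_dose_response(500, k=1e-4)
assert p_multi == p_single
```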
M3-D.2 13:50 Risk-Based Sampling: I Don't Want to Weight In Vain. Powell MR*; U.S. Dept. of Agriculture email@example.com
Abstract: Recently, there has been increased interest in developing scientific schemes for risk-based sampling of food, animals, and plants for effective enforcement of regulatory standards and efficient allocation of surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of overfitting the model based on limited data, leading to false "optimal" portfolios and unstable asset weights (churn). In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. Under constrained optimization, the annual frequency of lot inspection for each producer is defined to be at least one and otherwise proportional to the product of known production volume and estimated prevalence of contaminated lots. Under a simpler decision rule, frequency is proportional to volume. Assuming stationarity, the "risk-based" sampling frequencies assigned to producers by constrained optimization remain highly unstable after 20 years. In the presence of infrequent transients (e.g., outbreaks or extreme contamination events), the two decision rules converge in the number of contaminated lots detected as transient intensity increases, and simple volume-proportional sampling is more likely than the complex optimization rule to detect the occurrence of transients.
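The two allocation rules the abstract compares can be sketched roughly as follows. The function name, the floor-and-round scheme, and all numerical values are illustrative assumptions, not the simulation model from the talk.

```python
def allocate_inspections(total_inspections, volumes, prevalences=None, floor=1):
    """Split an annual inspection budget across producers.

    With `prevalences`, frequency is proportional to known production
    volume times estimated prevalence of contaminated lots, subject to
    a floor of one inspection per producer (the constrained-optimization
    rule). Without, frequency is proportional to volume alone (the
    simpler rule). All details here are illustrative assumptions.
    """
    if prevalences is None:
        weights = list(volumes)
    else:
        weights = [v * p for v, p in zip(volumes, prevalences)]
    total_w = sum(weights)
    return [max(floor, round(total_inspections * w / total_w)) for w in weights]

volumes = [1000, 400, 50]         # lots/year per producer (illustrative)
prevalences = [0.01, 0.05, 0.20]  # estimated contaminated-lot prevalence
risk_based = allocate_inspections(120, volumes, prevalences)
by_volume = allocate_inspections(120, volumes)
```

The contrast is visible in the output: the risk-based rule shifts inspections toward the small, high-prevalence producer, while the volume rule concentrates them on the largest producer.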
M3-D.3 14:10 Specifying input distributions: No method solves all problems. O'Rawe J, Ferson S*, Sugeno M, Shoemaker K, Balch M, Goode J; Applied Biomathematics firstname.lastname@example.org
Abstract: A fundamental task in probabilistic risk analysis is selecting an appropriate distribution or other characterization with which to model each input variable within the risk calculation. Currently, many different and often incompatible approaches for selecting input distributions are commonly used, including the method of matching moments and similar distribution-fitting strategies, maximum likelihood estimation, Bayesian methods, and the maximum entropy criterion, among others. We compare and contrast six traditional methods and six recently proposed methods for their usefulness in risk analysis in specifying the marginal inputs to be used in probabilistic assessments. We apply each method to a series of challenge problems involving synthetic data, taking care to compare only analogous outputs from each method. We contrast the use of constraint analysis and conditionalization as alternative techniques to account for relevant information, and we compare criteria based on either optimization or performance to interpret empirical evidence in selecting input distributions. Despite the wide variety of available approaches for addressing this problem, none of the methods suffices to handle all four kinds of uncertainty that risk analysts must routinely face: sampling uncertainty arising because the entire relevant population cannot be measured, mensurational uncertainty arising from the inability to measure quantities with infinite precision, demographic uncertainty arising when continuous parameters must be estimated from discrete data, and model structure uncertainty arising from doubt about the prior or the underlying data-generating process.
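As one concrete instance of the traditional methods named above, the method of matching moments picks the distribution whose moments equal the sample's. A hypothetical moment-matching fit of a gamma distribution to synthetic data might look like this; the data and parameter values are illustrative, not the challenge problems from the talk.

```python
import random
import statistics

random.seed(1)
# Synthetic sample from Gamma(shape=2, scale=3), standing in for data
# whose generating distribution the analyst must characterize.
data = [random.gammavariate(2.0, 3.0) for _ in range(1000)]

m = statistics.fmean(data)     # first sample moment (mean)
v = statistics.variance(data)  # second central sample moment

# For Gamma(shape k, scale theta): mean = k*theta and var = k*theta**2,
# so matching the two sample moments gives closed-form estimates:
shape = m ** 2 / v
scale = v / m
```

By construction the fitted distribution reproduces the sample mean and variance exactly; whether that makes it a good model of the tail behavior relevant to risk is exactly the kind of question the abstract raises.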
M3-D.4 14:30 Mixing good data with bad. Shoemaker K*, Siegrist J, Ferson S; Stony Brook University, Applied Biomathematics email@example.com
Abstract: Data sets have different qualities. Some data are collected with careful attention to proper protocols and careful measurement using highly precise instruments. In contrast, some data are hastily collected by sloppy or unmotivated people with bad instruments or shoddy protocols under uncontrolled conditions. Statistical methods make it possible to formally combine these two kinds of data in a single analysis. But is it always a good idea to do so? Interval statistics is one convenient method that accounts for the different qualities of data in an analysis. High-quality data have tighter intervals and poor-quality data have wider intervals, and the two can be legitimately pooled using interval statistics, but it appears that it is not always advisable for an analyst to combine good data with bad. We describe examples showing that, under some circumstances, including more data without regard for its quality unnecessarily increases the amount of uncertainty in the final output of an analysis. Ordinarily, statistical judgment would frown on throwing away any data but, as these examples demonstrate, it sometimes seems clearly advantageous to set that judgment aside. More data does not always lead to more statistical power, and increasing the precision of measurements sometimes provides a decidedly more efficient return on research effort. This result is highly intuitive even though these examples imply a notion of negative information, which traditional Bayesian analyses do not allow.
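The effect the abstract describes can be seen in a toy interval-statistics calculation, where each datum is a [lo, hi] interval and the mean of n intervals is the interval of endpoint means. This sketch and its numbers are illustrative, not the authors' examples.

```python
def interval_mean(intervals):
    """Interval mean: the interval whose endpoints are the means of the
    data intervals' endpoints."""
    lows = [lo for lo, hi in intervals]
    highs = [hi for lo, hi in intervals]
    return (sum(lows) / len(lows), sum(highs) / len(highs))

good = [(9.9, 10.1), (10.0, 10.2), (9.8, 10.0)]  # precise measurements
bad = (5.0, 15.0)                                 # one sloppy measurement

lo1, hi1 = interval_mean(good)          # good data alone
lo2, hi2 = interval_mean(good + [bad])  # good data pooled with bad

# Pooling in the low-quality datum widens the uncertainty of the mean,
# even though the sample size went up.
assert (hi2 - lo2) > (hi1 - lo1)
```

Here the good-data mean has width about 0.2, while the pooled mean is over ten times wider: adding the bad datum only degrades the answer, which is the "negative information" behavior the abstract discusses.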