New Science - New Risks
Friday-Saturday, 30-31 March 2012
Center for Philosophy of Science
817 Cathedral of Learning
University of Pittsburgh
Pittsburgh, PA USA
Abstracts
alphabetical by author
Uncertainty, resilience and robustness in adaptive ecological management
Eric Desjardins et al.
Abstract: Reconstructing the past and predicting the future of ecosystems are notoriously challenging, if not practically impossible, tasks. Yet, this is what ecological management, i.e., the supervision and modification of social-ecological systems (SESs), requires. Any ecological management project starts from an estimate of past and present conditions and involves decisions based on an understanding of how potential policies and interventions could produce a set of desirable future conditions.
Earlier ecological managers took a Maximum Sustainable Yield (MSY) approach, which dealt very poorly with the uncertainties in system behavior. MSY typically conceived of nature as providing a set of distinct and relatively independent ‘resources,’ and assumed a simple equilibrium-based understanding of ecological systems. Instead of dealing with uncertainty and unpredictability, MSY ignored them. Since the 1970s, ecologists have gradually moved away from this equilibrium view of nature toward a view of SESs as inherently dynamic and unpredictable, and theorists have developed a new conception of ecological management, seeking new strategies for dealing with the uncertainties that result. Where MSY took a "command and control" approach, aiming to hold a system fixed in some particular state indefinitely, the new Adaptive Ecological Management (AEM) approach recognized that unpredictable shocks and changes to a system are inevitable, and shifted the emphasis to the development of an iterative, multiple-stage process capable of responding to and reducing uncertainties and promoting system resilience.
But this shift to AEM has not been without problems. While ‘adaptive management’ has been widely discussed and adopted, an emerging consensus in the literature suggests that its record of success to date is disappointing. In addition, quite different forms or interpretations of AEM are now in play, and theorists disagree on the essential features of AEM. In this paper, we will clarify two fundamental conceptual issues in AEM: uncertainty and adaptiveness. This analysis will help us to understand the mixed record of performance by AEM to date, and its implications.
(1) Uncertainty: What are the kinds of uncertainties affecting our ability to predict and manage the behavior of SESs? We discuss various types of uncertainties falling into three categories: epistemic, natural and social. Epistemic uncertainties result from the difficulty of obtaining accurate representations of the state and the causal structure and dynamics of ecological systems. Natural uncertainties arise from the nature of the systems themselves, in particular from their complexity and their sensitivity to external influences such as climate change or the introduction of new species, and can result in partial or complete lack of controllability even if epistemic uncertainty is minimized. Finally, social uncertainties arise from the complex social-economic-institutional matrices within which AEM projects take place. We argue that an effective AEM approach must develop strategies for taking all (or most) of these uncertainties, and their interactions, into account.
(2) Adaptiveness: What is "adaptive" in AEM? What is adapting to what? We discuss several different ways to understand the "adaptive" aspect of AEM. Each attempts to reduce one or more kinds of uncertainty, or to mitigate the risks associated with these. Clarifying the different senses of adaptiveness reveals that there are important trade-offs among responses to uncertainties in different domains and at different scales. The senses of ‘adaptiveness’ current in the literature include the following:
- Using epistemic methods that are adapted to their context (i.e. to other methods in use, and to the constraints imposed by the systems under investigation) to produce robust models and safe-to-fail strategies: representations that, though inevitably incomplete and at least partly false, are unlikely to be misleading in a way that produces grossly erroneous predictions, and strategies for intervention that are unlikely to result in high-cost surprises. This kind of adaptiveness addresses epistemic uncertainty, attempting to minimize the uncertainty itself and the risks it creates.
- Adapting our strategies for management and investigation to each other: adapting our management strategies so that they contribute in an active and ongoing way to increasing knowledge (by, for example, varying interventions to test their comparative efficacy, rather than consistently using the one currently believed to be most effective); and adapting our research methods so that they contribute where possible to management goals. This kind of adaptiveness addresses epistemic uncertainty in the longer run, but may exacerbate it in the short run, since it may require testing diverse interventions that are not always well understood in advance of use in the field.
- Adapting our strategies (at all levels – strategies of investigation, strategies of intervention, and strategies of accommodation) to new information, and to changing natural conditions and changing social contexts. This kind of adaptiveness addresses natural and social uncertainty by trying to increase our ability to respond to the unexpected: i.e., not to reduce uncertainty but to minimize the risks that it creates. There are both conservative and radical versions of this approach, which differ importantly from one another. Some versions of this approach may increase uncertainty with respect to both natural and social aspects of the trajectory of the management project.
- Promoting the capacity of the SES to adapt to shocks and change, i.e., its ecological and social resilience. This kind of adaptiveness attempts to reduce high-risk natural and social uncertainty directly, but may depend on maintaining a wider range of system behaviours or states, and so may increase local uncertainties.
The challenge of AEM is to build robust models and policies that allow us to find safe paths to resilient states for complex and unpredictable SESs. Uncertainties arise at every stage and in every aspect of this process, and interact in complex ways. Understanding the nature of epistemic robustness and of ecological and social resilience, and the relationships and tradeoffs among uncertainties of different types and at different scales, is essential to enabling AEM to achieve greater success in practice.
Disclosing Uncertainty
Baruch Fischhoff and Alex L. Davis
Abstract: We offer an approach to disclosing uncertainty meant to provide a behaviorally realistic compromise between the needs of its consumers and the abilities of its producers. Our approach recognizes that the consumers of disclosures need to know what values are plausible for variables relevant to their choices, what faith to place in the studies producing those assessments, and how mature the underlying science is. It recognizes that the producers of disclosures need to have confidence that what they say faithfully represents their beliefs and will not be misconstrued. Drawing on cognitive and decision science, it may provide consumers with somewhat less than they want to get, while demanding somewhat more than producers are comfortable giving. We support this compromise with examples of approaches that tilt toward one side or the other.
We may draw on (among other things):
Fischhoff, B. (2011). Applying the science of communication to the communication of science. Climatic Change, 108, 701-705.
Fischhoff, B. (2011). Communicating the risks of terrorism (and anything else). American Psychologist, 66, 520-531.
Fischhoff, B., & Kadvany, J. (2011). Risk: A very short introduction. Oxford: Oxford University Press.
Higgins JPT, Green S (eds). (2011). Cochrane Handbook for Systematic Reviews of Interventions (Version 5.1.0). www.cochrane-handbook.org.
National Research Council. (2011). Intelligence analysis for tomorrow (Consensus Report). http://www.nap.edu/catalog.php?record_id=13040
O’Hagan, A., Buck, C.E., Daneshkhah, A., Eiser, J.E. et al. (2006). Uncertain judgments: Eliciting expert probabilities. Chichester: Wiley.
Politi, M.C., Han, P.K.J., & Col, N. (2007). Communicating the uncertainty of harms and benefits of medical procedures. Medical Decision Making, 27, 681-695.
Turner, R. M., Spiegelhalter, D. J., Smith, G., & Thompson, S. G. (2009). Bias modelling in evidence synthesis. Journal of the Royal Statistical Society: Series A (Statistics in Society), 172(1), 21-47.
Are reports concerning the death of Homo economicus greatly exaggerated?
Jeffrey Helzner
Abstract: There is a widespread belief within contemporary academia that physics sets the standard to which all of the other sciences ought to aspire. The best theories of physics are said to come closest to our ideals concerning a variety of relevant factors that range from explanatory value and testability to simplicity and elegance. In light of these considerations it is hardly surprising to find a good bit of “physics envy” among the special sciences, and nowhere is this more obvious than in economics. One way in which economics has followed physics is in adopting its concern to reduce macro phenomena to micro phenomena. That is, just as there is a concern in physics to explain all physical phenomena in terms of the behavior of elementary particles, there is a concern in economics to explain all economic phenomena in terms of the behavior of individual economic agents. In stark contrast to the volitionless trajectories of elementary particles, the most essential feature of these economic agents is that they make choices in light of their beliefs and in service of their desires.
But what does it mean to choose in light of one's beliefs and in service of one's desires? For example, is it enough that the agent simply recognizes its beliefs and desires while it “chooses” as a function of some irrelevant process, e.g., a reading of the tea leaves? Thus, if the thesis at issue is to have any bite, then more needs to be said about what counts as choosing in light of one's beliefs and in service of one's desires. The standard practice in economics has been to clarify the relevant matters by articulating a standard of individual rationality. Such a standard can serve to distinguish proper from improper ways in which beliefs and desires may inform choices.
The most commonly employed standards of individual rationality are the varieties of expected utility maximization. It is along such lines that the original thesis at issue, that all economic phenomena are in some sense reducible to the behavior of individual agents that make choices in light of their beliefs and in service of their desires, is replaced with the thesis that all economic phenomena are in some sense reducible to the behavior of individual agents that make choices in accordance with the requirements of expected utility theory. Hence, the stronger thesis is focused on the sort of agent that restricts its selections to those available alternatives that maximize expectation with respect to its subjective probability distribution, representing the agent's beliefs concerning the various epistemic possibilities, and its cardinal utility function, representing the agent's desires concerning the various possible consequences.
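In schematic form (the notation here is added for illustration and is not drawn from the abstract), such an agent with subjective probability function P over states S and cardinal utility function U over consequences restricts its choices to alternatives satisfying

    a^{*} \in \arg\max_{a \in A} \sum_{s \in S} P(s)\, U(o(a, s)),

where o(a, s) is the consequence of choosing alternative a when state s obtains. The alternative standards cited below, e.g. [1], [2], and [4], relax parts of this picture, for instance by allowing a set of probability functions rather than a single P.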
One plausible way to investigate theses of the sort mentioned above is to ask whether the underlying standard of individual rationality is at all descriptively adequate [3, 6]. The consensus among contemporary psychologists seems to be that the empirical evidence is overwhelmingly negative when it comes to the question of descriptive adequacy. This empirical evidence is often interpreted as being unfavorable to the first, and more general, of the two theses mentioned in the previous paragraph. There are at least two sorts of responses that can and have been offered in light of such an unfavorable interpretation.
(1) Most of the evidence at issue concerns versions of expected utility theory and so while such evidence might be viewed as unfavorable to the second, and less general, of the two theses mentioned in the previous paragraph, its relevance to the more general thesis is unclear, since there are other standards of individual rational choice that have been suggested [1, 4, 2].
(2) There are various ways to defend the more specific thesis in light of the evidence that has been amassed: (a) Maintain that the thesis might still have some sort of instrumental value.
(b) Maintain that the data are suspect, e.g., problems concerning reproducibility.
(c) Maintain that the data are not immediately relevant, e.g., the experimental setup is not sufficiently similar to the relevant parts of the economic systems at issue.
In this paper I will consider both sorts of responses to widely circulated reports concerning the death of homo economicus. Much of the discussion will draw upon the ways in which risk and uncertainty have been treated within behavioral economics. I will give particular attention to recent work on the distinction between decisions from experience and decisions from description [5].
References
1. D. Ellsberg, Risk, ambiguity, and the Savage axioms, The Quarterly Journal of Economics 75 (1961), 643-669.
2. P. Gärdenfors and N.-E. Sahlin, Unreliable probabilities, risk taking, and decision making, Synthese 53 (1982), 361-386.
3. D. Kahneman and A. Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47 (1979), no. 2, 263-291.
4. I. Levi, On indeterminate probabilities, Journal of Philosophy 71 (1974), 391-418.
5. R. Hertwig, G. Barron, E. U. Weber, and I. Erev, Decisions from experience and the effect of rare events in risky choice, Psychological Science 15 (2004), no. 8.
6. A. Tversky and D. Kahneman, Judgment under uncertainty: Heuristics and biases, Science 185 (1974), 1124-1131.
Columbia University
E-mail address: jh2239@columbia.edu
Moral emotions and risk politics: An emotional deliberation approach to risk
Sofia Kaliarnta and Sabine Roeser
Abstract:
Introduction
We live in a dynamic, technologically sophisticated world. Risks arising from technologies raise important ethical issues for people living in the 21st century. Although technologies such as nanotechnology, biotechnology, ICT, and nuclear energy have generally been developed for their potential to advance human well-being, media reports almost daily attest to the fact that they can also create great risks for humans and for the environment at large, as the nuclear disaster at the Fukushima power plant has clearly shown. As a consequence of such side effects, technologies can trigger emotions, including fear and indignation, which often give rise to heated and emotional debates (Slovic 2000, 2010) and lead to conflicts between experts and laypeople. How should we deal with such emotions in political decision making about risky technologies?
Discussion
When dealing with technological risk, one of the most common methods used to assess risk is cost-benefit analysis, a quantitative method that calculates risk by multiplying the probability of a technological hazard by the magnitude of its unwanted effects. Cost-benefit analysis is considered by many to be a rational, objective and value-neutral method that can indicate whether a particular technology should be implemented. However, cost-benefit analysis does not take into account important moral concerns, such as the fair distribution of the costs and benefits that arise from a particular technology, or whether the risks it imposes are freely chosen. These concerns show that cost-benefit analysis is not as value-neutral as many take it to be: it leaves no place for important moral values, and it excludes another important factor, the emotions of the public and stakeholders (Kahan 2008). Emotions such as fear are important factors in laypeople’s risk perceptions (Finucane et al. 2000). However, some scholars think that emotions are irrational states and unreflective gut reactions that should be excluded from decision making about risk (Sunstein 2005), or that they should at most be accepted as a given in a democratic society (Loewenstein et al. 2001) or used instrumentally, in order to create acceptance for a technology (De Hollander and Hanemaaijer 2003). Such an approach is based on a deficient conception of emotions. Emotions can have cognitive aspects (Scherer 1984, Lazarus 1991, Solomon 1993, Nussbaum 2001), and they enable us to be practically rational (cf. Damasio 1994, Roberts 2003). Slovic writes that emotion and reason can interact and that we should take the emotions of the public seriously, since they convey meaning (Slovic et al. 2004). For example, enthusiasm for a technology can point to benefits to our well-being, whereas fear and worry can indicate that a technology is a threat to our well-being; sympathy and empathy can give us insight into the just distribution of risks and benefits, and indignation can be an indication of violations of autonomy by technological risks that are imposed on us against our will (Roeser 2006). Risk policy should include the moral emotions of stakeholders.
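Put schematically (this formulation is ours, added for illustration, not the authors'), the quantity such an analysis weighs against expected benefits is

    \text{risk} = P(\text{hazard occurs}) \times \text{magnitude of the unwanted effect}.

Nothing in this product records how the resulting harm is distributed or whether it was voluntarily accepted, which is precisely the gap the authors point to.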
Objectives
Little research has yet been done on how to genuinely, non-instrumentally include moral emotions in risk communication and risk policy. Although moral emotions are not explicitly taken into account in current approaches to risk politics, they might already play an implicit role, and it might be possible to adjust these approaches in order to give moral emotions an explicit role. The aim of this paper is to compare various policy-making approaches concerning risky technologies and to assess the extent to which they do, or are able to, incorporate moral emotions. Using those insights, we will develop a theoretical framework and practical recommendations on how to effectively incorporate moral emotions, and the concerns they reveal, into political decision making and communication about risky technologies.
Results
Our preliminary results show that there are two prevalent approaches to dealing with emotions in political decision making about technological risk:
1. The technocratic approach, where emotions are viewed as irrational, counter-productive and based on unfounded intuitions. In this view, emotions are completely excluded during political decision making.
2. The populist approach, where the emotions of the public are taken as the sole indicator of political support (or lack thereof) for a new technology. Therefore, if the public reacts negatively towards a new technology, said technology is often put ‘on the shelf’ in order to pacify the worried citizens.
What both these approaches have in common is that they treat emotions as irrational states that must either be ignored (in the technocratic approach) in order to make way for technological progress, or blindly followed (in the populist approach), since otherwise it is assumed that the public will refuse to support the proposed measures.
Conclusions
Emotions are major determinants of risk perception. However, emotions are generally excluded from communication and political decision making about risky technologies, or they are used instrumentally to create support for a position. Emotional arguments may be misused for manipulation, discarded as irrational, or treated as dead ends that shut down further discussion. However, moral emotions can be both the source and the result of ethical reflection and deliberation. They can be legitimate, even necessary, sources of insight concerning the moral acceptability of risks (Kaliarnta et al. 2011).
We believe the technocratic and populist approaches in risk politics are insufficient to deal appropriately with emotions and moral concerns. What we propose is an emotional deliberation approach to risk, which takes emotions as the starting point of the discussion regarding the assessment and acceptability of technological risks (Roeser 2011). In order to avoid, e.g., ‘probability neglect’ (Sunstein 2005), moral emotions about risk have to be informed by science and statistics. However, in order to avoid ‘complexity neglect’, decisions about risk have to be informed by moral emotions. By allowing the emotional concerns of the public and stakeholders to be included and evaluated during the political decision-making process, this approach will contribute to morally better political decisions about risks and to a better understanding between laypeople and experts.
References
Damasio, A. (1994), Descartes’ Error, New York: Putnam
De Hollander, A.E.M. and Hanemaaijer, A.H. (eds.) (2003), Nuchter omgaan met risico’s, Bilthoven, RIVM
Finucane, M., Alhakami, A., Slovic P. and Johnson S. M. (2000), ‘The Affect Heuristic in Judgments of Risks and Benefits’, Journal of Behavioral Decision Making, 13, 1-17
Kaliarnta S, Nihlén-Fahlquist J, Roeser S. (2011) Emotions and Ethical Considerations of Women Undergoing IVF-Treatments, HEC Forum, published online Aug 6. DOI: 10.1007/s10730-011-9159-4
Kahan, D. M. (2008), ‘Two Conceptions of Emotion in Risk Regulation’, University of Pennsylvania Law Review, Vol. 156, 2008
Loewenstein, G.F., Weber, E.U., Hsee, C.K., and Welch, N. (2001), ‘Risk as Feelings’, Psychological Bulletin 127, 267-286
Lazarus, R. (1991), Emotion and Adaptation, New York: Oxford University Press
Nussbaum, M. (2001), Upheavals of Thought, Cambridge: Cambridge University Press
Roberts, R. C. (2003), Emotions. An Essay in Aid of Moral Psychology, Cambridge: Cambridge University Press
Roeser, S. (2011). ‘Nuclear Energy, Risk and Emotions’, Philosophy and Technology 24, pp. 197-201
Roeser, S. (2006), ‘The Role of Emotions in Judging the Moral Acceptability of Risks’, Safety Science 44, 689-700
Scherer, Klaus R. (1984), ‘On the Nature and Function of Emotion: A Component Process Approach’, in Klaus R. Scherer and Paul Ekman (eds.), Approaches to Emotion, Hillsdale, London: Lawrence Erlbaum Associates, 293-317
Slovic, P. (2010). “If I look at the mass I will never act”: Psychic numbing and genocide. In S. Roeser (Ed.), Moral emotions about risky technologies. London: Earthscan.
Slovic, P., Finucane, M., Peters, E., MacGregor, D.G. (2004), ‘Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality’, Risk Analysis 24, 311-322.
Slovic, P. (2000), The Perception of Risk, Earthscan, London
Solomon, R. (1993), The Passions: Emotions and the Meaning of Life, Indianapolis: Hackett
Sunstein, C. R. (2005), Laws of Fear, Cambridge University Press, Cambridge.
The assessment of efficacy and safety data by data and safety monitoring boards
Roger Stanev
Rival accounts of statistical inference, such as Bayesianism and error-statistics, reflect profoundly different understandings of how probabilities are interpreted and used, and profoundly different objectives. Given such deep theoretical differences, it might seem like poor strategy to try to advance debate about foundational issues in statistical reasoning by shifting our focus to experimental practice – specifically, to clinical research in epidemiology via randomized controlled trials (RCTs). For in their choice of statistical methodology, clinical practitioners face not only the epistemological issues just noted, but also ethical and economic constraints. Nevertheless, such choices must be made and are made, and the practitioners who make them possess insights that can indeed advance our philosophical understanding of scientific inference under risk.
To make this point, my research has focused on the following general problem: how can a data and safety monitoring board (DSMB) decide when it should bring a trial to an early stop (i.e., halting the RCT before it reaches its originally scheduled stopping point)? This problem emerged with clarity during the clinical trials of HIV/AIDS treatments in the 1980s. There appeared to be a strong case for early stopping because the preliminary data suggested that the treatments were preventing patients from dying, yet scientific considerations seemed to require carrying the experiment to its pre-scheduled termination. Early stopping might also appear to be appropriate in cases where early trends suggest that the treatment is ineffective; and even more appropriate in cases of early harm to patients. But how are we to balance scientific and ethical considerations, and in particular, returning to our original problem, what statistical approaches will accommodate early stopping while still providing useful scientific evidence?
My work starts by rejecting the notion that any single statistical approach will always be the best choice and give the best answer. Instead, I propose a two-level framework for answering these practical questions that arise for biostatisticians. The first level is a qualitative framework: practical guidelines that dictate different considerations for cases of early stopping due to benefit, harm, or futility. The second level is a more formal decision-theoretic model to rationalize the choice of an initial stopping rule and the interim monitoring policy. There must be separate justifications for the choice of stopping rule and for the interim decision following such a rule.
In my talk, I shall focus on a specific version of this more general problem. That is, because a clinical trial is a public experiment affecting human subjects, a clear rationale is required for the decisions of the DSMB associated with the trial. Early stopping decisions are not self-justifying. They must be able to meet legitimate challenges from skeptical reviewers. Both the formulation of challenges and the task of responding to challenges would be assisted if we had a systematic, formal (or quasi-formal) way of representing early stopping decisions. In my talk I introduce some useful apparatus to this end—although further work is needed in this area, formalizing the relation between the a priori (designated) stopping rule and the early stopping principle that is applied during the course of the experiment.
For instance, what do trial participants think about early stopping decisions of RCTs? Do asymptomatic HIV/AIDS participants have an opinion about how to balance early evidence for efficacy against the possibility of late side effects? Because DSMBs have responsibilities to trial participants, yet those participants have no vote in early stopping decisions, it seems that an important voice is missing. In my research I propose a form of representing early stopping decisions that considers the expected losses of taking various actions at interim. At the very least, it seems that the opinions of trial participants should be taken into account in assessing these expected losses. Yet I am not aware of any systematic study of trial participants’ views about such matters.
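A minimal sketch of the kind of representation this suggests, in Python; the scenarios, probabilities, and loss values below are invented placeholders, not drawn from the abstract or from any actual trial:

    # Compare expected losses of the actions available to a monitoring board at
    # an interim analysis. All numbers are hypothetical placeholders.
    scenarios = {
        "treatment effective":   0.25,   # probability assigned to each scenario
        "treatment ineffective": 0.60,
        "treatment harmful":     0.15,
    }
    # losses[action][scenario]: cost of taking the action if the scenario is true.
    # In practice these weights could reflect, among other things, how trial
    # participants value early access versus protection from late side effects.
    losses = {
        "continue trial": {"treatment effective": 1,
                           "treatment ineffective": 4,
                           "treatment harmful": 9},
        "stop early":     {"treatment effective": 6,
                           "treatment ineffective": 1,
                           "treatment harmful": 1},
    }

    def expected_loss(action):
        return sum(p * losses[action][s] for s, p in scenarios.items())

    for action in losses:
        print(action, expected_loss(action))

The action with the smaller expected loss is the one a second-level, decision-theoretic comparison of this kind would favor; the substantive work lies in justifying the inputs themselves.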
My second-order decision framework is not arbitrary. It is sensitive to and based on what DSMBs often take to be relevant factors, including their own plans for how to conduct trials. Stanev (2011) is an example of such work.
To illustrate the framework, in my talk I go over an example of such trials: an RCT conducted in 1994 evaluating an intervention that could prevent a brain infection in HIV-positive individuals (Jacobson et al. 1994). This is a case of early stopping due to futility in which the DSMB disagreed with the principal investigator on the right course of action, based on the unexpectedly low event rate observed during interim analysis. The task of modeling and evaluating the pertinent early stopping decision (and consequently the disagreement) is illustrated by simulations within the framework, using the statistical method of conditional power, a method for assessing whether an early unfavorable trend can still reverse itself strongly enough to yield a statistically significant favorable result at the end of the trial.
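As a rough illustration of the conditional power calculation (a sketch under the standard Brownian-motion/normal approximation; the input values are hypothetical and are not taken from the Jacobson et al. trial):

    # Conditional power: probability of a statistically significant final result,
    # given the interim Z-statistic and an assumed drift for the remainder of the
    # trial. Inputs below are hypothetical placeholders.
    from math import sqrt
    from scipy.stats import norm

    def conditional_power(z_interim, info_frac, theta, alpha=0.025):
        z_crit = norm.ppf(1 - alpha)          # one-sided final critical value
        mean_final = z_interim * sqrt(info_frac) + theta * (1 - info_frac)
        sd_final = sqrt(1 - info_frac)
        return 1 - norm.cdf((z_crit - mean_final) / sd_final)

    # "Current trend" assumption: project the effect observed so far forward.
    z, t = 0.5, 0.4
    print(conditional_power(z, t, theta=z / sqrt(t)))   # a low value suggests futility

A monitoring board citing a figure like this in support of stopping for futility would still need the kind of second-order justification discussed above, since the answer depends heavily on the assumed drift.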
Two Types of Risk
Paul Weirich
Abstract: Risks rest on probabilities. Probability theory recognizes (at least) two types of probability, physical probability and evidential probability. In a court case, the physical probability that a defendant is guilty is either 0% or 100%. It is not sensitive to information. In contrast, the evidential probability that the defendant is guilty varies with information presented during the trial. At an early stage of the trial, the probability may be 10%. At the end of the trial, it may be 50%. Evidential probability is personal, because it depends on a person’s evidence, and is accessible, because a person knows the evidence she possesses. When evidence is scanty or mixed, evidential probability may lack a sharp value.
Corresponding to the two types of probability are two types of risk. One is grounded in physical probabilities and is independent of information; the other is grounded in evidential probabilities and is information-sensitive.
The debate about marketing genetically modified (GM) food draws attention to the importance of attending to both types of risk. GM food generates both physical and information-sensitive risks. When tests of a new GM food reveal physical risks of allergic reaction, that new GM food is not marketed. However, tests of marketed foods may generate false negatives. Tests may fail to detect a physical risk because of insufficient diversity in the sample tested. Uncertainty about the physical risk of allergic reaction yields an information-sensitive risk of allergic reaction from marketed GM foods. Information-sensitive risks arise because of uncertainty that physical risks are absent.
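To make the relation concrete with purely hypothetical numbers (not from the abstract): suppose the evidence leaves a 10% evidential probability that a marketed GM food carries a 2% physical risk of allergic reaction, and a 90% probability that it carries none. Then

    P_{\text{evidential}}(\text{reaction}) = 0.1 \times 0.02 + 0.9 \times 0 = 0.002,

an information-sensitive risk that is positive even though the underlying physical risk is either 2% or zero, and that shrinks as better evidence resolves which of the two it is.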
Information-sensitive risks may be objectively assessed with respect to expert information. A reasonable person with all available scientific information may find that GM food carries an information-sensitive risk because of the unknown long-term effects of eating such food. If a consumer avoids GM food because of that risk, the consumer is not just responding to a perceived risk but also is avoiding an objective, information-sensitive risk. Moreover, it is reasonable to avoid an information-sensitive risk even if one has not personally assessed its magnitude with respect to expert information. Without all available information and so without knowing the magnitude of the objective information-sensitive risk of illness from some GM food, a reasonable consumer may believe that it is large enough to warrant avoiding that food.
Reasonable regulation attends to both physical and information-sensitive risks. The appropriate response to an information-sensitive risk may be further study to reduce it, whereas the appropriate response to a physical risk may be new procedures to reduce it. Information-sensitive risks are relevant to reasonable regulation. They should not be dismissed as baseless fears. Some information-sensitive risks and aversion to them persist given a rational assessment of all available scientific information. Principles of representative government allow for economically feasible regulations that reduce, or that permit citizens to reduce, information-sensitive risks.