Proponents of the Precautionary Principle (PP) herald it as an approach to making decisions about human health and the environment that differs radically from standard approaches such as cost-benefit analysis (CBA). PP is meant to provide guidance with respect to cases in which we have incomplete scientific knowledge of the harmful effects of either bringing to the market new technologies (e.g., genetically modified organisms (GMOs) or nanotechnology) or permitting continued use of old technologies (e.g., the burning of fossil fuels). The central idea is that even if the normal scientific standards for establishing causal connections are not met in the case of the relationship between one of these technologies and a potential harm to human health or to the environment, precaution can warrant the regulation of that technology. As attractive as the idea of precaution seems, however, the literature on PP is a tangle of competing, sometimes contradictory, ideas. Can order be imposed on this mess? That is what Daniel Steel tries to do in this ambitious new book.
His stated mission (p. 2) is to provide an interpretation whereby PP is a well-motivated principle of decision-making, is clearly distinct from alternative approaches such as CBA, meshes well with relevant scientific subfields, and does not threaten the integrity and reliability of scientific research. In the course of doing this, Steel addresses several "fundamental challenges" to PP. In chapter 2 he addresses "the dilemma objection" that, on a weak interpretation of it, PP is vacuous, while on a strong interpretation of it, PP is self-refuting and hence irrational. In chapter 3 he takes on the complaint that far too many conflicting ideas fly under the banner "the Precautionary Principle" for PP to be a unified, coherent idea. In chapter 4 he provides an historical argument for PP, citing the numerous cases in which lack of precaution led to severe adverse consequences and comparing them to the relatively few cases in which excessive precaution prevented or delayed useful technologies from emerging. Chapter 5 is devoted to clarifying the concept of scientific uncertainty at work in PP. In chapter 6 he develops a novel defense of the idea that the rights and interests of future generations of humans should be counted as equal to those of current humans. Finally, in chapters 7 and 8 he addresses the challenge that PP demands we abandon objectivity and neutrality, and thus threatens the integrity of science.
Steel develops throughout this book a position on PP distinctive in its scope. As he makes clear early on (pp. 10-11), PP is variously construed in the literature as (a) "a procedural requirement that places some general constraints on how decisions should be made," (b) "a decision rule that aims to guide choices among specific environmental policies," or (c) "an epistemic rule that makes claims about how scientific inferences should proceed in light of risks of error." For Steel, "PP is all of the above: a procedural requirement, a decision rule, and an epistemic rule" (p. 11). To accomplish this grand unification, Steel proposes interpreting PP as involving three distinct ideas: the Meta-Precautionary Principle (MPP), the "tripod," and proportionality.
MPP is a procedural requirement that decision-makers avoid decision rules susceptible to paralysis by scientific uncertainty. MPP is supposed to rule against exclusive use of CBA, since, according to CBA, "precaution is warranted only if it can be shown that the expected benefits of the precaution outweigh its expected costs" (p. 21). In chapter 4, "The historical argument for precaution," Steel argues for both the inadequacy of CBA and the practical necessity of MPP.
The "tripod" refers to a concrete decision rule, specified in terms of a conditional of the following form: if the allegedly bad effect of a proposed technology meets the harm condition and if the evidential link between the technology and the bad effect meets the knowledge condition, then decision-makers ought to enact the recommended precaution. The harm condition specifies the characteristics of a possible effect of technology on human health or the environment in virtue of which precautionary measures should be considered; examples include "causing catastrophic damage" and "causing irreversible damage." The knowledge condition specifies our epistemic state regarding the causal connections between the technology and the allegedly damaging effect; examples include "precedents indicate the technology causes the effect" and "it has not been proven beyond a reasonable doubt that it is not the case that the technology causes the effect." Recommended precautions include postponing, banning, or further studying the proposed technology. A "version of PP" is just what results when a particular trio -- harm condition, knowledge condition, and recommended precaution -- is plugged into the tripod.
For Steel, distinct versions of PP can be constructed for distinct contexts, so long as that construction is guided by the principle of proportionality. Proportionality consists of two subsidiary constraints: consistency and efficiency. Consistency demands that the precaution recommended by a version of PP not also be recommended against by application of that same version. Efficiency demands that a version of PP consistently prefer less costly precautions to more costly ones. As Steel argues in chapter 2, section 4, adhering to proportionality is crucial for blocking the objection that PP is incoherent or self-defeating.
If all three of MPP, the tripod, and proportionality are included under PP, then PP is both a procedural requirement and a decision rule. How is PP also an epistemic rule? Steel's argument for this is complex. It begins in chapter 5 with a case for not defining scientific uncertainty in terms of the standard decision-theoretic distinction between risk (in which both (i) the full array of possible outcomes of an action and (ii) the probabilities of the outcomes are known) and uncertainty (in which (i), (ii), or both are unknown). This means that MPP can apply, not just to cases of uncertainty, but to cases of risk. Steel's argument for PP as an epistemic rule next moves to the case in chapter 7 against the "value-free ideal of science" -- the idea that scientific inquiry ought not be guided at all by non-epistemic concerns like promoting human health and preserving the environment. The basic argument against the value-free ideal is "the argument from inductive risk," which is that the decision whether to accept or reject a scientific hypothesis is partially determined by the seriousness of the consequences of being mistaken (e.g., being wrong about pharmaceuticals vs. being wrong about quasars). According to proponents of the argument from inductive risk, in real-world science, choices regarding standards of evidence for the acceptance or rejection of hypotheses are made at every step of the process. Thus Steel says: "Abandoning the ideal of value-free science opens up space for an epistemic interpretation of PP, while calling for a rethinking of the difference between acceptable and unacceptable roles of values in science" (p. 160). In chapter 8 he develops the "values-in-science standard" as an alternative to the value-free ideal. This standard permits explicit inclusion of non-epistemic values as part of the scientific method. So long as these non-epistemic values never override the pursuit of truth, non-epistemic values can have an important place in science.
That means it will be a legitimate application of MPP to favor decision rules that embody precautionary values. "The aims of protecting human health and the environment can legitimately influence methodological decisions in policy-relevant science," says Steel (pp. 7-8); "For example, it suggests that what should count as sufficient evidence that a new technology does not pose undue risks reflects a value judgment concerning the relative costs of unnecessary regulation versus harmful environmental or human health impacts" (p. 8).
Steel gets high marks for both novelty and effort, but I see two practical problems with the enactment of his vision of PP. First, he argues that a solely CBA-based approach tilts the approval process for new technologies against protecting human health and the environment. Yet it seems to me his PP goes too far in the other direction. By being all three things -- a procedural requirement, a decision rule, and an epistemic rule -- Steel's PP would double- or triple-count the harms (actual or potential) of a proposed technology. Take GMOs as an example. On Steel's interpretation, the values-in-science standard licenses precautionary science. Precautionary science will be much more likely than value-free science to evaluate GMOs as potentially harmful. Next, the report written up by the scientists researching GMOs will have to be evaluated by regulators. For the regulators to do their jobs, the general approach to regulation they employ must be directed by the governments that appoint them. If those governments are following Steel's PP, then MPP will result in those governments directing the regulators to make their decisions using some version of PP. (Governments do issue such directives. For example, in February 1981 Ronald Reagan signed Executive Order 12291, mandating that, prior to approval, all new federal regulations be subject to CBA.) Finally, through the prism of the tripod (plus proportionality), the regulators themselves will select for the evaluation of GMOs a custom-made version of PP. But this means that a report generated by applying precautionary science will be evaluated by a GMO-specific precautionary decision rule that was chosen as part of an overall approach to regulation favorable to precaution. Readers will have to judge for themselves whether such an approval process is fair. To me, it seems not to be.
Second, the applicability of the versions of PP produced through Steel's tripod will depend on the clarity of the elements plugged into the tripod. Like many of those writing on PP, Steel takes most of these elements for granted, yet they cry out for clarification. Harm conditions in particular are routinely specified using fuzzy concepts such as catastrophe, irreversibility, and irreplaceability. In developing his interpretation of PP, Steel largely ignores these elements. Since it is out of them that versions of PP get built, more work needs to go into specifying what they mean. For example, in developing his concept of proportionality, he argues that the potential effects of climate change "can reasonably be characterized as catastrophic" because "many millions of people could suffer severely harmful outcomes" (p. 31). On the other hand, he says "there is in fact little basis for the idea that substantial climate change mitigation, for instance, by means of a carbon tax, would lead to economic catastrophe" (p. 34), because the cost of an effective carbon tax is an estimated 1%-3.5% reduction in global gross domestic product by 2050. But what is it for something to be a health catastrophe, what is it for something to be an economic catastrophe, and how are we to compare these two different kinds of catastrophe? That Steel can provide one case we are all inclined to count as catastrophic and another we are all inclined to count as non-catastrophic does not show that the concept 'catastrophic' is a clear one. So it is unclear to me on what basis those applying PP will judge whether an outcome is catastrophic. The same goes for 'irreversible', a concept Steel invokes in making the historical argument for precaution (chapter 4). He says that "from the perspective of PP, then, irreversibility is a red flag that signals the necessity of careful deliberation before proceeding" (p. 74), yet it is clear from his usage that he defines 'irreversible damage' to mean 'damage from which it takes a long time to recover'. Is that the right way to understand 'irreversible damage'? If so, how long is too long? I have tried elsewhere (Manson 2007) to make some initial sense of the concept of irreversibility that is relevant to PP, but that was only a small contribution to what ought to be a much larger project necessary for putting the building blocks of PP on a sound footing. In this book, Steel does not contribute to that project.
A final complaint: in subtle ways, Steel occasionally plays favorites. For example, he says
rather than being an evenhanded approach to reducing overall risks as its supporters claim, risk trade-off analysis is clearly biased against environmental regulation. While a fair approach to unintended consequences of regulation would emphasize both potentially positive and negative effects, risk trade-off analysis focuses exclusively on the negative side of the balance. (p. 87)
What is his evidence for this? He writes
unintended positive consequences of regulation are plainly not what advocates of risk trade-off analysis want to talk about. As a result, risk trade-off analysis typically functions as a recipe for finding creative ways of arguing against regulations aimed at protecting human health or the environment. (p. 87)
Yet that is not an objection to risk trade-off analysis per se. It is only an objection to the people who heretofore have developed and applied risk trade-off analysis. Steel warns against this very sort of mistake elsewhere. He notes of one author that she "take[s] the profession of risk analysis to task for being in the pocket of the industries whose products generate the risks they analyze and for systematically failing to consider a sufficiently broad range of alternatives" (p. 110). He responds that her points "should be regarded as criticisms of common practices rather than an indictment of endeavors to quantify risk per se" (p. 111). Why does Steel not extend the same courtesy to risk trade-off analysis that he does to risk analysis?
These objections made, this book clearly makes an important contribution to the literature on the Precautionary Principle. It belongs on the shelf of any library on environmental policy and environmental ethics, and environmental philosophers will profit from reading it.
Manson, N. (2007). "The Concept of Irreversibility: Its Use in the Sustainable Development and Precautionary Principle Literatures," The Electronic Journal of Sustainable Development, vol. 1, no. 1.