Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research


Kevin C. Elliott, Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research, Oxford University Press, 2011, 246pp., $65.00 (hbk), ISBN 9780199755622.

Reviewed by Carl F. Cranor, University of California, Riverside

2011.07.28


Trace amounts of selenium, a naturally occurring substance, are needed for good health. Too much selenium, or chronic exposure through food, water, or soil, can disrupt hormone function, impair the immune system, kill kidney cells, and contribute to bronchitis, asthma, or diarrhea. With enough long-term exposure it can produce neurotoxicity, even leading to amyotrophic lateral sclerosis, Lou Gehrig's disease. In the extreme it can cause death. Ordinarily, we would seek the advice of physicians to determine how much selenium to ingest. However, there are toxicologists who advocate "hormesis," the idea that "some normally toxic chemicals … [have] seemingly beneficial low-dose effects." (9) Instead of consulting our physicians about the amount of selenium we ingest, should we leave it to local metal industries, coal- or oil-fired power plants, hazardous waste sites, or public health agencies to decide how much selenium we should be exposed to through the environment, so that we receive healthful doses while avoiding toxic amounts?

While the selenium example is fanciful, Kevin Elliott in "Is a Little Pollution Good for You?" uses the example of hormesis both to raise questions about this scientific claim and to discuss "how societal values can be more effectively incorporated into a number of judgments associated with policy-relevant environmental research." (5) Despite the main title, the central goal of the book is captured in the subtitle: to discuss whether societal values should be incorporated into scientific research; if so, which ones; where and when these should be utilized; and how this should be done. His developed view is then applied to hormesis, endocrine disruption and multiple chemical sensitivity.

One claim (H) for hormesis is that for some biological systems, "some low-dose toxicants and carcinogens exhibit hormesis on some biological endpoints." (11) A second claim (HG) is that "hormesis is widely generalizable across biological models, chemical classes and endpoints." A third claim (HP) is the hypothesis that hormesis is the predominant dose-response model, accurately representing the low-dose effects of toxicants more frequently than alternatives do. (11) A fourth claim (HD) is that "a strong case can be made for the use of hormesis as a default assumption in the risk-assessment process." (12) While H has sufficient plausibility that most researchers accept it, the other three clearly do not follow from H and many scientists disagree with them. (12)

In the early chapters Elliott carefully clarifies some of the ways in which societal values could influence various scientific and policy assertions that appear to have been made concerning hormesis. He finds that various challengeable decisions were made in choosing and designing studies, developing scientific language for hormesis, evaluating studies and applying the research. These categories of decisions guide his more general inquiry into the role of social values in scientific research.

Elliott argues that various non-epistemic values can and should affect which scientific questions are asked and pursued. Social values can also influence how scientific research is applied to public policy problems or technological developments. Values of character affect how scientists treat colleagues and graduate students in the pursuit of research. He argues, however, that such values can also affect scientific inferences from data to theories or hypotheses. (60) While there is considerable agreement about the first three influences of extra-epistemic values, other philosophers of science have pointed out the role of researchers' values even in the fourth case. (61) Elliott then argues against a narrow conception of value-free epistemic considerations by noting the "gaps" between data and the conclusions inferred from them (what he calls the gap argument) as well as the possibility of errors in inference (the error argument). (62-70) At times some arguments here and in later chapters can be difficult to follow because of Elliott's careful attention to numerous objections and replies. One can track the arguments, but it takes careful reading.

Moreover, there are three institutional "bodies" within which social values can be incorporated into scientific research for public policy purposes in order to ensure more "trustworthy public-policy guidance from scientific experts." (190) Chapters four through six discuss the bodies that are central to this goal. These are, first, the body of scientific knowledge itself, which should be protected as far as possible from interest-group influence; next, the scientific advisory bodies that guide policy makers, and how to create the best ones for public purposes; and, finally, "the bodies of experts themselves," which assess the science and should act in accordance "with an ethics of expertise" for the problem at hand, in order to provide the best interpretations of what the research shows. (190)

He argues that there is an important role for social values in policy-relevant scientific research for several reasons. For one thing, "scientists have ethical responsibilities to consider the major societal consequences of their work and to take responsible steps to mitigate harmful effects that it might have." (191) For another, because research in this area often faces circumstances in which the scientific data are uncertain or incomplete, scientists must decide what standard of proof to require before drawing conclusions from the research. A third consideration is that because it may be socially harmful or impracticable "for scientists to respond to uncertainty by withholding their judgment or providing uninterpreted data to decision makers," there are ethical reasons for researchers to take a stand on what uncertain data show, because risks to policy decisions can arise when they do not. (55) Thus, he argues that four kinds of methodological and interpretive decisions in science (exhibited in the hormesis case) should not be "entirely insulated from societal values." While much of this seems sensible, he clearly favors protecting the public's health. Would he also permit the incorporation of various libertarian social values, which are now in vogue, into science that would minimize health protections to enhance economic activity? He offers some considerations that would moderate extreme social policy views (56), but is his theory too open-ended about social values?

How should his suggestions function in policy-relevant research? Beyond the hormesis example, he applies his view to both multiple chemical sensitivity and endocrine-disruption research. Consider the example of endocrine disrupters.

Important value judgments concerning endocrine disrupters fall into four categories. The first is choosing research topics and designing studies. Not surprisingly, advocates of better environmental protections urge one set of values, while industries likely to be affected by findings of adverse effects advocate a different set. Elliott argues that Theo Colborn's "passionate desire to investigate environmental problems" led her to recognize connections between seemingly unconnected phenomena in wildlife and humans; perhaps she overemphasized them in some places, but she was more cautious in others. (176) In contrast, industry groups seemed to choose and design studies that minimized adverse effects. He suggests that the hormesis studies might have been funded to deflect attention from the low-dose adverse effects of synthetic endocrine disrupters. (176) The divide between the parties shows up sharply in the studies investigating endocrine disruption: 90% of 112 publicly funded studies found adverse effects, while none of 11 industry-funded studies did.

Choice of terminology and categories of research also reflects divergent values. The U.S. EPA defined an endocrine disrupter as any "exogenous agent that interferes with" the action or transport of natural hormones in the body (177), while the Organization for Economic Cooperation and Development, the European Union, and the World Health Organization all required harm to a whole organism or its progeny from exogenous agents that change endocrine functions. (177) Elliott suggests that these differences in the conception of endocrine disruption could lead to differences in who bears the burden of proof for identifying harmful chemicals: industry under the EPA account (because the EPA definition requires only interference with the endocrine system), or public health agencies under the OECD account (because that definition requires showing harm to an organism or its progeny). The importance of definitions or characterizations is not new to philosophers, but Elliott nicely shows their consequences and how they can be controversial both for the hormesis thesis and for endocrine disruption. (178)

A third, and not surprising, area of controversy over social values concerns the interpretation and evaluation of completed studies. Are sperm counts in men declining, at least in part because of exposure to endocrine-disrupting chemicals? Whatever view one takes of this question, Elliott points out that there is a range of methodological judgments that must be addressed in evaluating the results. (179) However, it is not clear to what extent these judgments reflect "social value" judgments as opposed to more epistemic assessments of the studies and their results.

Finally, how should scientific research be applied to public policy? An obvious concern is "how much evidence to demand for endocrine-disrupting effects on humans before taking particular actions in response to the phenomenon." (179) There has been considerable debate concerning this. Environmental health advocates urge that one need not wait until one has nearly certain scientific evidence of a problem before taking action, while industry advocates argue that there should be greater scientific certainty for the same action.

Some distinctions would assist this debate. For one thing, there is a difference between postmarket and premarket legal contexts. A postmarket legal context is one in which a product has entered the market without any premarket toxicity testing, so that a scientific and legal case must be made to reduce exposures or withdraw it from the market.[1] This is the current legal context for the vast majority of industrial chemicals (pesticides, pharmaceuticals, and, to some extent, new food additives excepted), and it is the one Elliott is addressing. Environmental health advocates wish to reduce exposures on the basis of less than full scientific certainty, in order to protect the public health as soon as an "appropriate level of evidence supports" doing so (my gloss), while industry advocates are likely to insist on greater scientific evidence, because their products are at stake and they can continue to profit from them as long as they remain on the market. Environmental health advocates recommend various forms of precaution while industry advocates urge greater scientific certainty; precaution takes the form of trying to find appropriate data burdens to better protect the public.

Under the premarket laws that apply to pharmaceuticals, pesticides, and new food additives, products may not enter the market without toxicity testing. In this legal context, the positions would likely be reversed: public health advocates would want greater assurance about the safety of products before exposures occur, while some affected industries might argue for a lesser standard of scientific evidence in order to place products on the market more quickly. This occurred with new pharmaceuticals with life-saving potential beginning around 1997, but at least some of them have since been found to pose unacceptable risks and have been withdrawn.

In addition, in postmarket contexts there will likely be many different courses of action that would do something, although perhaps not the best thing, to increase health protections. Merely keeping registries of sperm counts should be comparatively uncontroversial, although in political contexts industry advocates sometimes object to having even preliminary information that might smudge (if not blacken) their products. One could track populations known to have high exposures to endocrine disrupters, e.g., workers. Neither of these actions by itself removes substances from the market or requires the reduction of exposures. Thus, administrators could have an escalating series of actions to take toward suspect substances, based on different standards of proof.

Similarly, in premarket contexts products could receive approval for commercialization but have conditions placed on their sale and use if there were slight but not conclusive evidence of toxicity. Some products could be monitored more closely than others based on evidence of toxicity from premarket tests. Other products might be restricted to a select market, e.g., only to closed systems, if they had benefits but stronger evidence of toxicity. And, of course, there are other possibilities. The overall point is that the law establishes who has the burden of proof, and the assignment of burdens of proof determines where pressure from affected parties will come from. Demands for data can have quite different effects in the different contexts. More generally, however, in either postmarket or premarket contexts regulatory responses can be tailored in a more fine-grained way, as is appropriate to the toxicity of a product, if the product is of sufficient social importance.

Beyond issues of choosing research topics, characterizing phenomena, interpreting data, and applying research to policy issues, Elliott argues that writers of more popular pieces especially should follow what he has called the "consent principle" and "make controversial judgments clear." This permits readers to make a more autonomous judgment about a policy view when the data conflict. (186) It applies to authors of books such as Our Stolen Future as well as to journalists and other popular conveyers of scientific information. Of course, if scientists deliberately design studies to produce a particular outcome, e.g., false positives, they are manipulating information in a way that precludes autonomous assessment of it. Elliott is concerned that industry-funded studies of both bisphenol A and atrazine may suffer from these deficiencies.

Kevin Elliott has taken on a particularly controversial (many scientists would say dubious) scientific view, namely hormesis, and used it to identify clearly the various places where non-epistemic social values can and should be incorporated into core scientific activities that bear on public policy issues. This comprehensive, thoughtful, and careful book should now be part of the dialogue about social values in science-policy discussions.


[1] Carl F. Cranor, Legally Poisoned: How the Law Puts Us at Risk from Toxicants (Cambridge: Harvard University Press, 2011), pp. 135-136.