Nanoethics: Big Ethical Issues with Small Technology


Donal P. O'Mathuna, Nanoethics: Big Ethical Issues with Small Technology, Continuum, 2009, 235pp., $19.95 (pbk), ISBN 9781847063953.

Reviewed by Russell Powell, University of Oxford

2010.08.08


 

Nanotechnology refers to the design and manipulation of matter at a scale between 1 and 100 nanometers (one nanometer is one billionth of a meter). Because nanoscience works with materials ranging from atoms to macromolecules, it is making important contributions to other emerging fields, including biomedical, informational, and artificial intelligence technologies. As with other technological revolutions, nanotechnology promises to deliver tremendous social benefits while simultaneously raising a host of ethical concerns, some of which may be peculiar to nanotechnology, others that it shares with sibling emerging technologies, and still others that apply to technological innovation more broadly.

Donal O’Mathuna’s Nanoethics: Big Ethical Issues with Small Technology is an intrepid attempt to make sense of this complex thicket of scientific, philosophical, and ethical issues. The book is clearly written and well researched, and it is suitable for both an interdisciplinary academic audience and the wider interested public. Particularly helpful is its combination of an extensive bibliography with copious in-text citations, which, although excessive at times, serve as a great resource and launching pad for students of nanoethics.

O’Mathuna’s thesis aims for the golden mean: It walks a line between the unbridled enthusiasm of those who claim that unregulated nanotechnology will solve all of humanity’s most pressing social problems and the hysterical paranoia of individuals who find the perils of nanotechnology so unbearable that its potential benefits should not even figure into our decision calculus. O’Mathuna embraces a cautious optimism, rejecting overly simple all-or-nothing approaches to the regulation of nanotechnological innovation in favor of an empirically sensitive and far more reasonable case-by-case approach.

As I see it, the book’s primary contribution lies not in the novelty of the substantive arguments it puts forward, but in its coherent synthesis of a wide-ranging body of scholarship, and in its use of film, literature and other forms of narrative as a means by which to explore our moral intuitions about emerging technologies. There are, however, many stylistic and substantive matters with which to take issue. For instance, the presentation is often too balanced, in that the author has a tendency to present opposing positions without explicitly evaluating them or taking a stand as to which (or neither) has the better of the argument. As a result, the author’s own view is often concealed behind the claims and quotations of other writers that are not explicitly endorsed, rejected or otherwise assessed. On a substantive level, I disagree in particular with the author’s reasoning in relation to the precautionary principle, human enhancement, and post-humanism, for reasons that I will explain in the overview below.

In Chapter 1, the author aims to settle on a reasonably precise definition of nanotechnology, given that a lack of definitional clarity can hinder ethical discussion. Yet after considering a range of definitions, he does not then go on to show how the definitional controversy threatens to confound ethical deliberation. Also noteworthy in the first chapter is the author’s introduction of a broad and, in my view, useful distinction between ‘normal’ and ‘futuristic’ nanotechnology. Normal nanotechnology is that which is being incorporated into products that are appearing on the market regularly, from miniaturized electronic devices to chocolate chewing gum. Futuristic nanotechnology, on the other hand, refers to the hypothetical products of visionary engineering that, while theoretically sound, may be a long way from entering the marketplace. Examples include nanorobots that are injected into the bloodstream to repair cellular damage and Star Trek-style replicators that use molecular assemblers to create virtually any object. The distinction between normal and futuristic nanotechnology is helpful, I think, because it acknowledges the heterogeneity of the nanotech realm, and indicates the corresponding flexibility that will be required of any regulatory framework designed to manage its risks and secure its benefits.

Chapter 2 describes the historical use of nanoparticles, the theoretical origins of nanotechnology in the work of Richard Feynman and K. Eric Drexler (among others), and the existence of ‘natural nanomachines’ in the form of sub-cellular components like ribosomes (which translate RNA into proteins that build more complex structures). These discussions show that nanotechnology is not completely without precedent, and that it is in a sense perfectly natural, since natural nanomachines evolved by ordinary, mechanistic evolution to support the basic processes of life.

Chapter 3 aims to justify the use of fiction in moral inquiry. It argues that ethics is not a purely rational activity and that reason is often driven by emotion and guided by imagination. Although the author invokes Blaise Pascal and Bertrand Russell in support of this contention, a stronger case could be made by citing the growing corpus of contemporary literature in cognitive psychology demonstrating that reason tends to play an ex post facto role in moral judgment, one that amounts to rationalizing pre-existing, emotion-modulated moral intuitions (see e.g. Haidt 2001). The response of the rationalist, of course, is to say that this simply describes how most people come to arrive at moral judgments, not how they should do so, and hence it begs the question regarding the proper role of reasoning and emotion in moral judgment. Personally, I find it doubtful that the psychological nature of moral judgment can ground a moral methodology that is based heavily on narrative. There is much to be said for the author’s consideration of futuristic scenarios, not for the purpose of assessing their feasibility, but as thought experiments designed to probe our moral intuitions; still, I worry about the potential for dystopian (or utopian) stories to play on knee-jerk emotional reactions and social prejudices. Literature like Michael Crichton’s Prey, which depicts a swarm of self-replicating nanobots on a murderous rampage, may raise the general awareness of risk, but it is unclear what role it should play, if any, in technical philosophical discussions.

Chapter 4 is devoted to the risk of unintended bad consequences that could ensue from well-intentioned developments in nanotechnology. These include harms to health, ecosystems and the environment. Simply put, smaller does not always mean safer, but neither does it always mean more dangerous. The fact that there are no generally applicable rules regarding nanotoxicity compels the author to adopt a sensible case-by-case approach and to call for greater investment in research investigating the implications of the development, production, use, and disposal of products containing nanoparticles for health, occupational safety, and the environment.

Chapter 5 is a defense of the precautionary principle as a framework in which to manage the risks associated with emerging technologies. According to widely accepted principles of rational decision theory, the precautionary approach is only a legitimate substitute for risk-cost-benefit analysis when we are in a genuine condition of uncertainty — that is, when the possible outcomes are known but their approximate likelihoods cannot be assigned. The author claims that we know so little about nanotechnology that no meaningful probability assessments are possible (p. 71). However, I find it hard to believe that for any given proposed nanotechnology, the probabilities of harm to health or the environment are completely unknown, given that nanotechnology did not emerge from a primordial technological soup, but rather was built on long-standing disciplines with extensive safety records such as physics, chemistry and biology. In any case, the relevant probabilities can usually be obtained under controlled experimental conditions. But even if the probabilities are unknowable, precaution as an action-guiding principle is only justified when there are no substantial benefits at stake (Gardiner 2006). Where a nanoproduct promises significant benefits, this application condition will not be met, and thus the precautionary principle cannot possibly govern the entire class of nanotechnology.
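To make the distinction between risk and genuine uncertainty concrete, consider a minimal illustrative sketch in Python (the payoffs and probabilities below are invented purely for exposition and are not drawn from the book): when probabilities are known, standard risk-cost-benefit analysis can rank options by expected value; when probabilities are genuinely unknown, a precautionary, maximin-style rule falls back on comparing worst-case outcomes.

```python
# Illustrative sketch of the risk vs. uncertainty distinction discussed above.
# The options, payoffs, and probabilities are hypothetical, for exposition only.

def expected_value(outcomes, probabilities):
    """Risk-cost-benefit ranking: usable only when probabilities are known."""
    return sum(o * p for o, p in zip(outcomes, probabilities))

def maximin(outcomes):
    """Precautionary (maximin) ranking: compares worst cases, ignoring probabilities."""
    return min(outcomes)

# Hypothetical net payoffs for approving vs. restricting a nanoproduct.
approve = [100, -50]   # large benefit if the product is safe, significant harm if not
restrict = [0, 0]      # forgo both the benefit and the harm

# Condition of risk: probabilities are (approximately) known.
probs = [0.95, 0.05]
print(expected_value(approve, probs))   # 92.5 -> approval favored on expected value
print(expected_value(restrict, probs))  # 0.0

# Condition of genuine uncertainty: no probabilities, so compare worst cases only.
print(maximin(approve))   # -50
print(maximin(restrict))  # 0 -> the precautionary rule favors restriction
```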

The precautionary principle has been subjected to fierce criticism in philosophical circles: weaker versions are accused of providing vacuous slogans (such as “better safe than sorry”) that fail to provide any substantive guidance in decision-making; stronger versions, on the other hand, are seen as overly restrictive, hyper-risk averse, self-contradictory, and failing to provide for the acquisition of relevant safety knowledge. In order to avoid these problems, the author proposes a precautionary approach that considers products on a case-by-case basis, takes the benefits of weaker restrictions into account, considers the risks of stronger regulations, conditionalizes on evidence, and provides for the acquisition of relevant data. This is a sound approach indeed — but one that does not resemble anything like the ‘precautionary principle’ as traditionally understood, which was meant to avoid a product-by-product assessment of risks and benefits.

In accordance with some influential formulations of the precautionary principle, the author holds (p. 80) that the burden of proof should be on the promoters of products to show that they are safe, rather than on regulators to demonstrate the likelihood of harm. However, this proposal is virtually indistinguishable from standard approaches to health, safety and environmental regulation, as it applies (for example) to many products that fall under the auspices of the Food and Drug Administration (Soule 2004). The author is right to criticize the currently weak regulation of dietary supplements containing nanoparticles, which, like all dietary supplements, do not require pre-market approval in the U.S.; but the same criticism applies to all potentially dangerous dietary supplements, whether or not they contain nanoparticles.

The author argues in this and other chapters that had regulatory authorities adopted a precautionary approach to materials like asbestos and Agent Orange, many of the harms associated with these products would have been avoided. However, pointing to cases in which the risks and benefits of a novel technology were miscalculated does not in itself justify a sweeping overhaul of the regulatory regime. It is simply implausible to think that a single, overarching principle of risk management will be able to accommodate the complexities of commercial regulation. To the extent that the precautionary principle is modified to accommodate this diversity, it loses its supposed virtue of simplicity and is no longer the same regulatory beast.

Chapter 6 focuses on considerations of justice, examining the implications of nanotechnology for developing countries and the extent to which it could reinforce or exacerbate existing inequalities. Nanotechnology could improve the quality of life in impoverished countries by providing clean energy and water, super-efficient agriculture, and affordable information technology. However, there is a risk that nanotechnologies will, like biomedical innovation more generally, be geared toward consumers in developed countries. As the author points out, due to a lack of financial incentives, there will likely be underinvestment in nanoproducts that are desperately needed in the developing world, despite the fact that diseases of poverty constitute an overwhelming fraction of the world’s disease burden. Citing Thomas Pogge (p. 96), the author argues that this amounts to global injustice because the suffering of the poor is avoidable through modest social and political reforms. Here the author could have made the stronger claim, equally attributable to Pogge (2005), that the duty to help the world’s poor is in fact a negative one, since individuals of developed nations are affirmatively harming people of developing countries and directly benefiting from activities that contribute to their suffering.

Chapter 7 examines the potential medical applications of nanotechnology, including (inter alia) improved drug delivery systems, the active targeting of diseased structures at cellular and sub-cellular levels, functional implants, and highly personalized medicine via ‘lab-on-a-chip’ technology.

Chapter 8 explores the ethics of using nanotechnology not to restore health but to enhance human capacities. The author’s argument against enhancement rests on the claim that the legitimate goal of medicine is to treat or prevent disease, not to improve human lives beyond the parameters of normal health. Yet the author does precious little to defend this claim. One might reasonably think that the aim of modern medical science is to improve human life through biomedical intervention, with the restoration of ‘normal’ function being central (but nonetheless instrumental) to achieving this goal. Medical interventions are not carried out for the sake of restoring function to a non-conscious body, but rather on behalf of a person and in furtherance of her ability to live a meaningful life. The daunting conceptual hurdles confronting any attempt to define normal biological function are well known in theoretical biology and the philosophy of science (see e.g. Hull 1986), and even if these obstacles could be overcome, showing that there is something intrinsically valuable in “the normal” is even more problematic. There may be reasons that medicine should, generally speaking, prioritize treatment over enhancement, but these need not be based on the questionable assumption that there is an intrinsic moral difference between the two.

The author’s critique of enhancement echoes that of other bioconservatives, such as Leon Kass and Michael Sandel, in attacking the motives and character traits of individuals pursuing enhancement technologies. The target here is not the unintended consequences of laudable aims, but the “frenzied pursuit of enhancement” (pp. 155, 195) that is allegedly driven by pernicious attitudes, values, and dispositions. These include the futile pursuit of perfection, the quest for immortality, the Promethean desire to be masters of our own destinies, a selfish and narcissistic drive to outcompete others, and the naïve view that technology will solve all of the world’s problems (see pp. 153-7). Rather than pursuing enhancements, we should have the courage and wisdom to recognize that some things are out of our control, accept the “givenness” of suffering, feel gratitude toward our caretakers and society, and realize that as gifts “our lives are not only ours to do with as we please” (pp. 155-6).

So it is that in a single rhetorical stroke, the author impugns the motives and character of countless scholars, biomedical professionals, and potential consumers who promote or defend the enhancement program. He rejects biomedical enhancements across the board on character grounds, without considering the social and psychological consequences of enhancement or the potential benefits at stake. As Allen Buchanan states (forthcoming), “The claim that the pursuit of biomedical enhancement is the pursuit of perfection is a sweeping, wildly implausible generalization about human motivation — and an extremely uncharitable one.” Even if some people pursue enhancements out of a desire to achieve perfection, what evidence is there to believe that everyone who seeks to improve a capacity does so for this reason? Moreover, as Buchanan also points out, the worry about bad character motivations is simply that — a concern to be taken into account — not an all-things-considered objection to the enhancement enterprise.

Equally unfair is the author’s association of the pursuit of enhancement with narcissism and apathy toward the suffering of others (pp. 157, 170). Many of the scholars who champion or are otherwise sympathetic to biomedical enhancement are also powerful advocates for affirmative duties to aid the global poor, and have carefully considered the distributional dimensions of biomedical innovation (see e.g. Buchanan et al. 2001).

In addition, the reader is left wondering about the social implications of the proposed ban on enhancement and the restricted role of medicine on which it is based. For instance, how would the author address the concern (originally raised by Buchanan et al. 2001) that a predictable result of prohibiting enhancements is that they will come in through the “back-door” (as exemplified by the widespread off-label use of cognitive enhancing drugs by college students)? Would it not be better to regulate them in the open, rather than to push them underground where their effects cannot be monitored?

Chapter 9 is an attack on post-humanism, a philosophy centered on the technological enhancement of human cognitive, physical, emotional, and psychological capacities. Nanotechnology could play an important role in facilitating some of the central goals of post-humanism, including life extension, cognitive enhancement, and space travel. The author states that post-humanism, like the enhancement project, is premised on the “dangerous” liberal assumption that people should be free to choose whatever lifestyle they wish so long as they are not harming others (p. 167). He also criticizes post-humanism on scientific grounds. The life-extension goals of post-humanism are biologically unachievable, he claims, because even if we were to cure all post-reproductive diseases (such as heart disease and cancer), this would only minimally increase lifespan. The author is correct to stress that senescence is a more complicated phenomenon than either of the above diseases, but he is wrong to suggest that the upper biological limit cannot in principle be modified. Evolutionary biology shows that life history, including patterns of senescence and their underlying mechanisms of cellular repair, is under genetic and selective control. If natural selection can radically alter patterns of senescence by modifying the genome when it is conducive to fitness, then (in principle) intentional genetic modification can do the same — not so that the human body is perfect forever, but so that it is better for longer.

The author asserts (pp. 174-175) that post-humanists who contend that we need to take control of our own evolutionary destiny are advancing an incoherent position, since evolution is a slow, blind process that is driven by random mutations and their fitness consequences, whereas the interventions contemplated by post-humanists exhibit none of these characteristics. I am afraid that here the author simply misses the point: ‘evolution’ by definition refers to a change in the distribution of genes or genetically heritable traits over generational time — it is agnostic to the mechanism underlying this change. The whole idea behind actively intervening in the evolutionary process is that mechanistic evolution is an extremely unreliable and often morally unacceptable means for producing outcomes consistent with human good. Post-humanists have a goal in mind, and they see natural selection as an ineffective means by which to get there. Whether this goal is admirable is an entirely separate question.

Like the 2003 report of the U.S. President’s Council on Bioethics, the author argues in favor of a strong deference to the “wisdom of nature” and compares the products of natural selection to those of a master engineer (p. 174), an analogy which he uses to justify skepticism of biomedical enhancement, especially germ-line modification. For reasons of space, I can only point to a forthcoming paper in which Allen Buchanan and I present a detailed argument showing that the “master engineer” analogy stems from pre-Darwinian conceptions of the natural world that have been thoroughly refuted by evolutionary theory but which continue to distort assessments of risk associated with genetic modification technologies.

To conclude, I think that the first seven chapters of the book offer a relatively balanced overview of the big ethical issues with tiny technology, while the arguments in chapters 8 and 9 are sometimes unfair, ad hominem, and fallacious. At times the author uses a rhetorical pickaxe where a scalpel (or better yet, a nanosurgeon) is better suited to the task. One important ethical issue that was not discussed is the problem of “dual use”: the possibility that nanotechnology developed for civilian purposes could be co-opted for less benevolent ends, such as the manufacture of highly destructive weapons by terrorists, rogue states, or the governments of developed states for allegedly ‘defensive’ programs. Indeed, dual use may be the most serious risk associated with nanotechnology, synthetic biology, and other related disciplines. Lastly, although no book on emerging biotechnology is apparently complete without vague admonitions about playing God and “tempting fate” (p. 177), I think the author could have presented these notes of caution in a more critical light.

References

Buchanan, A., Brock, D.W., Daniels, N., and Wikler, D. (2001). From Chance to Choice: Genetics and Justice. Cambridge: Cambridge University Press.

Gardiner, S.M. (2006). “A Core Precautionary Principle.” Journal of Political Philosophy 14(1): 33-60.

Haidt, J. (2001). “The emotional dog and its rational tail: A social intuitionist approach to moral judgment.” Psychological Review 108: 814-834.

Hull, D.L. (1986). “On Human Nature.” Proceedings of the Philosophy of Science Association 2: 3-13.

Pogge, T. (2005). “Severe Poverty as a Violation of Negative Duties.” Ethics and International Affairs 19(1): 55-84.

Powell, R. and A. Buchanan (forthcoming). “Breaking Evolution’s Chains: The Promise of Deliberate Genetic Modification in Humans.” Journal of Medicine and Philosophy.

President’s Council on Bioethics (2003). Beyond Therapy: Biotechnology and the Pursuit of Happiness. Washington, DC: President’s Council on Bioethics.

Soule, E. (2004). “The Precautionary Principle and the Regulation of U.S. Food and Drug Safety.” Journal of Medicine and Philosophy 29(3): 333-350.