The Machine Question: Critical Perspectives on AI, Robots, and Ethics


David J. Gunkel, The Machine Question: Critical Perspectives on AI, Robots, and Ethics, MIT Press, 2012, 270pp., $35.00 (hbk), ISBN 9780262017435.

Reviewed by Colin Allen, Indiana University

2013.02.13


What is "the machine question" and does it have an answer? David J. Gunkel summarizes his main finding on the penultimate page of the book, writing that "the machine institutes a kind of fundamental and irresolvable questioning" (p. 211) -- in other words, being irresolvable, there is no answer. But what was the question? In his introduction, Gunkel gestures towards a burgeoning literature in robot ethics and machine morality, and asserts that before this literature "advances too far", it is necessary to address three questions: "Namely, what kind of moral claim might such mechanisms have? What are the philosophical grounds for such a claim? And what would it mean to articulate and practice an ethics of this subject?" (p. 2). If the first of these is the machine question, the other two constitute the methodological spiral within which the primary question becomes unanswerable.

Gunkel's methodology is explicitly (albeit somewhat apologetically) deconstructionist. The book comprises three long chapters of 78, 66, and 58 pages respectively. The first, titled "Moral Agency", surveys some of the aforementioned burgeoning literature. Gunkel's critique of this literature is that it has treated the topic of artificial moral agency too much in isolation from questions about the moral subjecthood, or 'moral patiency', of machines -- that is, whether machines might have some claim to status within the moral community. The second chapter, "Moral Patiency", develops the thesis that the machine question cannot be adequately addressed using the traditional distinction between moral agents and moral patients. The goal of the third chapter, "Thinking Otherwise", whose title is more than a nod to Levinas, is to investigate the concept of the machine, which is found to lie permanently at the boundary between those inside the moral community and the others outside it. In Gunkel's hands, the practice of articulating a machine ethics turns out to be a kind of philosophical perpetual motion machine -- the "irresolvable questioning" that "the machine institutes" -- driven by ceaseless, recursive questioning of the "alterity" of "the machine". Gunkel regards machines (by which he really means the idea of a machine) as an inevitable fabrication of any philosophy that draws a line between "us" and "them". "The machine," he writes, "therefore, exceeds difference . . . It is otherwise than the Other and still other than every other Other" (p. 207).

Passages such as this are likely to cause readers with analytic sympathies to set the book aside for a rainier day, if not to throw it on the pile for the used-book buyer. Gunkel, however, recognizes the need to sell his approach to a skeptical readership. He is forthright about the likely effect of invoking Derrida and deconstruction, and he sympathizes. His avowed goal is nevertheless a Derridean one of "shaking things up" while not stopping there, of promoting "the irruptive emergence of a new concept" (p. 10). In shaking up the traditional distinction between moral agency and moral patiency, he aims to provide "another thinking of patiency" -- a primordial patiency that is "not derived from or the mere counterpart of agency" (p. 11). The result of this pursuit is, he admits, no practical advice -- that is, no answers to the questions that motivated the books and articles that in turn motivated his.

Those books attempt to address immediate questions about how, and whether, it is possible to improve the software control of machines that are within reach of current technology but operate outside direct human supervision, so as to make them more likely to produce ethically desirable outcomes. Curiously, however, Gunkel is silent on the specific issues driving much of this literature, and he ignores contributions by roboticists themselves. Missing, for example, is any reference to roboticist Ronald Arkin, who proposes a control architecture for battlefield robots that he claims will make them less prone to war crimes than human soldiers (Arkin, 2009), as well as to Arkin's critics such as Noel Sharkey (2008), who suggests that the perceptual capacities of battlefield robots are now, and for the foreseeable future will remain, too crude to support some of the basic combatant-noncombatant discriminations that are necessary for ethical outcomes on the battlefield. Whatever one's view of this matter, the parties to this debate concern themselves with the actual capacities of engineered systems. In contrast, Gunkel offers a conception of "the machine" that is a philosophical will-o'-the-wisp, always escaping any attempt to characterize what lies inside and outside human moral consideration. He recognizes that this inconclusive outcome will frustrate many. But, he suggests, that's the nature of ethics.

I have concentrated, so far, on the broad arc of the book, but what about the arguments along that arc? These are often hard to discern. Gunkel criticizes "the vast majority of research in the field" for sidestepping the question of whether machines are moral patients. For example, my coauthor Wendell Wallach and I (Wallach and Allen, 2009) are accused of an "investigative bait and switch" for raising the issue of the moral status of machines only to "immediately recoil from the complications it entails" (p. 105). However, Gunkel's assertion that the concept of an agent that is not a patient is "unrealistic" is unconvincing on its own. Examples of agency without patiency abound: one can be a good surgeon without ever having undergone an operation, or a good literary agent without even having the capacity to write a good book. What is needed, then, is a specific argument for why morality is the kind of domain in which agency without patiency cannot exist. The accusation (borrowed from Floridi and Sanders, 2004) that positing such an agent is akin to positing a supernatural entity that can affect but cannot be affected by the world (ibid., p. 377, n. 1) seems hyperbolic. Admittedly, our current machines, built upon existing robotic and AI technology, are not capable of suffering or any other ethically significant condition. It does not follow that they cannot engage in behaviors that are ethically significant and that should therefore be subject to appropriate controls, some of them built directly into the software. If one declines to call this "artificial moral agency" because of some metaphysical claims about the nature of agents, nothing really important follows. Questions about how to ensure that such machines behave within the ethical constraints we wish them to respect still stand -- even if Gunkel would dismiss such questions as dangerously anthropocentric.

If one is inclined to grant Gunkel's insistence that machine moral patiency must be part of any discussion of artificial moral agency, what then? In his second chapter, Gunkel turns to the animal ethics literature, where he discerns much of use, but also the residue of a "Cartesian strategy" of policing boundaries that, despite becoming ever more inclusive, continue to characterize the "excluded others as mere mechanisms" (p. 156). Environmental ethics pushes the boundaries of inclusion yet further, while still excluding machines. Gunkel finds value in Floridi's "information ethics" (Floridi, 1999) for its willingness to include machines within the ethical ambit, but he ultimately rejects information ethics for being "part and parcel of a totalizing, imperialist program." I make no comment on Floridi's designs on the title of "Emperor". But here another postmodernist meme is central to Gunkel's thinking: moral inclusion is ultimately a matter of decision about who is "us" and who is "other". Thus, in his view, we cannot look to properties of the machines to determine their suitability as moral agents, patients, or members of the moral community; we must instead look to ourselves. In the final sentence of his final chapter, he writes, "The machine puts 'the questioning of the other' (Levinas, 1969, p. 178) into question and asks us to reconsider without end 'what respond means' (Derrida, 2008, p. 8)" (p. 216).

I have already signaled my unwillingness to take the initial step of entangling questions about the moral patiency of machines with the immediate, non-science-fictional questions raised by attempts to design and build artificial moral agents using current technologies for current applications. Keeping a keen eye on the actual technologies under discussion and their immediate applications prevents the discussion from advancing "too far", and it is much better than trading those applications for an eternal quest after an ever-shifting understanding of "the machine". No doubt this will appear flat-footed to those who find it exciting to attempt to liberate philosophy and cognitive science from all traces of Cartesianism, including its materialist and mechanist progeny.

Nevertheless, there is something right about Gunkel's recognition that one can hardly consider the questions of machine morality without being led to more fundamental methodological and meta-ethical issues. Despite my dissatisfaction with Gunkel's approach and the inconclusive outcome of his investigation, he has succeeded in connecting the ethics of robots and AI to a much broader ethical discussion than the machine ethics literature has represented to date. Whether this achievement is sufficient to reward the investment of a rainy day spent reading the book is a question that cannot be answered categorically, and one that this reviewer is in no position to answer for others.

ACKNOWLEDGEMENT

I am grateful to Tony Beavers for reading through a draft, catching numerous small errors, and helping me to avoid some common misunderstandings of Levinas.

REFERENCES

Arkin, R. (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton, FL: Chapman and Hall.

Derrida, J. (2008). The Animal That Therefore I Am. Marie-Louise Mallet (ed.). Translated by David Wills. New York: Fordham University Press.

Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics. Ethics and Information Technology 1:37-56.

Floridi, L. and Sanders, J.W. (2004). On the morality of artificial agents. Minds and Machines 14:349-379.

Levinas, E. (1969). Totality and Infinity: An Essay on Exteriority. Translated by Alphonso Lingis. Pittsburgh, PA: Duquesne University Press.

Sharkey, N.E. (2008). The ethical frontiers of robotics. Science 322:1800-1801.

Wallach, W. and Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Cambridge, MA: MIT Press.