Sven Nyholm’s book is about the ethics of human-robot interaction, which he neatly breaks down into two questions. On the one hand, how should robots be designed in order to behave appropriately around people? On the other hand, how should people conduct themselves around different kinds of robots? (cf. p.4) Unlike very narrow approaches that consider robots in isolation, the book widens the focus to include human behavior and human-robot interaction. The book mostly considers robots that have stirred wide public attention, such as the humanoid robot Sophia, who famously received honorary citizenship of Saudi Arabia. Other examples are sex robots and “robotic looking” (8) robots such as the commercially available robot Pepper. The first half of the book also discusses a number of other typical examples, such as self-driving cars.
The arguments presented in the book are well-known from recent discussions of robot and AI ethics. Those familiar with these discussions will learn little new but may find it interesting to rehearse the selected lines of argument presented, to rethink the examples discussed, or to learn Nyholm’s thoughts on them. In that case, however, I recommend considering Nyholm’s later papers. Those who are not familiar with the discussion will benefit from a selection of accessible arguments concerning robot ethics. The book does not presuppose any philosophical background knowledge. It is full of vivid examples, provides simple definitions and arguments, and is written in a very clear manner. It does not demand much from the reader apart from some patience to work through lines of argument that at times may seem repetitive. The book is hence probably most rewarding for students with a sustained interest in robot ethics who want to study analytical arguments concerning human-robot interaction.
The book’s focus on human-robot interaction makes sense not only because such interactions are becoming increasingly common, but also because it may contribute to overcoming the traditionally widespread obsession with issues that may one day arise from human-level or superintelligent AI. By now it is clear that numerous important issues are arising well before the advent of artificial intelligence that matches or surpasses human intelligence. Nyholm writes that “At the present, it is more pressing to ask how people should conduct themselves around currently existing robots or robots that might be in our midst within the foreseeable future” (199, cf. p.146). Nyholm also recognizes that “most real-world robots” do not look like humans, nor even like “paradigmatic robots” (8), which bear only a remote resemblance to humans. Nevertheless, the book mainly concentrates on humanoid robots. It starts with a detailed description of the controversies around the humanoid robot Sophia, and the second half of the book is dedicated to humanoid robots.
Nyholm specifies his approach with respect to “(a) the differences in the kinds of agency human beings and robots are capable of and (b) people’s tendency to anthropomorphize robots” (4). I think that both topics are important in the context of the contemporary discussion of robots. Human agency and robot agency are indeed two very different and often insufficiently distinguished kinds of agency, and people, including some authors writing on robots, do tend to anthropomorphize robots. Hence, I find both the topic and Nyholm’s approach promising at first. Regrettably, however, there are also fundamental shortcomings that prevent the book from living up to its potential.
Nyholm claims that differences regarding agency are founded in the different natures of robots and humans: “the inner life of a robot is robotic in nature. It should not be confused with the inner life of a human being or any nonhuman animal” (201). The book explicitly discusses the nature of robots in chapter 1.3 (What is a “Robot”?) and that of humans in chapter 1.4 (What is a Human Being?). The definitions of both remain rather vague, however. With regard to robots, Nyholm points to various general definitions without settling on one. He concludes that “Throughout most of this book, it will not matter greatly whether it is possible to give a precise definition that captures all things we might want to label ‘robots’” (11). But even if a precise definition is impossible, that does not mean that conceptual clarifications are superfluous. For instance, it would have been helpful to clarify the relation of robots to AI, another concept that Nyholm leaves vague. Nor does Nyholm discuss possible borderline cases, such as certain kinds of chatbots, which would have helped explain what he finds essential to robots.
Regarding humans, the book goes beyond concepts that describe humans as essentially patterns of information, and instead holds that humans are embodied beings. By ‘embodied,’ Nyholm means that we are beings “with not only human minds, but also human bodies,” or, as he also puts it, “with our distinctive types of bodies, brains, minds, and biological and cultural features” (12). While this vague definition is unlikely to meet with opposition from anyone, it does little to clarify what is at stake. Given that Nyholm recognizes the importance of clearly distinguishing between robots and humans, I find that the book could have profited from a more careful discussion of the particular characteristics of robots and humans.
Further clues to the nature of humans can be derived, however, from Nyholm’s description of problematic features of the human mind. As mentioned, the book ascribes central importance to the tendency to anthropomorphize robots and features ‘anthropomorphism’ in its subtitle. Nevertheless, the last time the word anthropomorphism, or any word with the same root, appears is on p.6. Nyholm does, however, name further “key aspects of our minds that complicate our interaction with robots and AI significantly” (20). These are “so-called mind-reading, dual processing, our tribal tendencies, and (for lack of a better term) our laziness” (16).
The concept of mind-reading picks up some of the thoughts introduced regarding the tendency to anthropomorphize robots. It refers to the tendency to “attribute mental states and other mental attributes” not only to people but also to animals and to things such as robots (16). Nyholm here makes a convincing case for the claim “that human beings are naturally inclined to think in ways that explain perceived behavior in terms of the sorts of mental states that we usually attribute to fellow human beings when we engage in mind-reading” (138). He does not argue against the possibility of robotic minds but resists the tendency, which he identifies in some of the literature, to ascribe human-like minds to robots (146).
While the whole sixth chapter is dedicated to mind-reading, the other key aspects receive less attention. “Dual processing” is derived from Daniel Kahneman’s distinction between, on the one hand, quick, intuitive, and emotional mental processes and, on the other hand, slow, deliberative, and effortful mental processes. Nyholm holds that conflicts may arise between these two “systems,” such as that between the spontaneous impression that robots are persons and one’s reasoning that they are not. “Tribal tendencies” refers to Joshua Greene’s diagnosis that humans tend to think in “in-group and out-group distinctions,” which may lead to polarization (17). Nyholm borrows the term laziness from René van de Molengraft’s “lazy robots.” For Nyholm, however, the term refers not to robots but to the human tendency to save energy and resources whenever we can (18). Laziness makes humans behave very differently from robots, which can inhibit human-robot interaction (18). On these pages there is only minimal discussion and critical evaluation of these key aspects, and in the rest of the book they are presupposed as given.
Although Nyholm does not elaborate on these problematic features of the human mind, it seems plausible that all of them can impact human-robot interaction. While the ad hoc character of this collection of features could have been remedied by later clarification in the context of individual cases, Nyholm unfortunately does not do so in a systematic manner. For instance, he refers to tribalism only once more, when he writes that “as it happens, Sophia is also awaking our tribal tendencies, it seems” (21). How exactly tribalism distorts human-robot interaction remains vague, as does the question of what one ought to do about it.
Nyholm proposes, however, an “overarching argument” (20) in favor of the conclusion that “for our sake, we need to either adapt the AI and robots that we create to fit with our human nature (our human minds), or else we need to try to adapt our human minds to make them better-adapted to interact with robots and AI” (16). I find the conclusion very plausible, since the effective use of technology nearly always involves a certain degree of adjustment to the technology, and robots and AI in particular raise questions that defy common ways of thinking.
Nyholm repeatedly reminds us that human minds and ethical and legal frameworks were developed long before there were any robots and AI (14, 15, 16, 20, 23, 35, 36, 137, 199), which is also the first premise of his argument (15). However, despite the importance he attaches to this observation, he does not elaborate on how this development has led to distortions of our view of robots and AI, such as those due to the human characteristics he delineates. I find this rather disappointing, since a clarification of how the problematic features Nyholm diagnoses evolved would have been helpful not only for understanding the conditions of the misunderstandings but also for identifying possibilities for overcoming them.
With regard to the overarching argument, Nyholm is right that “the exact details of how different aspects of our minds evolved over time do not matter greatly.” The reason he gives is that the argument he sketches “is on a very general level” (15), but one may add that, on that very general level, the first premise is unnecessary altogether, since the other two premises, together with one assumption, suffice. The second premise is that human minds and ethical and legal frameworks are either bad or not ideal at dealing with robots and AI (15); the third is that this lack of adaptation can harm us or put us at risk when dealing with robots and AI. Together with the assumption that we should protect ourselves from the risk of harm, these premises yield the conclusion that we need to adapt either AI and robots to human minds, or human minds to AI and robots. As soon as we leave this very general level, however, and ask how we should adapt to robots, details begin to matter. For instance, adaptation may not mean becoming robot-like, but simply learning how to interact with specific robots in view of their specific capabilities.
While in principle I agree with both the conclusion and the argument on that very general level, I do not see how they connect with the descriptions of individual cases and the arguments put forward in those contexts. Making these connections explicit would not only clarify the relation between the very general argument and the discussions of individual cases, but also sharpen the discussion of those cases. A more systematic approach would be especially important with regard to humanoid robots, where it is particularly tempting to simply follow the lure of controversial instances and lose sight of the bigger picture. Unlike some other contributions to the debate, however, the book sketches a number of valuable arguments concerning some of the most widely discussed robots in a rather sober, reasonable, and balanced way.
Many thanks to David Winkens for his insightful comments on a draft of this review. Work on this review was supported by funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 754340.