Living with Robots


Paul Dumouchel and Luisa Damiano, Living with Robots, Malcolm DeBevoise (tr.), Harvard University Press, 2017, 252 pp., $29.95, ISBN 9780674971738.

Reviewed by Craig DeLancey, SUNY Oswego

2018.02.03


Every day I get a call from Stacey. She calls me from different numbers, so I can't avoid her. "Hello . . ." then a clatter of the phone being dropped. A pause while she picks it up. She gives a nervous, apologetic short laugh, and then says, "Oh, sorry, I dropped the phone. Hi, I'm Stacey. I'm from Travel Rewards!" That's when I hang up, before Stacey can try to sell me a trip to the Caribbean.

Stacey is a program. This program does not think or feel. It is far less complex than a bacterium. And it is run, copied, and distributed only insofar as it succeeds in deceiving me and others, exploiting our sympathy for other human beings. The world would be a better place if we wiped out all these Stacey programs, and no harm of any kind would be done to Stacey by such an act, since Stacey cannot suffer harm.

Alas, in their book, Paul Dumouchel and Luisa Damiano are on Stacey's side. They are concerned not with programs, but with robots that they call "social robots" and sometimes (aptly, and without irony) "substitutes." Substitutes are going to co-evolve with us, share emotions with us, and allow the formation of a "synthetic ethics" in which new ethical precepts emerge through the interaction and coevolution of humans and their substitutes.

Substitutes include things like artificial nurse aides: robots developed for social assistance and social interaction. These social robots are very general (serving no single specific purpose); they are "present" (meaning they are meant to be seen and interacted with, rather than disappearing from attention like a toaster); they are to some degree autonomous; and they must have some authority in order to do their tasks. Of special importance, though, is that, to be effective at those tasks, such robots should be able to "recognize" and emulate emotions. Consideration of how this will be possible leads Dumouchel and Damiano to offer the usual attacks on Descartes, on methodological solipsism, and on first-person authority. Their alternative to these errors is a version of radical embodiment à la Francisco Varela.

You might think the questions we should ask are: Can a robot have emotions? What kind of architecture would be required for it to have emotions? When can we be sure it is having real emotions? Dumouchel and Damiano claim these are all confused questions, arising from the many dualisms to which the rest of us fall prey. Rather, "the emotions of an agent, whether natural or artificial, are interactive, distributed phenomena, in which two or more interacting agents participate" (144).

They claim it is a mistake to think of emotions as something that can be faked. Indeed, they claim that the very idea that emotions can be faked is a naïve presupposition: "Why should the internal aspects of emotion be true and the external aspects false? The question seems never to have been posed" (127). Surely the question is as old as humanity. Has there ever been a work of literature in which it was not posed? (That one may smile, and smile, and be a villain.) We are all familiar with feigned emotions. We know they are feigned because the outward appearance is inconsistent with what the agent feels, and with what the agent does at most other times when honestly expressing her emotions. We have all feigned emotions. We even pay some people to feign emotions. The feigning of emotions is, in short, a universal human experience. For Dumouchel and Damiano, however, all these claims arise from a confused notion that emotions are internal events with external expressions. Emotions are wholly social; they are wholly a relation between agents.

The problem with the claim that emotions can be fully explained by their social roles should be obvious, but it is never taken up in this text: we humans (and other animals) can have emotions outside of social contexts. I can be alone, far from other humans, and be terrified by a flood, delighted to find a shiny rock on the beach, sad to learn that a tree I loved fell down, or angry at a cow that tramples my flowers. Of course, emotions play social roles; it is the working assumption of everyone studying emotions that emotional expression exists for this purpose. But emotions are not exhausted by their social roles, and our evolutionary explanations need to explain more than their social roles.

Dumouchel and Damiano's claim that emotions are wholly relational appears to be a consequence of their version of radical embodiment. However, they do defend it with several arguments. For example, we are told that it is a mistake to think of emotional interaction as starting with correctly identifying the emotion of an agent with whom you are about to interact; their view "rejects not only the idea that access to others' emotions is obtained by analysis or simulation of expressive behavior, but also that such access rests ultimately on recognition" (139). Rather, "I can respond to another's anger by fear, by shame, by anger, or even by laughter. None of these responses . . . is a priori a false answer" (140). None of those responses is an answer, so none of them is true or false. But this is misdirection. Tom is angry, and, seeing this, Steve is afraid. This will coordinate a form of interaction. But Steve is afraid because he believes (or formed some kind of judgment, perhaps even an unconscious recognition) that Tom is angry. To evaluate why Steve is afraid, and why that may or may not be a useful strategy, we need to assess whether he is warranted in "judging" that Tom is angry. Dumouchel and Damiano seem to conclude that such emotional responses are not based on judgments about the internal states of an individual because we can react at a subpersonal level. But no one who claims emotions are internal events need claim that the recognition of another's emotion, or one's reactions to another person's emotions, must be conscious, reflective, or personal. A vast body of psychological research assumes that emotions have essential internal structure and that such reactions can happen fast and preconsciously in social contexts.

Dumouchel and Damiano also claim that mirror neurons provide some evidence that an emotion is an automatic relation between two agents: "The same neurons are aroused in the observer as the ones that in the observed agent are responsible for the performance of an action or for the display of an affective expression" (140). But that is to admit that, in this case, we can identify the emotion in the observed agent using some functionalist criteria. The emotions in this example are not a single "joint enterprise" (141); this is a case of the observed agent having an emotion, and the observer recognizing that emotion and then having some sympathetic reaction.

I should point out that, contrary to what Dumouchel and Damiano claim, it is because an emotion is primarily constituted by internal states of a single organism that we can use emotions to coordinate action. Suppose I can see Tom, who is alone and cannot see me or anyone else, and I recognize that Tom is angry. Suppose I can see Steve, who is alone and cannot see me or anyone else, and I recognize that Steve is not angry but happy. These observations will shape how I coordinate action with Tom and how I coordinate action with Steve: I judge what kinds of interactions are most likely possible with each agent based on my judgment about each agent's internal emotional state. If an emotion were solely a relation between two or more agents, then Steve and Tom could not even have the emotions I observe in this thought experiment, since they are alone.

Dumouchel and Damiano sometimes state their position in ways that make it sound like common sense: "The synthetic method in social robotics seeks an explanation of the phenomena it studies in the joint relationship between the agent and the environment in which the agent operates" (128). Every account of mind being developed today could embrace that sentiment. The issue is not whether we model mental phenomena in isolation from the environment; what is at issue is what kinds of mechanisms and events are required to constitute a mental phenomenon. Some of these may be individual, internal features of one organism; they may have evolved or developed in part through interaction or social roles; and they may be essentially and irreducibly interactive or social in their operation in some situations. But they still require brain events of particular kinds. This was, after all, the whole point of the cognitive revolution. Surely mental rotation, for example, is something that must be understood as a relationship between the agent and the environment: the organism uses mental rotation to manage motor control in a world of three-dimensional objects. But that does not change the fact that the organism has and manipulates internal representations. With emotions, the question is the same: is some particular human emotion, and any close analog, constituted by some kind of rich event within the organism, such that you cannot best explain the phenomenon without reference to these rich internal events?

But if we believe emotions are wholly social and relational, then we should stop asking whether robots, or my nagging caller Stacey, have real emotions. Instead, we should be satisfied if and when they react "appropriately to the affective expression of human partners" (142). Those of us who still insist that there is something more to having an emotion than reacting to humans in an "appropriate way" are just falling prey to a series of false dualisms about the mind and about emotion. What is required for the robots to be interactive in the appropriate ways? "Only the coevolution of humans and robots will eventually be able to tell us" (148).

This view reaches its reductio ad absurdum when Dumouchel and Damiano claim that "Living with emotive and empathetic robots will amount to sharing with them an affective experience that is more or less similar to the one we have in our relationships with pets or that a child has with a stuffed toy animal" (143).

The fact that a dog can have emotions is sufficient for us to owe it some moral respect (there are other reasons as well). But if emotions are merely "interactive, distributed phenomena, in which two or more interacting agents participate" -- a "joint enterprise" and not something internal to an agent -- then to say the dog can have an emotion is to say it can be one of the relata of such an emotion-relation. But we have just learned that a stuffed toy animal can also be a relatum in such a relation. It follows that we owe the stuffed toy animal some moral respect.

A day will come when some robots propagate themselves by emulating smiles and praising their owners constantly (corporations will call this "customer service"), while feeling nothing and while having less internal purpose than a flea; some of these machines will even convince some human beings to neglect other human beings and to take resources from other human beings in order to support the maintenance and propagation of these robots (the corporations will call this "effective marketing"). The appropriate moral response to this will be to point out that these robots do not have the emotions that they fake; and they do not have emotions because they are neither persons nor even sentient beings; and only sentient or living beings deserve some moral respect and a share of our communal resources. This is exactly the response that Dumouchel and Damiano are claiming is fundamentally illegitimate.