While robots were originally conceived of as laborers, advances in AI and emotional modeling have led to “companion robots” like Aldebaran’s Pepper and Intelligent System Co.’s Paro. But a companion is fundamentally unlike a standard worker[1]: labor is by its nature fungible, and companions, if we understand them as something like friends, are, presumably, non-fungible. Workers can be replaced with new hires and some training, but the loss of a friend cannot be assuaged by simply making a new friend; it is not the role of “friend” that is lost, but an individual with, as Aristotle and Julia Annas note, highly specific characteristics and, as narrativists like Jennifer Whiting and Marya Schechtman emphasize, a shared history. While companion laborers might not strictly be friends, they do share these qualities: one companion is not just as good as another, and a long-time companion has qualities that are irreplaceable.
These qualities can be duplicated in robot companions. With adaptive programming and learning technologies, they will become more individuated and develop shared histories with their companion-employers. Seemingly, then, an off-the-shelf, non-adapted robot companion would not be a sufficient replacement for the loss of an existing companion robot. Except that a robot’s “mental content” (its memories, dispositions, and personality) is easily transferable to a new body. Since, as Nina Strohminger and Shaun Nichols’ research shows, most people hold identity to reside in this mental content, the robot would seem to be an effectively immortal friend.
This technology creates the possibility of breaking down the distinction between worker and friend. Existing companion-workers, as well as some workers in non-companion jobs who have developed friendships with their employers, are employed in part because of their capacity to act as a friend. But insofar as the friendship is a condition of employment, the “friend” is only acting as a friend. The robot, by contrast, lacking any desires not programmed into it, becomes a pure friendship-laborer, unconflicted in its combination of friendship and duty. Further, just as friends are marked by “growing together,” robots with adaptive AI and machine learning can grow toward their “friend” or owner, becoming more the kind of friend that their human companion would prefer. Beyond auto-adaptation, the robot friend can also be instantly reprogrammed. If a companion is found to be, for example, insufficiently interested in hearing about one’s grandchildren, this can be fixed almost instantly, provided the proper programming is available.
Highly reprogrammable robots that have come to know and share in their owner-friend’s concerns and interests create a strongly asymmetrical friendship: as the technology adapts, the human will have less and less need to adapt to the robot, and the robot will be more and more adapted to the human. People habituated to friends of this sort could find encounters with non-controllable others uncomfortable or unpleasant. Perhaps the ideal state would be to be surrounded only by robot-friends, which take on the labor of friendship without asking anything in return. But at this point, many of the essential characteristics of friendship are lost. If people begin their lives with child-care robots and toy robots, and end their lives with elder-care robots, they eventually squeeze out the space formerly reserved for actual friendships, preferring these easily moldable, highly responsive artificial friends to the hard work of friendship with people who cannot be controlled and reprogrammed at will. And what aspects of the good life, moral development, and the general good are lost when we eliminate friendship in favor of its artificial replacement?
This is reminiscent of Nozick’s experience machine, and it points to what I think is an incorrect assumption on Nozick’s part. He says that “we” would not choose to go into the experience machine[2]. I think this is false for many of us. “We” spend hours playing immersive video games, leaving only because we need to eat, sleep, work, and tend to our carpal tunnel syndrome. “We” watch over five hours of television a day. If “we” can escape, many of us do.
Imagine we have a robot friend. One of the things that makes a friend valuable is difference: we have to overcome difference, work to understand our friend, and accept that they disagree with us, may not always want to do what we want, and, in short, are separate people, not mere tools that we can use. If we could throw a switch such that our friends were more attentive, more interested in our needs, less in theirs, would we? I would hope not. Part of what makes friends valuable is this difference and this resistance: they cannot simply be turned like a key or threaded like a screw. We have to figure out who they are, and part of our love for them comes from their not being us.
But how many of us have “turned off” a friend on social media for having opinions we don’t like? If we disagreed with a reprogrammable friend, at what point would we just press the button that made our friend agree? If we wanted to do X, and the robot wanted to do Y, would we compromise, maybe even do what the robot wants, and thus expand our horizons, try something that doesn’t seem like it would appeal to us, and, from that experience, learn something about ourselves? Or would “we” just type in the proper command to make the robot do as we wish? By being perfect friends, friends who care for us infinitely and themselves not at all, friends who support us in all we want and do, and who put aside all of their feelings, aspirations, and values for ours, a robot would prevent its human companion from learning how to be a friend, because the robot would do all the friendship work. Part of what makes us persons is our inability to always get what we want from others, our need to respond to resistance, our need to grow and compromise, to change ourselves, and to learn about ourselves when put in situations that we cannot control. The extreme malleability of the robot-friend could undermine our own malleability and adaptability.
[1] It’s notable that the role of “companion” jobs in the economy increased with the social shift away from extended families. Currently, in the U.S. and some other industrialized nations, care for the elderly is ceasing to be the job of their children and becoming the job of immigrant labor and the poor.
[2] A perfect virtual-reality machine that can give the user any experience desired.
Clare Flourish
June 18, 2016
“We” also seek out challenges. We test and develop ourselves. We learn. Would we not still? Is this not more a question for psychologists than philosophers?
I have been thinking whether to switch off my philosopher friend. I sent a text imagining it would provoke a particular response. If he had reacted in that way, I would have texted saying in effect “It has been lovely knowing you”. I did a cost-benefit analysis. But this is merely anecdotal evidence.
James DiGiovanna
June 18, 2016
Yes, we seek challenge. My point was only that Nozick’s “we” was overly broad in assuming we would not go into the experience machine. I’m pretty sure some of us would.
Lage
June 19, 2016
My comment here was going to be too verbose so I decided to write a blog post on the topic at:
https://lagevondissen.wordpress.com/2016/06/19/co-evolution-of-humans-artificial-intelligence/
Justin Caouette
June 21, 2016
Excellent post, James. Thought-provoking!
I agree that we would lose something in the scenario you played out. I’m wondering if this would lead us to a necessary condition for human flourishing: personal non-robot friendship?
James DiGiovanna
June 21, 2016
It seems like the ability to be a friend is a virtue, or a set of virtues, and it’s definitely hard to imagine some kind of full flourishing without it. I wonder if we could break that virtue down into parts: like, the ability to listen to others without simply waiting for one’s turn to speak; the ability to put oneself aside so as to take on, for a time, the interests or concerns of another; the ability to adapt to being around others; etc.
It might be interesting to explore the virtues of being with others and the virtues of being alone; both of these might be damaged by the companion robot. With a companion robot, one never needs to learn to be alone, and one never really learns to respond to another.