Responsibility, Identity and Artificial Beings: Persons, Supra-Persons, and Para-Persons

Posted on June 2, 2016

Thanks to Justin Caouette for inviting me to the blog. I’ll start with a piece that draws on ideas from a paper I’m working on for a book on Robot Ethics:

The standard criteria for personhood are not obviously impossible for AIs to meet: they could be self-conscious, they could regard others as persons, they could be responsible for actions, they could have relations to others and to communities, they could be intelligent, rational, and desirous, they could have second-order intentions, and, at least on compatibilist notions of free will (and presuming we could program emotions into their software), they could have free will. Bostrom[1] makes a good case that all of this is ultimately doable, and even AI skeptics like Searle don’t rule it out in the long run, as long as we keep in mind the different possibilities of hardware and software.

But even if AIs acquired all the standard personhood traits, there is one area where they may be so unlike persons that we would have to rethink some of the limits of the concept: AIs would not abide by norms of personal identity. Personal identity across time is usually tracked by either psychological or physical continuity. Both the physical and the psychological can change while identity remains constant, but only if the change is piecemeal or gradual. That is, if a person grows and loses cells or memories over many years, they retain identity. But if I change all of someone’s cells or psychological content at once, then I’ve created a new person. With an artificial person, however, the standard modes of psychological and physical change that our identity concepts are adapted to are wiped away.

Not only would sudden and radical change in both the physical and the mental become possible; it would be standard for a machine updating its data banks, reprogramming itself, and swapping out hardware. A similar effect could potentially occur for humans, if we acquire the enhancement technology to reprogram our desires and beliefs, and the prosthetic technology to swap out body parts at will.
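To make the contrast vivid, here is a minimal sketch in Python, with entirely hypothetical names (Agent, rewrite_self, psychological_overlap are illustrations, not a claim about how any real AI is built). It shows why continuity-based identity tracking handles piecemeal change easily but breaks down under wholesale self-rewriting:

```python
import copy

# A toy model of an agent's "psychological content". All names here are
# hypothetical illustrations, not claims about real AI architectures.
class Agent:
    def __init__(self, memories, desires, principles):
        self.memories = list(memories)
        self.desires = list(desires)
        self.principles = list(principles)

    def gradual_change(self, new_memory):
        """Piecemeal change, like human growth: one memory at a time,
        with everything else held constant."""
        self.memories.append(new_memory)

    def rewrite_self(self, memories, desires, principles):
        """Wholesale change: every identity-making property is replaced
        at once, in a single step."""
        self.memories = list(memories)
        self.desires = list(desires)
        self.principles = list(principles)

def psychological_overlap(earlier, later):
    """A crude continuity measure: what fraction of the earlier agent's
    psychological content survives in the later one?"""
    before = set(earlier.memories) | set(earlier.desires) | set(earlier.principles)
    after = set(later.memories) | set(later.desires) | set(later.principles)
    return len(before & after) / len(before) if before else 1.0

agent = Agent(["met Ada"], ["help others"], ["keep promises"])
earlier = copy.deepcopy(agent)

agent.gradual_change("read Bostrom")
print(psychological_overlap(earlier, agent))  # 1.0: continuity preserved

agent.rewrite_self(["booted at 09:00"], ["maximize uptime"], ["obey spec"])
print(psychological_overlap(earlier, agent))  # 0.0: no content in common
```

On any crude continuity measure like this, the rewritten agent shares nothing with its earlier self; and, crucially, rewrite_self is no harder for the machine to invoke than gradual_change.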

So, human enhancement and strong AI converge upon the creation of entities that are largely capable of rewriting or recreating their personalities, their bodies, and, by extension, themselves. As many philosophers have recently noted, human enhancement creates moral dilemmas not envisaged in standard ethical theories, because it alters the possibilities under which ethical beings operate. What is less commonly noted is that, at root, many of these problems stem from the increased malleability of personal identity that this technology affords. If a self becomes so re-workable that it can, at will, jettison essential identity-giving characteristics, how are we to judge, befriend, rely upon, hold responsible, or trust it? We are used to re-identifying people across time. We assume that persons are accountable for, and identifiable with, prior actions. But can a being that is capable of rewriting itself be said to have an identity? Since most human interactions that extend beyond a few minutes rely upon a steadiness in the being of the other person, a new form of person, capable of rapidly altering its own memories, principles, desires, and attitudes, creates tremendous problems not only ethically but metaphysically as well. If all the identity-making properties of a person are unstable, how can we track that person across time?

So, while an AI could be a person[2], that might require that we prevent it from learning, adapting, and changing itself at the rate at which it would be capable. That is: to make an AI person, we would need not only to achieve a high level of technical expertise in programming and hardware, but also to stop the AI from utilizing the full benefits of that hardware and software. We would need both technological advances and a technological brake.

As Tom Douglas[3] has noted, AIs might wind up being, in an ethical sense, supra-persons. That is, they might be so much better at ethical thinking, so much smarter, more efficient, more productive, more cooperative, and less selfish, that we would fall behind them in ethical standing. Douglas assumes that such beings would be, if anything, even more responsible for their actions than we are. But again, this would require that they have some kind of stable identity to which blame and praise can attach. So while a supra-personal AI is conceivable, it would again require that we not allow it to utilize its ability to change to its full capacity. Otherwise, it may cease to be on the person scale at all.

What we are likely to create, though, if we allow AIs all the benefits that emerging technologies can bring, are para-persons: things that have all the personhood qualities, or pass all the tests for personhood, that philosophers have set up (self-awareness, ethical cognition, other-awareness, self-respect, linguistically expressible concerns, etc.), but that also have an ability that makes them not supra-persons but something outside of personhood. That is, even if such a being had all the personhood qualities, it could also have an additional quality that defeats personhood: the ability to change itself instantly and without effort. Our ethical systems are designed, or have adapted, to apportion blame and praise to persons. They could do the same for supra-persons. But it’s not clear that they will work with the kind of extremely malleable para-persons that strong AI or strong enhancement will produce.

To give some quick examples: suppose an AI commits a crime and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime and then destroys itself. If a duplicate of its programming were started elsewhere, would the duplicate be guilty of the crime? What if twelve duplicates were made? Should each of them be punished? And would we have competing intuitions if we asked: suppose an AI performed a heroic act but then completely reprogrammed itself. Should we give an award to the new being that occupies the same “body”? Suppose it performed its heroism and then was destroyed, but years later a copy of its programming was found and started up on a hundred new machines. Should we award each of them? What if it was on only one machine?

I don’t think these questions have clear answers, even if you or I might have strong intuitions about them, because it’s fairly easy to create conflicting intuitions, or to find strong areas of disagreement. This is, I think, because we’ve exceeded the current bounds of our identity concepts; and if that’s true, and if this capacity for self-rewriting is a central part of what it is to be one of these AIs, then they may be more para-persons than persons, since identity, which is central to personhood, is no longer applicable in its current form.

[1] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. See chapter 2 for a good argument for the possibility of person-level AI.

[2] There’s certainly a lot written on this, but I’ll just plug an early and often overlooked essay: Dolby, R. G. A. (1989). “The possibility of computers becoming persons.” Social Epistemology, 3(4), 321–336. I think it provides a strong argument unlike those found in the more recent papers on this topic.

[3] Douglas, T. (2013). “Human enhancement and supra-personal moral status.” Philosophical Studies, 162(3), 473–497.