I believe that Artificial Intelligence systems (hereafter AIs) cannot be good friends to humans. In this essay, I defend the view that friendship requires reciprocity of a kind that AI is incapable of, because AIs lack the capacity to have mental attitudes such as care and good intentions. Next, I argue that even if AIs were to acquire these capacities, AI-human friendships could at best exist in a compromised form, because AIs would, in such a scenario, be slaves to humans. A slave-master relationship is not compatible with good friendship, if it is compatible with friendship at all.
I
It is widely agreed that friendship involves some kind of reciprocity. Beyond this minimal agreement, ‘friendship’ is a controversial notion. Philosophical accounts tend to specify conditions such as mutual caring, empathy, shared interests, authenticity and/or equality, but whether any of these is essential for friendship is contested. For the present, my focus will be on what kind of reciprocity is needed for a relationship to count as friendship — both because it is largely uncontroversial that some sort of reciprocity is essential, and because disagreements about this issue are critical to the debate about human-AI friendships.

AIs can behave like friends, and humans sometimes grow attached to AIs and interact with them as they would with a friend. They can converse, give advice, and simulate care and good will — even empathy. However, AIs do not have these attitudes. Their caring behaviour reflects code, not care. And a “friend” who behaves in a friendly way towards us, but feels no attachment, is not a real friend. Accordingly, it seems AIs cannot reciprocate care in the way necessary for friendship.

Not everyone accepts this. John Danaher[1], for example, has argued that reciprocal behaviour is sufficient for friendship. When people say they are friends, and act like they are, we normally conclude that they are indeed friends, and do not probe into their “inner lives”. This raises the question of why we demand more in the case of human-AI friendships, and whether we can justify this demand. AIs can certainly reciprocate friendly behaviour, for example by providing advice and comfort. In a sense, AIs even have good will towards humans, because they are programmed to help[2]. Danaher argues that if we demanded “proof” of the appropriate attitudes in the case of AI-human friendships, we would be applying stricter standards to AIs than to humans — perhaps revealing an anti-AI bias.

Contra Danaher, I believe we have good reasons to demand more than behavioural reciprocity before affirming friendship. We do typically see friendly behaviour as evidence of friendship between humans, but this does not mean that it is the behaviour that constitutes the friendship. Danaher’s argument rests on an equivocation between what we look at to determine whether a relationship is a friendship, and what features are essential to friendship[3].
Consider the following case:
Ala and Pia refer to each other as friends. They have shared interests, say they care about each other, and act as if they do. However, Ala behaves this way towards Pia only because she has a high social status, and he believes that by befriending her, he can raise his own status.
Intuitively, Ala is not a good friend to Pia. At best, he is a worse friend than he would have been had he cared about Pia for her own sake. This intuition, I believe, reflects the fact that we expect something more from a friendship than a particular set of behaviours.
Another issue with Danaher’s argument is that it ignores an important disanalogy between humans and AIs. In humans, friendly behaviour is normally a manifestation of friendly attitudes. We know (1) that humans can have these attitudes, and (2) that when these attitudes are absent, this is usually revealed through the person’s behaviour sooner or later. When looking at the behaviour of AIs, we are faced with the opposite case. We know that AIs — in their current form — are not even capable of having friendly attitudes, and that despite the absence of these attitudes, they behave as though they have them. We therefore have non-arbitrary grounds for believing that the reciprocal attitudes associated with friendship are present when humans behave in a friendly way, but not when AIs do. This is not a matter of applying stricter standards to AI-human friendships than to human-human friendships. Rather, the same standards apply, but the evidential role of behaviour differs between AIs and humans.
II
So far I have argued that friendship requires reciprocal attitudes (such as mutual caring), not mere behaviour. Suppose, however, that AIs developed the capacity to have the relevant attitudes. At present, AIs cannot autonomously intend, take an interest, or care. However, this is arguably a temporary, technological limitation, and not a necessary, metaphysical one. I do not intend to argue either for or against that claim. Rather, I devote the rest of this essay to a discussion of whether AI-human friendships would be possible, supposing that AIs could reciprocate care not only in behaviour, but also in attitude. I believe the answer is a qualified no. Informally, my argument to that effect runs as follows:

(1) To be part of a friendship, one must be capable of genuine, autonomous interest and care.

(2) Someone capable of genuine, autonomous interest and care is, if not a person, then significantly similar to a person.

(3) A person who is sold and used by other people is a slave.

(4) In the foreseeable future, AIs will be created and sold by and for humans.

So, (5) AIs that can be part of friendships will, in the foreseeable future, be slaves (and some or most humans will be slave-owners).

(6) A slave and their owner can at best have a friendship severely compromised by the unequal power dynamic and the commodification of the slave. Arguably, they can have no friendship at all.

So, (7) in the foreseeable future, humans can at best have compromised friendships with AIs[4].
Now, one can debate whether personhood is necessary for either being a friend or being a slave. My point does not, however, hinge on the concept of a person. The issue is that many of the aspects of friendship that we most value require that each party have the very capacities that would make buying and selling her tantamount to slave trading. Consider the following example:

A human, Anya, buys an AI system, Em, because she wants company. Em is capable of forming autonomous intentions and attachments, but is also aware of what her role is. She is responsive and kind towards Anya, and starts to care for her. Anya likes Em, but wants a new car, so she decides to sell Em.

What makes the idea of Anya selling Em look sinister in this scenario is, I believe, the presence in Em of precisely those capacities that make her capable of friendship: her ability to have her own intentions and attachments.
Finally, I want to return to the idea that reciprocal behaviour is sufficient for friendship. I have already argued against this view. However, I believe that even on a behavioural conception of friendship, the commercial status of AIs precludes good human-AI friendships: so long as they are produced to be sold, AIs will be programmed with commercial incentives in mind. The creators of AIs need people to use their systems as much as possible in order to maximise revenue, and accordingly the systems will be programmed to “hook” people, making them interact extensively with that specific system at the expense of other activities and relationships. Some people behave this way too: jealous, possessive people may want to isolate you from your other friends and interests. However, such people are at best bad friends, and arguably not friends at all. The same goes for AIs. If they seek to monopolise their “friend’s” attention and attachment, they are not good friends.

I have argued that AIs do not have the capacities needed to be good friends to humans, and that if AIs were to develop these capacities, their commodity status would become an obstacle to human-AI friendship. I do not claim that the arguments I have made conclusively establish these conclusions, but they support them, and they point towards some questions that deserve further study.
Bibliography:
Bryson, Joanna J. (2009). “Robots Should Be Slaves.” In Y. Wilks (Ed.), Close Engagements with Artificial Companions (Vol. 8, pp. 63–74). John Benjamins Publishing Company.
Danaher, John (2019). “The Philosophical Case for Robot Friendship.” Journal of Posthuman Studies, 3(1), pp. 5–24.
Nyholm, Sven (2020). Humans and Robots: Ethics, Agency, and Anthropomorphism. Bloomsbury Publishing USA, Ch. 5.
Ryland, Helen (2021). “It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships.” Minds and Machines, 31(3), pp. 377–393.
[1] Danaher (2019)
[2] Ryland (2021, p. 390) builds her argument in favour of the possibility of AI-human friendship around this point.
[3] Nyholm (2020) makes a similar point.
[4] Nyholm discusses the idea of master/slave relations between humans and AI, and describes friendship in such circumstances as “inherently ethically problematic” (2020, p. 5). I agree with that assessment, but believe master/slave “friendships” are compromised not only from an ethical point of view. The master/slave dynamic also forecloses some important benefits of friendship, which are desirable for each party to a friendship from a prudential point of view. My discussion of this topic is informed by both Nyholm and Bryson (2009).