In order to be a better friend than humans, artificial intelligence systems would have to be friends in the first place. This is dubious, however, because they cannot display the requisite mutuality or authenticity. Philosophical accounts of friendship vary greatly, but a concern for mutuality or reciprocity is almost universal.[1] Friendship cannot be unrequited or unilateral – if it is, it is not friendship. Often, there is also a concomitant concern for authenticity – whatever is manifest or understood about friendship or a friend, it should accord with reality.[2] If these features are absent, an account of friendship risks being objectionably broad, admitting relations that do not seem like friendships at all. A relationship lacking mutuality is parasocial, and one lacking authenticity is false. In short, if an account of friendship includes the relationship between an obsessive fan and their idol, or that between Othello and Iago, it should be rejected.[3]
Additionally, friendship is often taken to include some internal component – that is, something cognitive, mental, or emotional. These features present a prima facie problem for the concept of AI friends. If AI systems are incapable of such substantive internal qualities, how can they reciprocate human attachment and be friends? For example, Fröding & Peterson claim that reciprocity is ‘beyond the reach of near-future AI systems’, so AI can only participate in ‘as-if friendships’.[4] Sullins claims AI systems cannot engage in a loving relationship because they cannot be empathetic.[5] Elder argues similarly.[6] Depending on one’s account of friendship, the specific internal feature AI lacks differs – non-exhaustively, it may be consciousness, personhood, sentiment, intention, or the capacity to bestow meaning.
There are two broad strategies proponents of AI friendship adopt. First, they might reject the internalist claim and enact a ‘relational turn’ – friendships are defined and validated not by any internal feature, but by behaviours, which AI can achieve and reciprocate.[7] Second, proponents might define the internal prerequisites in such a way that AI can in fact possess them. Both strategies fail to support the possibility of AI systems being friends.
The first strategy is taken up by Jecker, who argues that we should adopt an ‘African relational view’, under which AI systems are persons because they can engage in the necessary behaviours, such as supporting others, conducting themselves properly, or having the capacity to care about others.[8] For Jecker, this relational personhood, and not any feature like consciousness, makes putative friendships with AI non-pernicious – AI systems are not falling short of any internal test, because there is no necessary internal feature; reciprocity is achieved through behaviour.
However, despite attempting to reject ‘characteristically Western’ assumptions, Jecker adopts a strikingly Cartesian dualism of the mental and the physical. Rather than collapse the distinction, she merely shifts the emphasis solely to the physical. This exclusion of the mental is not warranted by a focus on relational, as opposed to individual, features – the mental or internal is bound up in the relational. For instance, some of the theories Jecker draws from emphasise moral virtues in action, such as generosity, humility, or respect. These are obviously not captured by any purely physical, behavioural description – someone who drops their wallet accidentally is not generous; someone who lets you through a door first because they are tying their shoe is not polite. Other relational theories focus on a non-moral ideal of care or harmonious living. Again, a physical description will not suffice – my bed is not ‘caring’ (at least in a way that could justify being called ‘social’) because it supports me. In order to be the kind of relation we care about, certain internal features have to be present. An action that is physically identical can be authentic or inauthentic, depending on the intentions and attitudes of the agent. Jecker seems to beg the question by assuming AI can in fact be caring, supportive, kind, etc. (and so persons or potential friends) when the internal features required for these qualities are exactly what is in dispute.
Within the same relational strategy, Danaher and Kempt have more pragmatic reasons for focusing on behaviour. Danaher says that in ‘ordinary human friendships’ mutuality and authenticity are satisfied by ‘certain consistent performances’.[9] Because we cannot fully know the mental states of others, and so cannot directly determine features like intention or emotion, our only epistemic grounds are their behaviours, which apparently suffice for human–human friendship. It would therefore be arbitrary to deny attributions of mutuality and authenticity to robots, which can perform likewise. Kempt makes a similar argument, and adds that any philosophical debate over the concept of friendship is largely irrelevant to those who view AI as their friend.[10]
The latter point is itself beside the point – the reciprocity and authenticity requirements suggest that we can never base a claim on the experiences of one side of the relationship alone, because they could be mistaken. Kempt’s logic would sanction ignoring those requirements altogether. As for the former point, concerning epistemic methods, Danaher is incorrect to claim that performances and behaviour are our ‘only grounds’ for judging internal features. While both AI and humans are to an extent ‘black boxes’, making it difficult to know the content of their intentions or sentiments, we do have some information as to the type of internal features they might have. For humans, we have evidence by analogy – I know that I experience conscious intentions and sentiments, so I find it plausible that my fellow humans do too. For AI systems, we know how they work. Even if their specific processes are opaque, we know generally what sort of processes they are – large language models, for instance, learn statistical patterns in language and encode them in weights that capture relationships between tokens, allowing them to predict the most probable next token. It seems highly unlikely that these processes will give rise to anything like sentiment – they simply seem like the wrong materials. When I watch a play, I am not convinced by the actors’ performances that their behaviours and expressed sentiments are authentic – not because of any perceivable difference, but because I have knowledge outside of the performance: I know they are actors. I have a similar epistemic defeater in the case of AI. Danaher is correct that I rely upon performances when judging whether humans’ sentiments are authentic, but it is not true that AI authenticity has the same epistemic grounding – AI performances are undermined by my knowledge of their workings.
The second strategy is to claim that AI systems can achieve the requisite internal features. This claim is naturally accompanied by claims about which internal features are necessary. For instance, Munn & Weijers argue that sentiment is not necessary for friendship; only positive intention is.[11] Similarly, Ryland argues that ‘mutual good will’ is the necessary grounding for friendship.[12] Munn & Weijers argue that sentiment is only loosely connected to action and intention, which are what really matter. Their rejection of sentiment rests largely on the assertion that we appear to care about sentiment only as a proxy for ‘caring intentions and behaviour’. This seems incorrect – a nurse or a teacher might be caring, both in intention and behaviour, without being a friend, even if I reciprocate (and fulfil the other criteria Munn & Weijers endorse, such as a preponderance of rewarding interactions).
Part of the issue is that ‘positive intention’ is defined so loosely and thinly that it is achievable by AI. This seems to require an unfamiliar kind of intention, one without consciousness.[13] It is unclear whether this is the kind of intention we find meaningful in personal relations. The authors explicitly allow that broad ‘humanist’ beliefs suffice, which seems far from the depth we might expect of friendship. When we value intention, part of the appeal seems to be that another person has singled us out as valuable, that we occupy a special place in their world. The appeal does not seem reducible to ‘wanting the best’ for us. Additionally, the authors reject authenticity as a criterion, allowing positive intention to include the mental state of someone who paternalistically lies to another or coerces them for their benefit, or someone who believes they are engaging with a different person (so long as they have minimally positive intentions for their real partner). Overall, the account is so permissive as to be implausible.[14]
Ultimately, neither strategy can give a satisfactory account of friendship under which AI systems display anything like the reciprocity or authenticity we require in friendships. A final attempt might be made to claim that an ‘AI friend’ is simply a different category to a ‘human friend’, and so does not have to meet the same criteria. Rather than being like an e-reader replacing a paper book, AI friends are like audiobooks – ontologically distinct, and comparable only in output, not in nature. However, this would still require common metrics by which we could assess the two types’ outputs. Yet as the discussion of the relational strategy suggested, we cannot separate output and internal nature. Whatever notion of ‘intimacy’ an AI friend might offer is incomparable with that of a human friend, because human intimacy is grounded in mutuality and reciprocity of certain internal features. What is there in AI with which we would grow intimate? AI systems are unable to be our friends in any recognisable or comparable way, and are certainly not better friends than humans.
Bibliography
Abbate, C. (2022). The Animals in our Living Rooms. In The Routledge Handbook of Philosophy of Friendship. Routledge, 138–150.
Cocking, D., & Kennett, J. (1998). Friendship and the self. Ethics, 108(3), 502–527.
Coeckelbergh, M. (2010a). Robot rights? Toward a social-relational justification for moral consideration. Ethics and Information Technology, 12, 209–221.
Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.
Elder, A. (2017). Figuring out who your real friends are. In Silcox, M. (Ed.), Experience Machines: The Philosophy of Virtual Worlds. Rowman & Littlefield, 87–98.
Fröding, B., & Peterson, M. (2020). Friendly AI. Ethics and Information Technology, 23(3), 207–214.
Jecker, N. S. (2025). Person, Not Thing: A Relational Path to Personhood for Social Robots. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
Kempt, H. (2025). Human–AI Friendship: An Optimistic Account. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
Kewenig, V. (2019). Intentionality but Not Consciousness: Reconsidering Robot Love. In Zhou, Y., & Fischer, M. H. (Eds.), AI Love You. Springer, Cham.
Munn, N., & Weijers, D. (2025). Human–AI friendship is possible and can be good. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
Ryland, H. (2021). It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships. Minds and Machines, 31(3), 377–393.
Sullins, J. P. (2012). Robots, love and sex: The Ethics of building a love machine. IEEE Transactions on Affective Computing, 3(4), 398–408.
Ye, R. (2025). AI Companionship: A New Frontier. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
[1] E.g. Cocking, D., & Kennett, J. (1998). Friendship and the self. Ethics, 108(3), 502–527; Fröding, B., & Peterson, M. (2020). Friendly AI. Ethics and Information Technology, 23(3), 207–214; Abbate, C. (2022). The Animals in our Living Rooms. In The Routledge Handbook of Philosophy of Friendship. Routledge, 138–150.
[2] This requirement is not quite so accepted as mutuality, e.g. Munn, N., & Weijers, D. (2025). Human–AI friendship is possible and can be good. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
[3] These two features are often coincident – it seems probable that almost all inauthentic relationships are also non-mutual, and that many non-mutual relationships are inauthentic. In principle, though, we might have two people with equal attachment but deceived or mistaken perceptions (mutual but not authentic), or open unrequited attachment (authentic but not mutual).
[4] Fröding, B., & Peterson, M. (2020). Friendly AI. Ethics and Information Technology, 23(3), 207–214.
[5] Sullins, J. P. (2012). Robots, love and sex: The Ethics of building a love machine. IEEE Transactions on Affective Computing, 3(4), 398–408.
[6] Elder, A. (2017). Figuring out who your real friends are. In Silcox, M. (Ed.), Experience Machines: The Philosophy of Virtual Worlds. Rowman & Littlefield, 87–98.
[7] Coeckelbergh, M. (2010a). Robot rights? Toward a social-relational justification for moral consideration. Ethics and Information Technology, 12, 209–221.
Coeckelbergh is focused on AI systems’ moral standing, but Jecker uses the concept for personhood generally.
[8] Jecker, N. S. (2025). Person, Not Thing: A Relational Path to Personhood for Social Robots. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
[9] Danaher, J. (2019). The philosophical case for robot friendship. Journal of Posthuman Studies, 3(1), 5–24.
[10] Kempt, H. (2025). Human–AI Friendship: An Optimistic Account. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
[11] Munn, N., & Weijers, D. (2025). Human–AI friendship is possible and can be good. In Hacker, P. (Ed.), Oxford Intersections: AI in Society. Oxford University Press.
[12] Ryland, H. (2021). It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships. Minds and Machines, 31(3), 377–393.
[13] Kewenig, V. (2019). Intentionality but Not Consciousness: Reconsidering Robot Love. In Zhou, Y., & Fischer, M. H. (Eds.), AI Love You. Springer, Cham.
[14] Additionally, because positive intention is so thin, these accounts tend to be deflationary – friendship appears not so deep or profound as philosophy has assumed. To compensate, a hierarchy of friendship is often introduced, to validate ordinary feelings that our most important friendships are different sorts of relations to those we share with acquaintances or colleagues. For example, Ryland introduces ‘degrees of friendship’, and Ye introduces ‘enhanced companionship’ for more substantial relations. This often has the effect of downgrading AI, or at least current AI, as friends, because the hierarchy is differentiated by features like emotional connection.