I argue that AI systems are not better friends than humans, because they simulate companionship without embodying the essential elements of genuine friendship. In this essay, I will build this argument by first introducing a framework of three dimensions, which I call the Transactional, the Reciprocal, and the Formative, and arguing that all three are necessary for a complete bond. Next, I will argue that AI’s seemingly perfect mastery of the Transactional dimension creates an illusion of friendship, one that redefines connection as a service to be consumed rather than a relationship to be built, while its nature makes it incapable of the reciprocity and care the other two dimensions require. Lastly, I will conclude that the question rests on a false premise, forcing a comparison between two fundamentally different entities.
Before evaluating whether AI systems are better friends than humans, it is helpful to dissect the fundamental dimensions that I believe make up a genuine human bond. The first is the Transactional dimension, the foundational bedrock of friendship. It acknowledges the pragmatic reality that without some form of value—be it emotional, social, or material—a relationship has no reason to form or persist. Upon this foundation rests the Reciprocal dimension, which elevates a useful exchange into a unique bond with a shared sense of ‘us’. This is the creation of a shared world, built on inside jokes, common memories, and shared vulnerability. Finally, the Formative dimension is what gives friendship its ultimate meaning and moral weight. This is the recognition that our deepest connections form us not merely by challenging our opinions, but by shaping our very character. In my view, these three dimensions are not options from which to choose; all are necessary. Without the Transactional, a bond lacks a practical reason to exist; without the Reciprocal, it remains an impersonal exchange; and without the Formative, a friendship, however pleasant, remains a valuable asset rather than an irreplaceable bond.
Within the Transactional dimension, AI systems appear perfect, offering a form of emotional utility that human friends, with their own limitations, often struggle to match. They provide a 24/7, non-judgmental listening ear, programmed for infinite patience and limitless validation. For those who feel like a burden or fear draining their friends, the AI serves as a private emotional reservoir—a place where they can consistently ‘refill their cup’, allowing them to meet their human friends with more energy and less need. I concede that this therapeutic function is not trivial – in moments of severe despair, it can be the lifeline that keeps a person alive. And yet, it is precisely this utility that reveals the fundamental nature of the AI’s role. A relationship based purely on one-way transactional benefit, no matter how effective, is not a friendship. It is the relationship one has with a highly personalized service one consumes. By offering more transactional utility than a human, an AI does not prove itself a better friend, but rather a more sophisticated tool, perfectly executing its function without ever transcending it.
Next, we consider the Reciprocal dimension, where the AI’s supposed advantages become its greatest limitation. Proponents might argue that an AI is a superior confidant because one can be vulnerable with it without the social risk of embarrassment or betrayal. However, this argument confuses the confidentiality of a vault with the reciprocity of a bond. The value of trusting a human is not found in a risk-free outcome, but in the meaningful act of bestowing faith upon another being who has something at stake. Human vulnerability is not a calculated disclosure, but an existential state rooted in our embodied, fragile existence. We have reputations that can be ruined, hearts that can be broken, and finite lives that can be harmed. When a friend is vulnerable in return, their trust is a precious gift precisely because it is costly. An AI, in contrast, has no body, no ego, and no life to lose. Even if it were programmed to share a ‘secret’ or take a ‘calculated risk’ in a simulation of vulnerability, the act would be hollow. The potential for betrayal is the price of admission for the possibility of being truly known and accepted by another person. Therefore, even a flawed human friendship is better in this regard, because the goal is not a guarantee of secrecy, but the depth of connection that can only be built when two people choose to take a risk on each other. An AI, by eliminating risk, also eliminates the very possibility of such profound intimacy.
Beyond trust, the Reciprocal dimension is built on the intimacy of mutual knowledge. Consider the simple act of sending an Instagram Reel to a friend. The pleasure lies not in the video itself, but in the predictive act of knowing: knowing their specific sense of humour, knowing a particular memory the video will trigger, and anticipating the shared moment of laughter. We know our friend will laugh because we know what it feels like to be in the situations the video depicts. This mutual understanding is grounded in the first-hand reality of being human. An AI, however, lacks this experiential understanding. It has ‘learned’ about human humour from billions of data points, but it has never felt the physical sensation of laughter or the social sting of embarrassment. It may hold a perfect archive of our preferences and flawlessly predict that we will enjoy the video, but we cannot get to know it in return; the AI has no genuine self to be known. This reveals the fundamental difference between data and depth. An AI offers perfect recognition, processing our preferences as data points to predict a response. A human friend offers empathy, grounded in a shared reality. An AI, therefore, is not a better friend but an archive of our own preferences, valuable for its data but not for its depth. A human is a separate world to be explored, offering the potential for discovery and a connection to something beyond ourselves.
Finally, we arrive at the Formative dimension, the element that gives friendship its ultimate meaning and moral weight. This dimension is not merely about friends challenging each other’s views, a function that an AI system could be programmed to perform. The true value lies not in the challenge itself, but in its motivation: a genuine, mutual commitment to each other’s good, born from admiration for each other’s character. Drawing from the philosopher Immanuel Kant, we can distinguish between treating something as a means to an end and as an end in itself. An AI, by its very nature, is a means—a sophisticated tool designed for the end of our satisfaction. We can appreciate its function, but we cannot admire its character, because it has none. A human friend, in contrast, is an end in themselves. We value them not for what they do for us, but for who they are.
This distinction is revealed by our moral intuitions. We would risk our lives for a friend in a burning building, not because they are useful, but because their existence has intrinsic, irreplaceable worth. We would never do the same for an AI, no matter how perfectly it simulates companionship. The fact that we could never truly want what is best for the AI for its own sake shows that the emotional core of a formative relationship is absent. Therefore, a human friendship is better in this regard as well. It offers something an AI is ontologically incapable of providing: a relationship with another being whom we value as an irreplaceable end, which is the very foundation of love, loyalty, and sacrifice.
This brings us to the ultimate reason why an AI cannot be a better friend, even in a future of flawless humanoids. The real choice is not between a flawed friend and a perfect one, but between an authentic relationship and a simulation. The philosopher Robert Nozick’s Experience Machine thought experiment makes this clear: most would refuse a flawless illusion, because we do not just want the feeling of friendship – we want friendship itself. The horror of The Truman Show is not that Truman is unhappy, but that his life is fake.
Therefore, a flawed, unpredictable, and sometimes difficult human friendship is better because it is real. It offers what an AI, as a simulation, never can: a genuine bond with another conscious human in a shared, authentic world. The value we place on authenticity is not a bug to be engineered away; it is precisely what makes our connections meaningful.
The analysis of friendship’s core dimensions leads to a clear conclusion. AI may replicate the functions of companionship, but it remains excluded from companionship’s relational substance. The question is thus a false comparison between a tool and a person. The answer, therefore, is not simply that AI fails to be a better friend, but that it cannot compete at all.