What Should We Do About Chatbots? Eating People, Pornography and Education

Written by Professor Edward Harcourt, Director of the Institute for Ethics in AI (Interim)

This week the BBC carried the tragic report of two teenagers, both of whom had seemingly taken their own lives as a result of their interactions with chatbots.[1] It seems clear that better guardrails round this technology are needed so that further such tragedies are avoided. Some think specialist therapy chatbots are the answer, trained on expert advice rather than on whatever’s out there on the web, like Slingshot AI’s new therapy chatbot – Ash – which launched over the summer.[2] But though chatbot use seems unlikely to decrease any time soon, we also need something more: to educate users of this technology – especially young users – about what it actually is (and isn’t). Though published nearly fifty years ago, Cora Diamond’s paper ‘Eating Meat and Eating People’[3] points to a fresh way of thinking about the ethics of AI, chatbots and their links to mental health. In case you blinked, I really did say ‘AI’. And no, Diamond doesn’t mention AI even once, so I realize I have some explaining to do.

Diamond writes as a vegetarian. So her point is not that it’s OK to eat meat but not OK to eat people: she doesn’t think it’s OK to eat either. It’s rather that our reasons not to eat people are of an entirely different order. A vegetarian who thinks it’s wrong to kill animals for food should have no objection to eating a sheep that has been killed in an accident, but nobody sees human beings who have met an accidental death as a food source. On the contrary, she says, it’s part of our very idea of a fellow human being that we do not see each other as food – alive or dead. As Diamond puts it, this ‘goes to build our notion of human beings’. And there are indefinitely many more facts of the same kind. Like the fact that we give human beings names rather than numbers, and not just any old names but names that locate us in further networks of significance such as ancestry or wider group membership. Or that we see even dead human bodies as needing to be treated with special solemnity; that we have infinitely mobile faces in which we can read, or fail to read, second-by-second changes of mind; or that we bring to everything we do a piece of knowledge that generally stays in the background – that we have all been born, and are going to die.

For Diamond herself, these thoughts pointed towards a critique of what’s still, all those years later, the contemporary philosophical mainstream, which sees ethics as a combination of duties – such as to refrain from harming – plus features which confer ‘moral status’, that is, bring things into the scope of the duties (or not). But according to Diamond this gets things the wrong way round. If it were not for the mass of sometimes unthinking ‘modes of response’ – not seeing each other as a food source and all the rest – which constitute our thinking of one another as fellow human beings, why would killing or harming others be such an important thing to avoid? Explicit moral principles express our groping attempts to sharpen the boundaries between ways we can and cannot treat one another. But the principles have no authority of their own: our specialness to one another comes first. As Diamond puts it, ‘the ways in which we mark what human life is belong to the source of moral life’.

And here, after all that, is one of the paper’s links to AI. Much AI ethics investigates the supposed similarities between AI and ourselves. Indeed the very term ‘artificial intelligence’ smuggles in a philosophical claim that deserves closer examination, namely that there’s a single characteristic – intelligence – that’s possessed by both humans and machines. However, one of the deepest but still underexplored questions in AI ethics is not how much human territory is also occupiable by AI, but the reverse: what the development of AI makes us realize is unique to ourselves. That was a worthwhile question before AI was even thought of, but progress in AI forces it on us like never before.

It’s not of course that no one else has thought of it. In the shape of ‘consciousness studies’, academics are burrowing away at it all over the world. But if Diamond is on the right track, their efforts are misguided. Whether AI should be treated in the same way as humans is not to be settled by asking whether a single property of ours – ‘consciousness’ – is (or could be) possessed by AI as well, even if philosophers could agree what it is. As Diamond shows us, what, if anything, is unique to human beings is a complex of so many different things. That indeed may help to explain the currency of the concept of ‘consciousness’ – either an honoured entry in the philosophical ‘too difficult’ pile or a dustbin concept, depending on your view. It’s best thought of as a kind of placeholder: much easier to think everything depends on whether or not AI could have that – whatever it is – than to engage in the difficult and open-ended reflective exercise of cataloguing what really ‘goes to build our notion of human beings’.

I say politely ‘what, if anything, is unique’, but isn’t it obvious – as soon as we begin that reflective exercise – that the same natural substrate of dispositions to behave (etc.) towards one another that ‘goes to build our idea of a human being’ does not create a fellowship between us and artificial entities, however good they are at producing strings of words, mimicking facial expressions and so on? (Animals, fascinatingly, seem to be half in and half out of this fellowship. But that’s a topic for another day.)

This, however, may be where even those who are not signed up to the ‘consciousness studies’ agenda will baulk. After all, there’s the $50m of investment behind Slingshot AI’s chatbot. And it’s not just a faute de mieux to meet rising demand. Many teenagers seem to find it easier to confide in chatbots than they do in real live human therapists, perhaps even than in friends.

Let’s pause to ask why they find it easier. You can’t, for example, try the patience of a chatbot. You can’t fret that the bot also talks to other clients, implying that you aren’t really all that special; fret that it has a whole life – of loved ones, colleagues, outside interests – that is invisible to you, and in which you have no place; that it might cancel on you because something more important has come up; or that it might die, indeed die before your therapy has reached a meaningful conclusion. Of course that might all be evidence that chatbots are therapeutically a bad deal: getting over the narcissism behind such worries is part of the therapy, but because chatbots aren’t mortal, don’t have favourite or least favourite clients etc., there’s nothing to get over. But whatever the wisdom in that thought, some people do now choose the chatbot over the human therapist. Let’s set on one side the disturbing phenomenon of ‘AI psychosis’, which specialized chatbots along the lines of Ash might – might – help to address. Choosing to talk to a chatbot may work well for some people, and they end up less anxious or depressed having used the service than they were before. Doesn’t that show that all the Diamond stuff about the tangle of hard-wired reactivity that gives moral principles their point is deeply contingent, and that now – thanks to advances in AI – machines are ripe to be admitted to the fellowship we might previously have thought was humans-only?

The answer is no. What’s going on with teenagers and chatbots is instrumentalization: we lift a desire at one end of a human relationship – to be free of anxiety, say – out of a more complex whole, while at the other end we substitute something that reduces to its power to gratify the desire. The opportunity to instrumentalize therapeutic relationships is new, thanks to the advent of AI-driven chatbots. But instrumentalization itself is not new at all: it’s also what’s going on, for example, with pornography, and pornography is as old as the hills. In pornography once again we select out a single aspect of a human relationship (the desire for sexual excitement), and at the other end substitute something – the pornographic content – that reduces to its power to satisfy it. And there’s no pretending pornography doesn’t ‘work’: if it didn’t, it would have ceased to exist centuries ago.

What’s striking is that in the case of chatbots, many people seem – so far – unable to see that this is what’s going on, and here there is a significant contrast with pornography. Absorbed though they may be in fantasy while it lasts, users of pornography surely know that they are not on one end of a real human relationship: they turn to it because they can’t enjoy real human relationships, or because what they want is something simpler than real human relationships can readily provide, or both. And that’s not just because conventional pornography is purely visual: a quick read of the ads for AI-driven sex dolls (‘changeable heads’, ‘torso only’, etc.) shows that despite the promise of ‘superior companionship’, people know that what they are buying is a ‘superior’ and temporary illusion of companionship. It’s puzzling therefore – at least to me – that the popularity of chatbots should prompt excited speculation about the possibility of new, albeit artificial, members of the human fellowship when the very one-dimensionality of our relationship with them – as shown by the fact that they don’t have full diaries, get tired, have outside interests, die and all the rest – shows that that’s exactly what they aren’t.

Sadly, though, this doesn’t mean that there’s nothing to worry about if teenagers turn en masse to chatbots rather than real people for confidential exchanges, any more than there’s nothing to worry about in humanity’s – I should say male humanity’s – millennia-long engagement with pornography. So what would happen if we routinely asked young people – as part of their school education, perhaps – some questions: Do you think a chatbot is making time to see you? The chatbot doesn’t grab your hand to comfort you – is that because it’s frosty, or (alternatively) observing proper boundaries between you? When the chatbot talks with you for three hours at a stretch without a change of mood, is that because it is very patient? A chatbot can’t skip a session because one of its children is critically ill, so can it really empathize with you in the same way as a creature that shares your vulnerability? My guess is that if we designed the questions right, sooner or later the answer we’d get to all of them would be ‘no’. And if, despite the negative answers, young people still chose chatbots over real people – which they might well do – that would be because instrumentalization is sometimes a convenience, not because they had been taken in by ‘Seemingly Conscious AI’. As Mustafa Suleyman has put it recently,[4] we ‘build AI for people, not to be a person’.

I am no educationalist, so this menu of questions might not be the right way to approach the issue with young people. But both pornography and chatbots show in their very different ways that the standing attractions of instrumentalization do not alter even to the slightest degree the Diamondian web of sensitivities that constitute our specialness to one another. So as we search for better educational approaches, what we can take with us – that is, those of us whom AI has prompted to explore what AI can’t be – is the certainty that there’s a big difference, and a difference only dimly apprehended by the idea of ‘consciousness’, between artificial intelligence and the real thing.

-

[1] https://www.bbc.co.uk/news/articles/ce3xgwyywe4o 

[2] https://www.statnews.com/2025/07/22/slingshot-new-investors-generative-ai-mental-health-therapy-chatbot-called-ash/ 

[3] Cora Diamond, ‘Eating Meat and Eating People’, Philosophy 53:206 (1978), pp. 465–79.

[4] https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming

-

Suggested citation: Harcourt, ‘What Should We Do About Chatbots? Eating People, Pornography and Education’, AI Ethics at Oxford Blog (14 November 2025)