AI is reshaping our world even as we speak. But the technology's very name conceals an assumption: that there is a single feature, 'intelligence', common to humans and machines. Is there? Can human intelligence be sheared off from other characteristically human features, such as our dependence on one another, our vulnerability, our mortality, and our seemingly hard-wired reactivity to the human form, and recreated elsewhere? Recent developments in the world of AI, such as the advent of AI 'welfare experts', suggest that some clearly think the answer is yes. But can a machine really commiserate or apologize? Though we increasingly rely on AI as a source of information, can a machine even tell somebody something? That depends on what telling is: just getting information from one place to another, or partly a matter of human relationship, like shaking hands or catching someone's eye? Are we too ready to forget the person behind the information?
In revisiting such questions, AI and Humanity thinks afresh not just about what humans and machines can both do, but about what is unique to us, and thus about how we and AI, whether as tools or as peers, should share the human world.
Working in this research area
- Professor Edward Harcourt MBE, Interim Director of the Institute for Ethics in AI, is Professor of Philosophy at the University of Oxford and a Fellow of Keble College. His academic expertise lies at the boundary of ethics and the philosophy of mind, an intellectual foundation he brings to questions of AI governance and human agency. At the Institute for Ethics in AI, his research focuses on AI and humanity: how human values can shape the design, governance and use of AI, and how philosophical reflection on AI can help to identify what is unique to human beings.
- Dr Thomas Mitchell is an Early Career Research Fellow in Moral Philosophy, researching trustworthy AI. He has previously worked on the ethics of influence, particularly technologically enhanced methods of influence, as well as the philosophy of trust and trustworthiness.
- Dr David Storrs-Fox is an Early Career Research Fellow at the Institute and a Junior Research Fellow at Jesus College. He works in moral philosophy, metaphysics and the philosophy of action. His current research concerns the ethics of groups composed of both human and AI agents, with a focus on the abilities such groups have and the moral responsibility they might bear.
Publications
- Professor Edward Harcourt, 'Expert Comment: What should we do about chatbots?', University of Oxford, 2025
- Dr David Storrs-Fox, 'Inability, Fallibility, and the Positive Case for PAP', Philosophical Studies, 2025
- Dr Thomas Mitchell, 'Trust and Transparency in Artificial Intelligence', Philosophy & Technology, 2025
