AI in Social Care Summit 2025
Driving forward the conversation on AI and Social Care
Write up by Imogen Lucy Grace Rivers, DPhil Scholar at the Institute for Ethics in AI
On Thursday 27th March, the Institute’s Director of Research, Dr Caroline Green, inaugurated the AI in Social Care Summit 2025. Bringing together over 150 participants from across the care, tech and policy sectors, the Summit showcased the world-leading work undertaken by the Oxford collaboration on the responsible use of generative AI in adult social care. Stephen Kinnock, Minister of State for Social Care, addressed summit attendees via video:
‘This summit is vital – bringing together expertise to ensure AI enhances, rather than replaces, human care. The government is committed to supporting the responsible adoption of AI in public services, and we want to work with you to make this a reality.’
Why does AI in social care matter?
Opening the event, Dr Caroline Green highlights that social care is about supporting people with disabilities, illness or specific needs to live as independently as possible, with choice and control over their lives, and with their dignity, their status as equal human beings and their human rights protected. The use of generative AI in adult social care is currently advancing at pace, yet there is no official government policy or guidance on its responsible use in this context. This is where the work of the Oxford collaboration becomes vital:
‘The “responsible use of (generative) AI in social care” means that the use of AI systems in the care or related to the care of people supports and does not undermine, harm or unfairly breach fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.’
This value-led definition of the responsible use of (generative) AI in adult social care guides the work of the Oxford collaboration.
The latest from the Oxford collaboration
The collaboration launched in February 2024 at the University of Oxford under the leadership of the Institute for Ethics in AI in partnership with the Digital Care Hub and Casson Consulting. Last year, the collaboration brought together over 30 organisations and individuals to create the Oxford Statement on the responsible use of generative AI in social care. Over the last year, the collaboration has expanded to involve over 70 individuals and organisations from across the care community, including people who draw on care and support, care workers, care providers, tech developers, academics, local authorities, advocacy groups and policy makers. The Summit unveiled the extensive work of the last year, including a Call to Action, Guidance, a Pledge for Tech Suppliers and a white paper, ‘The responsible use of Generative AI in adult social care: A value-led approach’, co-authored by Dr Caroline Green, Katie Thorn, John Boyle, Kate Jopling and Daniel Casson.
AI in Practice and in Principle
Speaking at the panel session on ‘AI in Practice: How is it already being used and what’s coming next?’, Peter Zein, Expert by Experience at Kent Council, challenges us to be at once inclusive, critical and hopeful in the practical development of AI in adult social care. He says:
‘I feel we are forgetting that group of people who are excluded. AI is okay if you can talk or use your fingers […] I think we need to challenge manufacturers to remember the needs of people, because they are forgetting little things […]’.
But at the same time:
‘We need to be positive about technology […] If I didn’t have that technology, do you think that I can be here talking to you all?’
The invitation is clear: in developing generative AI for the social care sector, we need to fight for the inclusion of the lived experiences of those who draw on care and support.
The next panel session, which centres on how AI in adult social care can be used ethically and responsibly, foregrounds a key challenge: the use of personal data. On the one hand, Professor Kate Hamblin, Director of the ESRC Centre for Care and lead at IMPACT, points out that bias and opacity in the use of data for generative AI undermine participants’ ability to meaningfully consent to its use; on the other, John Boyle, Founder of A14U Consultancy, argues that personalised care is the future of social care, and that achieving it requires more personal data. In his view:
‘Users should share personal data [as] the risks are worth it for the benefits.’
International Perspectives and Future Directions
The last panel session of the Summit examines ethical AI in an international care context. Silvia Perel-Levin, Consultant and Advocate for Health and Human Rights of Older Persons, discusses the work of the UN on AI in adult social care. Dr Christoph Ellssel from the Catholic University of Applied Sciences in Munich explains the work of the Munich Competence Centre in AI and highlights the challenges it shares with the Oxford collaboration: meaningful consent to the use of personal data, trust in the care system and an employment market set to be revolutionised by generative AI technology. Dr Samir Sinha, Professor of Family and Community Medicine and Geriatrician at the University of Toronto, brings a Canadian perspective and agrees that many of the core challenges are shared across the international community: consent, trust and how much we need the ‘human touch’ in adult social care. This panel sets the scene for the Oxford collaboration to take its work international, advancing the responsible use of generative AI in adult social care at the global level.
With this future direction in mind, Dr Caroline Green, joined by Katie Thorn from Digital Care Hub and Daniel Casson of Casson Consulting, closes the Summit. Reflecting on the new Guidance issued by the Oxford collaboration, one of the key takeaways is collective accountability, encapsulated by “I” and “we” statements:
‘If you’re a person who draws on care and support or someone who works in social care, you can use these “I” statements to check whether technology is being used responsibly. If you’re a provider of care […] or a technology provider or developer, you should consider whether the “we” statements reflect how you’d describe your work.’
Rooted in the UK context, the Oxford collaboration on the responsible use of generative AI in adult social care marks the start of a new approach to AI regulation, with the potential to challenge law and policy makers at the international level, in the care sector and beyond.