Overview of the Ethics in AI Colloquium

Image: the audience and panel of speakers inside St Luke's Chapel.
Image credit: Maciek, MT Studio/Oxford Atelier

Why Do We Expect More of Machines Than We Do of Ourselves?

On Thursday, 16th November, we were honoured to welcome Mr Justice Fraser for our second Ethics in AI Colloquium this term, entitled "Why Do We Expect More of Machines Than We Do of Ourselves?". Mr Justice Fraser was called to the Bar in 1989, took Silk in 2009, and was appointed to the High Court in 2015; he will take up his new role as Chair of the Law Commission in December. Our two commentators were Dr Linda Eggert and Dr Caroline Green, who are both Early Career Fellows at the Institute for Ethics in AI. The discussion was chaired by the Institute's Inaugural Director, Professor John Tasioulas.

In recent years, the rapid rise of AI has been accompanied by growing concerns not just about the various harms it may cause but also about the ways we allocate responsibility for such harms. We do not think of AI programmes as agents, and the law does not treat them as such. Yet, equally, our product liability law is ill-suited to allocating responsibility for the harms caused by advanced AI. We may need to reform our laws to close the responsibility gaps that new technologies can bring about. The issue of responsibility for harms caused by AI is particularly pressing in the two contexts on which Mr Justice Fraser focused in his remarks: lethal autonomous weapons systems (LAWS) and medical diagnostics.

Mr Justice Fraser described the introduction of LAWS as a potentially fundamental change in the nature of warfare. The US Department of Defense defines LAWS as weapons that, once activated, select and engage their targets without further intervention by a human operator. Mr Justice Fraser noted the many ethical and military advantages that have been claimed for such systems in the debate surrounding them. In particular, they might make war more humane, as they are not subject to the various defects affecting human volition, such as fear or anger, and their introduction might therefore help reduce the number of war atrocities. LAWS are also likely to reduce military casualties on the side of states that deploy them. However, arguably, they might also create greater risks to civilians and combatants, not just because of errors in identifying targets but also because they operate without empathy or mercy. Similar worries arise as AI is increasingly relied on in healthcare. AI programmes are already used in this context, and they have been known to diagnose tumours that human diagnosticians have missed. However, the prospect of value-laden decisions about treatment options being delegated to AI raises serious ethical questions precisely because it removes human emotional responses, like empathy, from the decision-making process.

In her comments, Dr Linda Eggert considered whether the putative advantages of AI systems over human decision-makers, such as the fact that robots are never tired or angry, mean that we would be right to expect more from machines than from humans. While humans have many real flaws, sometimes acting on one's emotions might be a virtue; and in any case, if humans, with all our flaws and biases, create the data on which algorithms are trained, can we really put more faith in them than we do in each other? Without reaching firm conclusions, she observed that if we were right to expect better decisions from machines than from humans, it could be a mistake to worry about respect for human dignity when decisions are delegated to them. Dr Eggert also highlighted the need to update our regulatory frameworks for the age of AI, discussing the appropriate division of labour when it comes to the harms arising from the release of new technologies. Citing the example of ChatGPT, she expressed concern that the responsibility for mitigating the harms brought about by new technologies often falls on individuals who do not benefit from the technologies being released and who are not consulted in advance.

Dr Caroline Green extended the discussion by considering the issue in the context of the provision of social care. At the moment, there are severe staff shortages in the care sector and a major need to improve the standard of service offered to individuals in need of care. Many people hope that AI and digital technologies can help address these shortages. Dr Green acknowledged the many ways in which such technologies may be useful but warned against expecting too much of machines and delegating too many tasks to them without considering other solutions and consulting stakeholders, above all the recipients of social care. Our Colloquium concluded with a lively floor discussion, and we were very grateful to receive a range of interesting questions from our audience.

Written by Konrad Ksiazek (DPhil Student in Law at Balliol College, University of Oxford, Affiliated Student at the Institute for Ethics in AI)