AI in the Law or AI in the Place of Law? Brief Overview

[Image: Rear view of the audience looking towards the panel]

On Thursday 4th May, we had the pleasure of welcoming Professor Gerald J. Postema to our most recent Ethics in AI Colloquium, entitled 'AI in the Law or AI in the Place of Law?'. Professor Postema is a leading voice in legal philosophy and the author of Law's Rule (2023), in which he lays out his conception of the rule of law, built around the three core principles of sovereignty, equality and fidelity. In the book, Professor Postema identifies the rise of AI as one of the leading challenges for the rule of law in modern society, and his chapter on the place of AI in the law served as the starting point for our Colloquium. Professor Postema was joined at the Colloquium by a highly distinguished group of commentators, including Professor Timothy Endicott, Professor Sarah Green and Dr Natalie Byrom. Their discussion was chaired by our Director, Professor John Tasioulas.

[Image: Professor Green and Professor Postema]

The place of AI in the law is a highly pertinent subject because a growing number of voices are advocating for involving AI tools, especially large language models such as ChatGPT, in various types of legal work. AI tools could soon relieve lawyers of monotonous work like document review or drafting standard-term contracts. While this may not be too worrying, it is easy to imagine that, more radically, AI tools could eventually take over the judicial work of drafting judgments and deciding cases. Large language models could potentially predict expected outcomes and their justifications in litigated cases at a fraction of the time and cost of the equivalent work of human judges.

[Image: Professor Green, Professor Postema and Dr Byrom]

This invites a pressing question: what value is lost, if any, when we delegate responsibility for the law to large language models? Postema offers several answers. First, when we put AI in the place of legal decision-makers, we change the nature of legal reasoning: deliberative reason is replaced with calculation, prediction and the tracking of correlations. Second, AI models do not appreciate values and norms as we do, and they cannot weigh them as reasons in the way human judges can. When acting as judges, AI models cannot be moved in their 'reasoning' by a genuine sense of empathy, mercy or common sense. Finally, AI models cannot be accountable and responsible for their decisions as human judges are. Large language models are deeply detached from our society, our values and the reach of our human laws. This is a serious problem. Postema argues that the rule of law must be built upon deep communal bonds between the rulers and the governed. The underlying aim of the rule of law - constraining the arbitrary exercise of power - requires us to take responsibility for holding legal decision-makers accountable. When we delegate judicial decisions and advocacy to entities that are not embedded in our society, we risk losing the ability to do this. At best, we may have assurance that the programs we have created function reliably as intended, in a technical sense. But this is not the kind of accountability we need to protect the rule of law.

[Image: A member of the audience directing a question to the panel]

Professor Postema's impressive talk was followed by a stimulating discussion, with comments and questions from our panel and our audience. Professor Sarah Green wondered whether disputes between commercial parties, where efficiency and predictability often matter more than empathy and mercy, might be better suited to the involvement of AI tools in legal decision-making than disputes in other areas of the law. Dr Natalie Byrom highlighted the need to democratise the debate about involving AI tools in legal work: given that the inefficiencies of our legal system, which AI tools could mitigate, create serious problems for access to justice, those affected by these problems should have a say in the place of AI in the law. Professor Timothy Endicott shared Postema's worry about a general strategy of replacing the rule of law with governance by large language models, but expressed hope that computational modelling may prove extraordinarily useful in aspects of policy-making and implementation that are legitimately managerial (such as the management of traffic flow in a city, as opposed to the determination of liability for traffic offences).


We are hosting two more Ethics in AI Colloquia over the next few weeks. Everyone is warmly invited to join us online and submit questions to our speakers.


Written by Konrad Ksiazek, DPhil Student in Law at Balliol College, University of Oxford, Affiliated Student at the Institute for Ethics in AI


Images by Ian Wallman