The EU’s AI Act at a Crossroads for Rights

Publication Date: 4 December 2023

Image: EU flag and shadows of people, by Savvapanf Photo (Adobe Stock).

by John Tasioulas and Caroline Green

The EU’s proposed AI Act is arguably the most impressive official effort yet to address the complex regulatory challenges posed by AI in a way that promotes market-driven innovation whilst respecting basic rights. On December 6th representatives of the European Parliament, the Council of the EU and the European Commission will continue discussions in Madrid over the final shape of the Act. The gathering will be the fifth trilogue session since the original draft was published in April 2021, highlighting the difficulties of reaching consensus.

In a world in which there is intense competition not only to build increasingly powerful AI systems, but also to set the regulatory agenda that will govern their development and deployment globally, the AI Act is well-placed to exert considerable influence beyond the EU’s borders. This is due to the potent combination of the EU’s market power and its ideological pull, a ‘Brussels effect’ already observed in the case of the General Data Protection Regulation. As a result, the final content of the Act should be of considerable interest to a global audience far beyond the citizens of the EU.

However, in just the past few months, the race to create a global template for AI regulation has intensified, with President Biden’s executive order on AI and the Bletchley Park declaration on AI safety being issued within days of each other. Against this background, the EU’s protracted deliberations over the AI Act may cause it to lose its early lead in the race to shape the global agenda on AI regulation.

But those who hope the AI Act will be a beacon for enlightened rights protection in the age of AI face a serious problem. There is a methodological deficiency in the Act’s approach to regulating AI systems which leads to the systematic under-protection of basic rights. Moreover, the recent proposal by three key member states to exempt foundation models from the Act’s requirements threatens a dangerous widening of this gap in rights protection. The looming outcome on December 6th is the undeserved triumph of the logic of market-driven innovation over the Act’s noble aspirations to uphold basic rights.

In this post we offer a brief account of the AI Act’s rights gap and suggest ways in which it might be addressed.

The AI Act’s Philosophy: Calibrating the Level of Regulation to the Level of Risk

The AI Act adopts a risk-based approach that aims to calibrate the level of regulation to the level of risk posed by different AI systems. In this way, it strives to avoid the twin perils of under-regulation (failing to protect rights and other values) and over-regulation (stifling technological innovation and the efficient operation of the single market). The result is a regulatory approach that relies on classifying AI systems by means of a hierarchy of risk:

  • Prohibited systems:  These are prohibited on the grounds that they pose an unacceptable level of risk, e.g. AI systems used for subliminal manipulation, exploitation of the vulnerable, general purpose social scoring etc. (Title II, Art 5).
  • High risk systems: (Title III Chapter 1 Article 6) These are subject to the most exacting forms of regulation, including the requirement to establish a risk management system and requirements pertaining to high-quality data, documentation and traceability, transparency, human oversight, and accuracy and robustness (Title III Chapter 2).
  • Non-high-risk systems: These are subject either to some obligations of notification and transparency (Title IV, Art. 52) or else are free of mandatory requirements (Explanatory Memorandum (EM) 2.3), although the adoption of voluntary codes of conduct is encouraged in their case (Title IX Art. 69).

The criterion for classifying an AI system as “high-risk”, apart from those AI systems that are safety components of regulated products, takes the form of a revisable list of specific purposes for which AI systems are deployed (Annex III). It is deployment within these purposive domains that renders an AI system “high risk”. The relevant purposive domains include biometric identification and categorisation of natural persons; access to educational and vocational training institutions and assessment; job recruitment and worker management; law enforcement and administration of justice; access to and enjoyment of essential private services and public services and benefits; and migration, asylum, and border control.

A Gap in Rights Protection: The List-Based Method for Determining the Level of Risk

There seems, however, to be a structural flaw in the Act’s methodology, insofar as it relies upon a list-based approach to determining the level of risk. The flaw arises from the fact that it is exceedingly difficult to enumerate, in advance, a manageable set of applications of AI systems that threaten serious rights violations. As a result, AI systems currently designated as “low risk” are potentially not subject to mandatory regulation that offers adequate rights protections.

Consider one of the “high risk” domains, namely AI systems that affect “access to and enjoyment of essential private services and public services and benefits” (Annex III, section 5), as it bears on AI systems operating in the delivery of healthcare services. Sub-section (c) refers to “AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid”.

The above description of the domain excludes non-emergency delivery of medical aid, rendering AI systems in these contexts ‘low risk’. But consider, for example, a general practitioner who uses an AI system to prioritise patients’ appointment requests in non-emergency settings. If such an algorithm systematically discriminated against patients who are women or members of racial or ethnic minorities, it would still not come within the existing “high risk” category. This appears to underestimate the gravity of such discrimination relative to other forms of harm that do come within one of the listed domains.
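To make the structural worry concrete, here is a deliberately simplified sketch (in Python, purely for illustration) of how a list-based classification operates. The purpose strings, the ANNEX_III_PURPOSES set, and the classify_risk function are our own illustrative constructs, not the Act’s legal text; the point is only that a classification keyed to a finite list of purposes is blind to the severity of harm arising from any unlisted purpose:

```python
# Illustrative sketch only: a toy model of the Act's list-based risk
# classification, not the legal text. The purpose strings below are our
# own paraphrases of a few Annex III entries, chosen for this example.

ANNEX_III_PURPOSES = {
    "biometric identification and categorisation of natural persons",
    "access to educational and vocational training and assessment",
    "job recruitment and worker management",
    "dispatching or prioritising emergency first response services",
    "law enforcement and administration of justice",
    "migration, asylum and border control",
}


def classify_risk(deployment_purpose: str) -> str:
    """Return a toy risk label based solely on the stated deployment purpose."""
    if deployment_purpose in ANNEX_III_PURPOSES:
        return "high-risk"
    # Anything unlisted falls through to the residual category, where only
    # voluntary codes of conduct are encouraged (Title IX, Art. 69).
    return "non-high-risk"


# A discriminatory non-emergency triage tool sits outside the list, so the
# purpose-based lookup labels it non-high-risk, regardless of how severe its
# impact on patients' rights may be.
print(classify_risk("prioritising non-emergency GP appointment requests"))
# -> non-high-risk
```

However grave the discrimination, the label turns solely on whether the stated purpose appears on the list, not on the seriousness of the rights at stake.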

This worry generalises beyond healthcare services because the threat of AI systems perpetrating rights violations cuts across their potential domains of operation. Racially discriminatory algorithms in the allocation of cultural or leisure opportunities, for example, also fall outside the existing list of purposive domains. If an algorithm regulating access to theatres or leisure centres systematically discriminated against ethnic minority children, a serious harm would be incurred however innocuous the domain of activity might seem in the abstract. The examples can be multiplied, but the upshot is that the list-based methodology underestimates the pervasive character of the serious risks posed by AI systems.

One response to the problem identified here is to invoke the possibility of the Commission “expanding the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and risk assessment methodology” (EM, 5.2.3). The first problem with this remedy is that, according to Article 7(1)(a), any such addition must still fall within the eight pre-defined areas set out in Annex III, which may itself be unduly constraining. The proposed remedy also seems ad hoc: it involves belatedly playing catch-up whilst potentially sending out a message that downplays the gravity of those harms not currently listed as coming under the “high risk” category.

Widening the Rights Gap: The Proposed Exemption of Foundation Models from Mandatory Regulation

The troubling gap in rights protection identified above will be exacerbated if the EU accepts the proposal advanced last month by Germany, France, and Italy that foundation models should not be subject to mandatory legal regulation.[i] Foundation models are AI systems that can produce a variety of outputs, such as text, images, and audio, operating either in their own right or as a base that can be fine-tuned to create more specific applications. For example, the GPT-3.5 and GPT-4 foundation models are the bases on which popular conversational applications such as ChatGPT and Bing Chat are built.[ii]

The rationale for the proposed exemption is that the AI Act seeks to regulate the application of AI, not the technology as such, and that the final application of foundation models is determined by downstream providers of AI systems. In place of mandatory regulation, the proposal urges that foundation model developers engage in self-regulation by means of a code of conduct without (for the time being) any sanctions regime. The only mandatory element of this self-regulation approach would be the requirement to create model cards for each foundation model. The same proposal imposes compliance responsibilities on downstream providers in line with the list-based approach to risk classification in the Act.

Although the proposal to exempt foundation models is touted as promoting innovation, we agree with the mounting chorus of critical voices that deny that this would facilitate ethically responsible innovation.[iii] Treating foundation models as effectively low risk AI systems would be a failure to address the systematic risks these models pose to basic rights, including physical security, anti-discrimination, privacy, due process, and intellectual property rights, among others. Among the relevant considerations against exempting foundation models are the following.

First, downstream providers using foundation models will typically have far less of the expertise, capacity, or even access needed to comply with the regulatory requirements imposed by the “high risk” classification than the handful of companies (under twenty worldwide) with the capacity to develop such models.

Second, the assumption that classifying foundation models as low risk will promote innovation is questionable. As a recent open letter from leading AI experts has pointed out, requiring SMEs and other downstream deployers to bear the compliance costs and liability risks of potentially unsafe foundation models used in their products would “severely stifle innovation”.

Finally, the claim that regulating foundation models constitutes regulating AI technology itself rather than its applications is little more than a piece of verbal legerdemain. After all, in addition to being able to operate in a stand-alone fashion, the whole point of these models is to be adaptable to many more specific purposes. Flaws in a model could cause unpredictable and systematic harms across the wide range of AI systems built on it. Regulating foundation models would thus be in line with the Act’s focus on rights protection.

In short, adequate protection for rights necessitates that foundation models be designated as automatically falling under the Act’s high-risk classification. This is not to deny that foundation models may pose special problems for regulation, given their high level of adaptability. But this underlines the need for the creative application of robust regulatory standards; it does not justify a blanket regulatory exemption.

What Can Be Hoped For?

It is probably politically unrealistic to expect that the problematic gap in rights protection identified above will be remedied by the EU abandoning the list-based approach to the classification of AI systems. However, a less drastic response to the problem is to fortify the principled character of the Act, and to mitigate the risk of regulatory gaps by directly incorporating, whether as an article or recitals, the ethical guidelines for trustworthy AI developed by the High-Level Expert Group on Artificial Intelligence. [iv]

These guidelines would apply to all AI systems, irrespective of their risk ranking. On this interpretation, the guidelines are the underlying principles that the risk-ranking method implements, but their legal significance is not completely exhausted by their operationalisation through that method. The presence of the guidelines in the Act might help mitigate regulatory gaps with respect to systems classified as non-high-risk.

If the EU exempts foundation models from mandatory regulation, the case for incorporating the ethical guidelines into the text of the Act becomes even more compelling. An additional step that might also be taken, in that event, would be to add mandatory sanctions to the codes of conduct that regulate providers of foundation models.

However, whether the blanket exemption of foundation models will be adopted at the Madrid trilogue remains unclear. Both the European Parliament and the Spanish presidency of the EU Council have insisted on robust regulation for foundation models. More recently, the European Commission has outlined a compromise solution, distinguishing between General Purpose AI models, such as ChatGPT, which would be subject to mandatory regulation, and foundation models, which would be subject to a code of conduct. The Spanish presidency has responded by tweaking the Commission’s compromise to address the Parliament’s concerns.[v]

After the trilogue session on December 6th, we should have a clearer picture of whether the EU will remain a global leader in rights protection in AI regulation. Which path it takes at this crossroads moment will have serious repercussions for the rights of people not only in the EU but throughout the world.

 

[i] Advanced in a joint paper titled ‘An innovation-friendly approach based on European values for the AI Act’.

[ii] For a helpful explainer, see Eliot Jones, ‘Explainer: What is a Foundation Model?’.

[iii] See, for example, the list of critical reactions in this thread on the social media platform X: https://x.com/FLIxrisk/status/1730182806595047868?s=20

[iv] They set out seven key requirements for “trustworthy” AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (2019), p. 14.

[v] See, for example, Luca Bertuzzi, ‘AI Act: Spanish Presidency makes last mediation attempt on foundation models’ (November 29, 2023).

 

Suggested citation: J. Tasioulas and C. Green, ‘The EU’s AI Act at a Crossroads for Rights’, AI Ethics at Oxford Blog (4th December 2023) (available at https://www.oxford-aiethics.ox.ac.uk/eus-ai-act-crossroads-rights)