Red Herring, Meaningful Human Control and the Autonomous Weapons Systems Debate

By Professor Yuval Shany

Introduction

On 5 October 2023, the UN Secretary-General, António Guterres, and the President of the International Committee of the Red Cross (ICRC), Mirjana Spoljaric, issued a joint call on states to establish rules to limit the use of autonomous weapon systems (AWS). According to the call, the development and proliferation of AWS – defined in the call as “weapon systems that select targets and apply force without human intervention” – pose serious humanitarian, legal, ethical and security risks, including an increased propensity to engage in violent conflicts. As a result, Guterres and Spoljaric urged states to establish by 2026 “specific prohibitions and restrictions” that would give effect to the following standards: (a) retaining human control over life-and-death decisions taken by AWS; (b) banning the military use of unpredictable AWS, including those based on machine learning; (c) restricting other types of use of AWS, including “limiting where, when and for how long they are used, the types of targets they strike and the scale of force used, as well as ensuring the ability for effective human supervision, and timely intervention and deactivation”. These three proposed standards closely mirror standards already advocated by the ICRC in a 2021 position paper on the topic of AWS.

The 2023 call marks an increased interest on the part of the UN and the ICRC in promoting a new regulatory framework for controlling the use of AWS. Under existing international law, it is not fully clear whether, and under which conditions, these weapon systems are deemed unlawful, hence the push for a new standard-setting instrument. Moreover, efforts to directly regulate AWS through traditional weapon control mechanisms have so far failed: Negotiations relating to Lethal Autonomous Weapon Systems (LAWS) have been taking place under the auspices of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW) for a decade now, and have yet to produce a detailed agreement. The one concrete outcome of the CCW process, up until now, has been the endorsement by the states parties in 2019 of a report composed by a Group of Governmental Experts (GGE), identifying 11 guiding principles that should govern the regulatory effort. Given the frustratingly slow pace of the negotiations, and the inability to agree on basic issues, such as whether a new treaty is needed, whether any use of LAWS must be subject to meaningful human control and what qualifies as such (see also here and here), civil society actors have called for temporary restrictions on the development of AWS/LAWS or an outright ban, and some UN bodies have started contemplating other paths for regulation. Among those alternative tracks, one may note Human Rights Council Resolution 51/22 (2022), which asked the Council’s Advisory Committee to study “the human rights implications of new and emerging technologies in the military domain”, and General Assembly Resolution 78/241 (2023), which asked the UN Secretary-General to seek the views of states and other stakeholders on LAWS. These studies could serve as the basis for future initiatives to adopt new international law standards on AWS/LAWS, as some civil society groups hope will happen.

Whether or not one accepts the push for a new legal instrument built around the standards elaborated by Guterres and Spoljaric (see e.g., here on the sceptical position of the US towards such initiatives, and here for academic criticism of anthropocentric objections to AWS/LAWS), the debates over the recent use of military AI by the Israel Defence Forces (IDF) in the Gaza war, and the news regarding the partial activation by the US military of a new operational decision-making platform with dramatic potential for transforming battlefield dynamics, underscore two critical points.

First, while international negotiations regarding AWS/LAWS are moving at a glacial pace, military AI systems are being rapidly developed, deployed and used. As a result, as in other AI domains, the speed of regulation is seriously lagging behind actual events. Moreover, the regulatory efforts appear to be over-focused on some technologies and under-focused on others. In fact, focusing legal, ethical and political debates around AWS/LAWS might be somewhat of a red herring, since it tends to elide many similar dilemmas which confront us already in relation to non-autonomous military AI systems, including questions of meaningful human control and accountability. Given the centrality of such systems to the operation of many contemporary militaries, extending to them, in due course, restrictive standards like those proposed by Guterres and Spoljaric appears unlikely.

Second, the more sophisticated military AI systems become, the less sense the distinction between AWS/LAWS and other military AI makes – a distinction whose application to concrete cases is already controversial, given the differing definitions of what constitutes AWS/LAWS. As a result, regulators should arguably focus less on AWS/LAWS as a distinct category of weapon systems and more on the application of law and ethics to a range of modalities of human-machine interaction in military contexts.

New developments in Military AI

Recent weeks have seen developments relating to the use of military AI in two very different contexts. Although the two military AI systems in question are not covered by the definition of AWS/LAWS, they raise comparable issues of meaningful human control and accountability. The first development involves the intensive debate around the use by the IDF of its target-generating AI system – The Gospel. This system has been described as a sophisticated intelligence data-analysis algorithm that is able to quickly generate targeting recommendations on the basis of real-time operational information, processed against multi-source intelligence materials that are accumulated on an ongoing basis, thereby significantly increasing the pace at which military targeting decisions are generated. To illustrate, according to a 2021 interview with the former IDF Chief of Staff, Aviv Kohavi, the Gospel system, when used in Gaza during active hostilities in May 2021, generated around 100 targets a day, whereas, previously, human-only intelligence analysts generated around 50 targets over the course of a year. The speed at which targets are generated by the Gospel, the difficulties in assessing the accuracy of its performance and the large number of victims in the current Gaza war have given rise to a controversy surrounding the legitimacy of using it: Whereas the IDF describes the system as an “around the clock target factory”, critics refer to it as a “mass assassination factory”; and whereas the IDF claims that the system facilitates more accurate targeting and reduced collateral harm, critics allege that the fast timeline for developing and deploying systems like the Gospel inevitably results in uncertainty about their impact, effectively turning those targeted by the system into “guinea pigs” in a real-life experiment.

To be sure, target-generating AI systems, such as the Gospel, and command and control (C2) systems, such as the Ukrainian military’s Gis Arta (a system which distributes designated targets among available artillery batteries – sometimes referred to as “Uber for artillery”), are not AWS/LAWS. They provide actionable recommendations to human operators, who then decide whether or not to rely on them when engaging a target. Still, the level of control and oversight actually exercised by human operators over these systems might be very limited due to the speed and complexity of the data-analysis processes they engage in, which may exceed human capacities. Put differently, the very features that render the case for the development, deployment and use of such targeting or C2 systems compelling in the eyes of the military – the ability to quickly produce actionable recommendations based on very large amounts of data – could place them effectively beyond the reach of human control and oversight in real time (as opposed to pre-deployment impact assessment and ex post facto monitoring). Under those circumstances, having a human in the loop or on the loop in real time might be merely perfunctory, especially given concerns about automation bias, strong organizational pressures to act quickly and decisively in times of active hostilities, and the limited ability of any single human operator to effectively control complex and multi-part decision-making systems.
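To make the arithmetic behind this concern concrete, consider the following toy sketch (written in Python purely for illustration; the figures, names and thresholds are invented assumptions and do not describe the Gospel, Gis Arta or any actual system). It simply compares the rate at which a system generates recommendations with the rate at which a single operator could give each one considered review:

# Illustrative only: a toy model of "human on the loop" review under time pressure.
# All numbers are invented assumptions, not figures about any real system.

RECOMMENDATIONS_PER_HOUR = 120   # assumed machine output rate
SECONDS_PER_HUMAN_REVIEW = 180   # assumed time for one considered human assessment

def meaningful_review_share(rec_per_hour: float, review_seconds: float) -> float:
    """Fraction of recommendations a single operator could genuinely scrutinise."""
    human_capacity_per_hour = 3600 / review_seconds
    return min(1.0, human_capacity_per_hour / rec_per_hour)

share = meaningful_review_share(RECOMMENDATIONS_PER_HOUR, SECONDS_PER_HUMAN_REVIEW)
print(f"Share of outputs receiving considered human review: {share:.0%}")
# Under these assumed rates, only about 17% of recommendations can be examined in
# depth; the remainder are, in practice, approved or deferred without scrutiny,
# which is the perfunctory control described above.

The point of the sketch is not technical: whatever the real numbers may be, once the output rate exceeds human review capacity, formal approval ceases to function as meaningful control.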

Such concerns are likely to be compounded even further once complex C2 systems, like the US’s Combined Joint All-Domain Command and Control (CJADC2), become fully operational. Such systems gather massive amounts of data from intelligence, surveillance and reconnaissance (ISR) devices, including real-time battlefield information (“sense”), identify concrete operational threats and choices on the basis of the processed data (“make sense”), and recommend action – including through the use of AI-controlled weapon systems (“act”). In February 2024, US Deputy Secretary of Defence, Kathleen Hicks, announced that a “minimum viable capability for CJADC2 is real and ready now”. Given the large number of computerized devices it connects (constituting what is sometimes referred to as the “Internet of military things”), CJADC2 represents a significant expansion of field commanders’ decision-making capabilities; yet, here too, a question arises concerning the human capacity to effectively control or review, in real time, recommended decisions and acts. This is especially so, since some features of the system are expected, according to a senior US Air Force commander involved in developing advanced battlefield management projects, to “happen in the matter of microseconds”. In such scenarios, commanders would need to delegate decision making to the “control node that has the tools and the ability to direct that action”.
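The “sense”, “make sense”, “act” sequence described above can also be sketched schematically (again, a purely hypothetical illustration under invented assumptions; the function names, data and latency threshold are made up and make no claim to reflect CJADC2’s actual design). The sketch shows how, once the latency budget falls below the time any real human judgement requires, the decision is in effect delegated to the automated control node:

# Schematic sense / make-sense / act loop with a latency budget. Entirely
# hypothetical; all names, data and thresholds are invented for illustration.

from dataclasses import dataclass

HUMAN_DECISION_FLOOR_S = 2.0  # assumed minimum time for any real human judgement

@dataclass
class Recommendation:
    threat_id: str
    proposed_action: str

def sense() -> dict:
    # Stand-in for fused ISR and battlefield data ("sense").
    return {"threat_id": "T-001", "signature": "radar_track"}

def make_sense(picture: dict) -> Recommendation:
    # Stand-in for threat identification and option generation ("make sense").
    return Recommendation(picture["threat_id"], "engage")

def act(rec: Recommendation, latency_budget_s: float) -> str:
    # "Act": route to a human where time allows; otherwise the automated
    # control node executes, and human control becomes nominal.
    if latency_budget_s >= HUMAN_DECISION_FLOOR_S:
        return f"route {rec.threat_id} to a human operator for authorisation"
    return f"auto-execute '{rec.proposed_action}' against {rec.threat_id}"

print(act(make_sense(sense()), latency_budget_s=0.000001))  # microsecond-scale budget

On a microsecond-scale budget the sketch never reaches the human branch at all, which is precisely the delegation scenario described by the Air Force commander quoted above.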

Implications for the AWS/LAWS debate

The two recent developments involving the Gospel and CJADC2 underscore the disconnect between the AWS/LAWS debate and some of the pressing challenges presented by other military AI systems which are fully or partly operational. Such systems are already used, in some cases, on a large scale and with many lethal consequences (by contrast, there has been to date only one alleged case of operational use of AWS). Yet, many of the relevant regulatory challenges posed by such military AI systems and by AWS/LAWS are comparable.

Given their operational speed and complexity, AI-based targeting or C2 systems also invite a discussion of whether they could be subject to meaningful human control and what such control involves. Ultimately, the same reasons that lead to the development of such AI systems – to enhance military decision-making capacity beyond that which is available to humans – render them non-transparent and unpredictable for humans. They also make human control and oversight over them difficult, if not impossible. One may note in this regard Geoffrey Hinton’s comments, delivered in a recent speech in Oxford (February 2024), according to which “[t]here are very few examples of more intelligent things being controlled by less intelligent things”. Thus, the more intelligent military AI systems become, the less controllable they might be. Such capacity gaps also raise complicated questions of legal accountability, which pile on top of questions arising from the intricate division of labour and collaboration between the many human and non-human components that make up the decision-making processes in which military AI systems are embedded.

It is therefore somewhat of an anomaly that so much international attention is directed at the dystopian threat posed by AWS/LAWS (captured by popular references to them as “killer robots” or “slaughterbots”), while relatively little attention has been directed at other military AI systems which might also raise serious legal, ethical and policy problems. The fact that a rising number of military AI systems raising questions of meaningful human control and accountability are already operational underscores the need to consider the slow international law reaction to all forms of military AI against the general pattern of late regulatory intervention in AI technology, and the tendency of such interventions to foreground some technological features and background others (see e.g., here). To be sure, the timing of regulatory intervention is critical in the military AI policy domain: attempting to restrict the use of such systems after they have already become fully embedded in military operations is unlikely to succeed. As with other weapons control challenges, states are reluctant to give up powerful weapons they already have in their arsenals, especially when such weapons or weapon systems give them a strategic advantage over their adversaries.

Finally, it is doubtful whether, going forward, the normative distinction between fully autonomous, semi-autonomous and nominally human-controlled weapon systems – pegging legality or legitimacy to technical degrees of machine autonomy rather than to actual practices of use – should be fully retained. Arguably, AI weapon systems belonging to all three categories raise comparable legal, ethical and policy challenges (e.g., meaningful human control, transparency, accountability, effectiveness in mission accomplishment and compliance with legal requirements), which should ideally be addressed in an effective, comprehensive and coherent manner (e.g., extensive ex ante and ex post impact assessments, restrictions on use in certain military scenarios and on certain weapon features, a clear chain of legal responsibility, etc.). Moreover, one can even question whether the suitable framework for analysing military AI systems involving the division of labour and/or collaboration between humans and algorithms is that of traditional weapon systems, as opposed to an analysis of the broader means and methods of warfare they facilitate and partake in. Arguably, an alternative way to conceptualize such human-machine interaction is through the prism of the introduction of human enhancement technology in military contexts – i.e., military AI systems that enhance human decision-making capacity. Such a framework raises, of course, its own set of normative problems, but it might encourage a more holistic engagement with the legal, ethical and policy issues raised by the development and use of hybrid systems governing human-machine interaction in military contexts.

 

The author would like to thank Dr David Barnes and Dr Linda Eggert for their helpful comments on this piece. This piece is partly based on a recent article by the author.