
Written by Professor Yuval Shany and Dr Yahli Shereshevsky
The professional discussion around the development and deployment of autonomous weapon systems (AWS) has devoted much attention to the (in)ability of AWS to comply with international humanitarian law (IHL) – the main body of international law norms that regulates the conduct of hostilities in wartime. Critics of AWS argue that, for a variety of reasons, the use of AWS is not likely to adhere to some key rules of IHL. While they raise an important set of concerns about the capacity of AI systems to achieve accuracy and to balance competing values in extremely complex and consequential factual settings, they cannot exclude the possibility that developments in AI technology would enable AWS, at some point in time, to comply with IHL norms as well as, or even better than, humans. In a forthcoming article, to be published in the Columbia Human Rights Law Review, we focus on two normative concerns that arise even under a scenario in which AWS would have the capacity to fully comply with IHL norms. In this post, we briefly present these two concerns and connect them to the pragmatic ethos of IHL.
IHL enters into the picture when the established legal order collapses – that is, after the international law regime prohibiting the use of force has been violated by one or more parties to a conflict. In such non-ideal real-world situations, IHL offers a way to reduce the level of harm and suffering attendant to the conduct of war, rather than stopping wars altogether. In doing so, it does not prohibit per se the use of military force, but aims to balance military necessity against humanitarian considerations. Still, IHL rules do not treat military necessity and humanitarian considerations as fully equivalent to one another: While the protective function of IHL consists mostly of prohibitions on engaging in certain means and methods of warfare in circumstances that could compromise humanitarian interests, the enabling function of IHL permits parties to an armed conflict to use certain types of military force in order to advance their military needs, but leaves them with a choice whether or not to actually use force. In other words, IHL is a regime of both prohibitions and permissions to use military force; it does not contain, generally speaking, a duty to use military force.
The decision to refrain from using force, even when it is legally permissible to do so – for example, to avoid attacking an enemy soldier who does not present an immediate danger to the adversary force – could be based on different reasons. It might be driven by moral considerations relating to the sanctity of life, by human emotions such as compassion, by political interests such as the wish to limit carnage in order to hasten reconciliation after the conflict ends, or it could be made with no apparent reason at all. What is common to all of these motivations is their ultimate result: They lead to more restraint and less destruction and pain than what the rules of IHL allow for. This feature invites reconceptualizing some IHL rules as imposing a normative ‘ceiling’ – capping the overall level of legally-authorized harm in war – and acknowledging the possibility that a significant gap might exist in practice between the operational ‘floor’ – the overall harm actually carried out pursuant to legal authority found in IHL – and the aforementioned normative ‘ceiling’. This gap, we believe, is an essential feature of the humanization of warfare that the drafters of IHL instruments strove to promote, but which has not received sufficient attention in the academic literature on the laws of war. We focus in our research on the implications of this gap, facilitated by the permissive nature of many IHL rules, for the normative debate over the introduction of AWS into the battlefield. Still, we maintain that the said gap has broad normative implications that go beyond the debate over the use of AWS.
If, indeed, the use of AWS would, in the future, fully comply with IHL rules, including the most complicated and context-dependent IHL norms, such as the proportionality rule, it cannot be excluded that they would eventually outperform human beings in giving effect to IHL prohibitions. However, strict compliance with IHL rules might also result in a state of affairs in which an IHL-compliant weapon system is programmed to attack every lawful target it identifies in a relevant theatre of hostilities, practically transforming the legal permission to use force under IHL into a non-discretionary directive to do so. Put differently, such a use of AWS would collapse the operational ‘floor’ into the normative ‘ceiling’. This would qualitatively alter the nature of IHL, and undermine its humanizing mission. We suggest that the potential tension between AWS and the fundamental architecture of IHL as built on many permissive rules serves as a significant argument against the introduction and deployment of AWS.
The use of AWS also implicates the notion of human dignity – a basic moral principle underlying international human rights law (IHRL), which we suggest is strongly connected to the exercise of choice by soldiers regarding the use of military force. The essence of this part of our argument against the use of AWS lies in the disappearance of choice itself. Once AWS replace the human soldier entirely, neither the person deploying the weapon system nor the person facing its consequences can reasonably believe that a non-lethal outcome remains possible. The battlefield thus becomes an arena of a mechanical and deterministic process of cause and effect, with no room for hesitation or change of heart. Whether the human impetus to spare a life from being targeted stems from compassion, instinct, confusion, or a whim, AWS remove altogether that critical choice-making space.
By contrast, when humans remain in control of the application of lethal force, even within highly constrained operational settings, a residue of uncertainty about the choices they make persists. This uncertainty preserves a vital space for deviation from the course of events which IHL authorizes, thereby preventing wars from becoming a fully scripted process of organized violence. In this sense, the presence of human choice, regardless of its factual or normative basis, is what keeps armed hostilities within the realm of human experience. This insight is captured powerfully in the “naked soldiers” stories in Michael Walzer’s Just and Unjust Wars, which describe cases where soldiers, faced with the legal and operational possibility of using force, consciously choose not to do so. The significance of these stories lies not only in the moral implications of the decision not to kill, but also in the insistence on retaining choice and reaffirming the possibility of deviating from the harsh logic of warfare and the associated regulatory boundaries and path determinacies set by the laws of war. The preservation of choice works against reducing combat operations to a programmed application of military necessity, thereby reasserting the human capacity to step outside the four corners of what is legally permitted (but not legally required).
This erosion of human choice on the battlefield strongly relates in our view to the principle of human dignity. An important aspect of human dignity involves the belief that one’s life is invested with meaning and moral worth, and is shaped by individual agency and moral responsibility. A life with dignity depends not only on the ability to choose whether or not to act, but also on the sense that one’s fate is not entirely predetermined from the outside. In conditions of war, where suffering is acute and control over external circumstances is scarce, maintaining the elements that comprise human dignity becomes particularly challenging. What AWS threaten, in this context, is this very sense of authorship over one’s life. By removing the space for human choice relating to the application of weapon systems, AWS create a closed system of cause and effect that denies both the attacker and the targeted individuals any meaningful opportunity to alter the course of events, and to exert influence on how their lives unfold. In other words, AWS strip the encounter between attacking and attacked soldiers (and other affected individuals) of the elements of choice and agency which are integral to human encounters and to the normative expectation that the principle of human dignity should always influence the course of human affairs. We therefore claim that this change in the nature of warfare serves as a significant dignity-based objection against the introduction and deployment of AWS.
There are two important caveats to our arguments. The first is that it might be possible in the future to create AWS that are not just programmed to fully adhere to IHL, but also have the capacity to mimic human behavior and/or sometimes refrain from exercising force in circumstances where the strict letter of the law permits it. If such a technological development becomes a new reality, the strength of our argument would diminish. Still, there are few indications so far of concrete plans to develop AWS in this direction. And even if future advances in AI technology were to make such a development technically feasible, it remains unclear whether it will be possible to preserve compliance with IHL rules and prohibitions, while avoiding certain undesirable features of mimicking human behavior, which may include not only the capacity for the exercise of restraint but also harmful tendencies to use excessive force due to fear, revenge or hatred.
The second qualification is that we do not suggest that the structure of IHL or the importance of preserving an element of choice necessarily justifies an absolute ban against AWS. Since the constitutive ethos of IHL is largely pragmatic in nature and is centered around the need to minimize suffering in war, it is possible that if AWS were to become, at some point in time, extremely successful in minimizing harm and suffering by virtue of a superior ability to adhere to IHL prohibitions, their harm-decreasing benefits might outweigh the costs associated with the competing considerations that we discuss here. This could justify, in turn, resorting to AWS notwithstanding the problems stemming from their interface with permissive IHL rules. Still, the arguments that we raise here should weigh heavily against the introduction of AWS, as they raise the bar for making a pragmatic case in favor of using AWS.
Our article contains two additional parts. The first of those, which we will not elaborate here, explores the doctrinal basis in international law for the normative claims we offer. Specifically, we address both IHL and IHRL as potential doctrinal sources for limiting the use of AWS and the relationship between these two bodies of law. The second additional part contains our position on the emerging debate over the use of Decision Support Systems (DSS) in armed conflicts. The debate over DSS resembles in many ways the debate over AWS, including ongoing disagreement over their capacity to adhere to IHL when identifying and recommending legitimate targets. We limit our discussion to whether our specific arguments against the use of AWS also apply fully to the debate over the use of DSS. We suggest that they generally do not. Several authors highlight the risk that the use of DSS would lead to significant deference to such systems due to automation bias and other incentives to rubber-stamp their recommendations, resulting in less-than-meaningful human control over them. It is not clear to us whether such assertions are adequately supported empirically (see e.g., here); but even if this is the case, human operators still typically retain the choice to refrain from engaging the target recommended by a DSS. In other words, they still have a choice whether to target an enemy soldier, as the laws of war permit, or refrain from doing so on the basis of moral and/or other considerations. We specifically submit, in this regard, that even constrained choices, which might fall short of the emerging international standards of “meaningful human control”, could suffice to address our dignitarian concerns.
In sum, we do not intend to fully revisit here or in our article the broad terms of the debate over the use of AWS and/or DSS in military contexts. Instead, we present two arguments against the use of AWS that are based on the nature of IHL as partly premised on permissive standards which invite some choice regarding their application, and on a notion of human dignity that implies some possibility of interacting with enemy soldiers on the basis of human choice and agency. Still, we accept that the arguments we make are not conclusive in nature, and that other arguments in favor of or against using AWS that exceed the scope of our normative claims may carry more weight over time.
Suggested citation: Shany Y., Shereshevsky Y., ‘Autonomous weapons and the significance of choosing not to use force’, AI Ethics at Oxford Blog (16 June 2025)