The Case for Regulating Fully Autonomous Weapons
On April 22, 2013, organizations across the world banded together to launch the Campaign to Stop Killer Robots. Advocates called for a ban on fully autonomous weapons (FAWs), robotic systems that can “choose and fire on targets on their own, without any human intervention.”1 Though no such weapon has been fully developed,2 the campaign has gained momentum and attracted the support of international bodies,3 activists,4 and scientists.5 In May 2014, just a year after the campaign began, the states parties to the United Nations Convention on Certain Conventional Weapons met to debate whether a ban on FAWs is warranted.6 In pressing their case, activists cited the Ottawa Treaty, which banned virtually all anti-personnel landmines,7 as a model for full-scale prohibition.8
This Comment takes a different lesson from landmines. Drawing on the 1996 Amended Protocol on the use of landmines9 and other case studies in international law, I argue that regulation, rather than an outright ban, would likely be more effective in ensuring that FAWs comply with international law. This argument begins from the premise that the best approach to FAWs is the one most likely to reduce human suffering. I contend that FAWs are amenable to regulation and that, as a practical matter, regulation is more likely than a ban to induce compliance from countries such as the United States, China, and Russia. Ultimately, I argue that in regulating these weapons systems, nations may well be able to create an administrable legal regime for a new technology of war.
The Comment proceeds in three Parts. Part I defines FAWs and introduces the legal and ethical issues surrounding these weapons. Part II draws on the history of attempts to regulate weapons systems, including landmines, to explain why regulation is the correct response to FAWs. Part III develops a framework based on the Amended Protocol to guide the use of FAWs. Though it is difficult to develop standards for such novel weapons, the momentum around a preemptive ban makes it important to consider whether regulation might instead be an effective response to FAWs. By demonstrating that existing frameworks are capable of regulating FAWs, this Comment aims to integrate FAWs into current debates in international law and to dispel the notion that these weapons raise wholly unique legal challenges.
I. Defining and Critiquing Fully Autonomous Weapons
The definition of a FAW hinges on the distinction between automated and autonomous weapons. Automated weapons incorporate features that perform particular tasks without direct human control, such as an autopilot function, and are common in many militaries. The U.S. Navy, for example, can land unpiloted drones on moving aircraft carriers,10 while both South Korea and Israel have deployed automated sentry guns capable of selecting targets and alerting a human operator, who makes the decision to fire.11 Autonomous weapons, in contrast, “can select and engage targets without further intervention by a human operator.”12 Autonomous weapons, in other words, require no human input—once activated, they possess the power to fire on their own.13 Any weapon that possesses this essential characteristic should be considered a FAW.
Like any weapons system, FAWs must be designed and deployed in accordance with international law. Under Article 36 of Additional Protocol I to the Geneva Conventions,14 nations must review new weapons systems to ensure that they are not indiscriminate by nature or likely to cause unnecessary injury.15 Furthermore, when using a particular weapon, a combatant must apply the familiar rules of international humanitarian law: she must distinguish between combatants and civilians16 and avoid collateral damage disproportionate to the military objective.17 Finally, weapons must be consistent with the Martens Clause, which is found, among other places, in Article 1 of Additional Protocol I to the Geneva Conventions, and is understood to represent customary international law.18 The Martens Clause stipulates that weapons must comply with the “dictates of public conscience,” a nebulous requirement that, read broadly, may render weapons that offend public opinion impermissible.19
It is difficult to evaluate the legality of FAWs under these frameworks because such systems are in their infancy. At present, there are two main strands of criticism leveled against FAWs. First, there is a legal critique, which questions whether FAWs can comply with the principles of distinction and proportionality, and whether anyone can be held responsible when FAWs violate international law. For example, Human Rights Watch (HRW) argues that FAWs will never be able to distinguish between combatants and civilians as well as a human soldier can.20 Groups like HRW contend that machines cannot be designed with human qualities, such as emotion and ethical judgment, which are important to the decision to take a life.21 Opponents also argue that if a FAW violates international law, it is unclear whom to hold responsible—the machine, the commander who authorized its use, or the manufacturer who designed it. Critics charge that, without someone to sanction, international law’s deterrent function is weakened, making violations more likely.22
Second, opponents level an ethical, deontological criticism of FAWs: it is simply wrong to remove humans from the process of killing. According to this line of reasoning, the decision to kill must be reserved for a human decision maker, whatever the ultimate consequences of widespread FAW usage. A related line of critique emphasizes that FAWs, like drones, put distance between human decision makers and the reality of war, thereby desensitizing decision makers to the consequences of their actions and making future wars more likely by reducing their human cost.23
Opponents of FAWs invoke each of these critiques to argue in favor of an absolute, preemptive ban on FAWs.24 In their view, once nations have developed FAW technology, it will be difficult to get them to stop.25 The time to act is now, before the military utility of FAWs has been demonstrated and before other countries develop FAW technology in response. These arguments resonate with apocalyptic images, replete in pop culture, of killer robots run amok—think Terminator.26 Yet they also exaggerate the danger posed by FAWs and underappreciate regulation’s potential to respond to the threats that FAWs do pose.
II. The Argument for Regulation, Rather than Prohibition
This Comment begins from the proposition that the purpose of international humanitarian law is to minimize harm understood in terms of suffering—primarily to civilians, but also to combatants.27 Many have argued that public policy should be guided by consequentialist aims, given, among other things, the differences between individuals and states28 and the inevitability of trade-offs in policymaking.29 Gabriella Blum contends that the argument for consequentialism is particularly strong in the case of armed conflict, which “is about committing evils and choosing between evils.”30 Following Blum’s logic, this Part brackets the deontological critique of FAWs—understood as the view that the use of FAWs is wrong independently of its consequences—and focuses on the possibility of regulatory regimes that minimize suffering in practice. While the deontological critique of FAWs presents a serious challenge, it loses much of its force if the responsible use of FAWs can reduce harm.31
A consequentialist approach focused on minimizing harm also makes less compelling the objection that the use of FAWs reduces accountability. While the “autonomous” nature of FAWs appears to distance decision makers from the harms they inflict, commanders remain responsible for the initial use of FAWs. A commander must give the order to deploy a FAW and set parameters for its use—for example, by instructing that a FAW has X mission and must operate within Y area. In this sense, there is no such thing as a fully autonomous weapon. Any weapon will require human intervention at some point, if only to activate it. The commander is ultimately responsible for using a FAW within its programming and within legal limits. If humans must remain an integral part of the decision to take a life in order for a weapon to fulfill the condition of accountability, then FAWs satisfy this requirement.
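The structure of this human decision can be rendered concrete in a minimal sketch. The code below is illustrative only: the names, fields, and parameters are hypothetical conveniences, not drawn from any actual weapons system, directive, or proposal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DeploymentOrder:
    """Hypothetical record of the human decision that precedes FAW operation."""
    commander: str       # the accountable human decision maker
    mission: str         # the "X mission" the weapon is instructed to carry out
    operating_area: str  # the "Y area" outside which it may not operate

def activate(order: Optional[DeploymentOrder]) -> str:
    """Even a 'fully autonomous' weapon requires human input at activation."""
    if order is None:
        raise ValueError("a FAW cannot activate itself; a commander's order is required")
    return (f"activated by {order.commander}: "
            f"mission={order.mission!r}, area={order.operating_area!r}")

# The commander, not the machine, makes the initial decision and sets its limits.
print(activate(DeploymentOrder("Commander A", "perimeter defense", "sector Y")))
```

On this view, accountability attaches at the moment the order is issued and parameterized, whatever the weapon does afterward within those limits.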
I focus, therefore, on the question whether, as a legal matter, FAWs can be regulated in ways that minimize the suffering that they cause. This Comment argues that they can, for two reasons. First, FAWs are highly amenable to regulation. As quasi- but never fully autonomous systems, FAWs are ripe for a regulatory scheme that provides standards for permissible usage and holds commanders accountable. Second, given the potential military utility of FAWs, states are more likely to comply with regulations than with an absolute prohibition. This point matters because, even if the critics are correct that FAWs will always violate international law, they are wrong to think that prohibition will avert these harms.
A. FAWs May Be Used Lawfully
Whether FAWs can be deployed lawfully depends in part on whether they can be used in a manner that avoids civilian casualties. Given the pace of technological development, it is too early to say that a FAW could never make the contextual and difficult decisions that soldiers must make when distinguishing between combatants and civilians.32 As George R. Lucas, Jr. argues, the critical question is not whether FAWs can “be ethical,” but whether they can perform at the level of a human soldier.33 Human soldiers are not perfect.34 Indeed, robots may have a number of advantages over humans, including superior sensory and computational capabilities, a lack of such emotions as fear and anger, and the ability to monitor and report unethical behavior on the battlefield.35 A robot might also have access to greater information about the value of a target, and hence may be able to make a better determination on an issue like proportionality in the heat of the moment. Even if robots lack quintessentially human characteristics like empathy,36 they may nonetheless be able to respect the rules of combat.
Whatever we might think about the capabilities of FAWs as a general matter, in some circumstances the use of FAWs will be wholly unproblematic. Imagine a robotic submarine operating in an isolated undersea environment, far from any civilians.37 Here the risk of the submarine violating international humanitarian law is exceedingly low. This is a best-case scenario, but it illustrates a broader point: the efficacy and legitimacy of FAWs will depend on the circumstances. In this regard, FAWs are unlike some weapons that have been banned by the international community, like non-detectable fragments and blinding laser weapons, which cause unnecessary suffering no matter how they are used.38 In the future, FAWs may have the ability to patrol an urban area and seek out combatant targets; for now, perhaps they are best left to operate in isolated environments or in a purely defensive capacity. In either case, the need for clear guidelines is not an argument for an outright ban. Rather than prohibiting FAWs writ large, international law should recognize that in some circumstances, they may permissibly be used—bolstering the case for regulation.
Regulating FAWs would also help to resolve issues of compliance and accountability. International law sets out fairly broad standards: weapons must distinguish between civilians and combatants, they may not cause disproportionate collateral damage, and so on. Yet in any given case, there is ambiguity about what the relevant standard requires, and this ambiguity hinders effective compliance and accountability. For instance, a commander, in the heat of battle and with incomplete information, may not know whether a particular use complies with abstract concepts such as distinction or proportionality. Defining the bounds of permissible conduct more precisely via regulation can minimize these concerns.39
For this reason, various actors have recognized the need for guidance regarding FAWs. In 2012, the Department of Defense issued a directive on autonomous weapons, thereby taking a strong first step toward regulation. That directive primarily addresses mechanisms for approving the development of new weapons systems, though it also considers both the levels of autonomy present in a given system and the purposes for which systems may be used.40 The directive also generally dictates that commanders using automated weapons should apply “appropriate care” in compliance with international and domestic law.41 An ideal regulatory scheme would develop beyond this directive: it would be international in nature, would focus more heavily on use, and would provide greater specificity regarding how and when particular systems may be used. A complete regulatory scheme would also tackle other thorny issues, including research, testing, acquisition, development, and proliferation.42 In these early stages, the project of regulation ought to begin with the issue of permissible usage, given that it presents difficult—yet familiar—questions under international law.
B. States Are More Likely To Comply with Regulations
In the previous section, I suggested that not all FAWs present an unacceptable risk of civilian casualties, and, as such, that these weapons are not wholly impermissible. Yet, even if FAWs ought to be categorically rejected, it is not clear that a ban would actually be effective. Robotic weaponry in the form of unmanned drones has already begun to revolutionize the ways in which nations fight wars. At least one military analyst has suggested that fully autonomous weapons will represent the biggest advance in military technology since gunpowder.43 Other commentators have argued that it would be unrealistic to expect major world powers to ban FAWs altogether, especially if some states refused to sign on and continued to develop them.44 FAWs may have significant military utility, and in this respect, they are unlike many other weapons that the international community has banned.45 Even if a ban were successful, moreover, nations might interpret the terms of the ban narrowly to permit further development of FAWs46 or violate the prohibition in ways that escape detection.47 The better approach to ensure compliance overall would be to establish minimum limitations on FAW technology and specific rules governing use.
Two cases, landmines and cluster munitions, help to illustrate this point. The Ottawa Treaty formally banned landmines in 1997. However, several states, including the United States, China, Russia, and India, declined to sign the treaty, invoking military necessity.48 Nations that have refused to sign the Ottawa Treaty have generally complied with the more modest regulations of the Amended Protocol.49 In a similar pattern, several states, invoking claims of military necessity, have declined to sign the Oslo Convention of 2008, which banned cluster munitions.50 Yet these nations have signaled that they would be willing to negotiate a set of regulations under the Convention on Certain Conventional Weapons.51 These cases suggest that nations are unlikely to accept a full ban on weapons that they frequently use. Among those states that are inclined to use FAWs, a more modest attempt to regulate these weapons may result in higher initial buy-in, as well as higher overall compliance with the principles of distinction and proportionality.
In response to this claim, opponents of regulation make a slippery-slope argument, stressing that once nations invest in FAW technology, it will be difficult to encourage compliance with even modest regulations.52 Opponents also point to evidence from the case of landmines suggesting that an absolute prohibition can establish a norm against a weapons system, a norm that buttresses other, more modest regulatory schemes.53 This may be true, but if FAWs turn out to revolutionize warfare, then states may continue to develop them regardless. Furthermore, the causality may work the other way—“soft law” norms, like nation-specific codes of conduct, can often ripen into “hard law” treaties.54 If a ban turns out to be necessary, then it may be easier to build on an existing set of regulations and norms than to create one from scratch. For these reasons, it is important to consider the components of an effective regulatory scheme.
III. Developing a Regulatory Scheme
International law already regulates a category of weapons systems that share important similarities with FAWs: landmines. While several commentators have hinted at these similarities,55 scholars have yet to consider whether FAWs might be regulated much as landmines are.56 This lacuna is striking because the similarities between FAWs and landmines speak directly to core questions of distinction and proportionality under international law.
A. Drawing from the Amended Protocol
Landmines and FAWs share several essential features that affect the level of risk they pose to civilians. First, once activated, landmines and FAWs both possess the capacity to “target” and kill without further human input. For instance, anti-tank landmines possess a rudimentary capacity to discriminate between targets, in the sense that they explode only when a large vehicle travels over them.57 Like FAWs, landmines not only react to an external signal but also potentially possess the ability to distinguish among signals. The extent to which a weapons system can make such distinctions in a given set of circumstances is essential to determining whether it complies with international law.
Second, both landmines and FAWs are used, and threaten individuals, within certain defined parameters.58 Landmines are placed in a particular location. FAWs would also be deployed with a specific set of instructions, which may include limitations on the areas within which they can operate. These qualities affect the likelihood that any system will target (or harm) civilians. In this respect, landmines and FAWs implicate similar questions of distinction.
Landmines and FAWs are also similar in the sense that their permissibility turns on technical characteristics. Landmines can be made with detectable or undetectable materials; they can be triggered by different stimuli; they can be delivered remotely or by hand; and they can be set to deactivate after a certain amount of time has passed. Similarly, FAWs could have better or worse sensors or targeting software, could carry lethal or non-lethal ordnance, and could be mobile or static, among other traits. These features create ample opportunity for the international community to supply specific rules regarding how these weapons can be designed, with the intention of rendering them compliant with international law.
At the same time, FAWs and landmines are dissimilar in ways that render an Ottawa Treaty-style ban inappropriate. Unlike landmines, FAWs will likely be subject to tracking and remote deactivation by design.59 The “temporal indiscriminateness” of landmines, which can kill many years after they are placed,60 is therefore almost non-existent with respect to FAWs. Moreover, FAWs may develop in ways that make them capable of distinguishing between civilians and combatants. Landmines, on the other hand, cannot make this distinction. No inherent feature of FAWs renders these weapons categorically impermissible under international law—within some parameters, they may even perform better than human soldiers. Compared to landmines, then, FAWs present a better case for regulation.
Given the parallels between these weapon systems, the Amended Protocol provides a suitable framework for regulating FAWs. The Protocol, first negotiated in 1980 and amended in 1996 in response to concerns about the proliferation of landmines, provides specific rules governing how landmines may be used.61 Specifically, the Amended Protocol focuses on geographic criteria for determining whether the deployment of mines is permissible.62 First, as a threshold matter, the Protocol does not apply to anti-ship mines,63 suggesting a recognition that—much like in the robot case discussed above64—the use of these weapons at sea raises less problematic issues under international law. Second, the Protocol makes clear that the definition of “indiscriminate” use, a concept from the Geneva Conventions,65 applies to how landmines are placed. The Protocol requires landmines to be deployed around military objectives and requires states to presume that locations “normally dedicated to civilian purposes”—for example, homes and schools—do not constitute legitimate military objectives.66 Relatedly, Article 3, Section 9 of the Amended Protocol provides that distinct military objectives cannot be treated as one objective, and this, in effect, requires states to deploy landmines only in precisely defined areas.67 The Protocol also requires additional protections in order to place certain weapons in areas containing a concentration of civilians and no active military engagement.68
The emphasis on how landmines are deployed stems from features that landmines share with FAWs. The Protocol focuses on commander decision-making because landmines do not require human action to kill once activated. Under the Amended Protocol, commanders may deploy landmines only within certain parameters, defined by how likely they are to contain civilians. These parameters provide a more precise instantiation of what distinction requires.69 The Protocol does not completely foreclose the use of landmines, but rather allows states with legitimate military purposes to deploy them in a limited fashion.70 By limiting how a commander may deploy landmines, the Amended Protocol implicitly attempts to curb the risk of civilian harm from their use.71 The Amended Protocol thus grapples with the same normative challenges that FAWs pose. In this respect, it provides a particularly apt model for regulation of FAWs.
The Amended Protocol for landmines also establishes a framework for the use of a highly technical weapons system, and in this sense it provides precedent for the regulation of weapons like FAWs. The Protocol contains several technical restrictions on landmines and provides other important protections. Several articles of the Protocol, for instance, provide detailed requirements for which characteristics landmines must have and which they may not.72 Article 3, Section 8 prohibits any method of delivery that cannot be directed at a specific military objective, as well as any use that would lead to excessive loss of civilian life.73 The Amended Protocol also contains a catch-all provision requiring states to take all “feasible precautions” against risk of harm to civilians.74 By analogy, a regulation for FAWs could establish specific technical limits, including, for example, baseline requirements for sensory or computational ability and shut-off capabilities. Given that the Amended Protocol set out to answer many concerns similar to those raised by FAWs, it makes sense to draw from this framework in developing a comparable regulatory scheme.
B. A Model Framework for FAWs
The international community should negotiate—or nations should develop—an instrument for FAWs that is analogous to the Amended Protocol for landmines. These rules would focus on delineating parameters under which FAWs can be used consistently with the principle of distinction. Such parameters would be defined by the need to reduce the risk of civilian casualties. They might minimally include the following factors, which the sketch after this list illustrates schematically:
· Characteristics of the FAW. As they develop, FAWs may vary widely in their ability to determine whether individuals are civilians or combatants. A regulatory framework could include technical details, such as the quality of the onboard camera and the FAW’s ability to make more difficult, contextual assessments of intention and behavior.75 In the future, the rule might consider the machine’s capacity to learn from encounters, which may improve its ability to distinguish but may also create a risk that the weapon will exceed its programming.
· Characteristics of the environment. A remote battlefield is unlikely to pose much risk of civilian harm. But a well-trafficked urban area should require the use of a FAW with a heightened capacity to assess targets.76 Until FAW technology develops further, this may mean that the deployment of FAWs in densely populated areas should be per se illegal. More generally, commanders should also refrain from deploying FAWs near locations that are frequented by civilians, like schools and places of worship. Above all, this regulatory factor would focus on the relative concentration of combat forces and civilians.
· Characteristics of the opposing force. Whether enemy forces wear uniforms or insignia or instead blend into the civilian population may make a difference in terms of how well a FAW can distinguish between combatants and civilians. Regulations should include this factor and should require commanders to consider countermeasures used by opposing forces to evade detection. This factor may be dispositive depending on how significantly the opposing force’s evasion techniques affect the FAW’s ability to distinguish.
· Level of residual human control. The degree to which a human operator supervises and is able to override decisions or shut down the FAW is relevant, even if the machine operates in a fully autonomous capacity. The constant supervision of an operator, who can cancel targeting decisions in real time, may strongly point in favor of a weapon’s permissibility. Depending on the other features of the weapon and how they affect its ability to distinguish, oversight might even be explicitly required.
· Other factors. Other considerations that affect whether a FAW may permissibly be used could include weather conditions, ordnance strength and lethality (including non-lethal payloads), whether the weapon is stationary or mobile, whether it is offensive or defensive, and the risk that it will exceed any limitations (for instance, by traveling out of a designated zone).
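To show how these factors might operate together, the following sketch models them as a pre-deployment review. It is purely illustrative: every field, category, and rule is a hypothetical stand-in for standards that negotiators would need to specify, not a proposed legal test.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProposedDeployment:
    """Hypothetical inputs to a pre-deployment legal review of a FAW."""
    can_assess_intent: bool      # characteristics of the FAW
    civilian_density: str        # characteristics of the environment: "none", "low", or "high"
    enemy_in_uniform: bool       # characteristics of the opposing force
    operator_can_override: bool  # level of residual human control
    lethal_ordnance: bool        # one of the "other factors"

def review(d: ProposedDeployment) -> List[str]:
    """Return the objections a reviewing authority might raise; an empty list
    means no factor in this (hypothetical) framework flags the deployment."""
    objections = []
    # Densely populated areas: treated as per se impermissible for now.
    if d.civilian_density == "high":
        objections.append("deployment in a densely populated area")
    # Irregular forces strain a FAW's capacity to distinguish; demand strong
    # sensors plus residual human control as a safeguard.
    if not d.enemy_in_uniform and not (d.can_assess_intent and d.operator_can_override):
        objections.append("insufficient capacity to distinguish irregular forces")
    # Lethal ordnance anywhere near civilians calls for an override channel.
    if d.lethal_ordnance and d.civilian_density != "none" and not d.operator_can_override:
        objections.append("lethal ordnance without residual human control")
    return objections

# An isolated, defensive use raises no objection; an urban use raises several.
print(review(ProposedDeployment(True, "none", True, True, False)))    # []
print(review(ProposedDeployment(False, "high", False, False, True)))  # three objections
```

The point of such a structure is not that legality could be computed mechanically, but that factors of this kind can be specified precisely enough to guide commanders and reviewing authorities alike.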
An international agreement providing rules to govern FAW usage would also need to address the issue of accountability. The Amended Protocol requires states to impose sanctions against individuals who wrongfully use landmines.77 This reflects the intuition that the commander’s decision to deploy is the proper focus of responsibility. We should evaluate responsibility in the context of FAWs the same way. If a commander knowingly deploys a FAW with weak targeting software in the middle of a city, and it kills dozens of civilians, most would likely agree that the commander has committed a crime, or at least should be subjected to sanctions of some kind.78 The standards developed to regulate FAWs should aim to provide commanders with more specific guidance regarding lawful usage, not only to encourage proper use, but also to enable authorities to judge when a commander has used weapons unlawfully.
In this respect, regulation is an attempt not only to limit the harm of war, but also to bolster the rule of law. The commander is liable for his own unlawful act of deployment, like a commander who recklessly orders that artillery be fired in a populated urban area.79 To ensure compliance, states should train commanders in the legitimate use of FAWs.80 If a commander knowingly uses FAWs in a way conducive to harming civilians, he should be punished.
Conclusion
This Comment has attempted to develop an original, yet familiar, framework to guide the use of FAWs. A regulatory strategy that focuses on limiting the most problematic uses of FAWs recognizes that these weapons can be used consistently with international law, and that, in any event, a ban is unlikely to be effective. A preemptive ban on their development is therefore unwarranted. More generally, this Comment has aimed to demonstrate that existing principles of international law are sufficient to circumscribe the use of these weapons. Indeed, the central questions posed by FAWs have been confronted in international law for years with respect to other weapons, like landmines, which can kill long after a human has made the decision to activate them. An effective response to the inevitable development of FAW technology will require the international legal community to engage in a dialogue about analogous regimes—a dialogue that can further the development of international law. This dialogue must recognize one essential fact: a fully autonomous weapon, like any other weapon, is subject to human control, for better or for worse.
JOHN LEWIS*