In my August post, Legal Aspects of Unmanned Systems – Part 1: Civilian Uses, I highlighted legal concerns associated with the application of unmanned systems in civilian settings, including the potential impact of their use on safety, security, privacy, and property rights, as well as the possible application of criminal laws regarding their use. This second post addresses the legal aspects of unmanned weapons systems, and particularly those with greater levels of autonomy.
The introduction of new weapon systems has prompted an ongoing debate over the effectiveness and appropriateness of developing and using unmanned autonomous weapon systems in modern warfare. Several countries are currently developing unmanned aerial and maritime vehicles.
Not every unmanned system is autonomous, of course. Scholars distinguish between unmanned systems that perform tasks directed by humans, and systems operating without a human “in the loop.” Thus, a distinction should be made between military drones, where a human is responsible for firing weapons against targets, and weapon systems that, once activated, are intended to select and engage targets on their own. Systems with such capabilities are known as Autonomous Weapons Systems (AWS) or as “Lethal Autonomous Weapon Systems” (LAWS). In this post I will refer to these systems as LAWS.
Legal Aspects of the Operation of LAWS
Proponents of LAWS argue that, by improving performance, LAWS technology can actually limit harm. They point to various advantages, including their "speed of processing and reaction, lower risks for soldiers and civilians, capacity to do dull, dirty, and dangerous tasks," as well as their capacity to gather intelligence, carry out rescue operations, and fulfill logistics and transport tasks.
The operation of LAWS, however, raises multiple legal challenges.
The complexity involved with the determination of who is accountable for the operation of LAWS has been described as follows:
If a specific given AWS is merely applying a set of preprogrammed instructions, then, presumably its designers and operators are the ones morally responsible for its behavior. But if the AWS in question is a genuine moral discerner in its own right, that appears to shift the locus of responsibility to the automated system itself. And if this is the case, what are the implications for legal liability? Who, if anyone, should bear the legal liability for decisions the AWS makes?
The operation of LAWS has both criminal and civil legal implications.
Issues for consideration include the question of the degree to which a commander ordering the activation of a particular LAWS is liable under international criminal law for war crimes resulting from such use. What about the liability of the designers or manufacturers of LAWS for harm resulting from their operation?
In explaining the difficulty of assigning responsibility, a 2015 study by Human Rights Watch and the Harvard Law School International Human Rights Clinic (IHRC) noted that human commanders or operators could not generally be assigned direct responsibility for the wrongful actions of a fully autonomous weapon. They could only be held accountable in rare cases when it can be shown that they actually “possessed the specific intention and capability to commit criminal acts through the misuse of fully autonomous weapons. In most cases, [however] it would also be unreasonable to impose criminal punishment on the programmer or manufacturer, who might not specifically intend, or even foresee, the robot’s commission of wrongful acts.” A programmer or a manufacturer “might also lack the military operator’s or commander’s understanding of the circumstances or variables the robot would encounter and respond to, which would diminish the likelihood it could be proved they intended the unlawful act.” (Human Rights Watch & IHRC, Mind the Gap, The Lack of Accountability for Killer Robots (April 2015) (IHRC Study), p. 2.)
As with programmers and manufacturers, assigning responsibility to commanders for wrongful actions carried out by fully autonomous weapons would be unlikely. Under the doctrine of indirect responsibility, or command responsibility, superiors may be held accountable only if they knew or should have known of a subordinate’s criminal act and failed to prevent or punish it. The study concludes that “[s]ince robots could not have the mental state to commit an underlying crime, command responsibility would never be available in situations involving these weapons.” (Id.)
Considering the difficulties of imposing criminal liability for harm caused by the operation of LAWS, a commander or a programmer could in principle be held liable for negligence “if, for example, the unlawful acts brought about by robots were reasonably foreseeable, even if not intended.” (Id. p. 3.)
Civil lawsuits involving the operation of LAWS, however, may not be feasible. Beyond failing to carry the level of social condemnation associated with criminal punishment, and the military immunity recognized in some countries, such suits would “likely be expensive, time consuming, and dependent on the assistance of experts who could deal with the complex legal and technical issues implicated by the use of fully autonomous weapons.” (Id.)
According to the IHRC Study, even without these obstacles, due to the “complexity of an autonomous robot’s software” a plaintiff would likely “find it challenging to establish that a fully autonomous weapon was legally defective for the purposes of a product liability suit.” (Id. p. 4.)
International Law and LAWS
The introduction of LAWS into modern warfare raises the question of the extent to which their use is compatible with various rules of international law. Such rules include both the law on the use of force (rules of jus ad bellum) and norms of international humanitarian law (IHL) (jus in bello).
As noted, the IHRC Study concluded that there is currently no mechanism under existing laws to impose liability for harm resulting from LAWS. Computer programmers, manufacturers, and military commanders could therefore all escape liability for unlawful harm caused by LAWS. The study further concluded that even in the future, no clear framework is likely to be established that would assign liability to those involved.
Because LAWS would create an “accountability gap,” the authors of the IHRC Study argued that a complete prohibition on their development, production, and use is necessary. Such a prohibition, the study concluded, should be achieved both through national laws and policies and through an international legally binding instrument “to prevent arms race and proliferation to armed forces with little regard to the law.” (IHRC Study, p. 1.)
The conclusions reached by the IHRC Study are not shared by all. A report by the Geneva Academy of International Humanitarian Law and Human Rights (GA Report) expresses the view that LAWS are not inherently incompatible with the requirements of international law. Instead of a complete prohibition on the development, manufacture, and use of LAWS, the GA Report called for a thorough review of the treaty and customary obligations of all states under international law.
According to the GA Report, LAWS must be used in accordance with existing rules of international law on the use of force. Such rules require states to refrain from threatening or using force against the territorial integrity of other states (article 2(4) of the UN Charter), except with authorization by the UN Security Council to maintain or restore international peace and security (under Chapter VII of the UN Charter); or for self-defense (under article 51 of the Charter).
During war, states are bound by IHL norms that require that they distinguish between civilians and combatants, and between civilian objects and military objectives (rule of distinction); ensure that the incidental loss of civilian life, injury to civilians, or damage to civilian objects is not excessive in relation to the concrete and direct military advantage anticipated (rule of proportionality); and take “feasible” precautions when they carry out attacks to avoid and minimize incidental loss of civilian life, injury to civilians, and damage to civilian objects (rule of precautions in attack).
According to the GA Report, to ensure compliance with these rules, states must conduct legal reviews to assess, in accordance with article 36 of the 1977 Protocol Additional to the Geneva Conventions of 12 August 1949, the compatibility of any new weapon with the Protocol and any other rule of international law. Contrary to the IHRC Study, the GA Report authors argue that current technological limitations provide no excuse for failing to comply with IHL. Furthermore, they opined, LAWS must be able to recognize situations of doubt in distinguishing between civilians and combatants, and must hesitate and refrain from attacking in such situations. The GA Report therefore calls on states to “agree on how exactly proportionality must be calculated and also which parameters influence this calculation.” (GA Report, p. 15.)
Calls for International Agreement on LAWS
To address legal challenges posed by LAWS, a number of scholars have called for some form of international instrument that would apply to autonomous systems.
An informal discussion of questions related to emerging unmanned weapons technologies in the context of the objectives of the Convention on Certain Conventional Weapons (CCW) took place in Geneva, Switzerland during the week of April 13-17, 2015.
According to an advance copy of the Report of the 2015 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), most delegations expressed the view that further clarification of various aspects of the utilization of LAWS was necessary. (Id. ¶ 20.) Delegations reportedly supported the CCW as the right forum in which to continue the discussions, with some delegations proposing that “other fora could complement the CCW debate.” (Id. ¶ 77.)
Issues mentioned for further consideration included “an in-depth examination of legal weapons reviews (Article 36, Additional Protocol I); a discussion on the general acceptability of LAWS in reference to the Martens Clause; ethical issues and the notions of meaningful human control; autonomy in the critical functions, autonomy, command and control; and system-human interaction.” (Id. ¶ 80.)
Current U.S. Department of Defense Policy
The U.S. Department of Defense (DoD) policy and guidelines for the “development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms” are detailed in DoD Directive No. 3000.09 (Nov. 21, 2012).
According to the Directive, it is DoD policy that LAWS will be designed in a way that will allow commanders and operators to exercise appropriate levels of human judgment over the use of force. For example, the Directive states that
Systems will go through rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E) in accordance with the guidelines in Enclosure 2. Training, doctrine, and tactics, techniques, and procedures (TTPs) will be established. These measures will ensure that autonomous and semi-autonomous weapon systems: (a) Function as anticipated in realistic operational environments against adaptive adversaries. (b) Complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement. (c) Are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.
The Directive includes specific requirements relating to the design of physical hardware and software to be “consistent with the potential consequences of an unintended engagement or loss of control of the system to unauthorized parties.” Additional guidelines are provided “[i]n order for operators to make informed and appropriate decisions in engaging targets.”
The development and deployment of LAWS provide yet another fascinating example of where technological advancements make us pause and think about whether and how current laws apply to situations that have not or could not have been foreseen, and about whether new legal frameworks should be established. We will continue to follow legal developments in this area and will report on them through this blog and the Global Legal Monitor as appropriate.