
Harvard Law School, National Security Journal

Growing controversy surrounds the rapid development of artificial intelligence (AI) in weapon systems, often with little consideration of intent or of the variety of potential risks involved.  The following papers provide significant detail and insight regarding the legal aspects of such systems under International Humanitarian Law (IHL).  A key insight is recognition of the temporal aspect of naval missions: long intervals may elapse between direction and execution, without frequent communication, yet the need for human control remains essential throughout.

  • Alan L. Schuller, "At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law," Harvard National Security Journal, vol. 8, no. 2, 30 May 2017, pp. 379-425.  (online, pdf)

Abstract. Lawyers and scientists have repeatedly expressed a need for practical, substantive guidance on the development of Autonomous Weapons Systems (AWS) consistent with the principles of IHL. Less proximate human control in the context of machine learning poses challenges for IHL compliance, since this technology carries the risk that subjective judgments on lethal decisions could be delegated to artificial intelligence (AI). Lawful employment of such technology depends on whether one can reasonably predict that the AI will comply with IHL in conditions of uncertainty. With this guiding principle, the article proposes clear, objective principles for avoiding unlawful autonomy: the decision to kill may never be functionally delegated to a computer; AWS may be lawfully controlled through programming alone; IHL does not require temporally proximate human interaction with an AWS prior to lethal action; reasonable predictability is only required with respect to IHL compliance; and close attention should be paid to the limitations on both authorities and capabilities of AWS.

  • Alan L. Schuller, "Inimical Inceptions of Imminence: A New Approach to Anticipatory Self-Defense Under the Law of Armed Conflict," UCLA Journal of International Law and Foreign Affairs, vol. 18, no. 2, 2014, pp. 161-206.  (online, pdf)

Abstract. The Law of Armed Conflict (LOAC) has historically incorporated the term “imminence” across the bodies of law governing resort to armed force (jus ad bellum) and those which govern during an armed conflict (jus in bello), as an integral part of evaluating the legality of responding to a threat. Since these areas of the LOAC have traditionally been considered separate and distinct, the meaning of imminence within them has likewise been treated as distinguishable. But the modern threat environment, especially following the terrorist attacks of September 11, 2001, has proven that this division of imminence ad bellum and in bello is no longer tenable. Application of the concept of an imminent threat has been incoherent and inconsistent. This Article argues that imminence should be a singular concept that applies logically in any situation and given any threat of armed attack. In making this argument, the Article presents a simple and flexible framework that can be applied by any person or entity even in light of crisis and imperfect information. Finally, it proposes three principles of imminence that can be applied in evaluating the legality of actions in self-defense across the spectrum of armed conflict.

Proper control of remote unmanned systems with weapons capabilities is, of course, fundamentally important for achieving Network Optional Warfare (NOW) goals, namely naval forces operating with far fewer communications vulnerabilities.

Current press reports describe how some industrialists - many of them producing commercial autonomous vehicles with the potential for lethal force - have called for outlawing any form of autonomous weapons.  For military forces, however, the other team doesn't necessarily read the same memos or follow the same rules regarding IHL.  Perhaps pushing such notions to their logical conclusion: if AI is outlawed, will only outlaws have AI?

A further insight emerged from recent group discussions at the Stockton Center for the Study of International Law, Naval War College (NWC), in Newport, RI.  For at least the past century, the operational effectiveness of naval forces has improved in direct relation to the ability of ships to communicate and coordinate, both internally and externally.  Thus more-effective supervised teaming of humans with unmanned systems is not "just" a moral and ethical imperative, nor merely superior control of numerous diverse robotic systems, but also a source of better warfighting capability for our forces that must operate in harm's way.

Humans - qualified military professionals - are trained and committed to meet difficult moral, legal, and ethical challenges in modern warfare.  Hybrid human-machine approaches are increasingly necessary for successful defense.  Colonel Schuller's important papers clarify IHL regarding autonomous lethality, examine key issues relevant to evolving naval operations, and explore the critical thinking behind these fundamental legal principles.
