News: The presentation "Ethical Control of Autonomous Unmanned Systems: A Practical Approach" was given 12 April 2017 at NPS as part of the annual CRUSER Technical Conference (TechCon) 2017.

The backdrop for this work:

  • Roboticists tend to build software systems in unique and dissimilar ways, but nevertheless share a common repertoire of directable capabilities.
  • A number of philosophers view unmanned systems as inherently uncontrollable, and therefore propose international protocols banning their existence.
  • Potential opponents are likely to use such systems as weapons of war regardless, and do not particularly care about ethical command and control.

The gist of our work:

  • Releasing uncontrolled robots at sea with potential for lethal force is not permissible under Law of Armed Conflict (LOAC).
  • Naval officers must act ethically and maintain supervisory control of such devices even if direct communications might be lost.
  • Formal mission orders can describe both tasking and constraints on action that are logically validatable and executable by a wide variety of robots.
  • Identification Friend, Foe, or Neutral (IFFN) is much simpler on the ocean than on land.
  • A feasible path forward exists that allows Navy commanders to similarly task and trust unmanned systems to act - and appropriately avoid acting - just as they might with other trusted human partners.

Abstract.  Autonomous systems can be ethically supervised by humans without constant communications.  Ships have the potential to direct autonomous systems effectively, as trusted partners that do not require constant supervisory control, if robot mission orders clearly complement the mission tasking followed by humans.  Such a capability can make robot operations safer - and Network Optional Warfare achievable - for maritime unmanned systems.

  • Semantic coherence is the primary key for achieving success: consolidating a vast variety of robot dialects into a common set of C2 definitions for task orders and constraints. 
  • Efficient messaging is also necessary since maritime systems can easily lose communication links due to long ranges and severe environmental changes. 
  • Optical signaling can help unmanned systems avoid revealing presence, even acting as "data mules" when important messages need to be delivered covertly.

Adding constraints such as no-fly zones, time limitations, permission prerequisites etc. to mission orders allows operators to legally and ethically control mobile systems that have the potential for deliberate (or unintentional) lethal force. The following papers present results from many work-years of effort, showing that ethical control can be practically achieved by providing parsable (and ethically validatable) orders to diverse unmanned systems.  Continuing work intends to demonstrate such capabilities in full-fidelity simulations and at-sea experimentation.
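The idea of pairing tasking with checkable constraints can be sketched in code. The following is a minimal illustration only, not the project's actual order format: all class and field names (`MissionOrder`, `Constraint`, `requires_permission`, and the constraint kinds) are assumptions chosen for this example. The point is that an order is machine-validatable before any task is released to a vehicle.

```python
from dataclasses import dataclass, field

@dataclass
class Constraint:
    """A single human-defined restriction on robot action (illustrative)."""
    kind: str          # e.g. "no-go-zone", "time-limit", "permission-required"
    parameters: dict

@dataclass
class Task:
    name: str
    requires_permission: bool = False

@dataclass
class MissionOrder:
    """Tasking plus constraints, validated together before execution."""
    tasks: list
    constraints: list = field(default_factory=list)

    def validate(self) -> list:
        """Return a list of violations; an empty list means the order may execute."""
        violations = []
        if not any(c.kind == "time-limit" for c in self.constraints):
            violations.append("every order needs at least one time limit")
        for task in self.tasks:
            if task.requires_permission and not any(
                c.kind == "permission-required" for c in self.constraints
            ):
                violations.append(f"task '{task.name}' lacks a permission prerequisite")
        return violations

order = MissionOrder(
    tasks=[Task("search area Alpha"), Task("engage contact", requires_permission=True)],
    constraints=[Constraint("time-limit", {"hours": 6})],
)
print(order.validate())  # flags the missing permission prerequisite
```

Because validation is a pure check over the order itself, it can be performed by the operator's planning tools, by the vehicle, or by both, before any motion occurs.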

Short-form and long-form papers, presentations:

  • Short form: Davis, Duane T., Brutzman, Donald P., Blais, Curtis L. and McGhee, Robert B., "Ethical Mission Definition and Execution for Maritime Robotic Vehicles: A Practical Approach," MTS/IEEE OCEANS 2016, Monterey, California, USA, 19-23 September 2016, 10 pages. Slides (.pdf)
  • Full length: Brutzman, Donald P., Davis, Duane T., Blais, Curtis L. and McGhee, Robert B., "Ethical Mission Definition and Execution for Maritime Unmanned Systems: A Practical Approach," draft paper for IEEE Journal of Oceanic Engineering, submitted 28 January 2017, 29 pages. Slides (.pdf)

Abstract. Many types of robotic vehicles are increasingly utilized in both civilian and military maritime missions. Some amount of human supervision is typically present in such operations, thereby ensuring appropriate accountability in case of mission accidents or errors. However, there is growing interest in augmenting the degree of independence of such vehicles, up to and including full autonomy. A primary challenge in the face of reduced operator oversight is to maintain full human responsibility for ethical robot behavior.

Informed by decades of direct involvement in both naval operations and unmanned systems research, this work proposes a new mathematical formalism that maintains human accountability at every level of robot mission planning and execution. This formalism is based on extending a fully general model for digital computation, known as a Turing machine. This extension, called a Mission Execution Automaton (MEA), allows communication with one or more "external agents" that interact with the physical world and respond to queries/commands from the MEA while observing human-defined ethical constraints.
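The MEA concept described above can be sketched as a small state machine that consults external agents and refuses to act when a constraint check fails. This is a toy illustration under stated assumptions, not the paper's formalism: the transition-table encoding, the `run_mea` function, and the command and response names are all invented for this example.

```python
# Illustrative MEA-style stepper: before commanding an external agent,
# it checks human-defined ethical constraints and branches on the reply.

def run_mea(transitions, agent, constraints, start="start"):
    """transitions: state -> (command, {response: next_state}).
    Terminal states have no entry. Returns the trace of visited states."""
    state, trace = start, []
    while state in transitions:
        trace.append(state)
        command, branches = transitions[state]
        if not all(check(command) for check in constraints):
            state = "abort"        # constraint violated: refuse to act
            continue
        response = agent(command)  # external agent interacts with the world
        state = branches[response]
    trace.append(state)
    return trace

# Toy mission: search, then report a contact; "engage" is always vetoed.
transitions = {
    "start":  ("search", {"found": "report", "empty": "done"}),
    "report": ("report-contact", {"ack": "done"}),
}
never_engage = lambda cmd: cmd != "engage"
agent = lambda cmd: {"search": "found", "report-contact": "ack"}[cmd]
print(run_mea(transitions, agent, [never_engage]))  # ['start', 'report', 'done']
```

Note that the stepper itself is language independent in the sense the abstract describes: the same transition table could be executed by a robot controller or read as a checklist by a human operator.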

An important MEA feature is that it is language independent and results in mission definitions equally well suited to human or robot execution (or any arbitrary combination). Formal description logics are used to enforce mission structure and semantics, provide operator assurance of correct mission definition, and ensure suitability of a mission definition for execution by a specific vehicle, all prior to mission parsing and execution. Computer simulation examples show the value of such a Mission Execution Ontology (MEO).

The flexibility of the MEA formalism is illustrated by application to a prototypical multiphase area search and sample mission. This paper presents an entirely new approach to achieving a practical and fully testable means for ethical mission definition and execution. This work demonstrates that ensuring ethical behavior during mission execution is achievable with current technologies and without requiring artificial intelligence abstractions for high-level mission definition or control.
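A multiphase mission of the kind mentioned above can be thought of as phases run in sequence, each with its own completion test, aborting cleanly if any phase fails. The sketch below is a loose analogy only; the phase names and `run_phases` helper are assumptions, not the paper's example.

```python
# Hedged sketch: a multiphase mission as sequential phases, each reporting
# success or failure; an abort preserves which phases completed.

def run_phases(phases, vehicle_state):
    """Run (name, phase) pairs in order; each phase mutates state and
    returns True on success. Returns (completed phases, outcome)."""
    completed = []
    for name, phase in phases:
        if not phase(vehicle_state):
            return completed, f"aborted during {name}"
        completed.append(name)
    return completed, "mission complete"

def transit(state):
    state["position"] = "search-area"
    return True

def area_search(state):
    state["contacts"] = ["sample-site-1"]
    return bool(state["contacts"])

def sample(state):
    state["samples"] = len(state["contacts"])
    return state["samples"] > 0

phases = [("transit", transit), ("area search", area_search), ("sample", sample)]
print(run_phases(phases, {}))  # (['transit', 'area search', 'sample'], 'mission complete')
```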

To learn more: NPS AUV Workbench: Ethics and Ethical Control of Unmanned Systems provide additional references and related resources for ongoing work.  Feedback on these important topics is always welcome.