The Association for Computing Machinery (ACM) is dedicated to Advancing Computing as a Science & Profession.  "We see a world where computing helps solve tomorrow’s problems – where we use our knowledge and skills to advance the profession and make a positive impact."

The ACM Committee on Professional Ethics (COPE) is responsible for promoting ethical conduct among computing professionals, and the ACM Code of Ethics and Professional Conduct is undergoing a comprehensive review. We have offered the following recommendation.

We have reviewed "ACM Code of Ethics and Professional Conduct" (Draft 3) and applaud this important guidance for computing professionals [1].

A significant area of ongoing activity is not yet addressed: remote and autonomous systems.  While several aspects of the draft Code pertain, the design and deployment of such systems pose special challenges that computing professionals must specifically consider.  As noted in the Turing Award Lecture of fifty years ago: ethics, professional behavior, and social responsibility cannot be separated from the diverse fields in which computer science is applied [2].

Many robotic systems are being deployed with high degrees of autonomy, long operational endurance, and independence from direct human supervision.  Examples include self-driving cars, unmanned air vehicles (drones), military sentry vehicles, and many others.  Whether manned, unmanned, civil or military, such systems have significant potential for applying indiscriminate lethal force at a distance.  Complex situations, unforeseen interactions, and emergent behaviors often occur that are beyond the original scope or intent of designers and engineers.

Special considerations are necessary for such machines, since preprogrammed machine responses remain inadequate in isolation.  Protections for human life must be considered and engineered into systems capable of prolonged operations beyond the range of direct remote control.  A critical enabler is available to help: the combination of human judgement and artificial intelligence can yield more effective systems than is possible by either alone [3].  Thus sufficient human supervisory guidance, and permission checks for recognizably dangerous situations, must be available for systems that are allowed to operate autonomously. Simply put: if a human is not in charge, then no one is in charge.

Constraints on action (such as limits of authority, and conditions requiring explicit human approval) can be achieved for remote systems presenting potential hazard to life. For example, recent work has shown that human ethical considerations can be expressed using validatable syntax and logical semantics when defining executable robot missions [4].  Indeed, if ethical approaches combining machine and human capabilities can better ensure human safety, it is unethical to not consider them.  Understanding such issues when engineering systems with autonomy requires the technical expertise and moral judgement of computer-science professionals.
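The paper cited as [4] defines its own validatable mission syntax; purely as a hedged sketch of the idea (the names and structures below are hypothetical illustrations, not the paper's formalism), limits of authority and human-approval conditions can be encoded as data and checked before any action executes:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Constraint:
    """One limit on robot action. Hard limits always forbid the action;
    soft limits defer to an explicit human permission check."""
    name: str
    violated_by: Callable[[dict], bool]   # True if the action breaks this limit
    needs_human_approval: bool

def review_action(action: dict, constraints: List[Constraint],
                  ask_human: Callable[[dict, Constraint], bool]) -> str:
    """Return 'execute' or 'abort' for a proposed action.

    ask_human stands in for the supervisory communications link; if it
    cannot be reached, the safe default is to abort (if a human is not
    in charge, no one is in charge)."""
    for c in constraints:
        if c.violated_by(action):
            if not c.needs_human_approval:
                return "abort"                # hard limit of authority
            try:
                if not ask_human(action, c):  # explicit permission check
                    return "abort"
            except ConnectionError:
                return "abort"                # supervisory link lost
    return "execute"

# Hypothetical constraints for a patrol mission:
PATROL = [
    Constraint("no lethal force without approval",
               lambda a: a.get("lethal", False), needs_human_approval=True),
    Constraint("stay inside the assigned operating area",
               lambda a: not a.get("in_op_area", True), needs_human_approval=False),
]
```

The key design choice in such a sketch is the fail-safe default: any constraint violation without an affirmative human grant resolves to abort, never to proceed.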

We recommend adding a section to the ACM Code that articulates these vital concerns.  A suggested draft Professional Responsibilities paragraph 2.10 follows.


"Recognize potential risks associated with autonomy.  Systems operating remotely or with minimal human supervision (for example, drones or driverless vehicles) may have the capacity for inflicting unintended lethal force.  Safeguards, legal requirements, moral imperatives, and means for asserting direct human control must be considered, in order to avoid the potential for unintended injury or loss of life due to emergent behavior by robotic systems."


Our chosen wording of "must" vice "should" is intentional, since recognizing such risks meets thresholds described in [1] and failure to consider such issues is negligent.

Ethical constraints on robot mission execution are possible today.  There is no need to wait for future developments in Artificial Intelligence (AI). It is a moral imperative that ethical constraints in some form be introduced immediately into the software of all robots that are capable of inflicting unintended or deliberate harm to humans or property.

Very respectfully submitted.

Don Brutzman, Bob McGhee, Curt Blais and Duane Davis
Naval Postgraduate School (NPS), Monterey, California, USA

[1] Don Gotterbarn, Amy Bruckman, Catherine Flick, Keith Miller, and Marty J. Wolf, "ACM Code of Ethics: A Guide for Positive Action," Communications of the ACM (CACM), vol. 61 no. 1, pp. 121-128.

[2] Richard W. Hamming, "One Man's View of Computer Science," ACM Turing Award Lecture, Journal of the ACM (JACM), vol. 16 no. 1, January 1969.

[3] Richard W. Hamming, The Art of Doing Science and Engineering: Learning to Learn, CRC Press, 1997.

[4] Don Brutzman, Curtis Blais, Duane Davis, and Robert B. McGhee, "Ethical Mission Definition and Execution for Maritime Robots under Human Supervision," IEEE Journal of Oceanic Engineering (JOE), January 2018.

As a sneak-peek courtesy, here is an advance copy of a forthcoming publication.

Ethical Mission Definition and Execution for Maritime Robots under Human Supervision

Don Brutzman, Curtis Blais, Duane Davis, and Robert B. McGhee, Naval Postgraduate School (NPS), Monterey, California, USA, 6 DEC 2017.

Submitted to IEEE Journal of Oceanic Engineering for forthcoming special issue on Cutting Edge AUV Technology, planned publication early 2018.

Abstract (from summary flyer).  Experts and practitioners have worked long and hard towards achieving functionally capable robots. While numerous areas of progress have been achieved, ethical control of unmanned systems meeting legal requirements has been elusive and problematic.  Common conclusions that treat ethical robots as an always-amoral philosophical conundrum requiring undemonstrated morality-based artificial intelligence (AI) are simply not sensible or repeatable. Patterning after successful practice by human teams shows that precise mission definition and task execution using well-defined, syntactically valid vocabularies is a necessary first step.  Addition of operational constraints enables humans to place limits on robot activities, even when operating at a distance under gapped communications.  Semantic validation can then be provided by a Mission Execution Ontology (MEO) to confirm that no logical or legal contradictions are present in mission orders.  Thorough simulation, testing and certification of qualified robot responses are necessary to build human authority and trust when directing ethical robot operations at a distance.  Together these capabilities can provide safeguards for autonomous robots possessing the potential for lethal force. This approach appears to have broad usefulness for both civil and military application of unmanned systems at sea.
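The Mission Execution Ontology (MEO) itself is defined in the paper; purely as a loose illustration of the kind of contradiction a semantic validator can catch in draft mission orders before execution (all vocabulary below is hypothetical), consider:

```python
def validate_orders(tasks, granted_authorities, exclusive_pairs):
    """Return a list of problems found in draft mission orders:
    tasks requiring an authority that was never granted, and pairs
    of standing orders that contradict each other."""
    problems = []
    for task in tasks:
        for needed in task.get("requires", []):
            if needed not in granted_authorities:
                problems.append(
                    f"task '{task['name']}' requires ungranted authority '{needed}'")
    names = {task["name"] for task in tasks}
    for a, b in exclusive_pairs:
        if a in names and b in names:
            problems.append(f"orders '{a}' and '{b}' are mutually contradictory")
    return problems

# Hypothetical orders: radio silence contradicts immediate contact reports,
# and lethal engagement was never authorized for this mission.
orders = [
    {"name": "maintain-radio-silence"},
    {"name": "report-contacts-immediately"},
    {"name": "engage-hostile-contacts", "requires": ["lethal-force"]},
]
issues = validate_orders(orders, granted_authorities={"surveillance"},
                         exclusive_pairs=[("maintain-radio-silence",
                                           "report-contacts-immediately")])
```

A real ontology-backed validator would reason over formally defined classes and relationships rather than string pairs; the point of the sketch is only that contradictions in orders are mechanically detectable before a mission is approved.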

Index Terms — autonomous vehicles, robot ethics, Mission Execution Automata (MEA), Mission Execution Ontology (MEO)

Available: publication draft paper and summary flyer.

“Ethical constraints on robot mission execution are possible today.  There is no need to wait for future developments in Artificial Intelligence (AI). It is a moral imperative that ethical constraints in some form be introduced immediately into the software of all robots that are capable of inflicting unintended or deliberate harm to humans or property.”  Robert B. McGhee, April 2016.

Consideration and feedback are welcome.  Contact:


Harvard Law School, National Security Journal

Growing controversy surrounds the rapid development of artificial intelligence (AI) in weapon systems, with little consideration of intent or the variety of potential risks involved.  The following papers provide significant detail and insight regarding actual legal aspects with respect to International Humanitarian Law (IHL).  Key insights include recognition of the temporal aspect associated with naval missions.  Long time intervals may occur between direction and execution, without frequent communication, but the need for human control remains essential throughout.

  • Alan L. Schuller, "At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law," Harvard National Security Journal, vol. 8 no. 2, 30 May 2017, pp. 379-425.  (online, pdf)

Abstract. Lawyers and scientists have repeatedly expressed a need for practical, substantive guidance on the development of Autonomous Weapons Systems (AWS) consistent with the principles of IHL. Less proximate human control in the context of machine learning poses challenges for IHL compliance, since this technology carries the risk that subjective judgments on lethal decisions could be delegated to artificial intelligence (AI). Lawful employment of such technology depends on whether one can reasonably predict that the AI will comply with IHL in conditions of uncertainty. With this guiding principle, the article proposes clear, objective principles for avoiding unlawful autonomy: the decision to kill may never be functionally delegated to a computer; AWS may be lawfully controlled through programming alone; IHL does not require temporally proximate human interaction with an AWS prior to lethal action; reasonable predictability is only required with respect to IHL compliance; and close attention should be paid to the limitations on both authorities and capabilities of AWS.

  • Alan L. Schuller, "Inimical Inceptions of Imminence: A New Approach to Anticipatory Self-Defense Under the Law of Armed Conflict," UCLA Journal of International Law and Foreign Affairs, vol. 18, no. 2, 2014, pp. 161-206.   (online, pdf)

Abstract. The Law of Armed Conflict (LOAC) has historically incorporated the term “imminence” across the bodies of law governing resort to armed force (jus ad bellum) and those which govern during an armed conflict (jus in bello), as an integral part of evaluating the legality of responding to a threat. Since these areas of the LOAC have traditionally been considered separate and distinct, the meaning of imminence within them has likewise been treated as distinguishable. But the modern threat environment, especially following the terrorist attacks of September 11, 2001, has proven that this division of imminence ad bellum and in bello is no longer tenable. Application of the concept of an imminent threat has been incoherent and inconsistent. This Article argues that imminence should be a singular concept that applies logically in any situation and given any threat of armed attack. In making this argument, the Article presents a simple and flexible framework that can be applied by any person or entity even in light of crisis and imperfect information. Finally, it proposes three principles of imminence that can be applied in evaluating the legality of actions in self-defense across the spectrum of armed conflict.

Proper control of remote unmanned systems with weapons capabilities is of course fundamentally important for achieving Network Optional Warfare (NOW) goals, namely naval forces operating with far fewer communications-related vulnerabilities.

Current press reports describe how some industrialists - many of whom themselves produce commercial autonomous vehicles with potential for lethal force - nevertheless call for outlawing any form of autonomous weapons.  For military forces, the other team doesn't necessarily "read the same memos" or follow the same rules regarding IHL. Perhaps pushing such notions to their logical conclusion:  if AI is outlawed, will only outlaws have AI?

A further insight emerged from recent group discussions at the Stockton Center for the Study of International Law, Naval War College (NWC), in Newport, Rhode Island.  For at least the past century, the operational effectiveness of naval forces has improved in direct relation to the ability of ships to communicate and coordinate, both internally and externally.  Thus more-effective supervised teaming of humans with unmanned systems is not "just" a moral or ethical imperative, not "just" superior control of numerous diverse robotic systems, but also better warfighting capability for our forces that must operate in harm's way.

Humans - qualified military professionals - are trained and committed to meet difficult moral, legal and ethical challenges in modern warfare.  Hybrid human-machine approaches are increasingly necessary for successful defense.  Colonel Schuller's important papers clarify International Humanitarian Law (IHL) regarding autonomous lethality, examine key issues relevant to evolving naval operations, and explore the critical thinking behind these fundamental legal principles.

This important essay includes command and control (C2) analytic thinking regarding distributed lethality, decentralized netted fires, adapting mesh networks, and network-enabled vice network-dependent operations.  The author notes "Most of the article’s observations are culminations of several years of analysis by our (NPS) students, faculty, and fellow naval design strategists."

Impacts of the Robotics Age on Naval Force Design, Effectiveness, and Acquisition

Abstract. The twenty-first century will see the emergence of maritime powers that have the capacity and capability to challenge the U.S. Navy for control of the seas. Unfortunately, the Navy’s ability to react to emerging maritime powers’ rapid growth and technological advancement is constrained by its own planning, acquisition, and political processes. Introducing our own technology advances is hindered as well. The planning and acquisition system for our overly platform-focused naval force structure is burdened with so many inhibitors to change that we are ill prepared to capitalize on the missile and robotics age of warfare.

Yet by embracing the robotics age, recognizing the fundamental shift it represents in how naval power is conveyed, and refocusing our efforts to emphasize the “right side” of our offensive kill chain—the side that delivers the packages producing kinetic and nonkinetic effects—we may hurdle acquisition challenges and bring cutting-edge technology to contemporary naval warfare.  Incorporating robotics technology into the fleet as rapidly, effectively, and efficiently as possible would magnify the fleet’s capacity, lethality, and opportunity—all critical to strategic and tactical considerations. Doing so also would recognize the fiscal constraints under which our present force planning cannot be sustained. As Admiral Walker advised above, it is now time to change.

After addressing the traditional foundations of force structure planning and the inhibitors to change, this article will discuss how focusing on the packages delivered rather than the delivery platforms would allow us better to leverage new technologies in the 2030 time frame. What would a naval force architecture look like if this acquisition strategy were employed? This article will present a force-employment philosophy and a war-fighting strategy based on the tactical offensive that align with this acquisition approach. The article does not present an alternative force structure with actual numbers of ships and platforms, but suggests a force-acquisition strategy and force-design concept that provide a foundational underpinning by which a specific force architecture can be developed. Three strategic force measures—reactivity, robustness, and resilience—will be used subjectively to assess this fleet design compared with our traditional programmed forces.

Jeffrey E. Kline, writing in Naval War College (NWC) Review, Issue: 2017 - Summer



So much to read, so little time...  Here are key works that each shed light on important implications of Network Optional Warfare (NOW).  Enjoy!



Ghost Fleet: A Novel of the Next World War by P.W. Singer and August Cole, Mariner Books, Houghton Mifflin Harcourt, 2015.

What will World War III look like? Find out in this ripping, near-futuristic thriller.

The United States, China, and Russia eye each other across a 21st century version of the Cold War. But what if it ever turned hot?

In the spirit of early Tom Clancy, Ghost Fleet is a page-turning imagining of how World War III might play out. But what makes it even more notable is how the book smashes together the technothriller and nonfiction genres. It is a novel, but with 400 endnotes, showing how every trend and technology featured in the book - no matter how sci-fi it may seem - is real. It lays out the future of technology and war, while following a global cast of characters fighting at sea, on land, in the air and in two new places of conflict: outer space and cyberspace. Warship captains battle through a modern-day Pearl Harbor; fighter pilots duel with stealthy drones; teenage hackers battle in digital playgrounds; American veterans are forced to fight as low-tech insurgents; Silicon Valley billionaires mobilize for cyber-war; and a serial killer carries out her own vendetta. Ultimately, victory will depend on who can best blend the lessons of the past with the weapons of the future.


Freedom's Forge: How American Business Produced Victory in World War II by Arthur Herman, Random House, 2012.

Remarkable as it may seem today, there once was a time when the president of the United States could pick up the phone and ask the president of General Motors to resign his position and take the reins of a great national enterprise. And the CEO would oblige, no questions asked, because it was his patriotic duty. In Freedom’s Forge, bestselling author Arthur Herman takes us back to that time, revealing how two extraordinary American businessmen—automobile magnate William Knudsen and shipbuilder Henry J. Kaiser—helped corral, cajole, and inspire business leaders across the country to mobilize the “arsenal of democracy” that propelled the Allies to victory in World War II.
“Knudsen? I want to see you in Washington. I want you to work on some production matters.” With those words, President Franklin D. Roosevelt enlisted “Big Bill” Knudsen, a Danish immigrant who had risen through the ranks of the auto industry to become president of General Motors, to drop his plans for market domination and join the U.S. Army. Commissioned a lieutenant general, Knudsen assembled a crack team of industrial innovators, persuading them one by one to leave their lucrative private sector positions and join him in Washington, D.C. Dubbed the “dollar-a-year men,” these dedicated patriots quickly took charge of America’s moribund war production effort. [...] Featuring behind-the-scenes portraits of FDR, George Marshall, Henry Stimson, Harry Hopkins, Jimmy Doolittle, and Curtis LeMay, as well as scores of largely forgotten heroes and heroines of the wartime industrial effort, Freedom’s Forge is the American story writ large. It vividly re-creates American industry’s finest hour, when the nation’s business elites put aside their pursuit of profits and set about saving the world.


Engineers of Victory: The Problem Solvers Who Turned the Tide in the Second World War by Paul Kennedy, Random House, 2013.

Paul Kennedy, award-winning author of The Rise and Fall of the Great Powers and one of today’s most renowned historians, now provides a new and unique look at how World War II was won. Engineers of Victory is a fascinating nuts-and-bolts account of the strategic factors that led to Allied victory. Kennedy reveals how the leaders’ grand strategy was carried out by the ordinary soldiers, scientists, engineers, and businessmen responsible for realizing their commanders’ visions of success. In January 1943, FDR and Churchill convened in Casablanca and established the Allied objectives for the war: to defeat the Nazi blitzkrieg; to control the Atlantic sea lanes and the air over western and central Europe; to take the fight to the European mainland; and to end Japan’s imperialism. Astonishingly, a little over a year later, these ambitious goals had nearly all been accomplished. With riveting, tactical detail, Engineers of Victory reveals how. [...]

The story of World War II is often told as a grand narrative, as if it were fought by supermen or decided by fate. Here Kennedy uncovers the real heroes of the war, highlighting for the first time the creative strategies, tactics, and organizational decisions that made the lofty Allied objectives into a successful reality. In an even more significant way, Engineers of Victory has another claim to our attention, for it restores “the middle level of war” to its rightful place in history. 


War Plan Orange: The U.S. Strategy to Defeat Japan, 1897-1945 by Edward S. Miller, U.S. Naval Institute, 2007.

Based on twenty years of research in formerly secret archives, this book reveals for the first time the full significance of War Plan Orange—the U.S. Navy's strategy to defeat Japan, formulated over the forty years prior to World War II.  It recounts the struggles between "thrusting" and "cautionary" schools of strategy, the roles of outspoken leaders such as Dewey, Mahan, King, and MacArthur, and the adaptation of aviation and other technologies to the plan.  The book shows that the strategy of Plan Orange was the basis of prewar U.S. naval development in training, ship and aircraft design, and amphibious and tactical thought.

War Plan Orange is the recipient of numerous book awards, including the prestigious Theodore and Franklin Roosevelt Naval History Prize.

Related reading: "The New War Plan Orange" by LCDR Scott Allen USN (Ret.), U.S. Naval Institute Proceedings, vol. 122/8/1,122, August 1996.

Force reductions have weakened the U.S. military’s role as a stabilizing influence in the Western Pacific. Asian leaders see—and plan to fill—the power vacuum. [...] It is time to think about a new War Plan Orange.

Minutemen Class: “Answering the Call by Sea”

NPS Total Ship Systems Engineering (TSSE) 2016 Design Project

Each year students in the NPS Total Ship Systems Engineering (TSSE) program pursue a group project to design an interesting new class of ships for the Navy. The July-December 2016 cohort included 10 U.S. and allied naval officers in the Systems Engineering, Mechanical Engineering and Physics curricula.  Problem statement:

  • Sea control traditionally provided by capital ships is increasingly difficult as advanced sea-denial strategies reduce global maritime security.
  • Distributed Lethality (DL) has the ability to overcome these challenges by forcing adversaries to disperse their defenses into a countering position.
  • To accomplish this mission, conceptual design of an affordable surface vessel capable of offensive surface operations for sea control is needed.

Using modern naval architecture techniques together with operationally relevant design goals, the group produced the conceptual Minutemen class.

  • Concept of Operations (CONOPS) including tactically useful missions.
  • Analysis of alternatives for equipment, weapons, propulsion, fuel, water.
  • Reduced manning and maintenance requirements, austere but livable.
  • Damage Control Ethos: minimal, can take 1-2 hits then abandon ship.

  • Greater access to small ports, increased flexibility in forward logistics.
  • Under $100M production cost, potential repeatable production at scale.

The Minutemen Class Design Project slideset summarizes the findings of this capable group. This significant report shows that a cost-effective small combatant can indeed be designed and produced to fill a gap in the Navy's force structure. 

Also available: Minutemen flyer and Minutemen poster from Surface Navy Association (SNA) 2017 Symposium, plus TSSE brochure and TSSE website.


 Project Motivations, Execution, and Potential Influence

The NPS TSSE project pursued a specific small-combatant design challenge.  It found a solution space, i.e. a range of system parameters, showing that quantity production of a small single-purpose combatant is a feasible construct for U.S. construction and budgets.

This TSSE concept study included multiple in-depth tradeoffs guided by faculty and participant experience, cross-disciplinary systems engineering principles, current engineering practice, and modern computational tools for ship design.  Of note is that software support is steadily becoming more comprehensive and able to handle multiple inputs across multiple interrelated system domains.

The ship design was produced by active-duty naval officers who fully understand the requirements of at-sea operations by a real live crew. An enabling consideration: smaller ships that can be forward deployed with coalition partners do not necessarily have to be underway a majority of the time. 

About 40 naval professionals were in attendance at the project outbrief on 13 December 2016.  No arrogance or magical thinking was noted in any of the student presentations.  Pointed questions were offered and answered during the discussion period.

Wartime is risky and dangerous.  Survival was not the #1 requirement - the crew goes into the water with survival gear to await rescue.  Increased support for that scenario is recommended as future work (the slide showing available technology is pretty lean). It has been observed that reduced livability and survivability, for bearable periods during wartime, is not completely at odds with classic Navy ships and crews who intended to go in harm's way.

As with any ship design, there are numerous competing and interlocking tradeoffs.  This group attempted to break past contemporary fiscal blockers to small-ship combatant design by emphasizing primary mission objectives and relaxing secondary requirements. Engineering evaluation of this candidate design versus Minutemen Class goal requirements is interesting and appears near the end of the slideset.  

The project was not presented as a final recommendation for construction, rather it explored a design space.  Further work is necessary on every aspect to build such a ship.

Several important studies relating to Navy force structure were published at the beginning of the year. It is surprising that none of them examined the potential of small combatants in much detail.  One might conclude that such ships are not currently considered to be a viable option for the future Navy.

The Minutemen Ship Class design provides an excellent baseline for consideration of future littoral operations.  Now in planning: we expect that this design can stimulate continued exploration in upcoming NPS courses, theses, wargames and studies.  Having a buildable ship example (rather than a notional concept) helps downstream analysis to better apply small-ship concepts when considering opportunity/cost/risk/benefit relationships.

Although the notional Minutemen class is not a proposed program for competition, it can nevertheless provide real value in our studies here at NPS.  Having a plausible capability that might exist in 10+ years helps to keep future-alternatives analyses grounded in the art of the possible.  A meaningful small-ship exemplar also helps focus activity in areas that need work (such as deployment and sustainability of unmanned systems), related acquisition needs, and even topics that might otherwise not be considered (such as abandon-ship crew survivability).

Another observation: "we don't have to mumble any more" about what is achievable.  Future NPS graduate-student efforts can provide even-sharper insight on a range of technical, tactical, operational and strategic possibilities.

The Minutemen work is highlighted on the Network Optional Warfare (NOW) blog because small single-purpose combatants are stealthier with less need of constant communications.  Such command and control approaches support the operational concepts of Distributed Lethality (DL).

Caveats: no classified information was utilized or considered.  Students from partner nations provided important value.  If the NPS student team can produce such a potential ship design, then other teams (friendly or not) can also accomplish the same.

Personal assessment: the group's investigation has shown that rebalanced operational requirements for small combatants can be feasible, affordable and repeatable.  Future work along multiple vectors is needed and has important potential value for the Navy.

CAPT George Galdorisi USN (Ret.) recently addressed one of the greatest challenges facing the future Navy: how to effectively develop and deploy unmanned systems that take advantage of both artificial intelligence and human intelligence.

Designing Autonomous Systems for Military Use: Harnessing Artificial Intelligence to Provide Augmented Intelligence


Executive Summary. One of the most rapidly growing areas of innovative technology adoption involves unmanned systems. The U.S. military’s use of these systems—especially armed unmanned systems—is not only changing the face of modern warfare, but is also altering the process of decision-making in combat operations. These systems are evolving rapidly to deliver enhanced capability to the warfighter and seem poised to deliver the next “revolution in military affairs.” However, there are increasing concerns regarding the degree of autonomy these systems—especially armed unmanned systems—should have. Until these issues are addressed, military unmanned systems may not reach their full potential.

The Department of Defense has evolved a comprehensive Unmanned Systems Integrated Roadmap that forecasts the evolution of military unmanned systems over the next quarter-century. Concurrently, funding for unmanned systems is predicted to rise year-over-year for the foreseeable future. Indeed, as the DoD has rolled out a “Third Offset Strategy” to evolve new operational concepts and technologies to deal with emerging peer competitors, autonomous systems and artificial intelligence have emerged as key—even critical—components of that strategy. One of the operational and technical challenges of fielding even more capable unmanned systems is the rising cost of military manpower—one of the fastest growing military accounts—and the biggest cost driver in the total ownership cost (TOC) of all military systems. Because of this, the U.S. military has sought to increase the autonomy of unmanned military systems in order to drive down total ownership cost.

As military unmanned systems have become more autonomous, concerns have surfaced regarding a potential “dark side” of having armed unmanned systems make life-or-death decisions. Some of these concerns emerge from popular culture, such as movies like 2001: A Space Odyssey, Her, and Ex Machina. Whether the movies are far-fetched or not isn’t the point; what is important is that the ethical concerns regarding employing armed unmanned systems are being raised in national and international media. While the DoD has issued guidance regarding operator control of autonomous vehicles, rapid advances in artificial intelligence (AI) have exacerbated concerns that the military might lose control of armed autonomous systems. The challenge for autonomous systems designers is to provide the military not with completely autonomous systems, but with systems with augmented intelligence that provides the operator with enhanced warfighting effectiveness.

The DoD can use the experience of the automotive industry and driverless cars to help shape the degree of autonomy in future unmanned systems. As testing of these vehicles has progressed, and as safety and ethical considerations have emerged, carmakers have tempered their zeal to produce completely autonomous vehicles and have looked to produce cars with augmented intelligence to assist the driver. Harnessing AI to provide warfighters with unmanned systems with augmented intelligence—vice fully autonomous systems—may hold the key to overcoming the ethical concerns that currently limit the potential of military unmanned systems.


George Galdorisi works at Space and Naval Warfare Systems Center Pacific (SSCPAC), San Diego, California.  Full paper and slideset are available online. 


CRUSER TechCon 2017. This work was presented during the CRUSER Technical Continuum (TechCon) at NPS, 11-12 April 2017.  The Consortium for Robotics and Unmanned Systems Education and Research (CRUSER) provides a collaborative environment for the advancement of educational and research endeavors across the Navy and Marine Corps. The Consortium seeks to capitalize on efforts, both internal and external to NPS, by facilitating active means of collaboration, providing a portal for information exchange among researchers and educators with collaborative interests, fostering innovation through directed programs of operational experimentation, and supporting the development of an array of educational ventures.


News. The presentation "Ethical Control of Autonomous Unmanned Systems: A Practical Approach" was given 12 April 2017 at NPS as part of the annual CRUSER Technical Continuum (TechCon) 2017.

The backdrop for this work:

  • Roboticists tend to build software systems in unique and dissimilar ways, but nevertheless share a common repertoire of directable capabilities.
  • A number of philosophers view unmanned systems as inherently uncontrollable, and therefore propose international protocols banning their existence.
  • Potential opponents are likely to use such systems as weapons of war regardless, and do not particularly care about ethical command and control.

The gist of our work:

  • Releasing uncontrolled robots at sea with potential for lethal force is not permissible under Law of Armed Conflict (LOAC).
  • Naval officers must act ethically and maintain supervisory control of such devices even if direct communications might be lost.
  • Formal mission orders can describe both tasking and constraints on action that are logically validatable and executable by a wide variety of robots.
  • Identification Friend, Foe, or Neutral (IFFN) is much simpler on the ocean than it is on land.
  • A feasible path forward exists that allows Navy commanders to similarly task and trust unmanned systems to act - and appropriately avoid acting - just as they might with other trusted human partners.

Abstract.  Autonomous systems can be ethically supervised by humans without constant communications.  Ships have the potential to direct autonomous systems effectively, as trusted partners that do not require constant supervisory control, if robot mission orders clearly complement the mission tasking followed by humans.  Such a capability can make robot operations safer - and Network Optional Warfare achievable - for maritime unmanned systems.

  • Semantic coherence is the primary key to success: consolidating a vast variety of robot dialects into a common set of C2 definitions for task orders and constraints.
  • Efficient messaging is also necessary since maritime systems can easily lose communication links due to long ranges and severe environmental changes. 
  • Optical signaling can help unmanned systems avoid revealing presence, even acting as "data mules" when important messages need to be delivered covertly.

Adding constraints such as no-fly zones, time limitations, and permission prerequisites to mission orders allows operators to legally and ethically control mobile systems that have the potential for deliberate (or unintentional) lethal force. The following papers present results from many work-years of effort, showing that ethical control can be practically achieved by providing parsable (and ethically validatable) orders to diverse unmanned systems.  Continuing work aims to demonstrate such capabilities in full-fidelity simulation and at-sea experimentation.
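The idea of parsable, validatable mission orders can be sketched in a few lines of code. The following is only an illustration (the schema, field names, and rules here are invented for this sketch, not drawn from the papers): a mission order carries both tasking and constraints, and a validator rejects any task that violates a constraint before execution begins.

```python
# Illustrative sketch only (invented schema): a mission order as structured
# data, with constraints checked before any task is allowed to execute.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    area: str            # named operating area
    start_hour: int      # mission clock, hours
    lethal: bool = False # any potential for lethal force?

@dataclass
class MissionOrder:
    tasks: list
    exclusion_areas: set = field(default_factory=set)  # "no-go" zones
    time_limit_hours: int = 24
    lethal_requires_permission: bool = True

def validate(order: MissionOrder):
    """Return a list of violations; an empty list means the order is executable."""
    violations = []
    for t in order.tasks:
        if t.area in order.exclusion_areas:
            violations.append(f"{t.name}: area '{t.area}' is excluded")
        if t.start_hour >= order.time_limit_hours:
            violations.append(f"{t.name}: starts after mission time limit")
        if t.lethal and order.lethal_requires_permission:
            violations.append(f"{t.name}: lethal task requires human permission")
    return violations

order = MissionOrder(
    tasks=[Task("search", "area-bravo", 2),
           Task("engage", "area-alpha", 5, lethal=True)],
    exclusion_areas={"area-charlie"},
)
print(validate(order))  # the lethal task is flagged pending human permission
```

The point of the sketch is that such orders are machine-checkable before launch: an order that fails validation never reaches the vehicle, which is the property the papers formalize.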

Short-form and long-form papers, presentations:

Short form: Davis, Duane T., Brutzman, Donald P., Blais, Curtis L. and McGhee, Robert B., "Ethical Mission Definition and Execution for Maritime Robotic Vehicles: A Practical Approach," MTS/IEEE OCEANS 2016, Monterey, California, USA, 19-23 September 2016, 10 pages. Slides (.pdf).
Full length: Brutzman, Donald P., Davis, Duane T., Blais, Curtis L. and McGhee, Robert B., "Ethical Mission Definition and Execution for Maritime Unmanned Systems: A Practical Approach," draft paper for IEEE Journal of Oceanic Engineering, submitted 28 January 2017, 29 pages. Slides (.pdf).

Abstract. Many types of robotic vehicles are increasingly utilized in both civilian and military maritime missions. Some amount of human supervision is typically present in such operations, thereby ensuring appropriate accountability in case of mission accidents or errors. However, there is growing interest in augmenting the degree of independence of such vehicles, up to and including full autonomy. A primary challenge in the face of reduced operator oversight is to maintain full human responsibility for ethical robot behavior.

Informed by decades of direct involvement in both naval operations and unmanned systems research, this work proposes a new mathematical formalism that maintains human accountability at every level of robot mission planning and execution. This formalism is based on extending a fully general model for digital computation, known as a Turing machine. This extension, called a Mission Execution Automaton (MEA), allows communication with one or more "external agents" that interact with the physical world and respond to queries/commands from the MEA while observing human-defined ethical constraints.

An important MEA feature is that it is language independent and results in mission definitions equally well suited to human or robot execution (or any arbitrary combination). Formal description logics are used to enforce mission structure and semantics, provide operator assurance of correct mission definition, and ensure suitability of a mission definition for execution by a specific vehicle, all prior to mission parsing and execution. Computer simulation examples show the value of such a Mission Execution Ontology (MEO).

The flexibility of the MEA formalism is illustrated by application to a prototypical multiphase area search and sample mission. This paper presents an entirely new approach to achieving a practical and fully testable means for ethical mission definition and execution. This work demonstrates that ensuring ethical behavior during mission execution is achievable with current technologies and without requiring artificial intelligence abstractions for high-level mission definition or control.
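As a loose illustration of the MEA concept (not the formalism defined in the paper, and with all names invented here): a mission script interacts with the physical world only through an external-agent interface, and every command is screened against human-defined constraints before it is executed.

```python
# Loose illustration of the MEA idea (the paper defines it formally as an
# extended Turing machine): the mission runs as a stepwise automaton whose
# only access to the physical world is an external-agent interface, and
# every command is screened against human-defined constraints first.
class ExternalAgent:
    """Stand-in for a vehicle: answers queries, executes approved commands."""
    def __init__(self):
        self.log = []
    def query(self, question):
        # Hypothetical world-state query; always "not identified" in this demo.
        return {"contact_identified": False}.get(question, None)
    def execute(self, command):
        self.log.append(command)
        return "done"

FORBIDDEN = {"engage"}  # human-defined constraint: no engagement without ID

def run_mission(agent, script):
    """Step through the mission; refuse any command that violates constraints."""
    results = []
    for command in script:
        if command in FORBIDDEN and not agent.query("contact_identified"):
            results.append((command, "refused: constraint violated"))
        else:
            results.append((command, agent.execute(command)))
    return results

agent = ExternalAgent()
print(run_mission(agent, ["transit", "search", "engage", "return"]))
```

Because the constraint check sits between the mission script and the agent, the same script could be executed by a human, a robot, or any combination, which is the language-independence property the abstract describes.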

To learn more: NPS AUV Workbench: Ethical Control of Unmanned Systems provides additional references and related resources for ongoing work.  Feedback on these important topics is always welcome.




Maritime Innovation

A Discussion with NPS Faculty and Students

Dean Emeritus Wayne P. Hughes, Jr.

15 December 2016

A better title than Innovation, the achievement of which too many people theorize about without success these days, would be the ossification seen in big organizations like the Navy and what to do about it.  But a talk about ossification, or rigidity, or calcification, or sclerosis, or intransigence would sound too petulant. So after a few words describing two aspects of the contemporary Navy’s problem, I’m going to flip to the other side of the coin and talk about two beacons of innovative success to inspire actions that may free us to get going with productive, affordable innovation. One is in the Navy at large, and one is here at NPS.

Since I came to NPS in 1979 I have seen the Navy become more and more frozen into inaction. Here are two reasons that don’t get the recognition they deserve:

  • Our latest IG inspection report exhibits a Navy-wide attempt to make no mistakes in the belief that the way to be perfect is to follow rules and regulations, instead of seeking new modes of effective teaching, such as distance learning. At the Naval Postgraduate School our number of lawyers, safety inspectors, examinations, briefings, and administrative staff to assure security, perfect record keeping, accounting, suicide prevention, equal opportunity for minorities, women’s rights, flawless energy conservation, and safe travel exhibit Navy priorities on compliance with rules better than my mere words can do. The first evidence of Navy ossification is that we promote people to positions of authority who take no risks, comply with the rules, and never make a mistake. It is a society that rewards doing nothing perfectly.
  • The second problem is one that all big organizations suffer from, but is worst when they are big government organizations. I call it diseconomies of scale. There is such a thing as critical mass. Until an organization is large enough to produce a product effectively it cannot produce one efficiently. Henry Ford and his mass production symbolize the beginning of large-scale, efficient production. He also symbolizes the diseconomies of getting so big that General Motors almost destroyed Ford with a more modern auto while Ford procrastinated and sold Model T’s for “just one more year.” Ford had become efficient but ineffective because its bureaucracy had destroyed its ability to innovate. Fifty years later the whole Detroit auto industry almost died when foreign builders of cars and trucks—aided by cheap transportation—stole our market with better vehicles than Detroit’s stodgy, complacent, auto designers were producing. A modern example is Google. Like the World Wide Web, Wikipedia, and other computer technologies, its critical mass is very big. Google has achieved a critical mass and is enjoying successful economies of scale. But Google, like Apple, must innovate to stay ahead of the competition. If government doesn’t interfere to bail out the losers, or inhibit start-ups who have better ideas, diseconomies of scale in the commercial world will be obvious when the competition forces innovation or death. The high-tech world is littered with corpses of big organizations that could not adapt.

But big government is different. Big government is a monopoly that wants to stay that way. DoD and the Navy are a special case because we won’t know when we suffer from diseconomies of scale until we fail catastrophically in war. Two years ago I wrote an essay called “A Business Strategy for Shipbuilders” that says you don’t have to predict the future to know changes have already happened that should have affected our Navy. We ought to be in a “catch up” mode to recover from five major things that have already happened during the past 20 years:

  • The foremost operational change is that the seas are no longer a safe sanctuary for U. S. fleet operations.
  • The foremost national security change is the nation’s growing debt that threatens to make the existing defense budget unaffordable.
  • The foremost national strategy change resulted from the rise of Chinese maritime interests and ambitions.
  • The foremost technology change was “the revolution in military affairs” with its precision missiles and accurate detection, tracking, and targeting.
  • The foremost impending change underway is the ever larger number of small, versatile, inexpensive, unmanned and increasingly autonomous vehicles.

 In the past we were timely enough in shifting from the battleship era to the carrier era of warfare at sea, but we have missed the transformation to the missile era that started with a successful missile attack on the Israeli destroyer Eilat in 1967. The Israeli navy quickly responded with small Sa’ar boats carrying Gabriel missiles and by 1973 was ready to fight and win the first sea battles with missiles. Israel did that in just six years. By contrast, until very recently the U.S. Navy hadn’t deduced that missile warfare is fundamentally different from carrier warfare because lethal missiles can be distributed in smaller and more numerous warships.

Now there is a new transformation going on. Call it the era of robot and cyber warfare. It is another factor that will make smaller fighting vehicles more valuable and swarm attacks more and more common.

A new book Ghost Fleet by Peter Singer and August Cole describes a big war when China attacks and nearly destroys the U. S. fleet with cyber-attacks and unmanned, sometimes robotic, vehicles. Unlike the surprise air attack on Pearl Harbor in 1941, the Chinese conduct a successful surprise invasion of Oahu and seize the island. The story is all good fun, with the U. S. now in the position of the loser who, Islamist-like, begins our own terrorist attacks with Special Forces assassinating Chinese leaders on the island. But there is nothing imaginary about the Chinese cyber worms that destroyed our ships' ability to fight, or the unmanned strike vehicles that participated in the destruction of our forces defending Hawaii. Read the book to see what innovation has already wrought upon modern warfare at sea.

The 2015 movie called Eye in the Sky introduces the overdone question of robot ethics. It exaggerates the emotional side of unmanned vehicle attacks, but it is on target in exhibiting the advantages of Predator-sized attackers over manned strike aircraft, and the presence of very small, bird- and even bug-sized searchers that are here now (or almost here).

I won’t speak further about Navy innovation as a whole, because the bureaucratic goal of perfection—of never making a mistake—has crippled it. But I can point to an exception that I hope will have more and more influence on the future Navy. Swift progress in the Surface Navy is going on almost unnoticed in the press and on the blogger circuits.

A Real Innovation Underway

Our Surface Forces under the leadership of VADM Tom Rowden are setting an example for the rest of the Navy. In just over one year the Commander of our Surface Forces has accomplished three things that are changing how surface forces will fight effectively and affordably in the rest of the 21st Century. In a word, he has embarked on real innovation to achieve a big change in a short time.

First, Admiral Rowden published a framework called Distributed Lethality to unify all Surface Navy endeavors. Distributed lethality establishes an offensive mindset that will force the enemy to be ever-ready to defend against our sudden surprise attacks. This reverses the surface navy’s longstanding defensive role in CVBG, ESG, and convoy protection. Rowden is specific about creating a team that employs UAVs to detect the enemy so that surface warships can deliver ASCMs to attack the enemy first. He envisions the scout-to-shoot forces embedded in a moving deadly circle that is hard to detect out to a range of 100 nm or more and is designed to attack at a time and place of our choosing.

Second, Admiral Rowden is specifying actions to give his surface forces an immediate, more distributable offensive capability to achieve his intentions insofar as possible with existing ships and aircraft, manned and unmanned. He wants to show the way to achieve distributed lethality during his all too short tour as Commander of Naval Surface Forces. I don’t know everything going on, but a standout event is deploying what is called an AFP, an Adaptive Force Package, comprising an experienced task group commander, three DDGs, and enough UAVs and helicopters to test the performance of the deadly circle in one segment. This month the AFP task group departed on a long cruise to the Western Pacific with plenty of new tactics and tests to try out.

Admiral Rowden has also tapped the NWC and NPS to achieve affordable distributed lethality in the immediate future.

Third, he is specifying actions to take now to make the future Surface Force more distributable. He wants to build a large number of small missile combatants with minuscule crews but lots of firepower. He wants them affordable enough that we can deploy many squadrons of them. How many squadrons? The Chinese have 80 Houbeis or soon will. I want to name our new littoral combatants MINUTEMEN because they will strike silently and unexpectedly and be small enough that when one is detected and put out of action the crew is saved and the ship abandoned. MINUTEMEN squadron tactics will be the opposite of our big expensive warships that must be saved when hit and incapacitated. And there will be many other things he can do to respond to the five changes I cited above that should have already affected how our navy is constructed and will fight in the future. The little MINUTEMEN must be inexpensive—“design to cost” has a bad reputation in the US Navy, but I think $100 million in series production is an absolute top construction cost under a concept that we will never send one through overhaul but instead replace the design after 5 to 15 years with a better one.

Doubtless you will have questions at the end and I will do my best to answer them, but the important thing to note is how quickly innovative thinking can change our Navy, as it did in the Israeli navy.

The Naval Postgraduate School’s Role in Innovation

I close by pointing out how the Naval Postgraduate School is supporting innovative products. There is plenty of rigidity in civilian academia, which is now tolerating frivolous student protests, abusing the tenure system, and promoting based on publication by the pound instead of on usefulness, to name three. I am proud to say NPS has escaped many of these unimaginative academic standards even while overcoming the restrictions of Navy lawyers and the Inspector General. In anticipating Navy needs we have stayed a good five years ahead of the Pentagon in recognizing future opportunities and risks.

  • A standout for many years is unmanned vehicle development under Dave Netzer, Jeff Kline, and now Ray Buettner. NPS UAVs have been a near-perfect example of successful development and swift deployment to the fighting forces.
  • In 2001 our Total Ship Systems Engineering students designed a 400-ton small combatant called Sea Lance. Currently a new TSSE class under Fotis Papoulias and Jake Didoszak is designing a follow-on Sea Lance II, aptly named the MINUTEMAN class, to fill Admiral Rowden’s needs. If the Navy had bought Sea Lance and gained tactical experience, we would be much further along in blending tactics, technology, and ship design for today, at very low cost in construction and manning.
  • The IT curriculum students under Dan Boger are ready now to give Surface Forces an immediate system of C2 that is reliable, adaptable, and hard to detect.  For the past two years, Network Optional Warfare (NOW) operational concepts have enabled students to explore alternative courses of action in support of Distributed Lethality.
  • The Warfare Innovation Continuum draws contributions from almost everywhere across campus to make the Navy aware of the technological and tactical future. It was testing distributed combat systems of many kinds five years before Admiral Rowden had a chance to do something about it. Come to think of it, the Naval War College faculty needs to follow the WIC, led by Jeff Kline, and join in its far-sighted research on campus.
  • Interdisciplinary studies and research are far more common at NPS than in other universities and undergird our uniqueness.
  • An underappreciated asset is our foreign student advantage. Again and again I have seen them bring perspectives that our faculty and students would not otherwise appreciate.
  • The Littoral Operations Center (LOC) has become well known internationally and is highly respected for promoting cooperation in tactics, technology, and operations to fight in the dangerous littorals.
  • The LOC has been valuable in combining different interdisciplinary skills. For example, experimentation with mesh networks for almost silent operations is going on here. In collaboration with Dan Boger’s IT students, the product can make Admiral Rowden’s moving deadly circle more silent and deadly and do it now.

In conclusion I leave you with these thoughts. The recent IG inspection of NPS was about as good as it gets. Nevertheless, if you read the residual criticisms, the inspectors left us with a list of further actions that are all administrative, and all about following the plethora of rules and laws imposed in the desire for perfection over progress. The subsequent 2016 re-inspection report found excellent NPS compliance on a host of administrative issues, but said little about our fundamental mission of serving Navy and Marine Corps needs.

I am reminded of a famous saying I first heard expressed by one of our finest Under Secretaries, Jim Woolsey, in 1975: The three most untrustworthy statements one can hear are, “The check is in the mail;” “Yes I’ll still love you in the morning;” and “I’m from Washington and I’m here to help you.”

Our challenges are clear.  The world is changing rapidly, and the nation needs the Navy to stay abreast.  Maritime innovation for the surface fleet is happening, and collaborative contributions by NPS students and faculty are perhaps more important than ever.  Trust your experience, instincts and knowledge to keep following that path of innovation together.


  1. Ya'ari - a Prophet for Our Times, Network Optional Warfare Blog, 8 May 2014.
  2. Peter Singer and August Cole, Ghost Fleet, Houghton Mifflin Harcourt, 30 June 2015.
  3. Stephen Holden, "'Eye in the Sky,' Drone Precision vs. Human Failings," New York Times, 10 March 2016.
  4. Office of the Naval Inspector General, Report of Investigation, Calhoun Archive, Dudley Knox Library, Naval Postgraduate School (NPS), Monterey, California, 21 November 2012.
  5. Professor Wayne Hughes, A Maritime Business Strategy for Shipbuilders, Department of Operations Research, Naval Postgraduate School, 28 July 2014.
  6. Vice Admiral Thomas Rowden, Rear Admiral Peter Gumataotao, and Rear Admiral Peter Fanta, "Distributed Lethality," U.S. Naval Institute (USNI) Proceedings, January 2015, vol. 141/1/1,343.
  7. Consortium for Robotics and Unmanned Systems Education and Research (CRUSER), Naval Postgraduate School (NPS), Monterey, California.
  8. Total Ship Systems Engineering (TSSE) Program, Naval Postgraduate School (NPS), Monterey, California.
  9. Littoral Operations Center (LOC), Naval Postgraduate School (NPS), Monterey, California.
  10. Adaptive Force Package (AFP): Ryan Kelly, "Distributed Lethality Task Force Launches CIMSEC Topic Week," USNI Blog, 22-28 February 2016.
  11. Scott C. Truver, "Gaming Distributed Lethality," U.S. Naval Institute (USNI) News and Analysis - Opinion, 26 July 2016.
Also available: printable version



Insights from a Decade of Campaign Analysis, Wargaming, Fleet Architecture Studies and Tactical Analysis at the Naval Postgraduate School

Professor Jeff Kline, Captain USN (Retired) and Professor of Practice

Friday 6 May 2016, Ingersoll Hall, NPS, Monterey California USA

Abstract. First given in April to the Washington DC Strategic Discussion Group, then again for Naval War College (NWC) faculty and students pursuing Joint Professional Military Education (JPME) at NPS.  This talk presents the methods used and major trends discovered across more than ten years of warfare analysis from theses, capstone classroom projects, faculty research projects, wargaming, and seminars at the Naval Postgraduate School. It discusses how the missile and robotics age provides enablers for both friendly and potential adversary forces, and how “Blue” forces responded to increasingly challenging sea and air denial capabilities from “Red”.

Professor Kline served in the Navy for 26 years and is now a faculty member in the NPS Operations Research Department. His distinguished naval career includes commands of the USS AQUILA and the USS CUSHING. He has served as a naval analyst in the Office of the Secretary of Defense and earned numerous awards for teaching and research while serving at the Naval Postgraduate School. He has degrees in Industrial Engineering, Operations Research, and National Security Strategy.

Available online: announcement flyer, presentation slideset, and annotated slideset.  Excerpts follow.

Slide 1.  Thank you for allowing me a stage to brag about what I think is one of the most distinctive institutions in America, the Naval Postgraduate School. Over the next few minutes I hope to provide evidence of this claim by showing how combining operationally experienced students with a world-class defense-oriented faculty provides both meaningful graduate education for our officers and real insights into today’s defense challenges.

For me to summarize over 800 warfare analysis papers, 200 classroom capstone studies, and 300 theses and major research projects related to maritime warfare analysis in detail is impossible, so I will do three things today:

  • Stay on script so as not to stray into a detailed discussion until the question period,
  • Provide an overview of how we integrate our graduate education with technology advancements in warfare, and
  • Cover the biggest trends our students and faculty have produced, many of which were originally quite new but are of little surprise today. 

Slide 4.  Each year we have a campus-wide theme and scenario called the Warfare Innovation Continuum for faculty to apply in their classrooms and research if they wish. This year, “Creating Asymmetric Warfighting Advantages” involves over 400 students, faculty and sponsors in capstone classroom projects, thesis work, and research initiatives. It uses a scenario titled Maritime War 2030, which addresses an expansionist Russia and an adventurous China. The Continuum theme persists in research threads for much longer than a year, with many ideas going on to field experimentation.

Slides 16-17, Big Trends Across Ten Years. Some of the trends I’ll address are listed here. The impact of the missile and robotics age can clearly be seen in the way we employ forces and those forces aligned against us. Almost all trends stem from technologies associated with miniaturization, computing power, speed, connectivity, energy, and advances in artificial intelligence. Innovative employment of these technologies (such as ISR sensors on self-propelling surf boards) is the most frequent theme from our Warfare Innovation Continuum series.

Slide 18, Characteristics of Modern Maritime Warfare.

  • Offense is the stronger form of naval tactical warfare.  "Fire Effectively First" (Hughes).
  • Defense is the stronger form of naval operational warfare.  Sea Denial is easier than Sea Control.
  • We observe that the U.S. Navy is currently on the disadvantaged side in both these areas, in warfighting and procurement.

I want to remind us of an important perspective about naval warfare, which is the reverse of the land warrior's view that defense is the stronger form of warfare. The maritime tactical offense is less expensive to employ, and more advantageous, than the maritime tactical defense. Initiatives like distributed lethality are addressing the imbalance between offensive capacity and defensive capacity in our force.

Relevance to Network Optional Warfare (NOW): in addition to showing the academic, technical and tactical contexts that have led to current development of NOW concepts, CAPT Kline's synopsis includes comments on "Who can best fight in the night? (EM night that is)," "Robots Forward!" and "Push C2 to lowest level."

Sustained effort over many years by NPS students and faculty is producing a compelling body of work.  Looking back and looking ahead, it is clear that diverse lines of inquiry are aligning along common directions.  Thank you Jeff for your continuing leadership in these critical arenas.

Article by David Larter, Navy Times, 13 JAN 2016, reporting on Surface Navy Association keynote presentation. Excerpt follows:

The US Navy in Europe is going dark.

The four destroyers in Rota, Spain, and ships operating in 6th Fleet are switching off their radars and sensors to operate with more stealth and train for fighting cyber and electronic attacks, said the Navy's top officer in Europe.

"We need to change the culture in the surface Navy," said Adm. Mark Ferguson, head of Naval Forces Europe and a career surface warfare officer. "As I tell the [commanding officers] in Rota all the time, it's not a decision of what you turn off anymore, it's a decision of why you are turning something on. Why are turning that radar on? ... It has spurred tremendous creativity."

Ferguson told the crowd at the 2016 Surface Navy Association's national symposium last week that forces in Europe are operating almost constantly in some degree of emissions control, turning off radars and reducing communications to prevent jamming and to better mask their location and profile. That practice was routine during the Cold War and is returning as the US faces a newly aggressive Russian military.

"We're having to think about how are we going to get information to the ship if the satellite isn't there, if GPS is down — and we are running exercises where we take those systems away," Ferguson said. "We need to be able to execute the [ballistic missile defense] mission and fight through a network or cyber attack."

Transcribed from video, minutes 19:30-21:20 in the talk:

"... The second piece that I talk about in this asymmetric environment is: we have to change the culture of the surface force.  I tell the COs at Rota that when you get under way, it's not a decision of what you turn off anymore when you get to sea, it's a decision of why you are turning something on.  Why do you have to have that radar?  Why do you have to have... Because in the environment we're in - the days of you get a cyberattack against the network - you can't shut down the network, because you have a mission to do in ballistic missile defense.  You have to execute the mission.  So maybe we isolate the rest of the GIG and we have to figure out how to fight through.  So it has spurred tremendous creativity.

For the folks in the first couple of rows who remember this in the old days...  The COs very much like getting mission orders, they like not having the chat rooms turned on to headquarters, they like being able to have the permission to - you know.  How do I get information to the ship when the satellite may not be there, when GPS may be jammed for example.  How do we start to work in that environment?  We are running exercises taking those capabilities away, and now investing in other systems that help us in the PK to operate in close to shore against advanced cruise missiles, to be able to execute the BMD mission, and be able to fight through the cyber or the network attack. 

This is the area when I look at the asymmetric piece that we have to be ready to move into.  It will be more unmanned, it will be more distributed, it will be more cyber focused as we go forward.  The ship has to understand that it has to maneuver in this space. Weaving the asymmetric into our day-to-day operations is a big thrust of our operations in Europe. The young generation loves it and they're having a great time with it, and coming up with some really marvelous ideas."

ADM Mark Ferguson 


See the article online, and the full presentation video by Admiral Ferguson online with additional flag presentations at Navy Live.

DISA screenshot of network operations center

Software-Centric versus Data-Centric Security

Establishing total information assurance for computer programs is difficult.  Software certification & accreditation (C&A) is necessary and critically important, but it is also a costly and time-consuming process.  The Department of the Navy spends immense amounts of labor, funding, and time to certify and accredit software.  Overhead includes the significant “opportunity cost” of people who must live with tedious workarounds and reduced capabilities while waiting for new software programs to be approved.

For example, software certification prior to installing and running a new application on the Navy Marine Corps Intranet (NMCI) has typically cost sponsoring commands many tens of thousands of dollars - and many months - to accomplish.  The actual work is highly specialized and often performed by contractors, adding further distance and overhead to the overall process.  Once complete (if successful), adding future enhancements and correcting bugs becomes similarly onerous, since follow-on codebase changes must also be carefully examined and tested in order to ensure that new vulnerabilities (either malicious or unintended) have not been introduced. 

A.M. Turing Award history can be instructive – some lessons are timeless.  Here is one important lesson about the limits of software assurance that often seems to be forgotten.

The Turing Award is considered the equivalent of the Nobel Prize for computer science.  Since 1966 it has been awarded annually, with each recipient giving an eagerly anticipated talk describing their work.  The Turing Award Lectures are essential reading and show the evolving foundations of computer science.

In 1983, Dennis Ritchie and Ken Thompson jointly received the Turing Award for their development of generic operating systems theory, and specifically for the implementation of the UNIX operating system.  Ken Thompson’s lecture was Reflections on Trusting Trust, with the subtitle “To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.”  This talk can still surprise: he describes source code that looks like it does one thing, but actually does something quite different.  Here are key excerpts, quoted from the original.

  • Stage I (Figure 1).  In college, before video games, we would amuse ourselves by posing programming exercises. One of the favorites was to write the shortest self-reproducing program. [...]
  • Stage II.  The C compiler is written in C. What I am about to describe is one of many "chicken and egg" problems that arise when compilers are written in their own language.  [...] shows a minimalist self-replicating code algorithm [...] This is a deep concept. It is as close to a "learning" program as I have seen. You simply tell it once, then you can use this self-referencing definition.
  • Stage III.  [...] Figure 6 shows a simple modification to the compiler that will deliberately miscompile source whenever a particular pattern is matched. If this were not deliberate, it would be called a compiler "bug." Since it is deliberate, it should be called a "Trojan horse." [...]
  • The actual bug I planted in the compiler would match code in the UNIX "login" command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user.
  • Moral. The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.

Ken Thompson, Turing Award co-winner
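The Stage I exercise still works as a finger exercise today.  Here is a minimal self-reproducing program in Python (Thompson’s examples were in C); the two code lines, saved as a script by themselves, print an exact copy of themselves:

```python
# A minimal quine: this two-line program prints its own source code.
# The %r conversion inserts the quoted string literal, and %% escapes
# the percent sign so the printed copy reads "print(s % s)".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Stage II of the lecture builds on exactly this kind of self-reference: a compiler that reproduces its own hidden behavior whenever it compiles itself.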

So in effect, Ken Thompson chose his Turing Award moment to reveal to the world that he potentially had superuser and user access to every Unix system and server on the planet.  Further, he revealed that even with a great many people scrutinizing and rebuilding the source code, and even with users banging on Unix daily everywhere, a super password might open each and every account.  Meanwhile no one else knew that the super password existed, much less that it quietly re-propagated itself into each fresh copy of Unix.

No kidding.

What an amazing reveal.  I’ve always imagined that some people in the audience that day might not have waited for the end of the lecture, instead rushing out and calling back to their offices, sounding the alarm to shut down all computer access!  

These fundamental principles and constraints about software testing remain unchanged.  Therefore it is quite reasonable for anyone today to understand that even an extremely rigorous software certification and accreditation evaluation has limits.  Strictly speaking, even the best evaluators can only conclude “we didn’t notice or detect anything bad happening when we tested the codebase.”  Even more worrisome are accompanying disclaimers like “the accredited software is only considered secure when run in a secure operating environment, on secure hardware... at all times.”

Perhaps considering a data-centric point of view can help us.  Dialog in the Data Dilemma MMOWGLI game clearly shows that the Navy depends greatly on – and stands to benefit even more from – data that might be shared broadly.  Data sharing can occur both “outside” with the public and partners, as well as “inside” among Navy stakeholder communities.  Might that data-centric point of view help to improve our information assurance in ways that are beyond the expressive power of software to guarantee?

Data is simpler than software and a lot easier to check.  Data that is frequently used also tends to be well defined.  We ought to take advantage of those traits, in the large, across all of our information systems.  It is time to consider how Data Security might complement Software Security.  

  • Can we create data that is valid, signed, trusted, certified, accredited and secure at birth?  
  • Can we use, re-use, adapt and “mash up” secure data throughout its lifetime and lifecycle?  
  • Can we reduce code complexity and attackable surface within our software applications, by focusing on the full information assurance (IA) of the data they are producing and consuming?
  • Can the same security techniques be used for data in motion, data at rest, and data in use - across multiple applications and also within the cloud?
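As a small illustrative sketch only – not an accredited mechanism – the “secure at birth” idea in the first question can be approximated with standard cryptographic primitives.  Here HMAC-SHA256 with a shared demonstration key stands in for the certificate-based digital signatures a real deployment would require:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # hypothetical key; a real system would use PKI-issued credentials

def sign_record(record: dict) -> dict:
    """Attach a signature at creation time, so the data is 'secure at birth'."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"data": record, "signature": tag}

def verify_record(signed: dict) -> bool:
    """Any later consumer can check integrity, independent of the software in between."""
    payload = json.dumps(signed["data"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

signed = sign_record({"track": "surface", "lat": 36.6, "lon": -121.9})
assert verify_record(signed)       # intact data verifies
signed["data"]["lat"] = 0.0        # tampering anywhere downstream...
assert not verify_record(signed)   # ...is detected at the data layer
```

The point of the sketch is that the check travels with the data itself, so verification does not depend on trusting every application that touched it along the way.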

A good check question for any broad concept is “assume success – then what?”  Let’s apply that test to this potential approach.  If data security can indeed be accomplished to properly complement software security, then here is one possible cybersecurity scenario:

  • Incident: applications in a networked enclave are 100% penetrated by malevolent intruders, who are later detected and locked out.
  • Impact: no unauthorized access to information occurs because all data sets remain secure.

Data-centric security presents worthy challenges… that are beginning to appear feasible.  Open international standards provide major building blocks to work with.  Pieces of this puzzle are getting pushed around right now, with contributions by many thoughtful players in the Data Dilemma game.  Much more expertise is available to provide help on every question… if we can find the right paths forward.  Simply perpetuating the status quo, holding an unchanging course down an unsustainable path, doesn’t scale to meet our growing challenges.

Thirty-two years have passed since Ken Thompson's revelation... I wonder whether anyone is calling back to headquarters yet.

How does the Navy get beyond software barriers to reach the next level of capability: trust for shared data?

Don Brutzman, Naval Postgraduate School (NPS), Monterey California USA

Original publication

This blog post originally appeared in the Data Dilemma (dd) MMOWGLI game, 12 April 2015.

CC4913 Command and Control (C2) Capstone Class Project: report and briefing, April 2015

Point of Contact: Professor Dan Boger, 831.656.3671

CC4913 Policies and Problems in C2 is a capstone course for NPS Command and Control students.  It examines the fundamental role C2 systems fulfill in operational military situations, across the full range of military operations.  Topics include analysis of the changing role of organizational structures and processes, as well as technologies and their impacts on C2 systems requirements and designs.  Considerations include the complexities imposed on C2 systems as the force structure becomes more heterogeneous.  Case studies of selected incidents and systems provide a focus on current problems.

This year’s class was divided into RF and non-RF groups and asked to explore C2 issues in a scenario where we were trying to prevent conflict by “holding at risk” aggressors in a complex political and geographic situation described above. In last year’s scenario, high power jamming only originated from the mainland. Developments in the past few months caused a change in that assumption: all SATCOM uplinks are at risk within 300 miles of fixed bases and large surface ships. This was how our simple communications wargame transpired:

  • Aggressive action by nation X
  • Send in UAV to support ROL (Predator-Global Hawk)
  • Lose UAV SATCOM link
  • Send in missile boats
  • Missile boats tracked via omni-MF and VHF
  • Send in UAV (Shadow-Scan Eagle)
  • Lose UAV CDL
  • Patrol boats to visual range – all RF comm jammed
  • Patrol boats exfil to establish link
  • Picture gone

This led us to consider a combined RF/non-RF solution.

Network Optional Communications (NOW) comprises the following potential methods: lasers, flashing light in various bands, underwater/acoustic, QR codes, and data muling. What we rediscovered is that this is not a question of RF versus non-RF. There is a spectrum of options, and we may decide to operate at some level of EMCON to avoid detection. Or the enemy and weather may conspire to reduce the availability of our network. We found situations where there did not appear to be a viable and elegant RF solution, but a hybrid combination of techniques could meet user requirements. We refer to this as Mission Agile EMCON, which is shown on slide 32 of the accompanying presentation. Slide 34 presents our recommendations.

Attached. Class Project Report and Class Project Briefing.  

This work was presented at the Littoral Combat Ship (LCS) Wargame Planning and Innovation Workshop hosted by the NPS Littoral Operations Center (LOC), 23-24 April 2015.

Disclaimer.  These sources represent unclassified, open-source work performed in an academic environment for the purpose of educating graduate students. The views in this document are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.


Steven Debich, Lieutenant Commander, United States Navy

Thesis, Master of Science in Network Operations and Technology, March 2015

Advisor: Don Brutzman, Department of Information Sciences.  Co-Advisor: Scot Miller, Department of Information Sciences.  Second Reader: Don McGregor, MOVES Institute.

Abstract.  Navy afloat units become disadvantaged users, once disconnected from the pier, due in part to the high latency associated with SATCOM.  Unfortunately, recent gains in SATCOM capacity alone do not overcome throughput limitations that result from latency’s effect on connection-oriented protocols.  To mitigate the effect of latency and other performance-inhibiting factors, the Navy is improving its current WAN optimization capabilities by implementing Riverbed Steelhead WAN optimization controllers (WOCs).  At-sea testing has shown Steelhead increases effective SATCOM capacity by 50%.  Laboratory testing demonstrates that by encoding structured and semi-structured data as EXI rather than XML, compression ratios can be further improved, up to 19 times greater than Steelhead’s compression capability alone.  Combining EXI with Steelhead will further improve the efficient use of existing SATCOM capacity and enable greater operational capabilities when operating in a communications-constrained environment.  Not only does EXI improve the compactness of traffic traveling over relatively high-capacity SATCOM channels, it also extends net-centric capabilities to devices operating at the edge of the network that are restricted to lower-capacity transmission methods.  In order to achieve these substantial improvements the Navy must incorporate the already mandated DISR standard, EXI, as the single standard for all systems transferring structured and semi-structured data.
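The latency effect noted in the abstract can be illustrated with the classic ceiling on connection-oriented throughput: a TCP sender can have at most one window of data in flight per round trip.  The figures below are illustrative assumptions (a classic 64 KiB window and a geostationary round trip), not numbers from the thesis:

```python
# TCP throughput ceiling = window size / round-trip time, regardless of
# how much raw capacity the satellite channel itself offers.
window_bytes = 64 * 1024   # classic maximum TCP window without window scaling
rtt_seconds = 0.55         # ~550 ms round trip through a geostationary satellite

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"Throughput ceiling: {max_throughput_bps / 1e6:.2f} Mbit/s")  # under 1 Mbit/s
```

This is why WAN optimization and compact encodings such as EXI are complementary: one works around the protocol ceiling, while the other shrinks the data that must cross it.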

Received Outstanding Thesis Award from NPS Information Sciences Department.

Keywords: EXI, Efficient XML Interchange, EFX, efficient XML, Riverbed, Steelhead, WAN optimization, compression, long fat network (LFN).

Links: catalog, slideset (.pdf), thesis.


Bruce Hill, Lieutenant, United States Navy

Thesis, Master of Science in Network Operations and Technology, March 2015

Advisor: Don Brutzman, Department of Information Sciences.  Co-Advisor: Don McGregor, MOVES Institute.

Abstract.  Current and emerging Navy information concepts, including network-centric warfare and Navy Tactical Cloud, presume high network throughput and interoperability. The Extensible Markup Language (XML) addresses the latter requirement, but its verbosity is problematic for afloat networks. JavaScript Object Notation (JSON) is an alternative to XML common in web applications and some non-relational databases. Compact, binary encodings exist for both formats. Efficient XML Interchange (EXI) is a standardized, binary encoding of XML. Binary JSON (BSON) and Concise Binary Object Representation (CBOR) are JSON-compatible encodings. This work evaluates EXI compaction against both encodings, and extends evaluations of EXI for datasets up to 4 gigabytes. Generally, a configuration of EXI exists that produces a more compact encoding than BSON or CBOR. Tests show EXI compacts structured, non-multimedia data in Microsoft Office files better than the default format. The Navy needs to immediately consider EXI for use in web, sensor, and office document applications to improve throughput over constrained networks. To maximize EXI benefits, future work needs to evaluate EXI’s parameters, as well as tune XML schema documents, on a case-by-case basis prior to EXI deployment. A suite of test examples and an evaluation framework also need to be developed to support this process.
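Why binary encodings compact well can be seen even without an EXI or CBOR library.  When both sides share the schema, the self-describing markup can be dropped entirely; the record layout below is a made-up example used only to compare sizes, not an actual EXI or CBOR encoding:

```python
import json
import struct

# The same hypothetical sensor sample expressed three ways.
xml_text = '<sample id="1024" lat="36.60" lon="-121.90" depth="12.5"/>'
json_text = json.dumps({"id": 1024, "lat": 36.6, "lon": -121.9, "depth": 12.5})

# Schema-informed binary: both sides agree the record is one 32-bit int
# followed by three 32-bit floats, so no field names travel on the wire.
binary = struct.pack("<ifff", 1024, 36.6, -121.9, 12.5)

print(len(xml_text.encode()), len(json_text.encode()), len(binary))
# The 16-byte binary form is a small fraction of either text form's size.
```

Schema-informed formats like EXI go further still, using grammars derived from XML schemas plus entropy coding, which is how the compaction gains reported above become possible.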

Received Outstanding Thesis Award from NPS Information Sciences Department.

Keywords: Extensible Markup Language (XML), Efficient XML Interchange (EXI), JavaScript Object Notation (JSON), Concise Binary Object Representation (CBOR), Binary JSON (BSON), data serialization, data interoperability.

Links: catalog, slideset (.pdf), thesis.