Pentagon to Adopt Detailed Principles for Using AI

Sources say the list will closely follow an October report from a defense advisory board.

The Defense Department will soon adopt a detailed set of rules to govern how it develops and uses artificial intelligence, officials familiar with the matter told Defense One.

A draft of the rules was released by the Defense Innovation Board, or DIB, in October as “Recommendations on the Ethical Use of Artificial Intelligence.” Sources indicated that the Department’s policy will follow the draft closely. 

“The Department of Defense is in the final stages of adopting AI principles that will be implemented across the U.S. military. An announcement will be made soon with further details,” said Lt. Cmdr. Arlo Abrahamson, a spokesman for the Pentagon’s Joint Artificial Intelligence Center.

The draft recommendations emphasized human control of AI systems. “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems,” it reads.

The DIB guidelines and the accompanying implementation documentation go well beyond the brief and largely superficial vision statements on AI issued by tech giants like Google, Facebook, and Microsoft. For instance, the recommendations describe key dangers and pitfalls in AI development, such as bias in datasets, that commercial players have only begun to grapple with.
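To make the dataset-bias pitfall concrete, here is a minimal sketch, not drawn from the DIB report, of the kind of audit such recommendations point toward. The column names, toy data, and pandas-based approach are illustrative assumptions.

    # Hypothetical sketch: auditing a training set for representation
    # bias before any model is built. Column names and toy data are
    # illustrative, not taken from the DIB report.
    import pandas as pd

    def audit_group_balance(df, group_col, label_col):
        """Report each group's share of the data and its positive-label rate."""
        summary = df.groupby(group_col)[label_col].agg(["count", "mean"])
        summary["share_of_data"] = summary["count"] / len(df)
        return summary.rename(columns={"mean": "positive_label_rate"})

    # Toy example: one region dominates the data, and its label rate
    # differs sharply from the other's, a red flag for learned bias.
    df = pd.DataFrame({
        "sensor_region": ["north"] * 90 + ["south"] * 10,
        "threat_label":  [1] * 45 + [0] * 45 + [1] * 9 + [0] * 1,
    })
    print(audit_group_balance(df, "sensor_region", "threat_label"))

A heavily skewed share of data or label rate across groups, as in the toy output above, is exactly the kind of imbalance that can quietly teach a model biased behavior before it is ever deployed.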

Because the Department of Defense is adopting the principles now, at the start of a broad push to move AI into far more of its activities, the hope is that good practices and design will become the norm in how the U.S. military uses AI, rather than an afterthought the Department must retrofit onto existing ways of doing things.

The DIB also recommended that DoD rely on tools that are transparent, meaning that, unlike some so-called “black box” neural networks, a technical expert (with permission) could describe the process by which the software reached a specific decision or action.
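As a rough illustration of that distinction, and emphatically not an example from the DIB report, the sketch below uses scikit-learn’s decision tree, a classically transparent model whose complete rule set can be printed and audited. The feature names and toy data are hypothetical.

    # Hypothetical sketch of a "transparent" model: every decision a
    # trained tree makes can be traced through human-readable rules,
    # unlike the opaque weight matrices of many deep networks.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[0.2, 1.0], [0.8, 0.1], [0.9, 0.9], [0.1, 0.2]]  # toy inputs
    y = [0, 1, 1, 0]                                      # toy labels

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # An expert (with access) can read out the exact rule set behind
    # any decision the model produces:
    print(export_text(model, feature_names=["signal_strength", "confidence"]))

The point is not that DoD would use decision trees; it is that transparency, in the DIB’s sense, means some authorized expert can reconstruct why the software did what it did.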

The board also recommended that such tools be used only within an “explicit, well-defined domain of use,” a stipulation intended to keep software developed for noncombat activities from finding its way into lethal operations.

Heather Roff, who helped draft the DIB recommendations, said, “I’m very pleased to see that [Defense Secretary Mark Esper] has adopted the principles and is implementing them department-wide, and securing our national security through responsible research and innovation in artificial intelligence.” 

Other ethicists and academics in artificial intelligence and weapons applauded the news of the DoD’s adoption but added that there was further to go, and that risks and concerns about military use of AI would remain.

Rebecca Crootof, a law professor who specializes in technology and armed conflict at the University of Richmond School of Law, said, “I have little doubt that the process of working towards these principles was influential within the DoD. In learning about the different kinds of risks posed by AI, in working through how they might manifest in various military scenarios, and in thinking about what policies might minimize their manifestation or impact, participants in this process undoubtedly internalized why having and abiding by ethical principles for AI is critically important.” But Crootof said DoD still needs to follow the map provided to actually implement the principles.

She also said she hopes that DoD’s example would help establish international norms for the military use of AI. 

“While it’s great that the DIB principles affirm the import of international law, there are a number of areas where it is still unclear what international law requires for AI systems or weapon systems with increasingly autonomous capabilities,” she said.

Crootof added that Defense officials could make the DoD more accountable to the public and improve the safety of AI in the military by publishing standards or guidelines for holding individuals accountable for accidents caused by AI weapons. She recommended that DoD publish information about how it reviews standards for weapons with AI features. Further, the DoD or the White House could state “a U.S. commitment to taking international state responsibility for accidents caused by weapon systems with AI and autonomous capabilities.”

Frank Sauer, a member of the International Panel on the Regulation of Autonomous Weapons and the International Committee for Robot Arms Control, called DoD’s adoption of principles a good step, but said that adding automation to warfare could still have destabilizing effects. “The DIB report is a very well thought-out document. But what has people like myself and the arms control community in general very concerned is the uncontrollable risk of escalation accompanying fully automated kill chains,” he said — meaning AI and automation creeping into the processes by which the military collects information on a target, dispatches a weapon to attack it, and ultimately kills the target.

Michael C. Horowitz, a political-science professor at the University of Pennsylvania, said that “the ultimate impact will depend not just on how the principles actually shape DoD investments in AI, but in how the broader AI community perceives DoD actions.” However, he said, “Taking AI ethics seriously could increase the chances that some skeptics in the broader AI community are willing to work with DoD while also potentially improving the reliability and effectiveness of military uses of algorithms.”