The guidance is intended to steer the department’s expanding AI portfolio and reassure the public that military leaders take concerns about bias, transparency and accountability seriously.
The Pentagon’s tech advisory board on Thursday laid out five key principles that department leaders should prioritize to ensure military artificial intelligence systems adhere to ethical standards.
In its much-anticipated AI ethics report, the Defense Innovation Board also recommended a dozen technical and organizational measures that would help lay the groundwork for those more abstract ideals. The guidance is intended to inform the Pentagon’s expanding portfolio of AI projects and reassure the public, including potential partners in Silicon Valley, that military leaders are taking ethical concerns surrounding the tech seriously.
The report presents “an opportunity to lead a global dialogue [around AI] founded in the basics of who we are and how we operate as a country and as a department, and where we go from there,” Michael McQuade, Defense Innovation Board member and vice president for research at Carnegie Mellon University, said during a public meeting at Georgetown University. “We do need to provide clarity to people who will use these systems, and we need to provide clarity to the public so that they understand how, when, why and where the department will use AI.”
Pentagon leaders see expanding artificial intelligence capabilities as a key part of their tech agenda in the coming years, and those efforts are only expected to ramp up as the department gets its Joint AI Center off the ground and begins standing up its enterprise cloud solution. As officials gear up to deploy the tech in both combat and back-office operations, the board’s recommendations are meant to help them avoid any unintended consequences.
The guidance is rooted in more than a year of discussions with technologists in government, industry and academia. It also takes into account existing doctrines governing military ethics, like the U.S. Constitution, international treaties and the Pentagon’s Law of War.
The report provides a high-level blueprint for military deployments of artificial intelligence and addresses some general shortcomings of a technology that’s still in its infancy. The guidance is distilled into five principles:
- Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of department AI systems.
- The department should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
- The department’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- The department’s AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
- The department’s AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.
The report also proposes a dozen specific policies that would put the Pentagon’s efforts on the right path. Specifically, the board recommended increasing investment in standards development, workforce programs and AI security applications, and formalizing channels for exploring the ethical implications of the tech across the department, among other measures.
The Defense Innovation Board serves only as an advisory group, so its report carries no binding provisions. Going forward, Pentagon leaders will decide whether to translate the board’s recommendations into concrete policy.