State Department Proposes International Principles for Responsible Military AI


In collaboration with other countries, the U.S. State Department put forth a slew of best practices for incorporating artificial intelligence into defense operations.

Sound human judgment and a formalized chain of command are two pillars of new artificial intelligence guidelines for military use that the U.S. State Department unveiled early Thursday at the 2023 Summit on Responsible AI in the Military Domain in The Hague, Netherlands.

In a publication titled the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, the State Department outlined several key tenets that the U.S. and other endorsing nations would agree to adhere to when using AI and machine learning technologies in military applications.

The tenets aim to reduce bias and accidents in AI systems, emphasizing human control in the development and deployment of any military AI technology.

“Use of AI in armed conflict must be in accord with applicable international humanitarian law, including its fundamental principles,” the document reads. “States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems.”

Specific steps the State Department outlines include maintaining human control of autonomous systems that oversee sensitive operations, such as nuclear weapons; developing AI systems with “auditable” data sources and design procedures; and implementing safeguards to prevent accidents in autonomous warfighting operations.

The recommendations are not legally binding; rather, they would constitute a multinational consensus on the appropriate use of military AI. “We believe that this Declaration can serve as a foundation for the international community on the principles and practices that are necessary to ensure the responsible military uses of AI and autonomy,” State said in a press email. “We view the need to ensure militaries use emerging technologies such as AI responsibly as a shared challenge.”

Experts have long maintained that responsible AI hinges on retaining a “human-centric” element in automated systems, to prevent harmful biases from being replicated in decisions made by algorithms trained on large swaths of potentially biased data.

Roughly one year ago, experts at the National Institute of Standards and Technology emphasized a human-centric design approach to developing AI-powered systems. That philosophy further informed the agency’s AI Risk Management Framework, which debuted in January of this year.