A report from a State Department advisory board on AI’s impact on arms control, nonproliferation and verification warned that the tech “is likely to lower barriers to biological and chemical weapons development.”
Although artificial intelligence tools can enhance the United States’ early detection and deterrence of hostile nuclear weapons programs, serious concerns remain about the role that emerging technologies could play in the proliferation of chemical and biological threats, according to a report from the State Department’s International Security Advisory Board — or ISAB — that was publicly released on Nov. 16.
The 68-page assessment from State’s advisory board of independent experts — who were tasked by the department in October 2022 with studying how AI and related technologies impact arms control, nonproliferation and verification — said improvements in AI technologies “present new opportunities to further enhance U.S. nuclear proliferation detection.”
“Leveraging advances in data science and computing as well as new data sources, ongoing research efforts are developing innovative, AI-enabled techniques to reveal additional indicators of nuclear proliferation,” the report said. “These next generation methods reveal subtle clues that may indicate a change in capability or even a change in strategic intent of foreign nuclear weapons programs.”
ISAB’s review called the use of technologies to detect nuclear weapons and materials “foundational to U.S. nuclear nonproliferation and arms control,” noting that more advanced AI tools can help detect “early warnings of an emerging nuclear weapons program” by examining how weapons-usable a particular scientific or technological advance may be and by detecting “subtle indicators of changes in intent from civilian to military use.”
The report recommended, in part, that State — in partnership with the U.S. intelligence community — “broaden its nonproliferation and deterrence approach to include early detection and deterrence, based on the use of big data, machine learning and AI.”
The incorporation of AI across a host of threat environments, however, also presents a range of new risks to U.S. national security interests, according to ISAB’s assessment.
The report said that “the risks and potential timeline for risks at the intersection of AI and biotechnology are uncertain,” but warned that “AI is likely to lower barriers to biological and chemical weapons development.”
While the study noted that merging AI with the fields of biotechnology and biochemistry offered “promising development for biomedical research,” it also said this convergence “is likely to further accelerate the research and development process for novel biological and chemical weapons development, by reducing the need for research and development to develop possible candidate weapons.”
ISAB’s assessment also highlighted the intelligence community’s uncertainty about AI’s potential impact on the development of chemical and biological weapons, noting that “there is no consensus in the U.S. government regarding how to best approach the risks of the convergence of biotechnology and AI/[machine learning].”
The report recommended, in part, that State develop closer partnerships with private sector biotechnology companies “to promote risk mitigation and responsible use of the technology” and also work with international partners “to prepare for the possible negative uses of AI/ML in biotechnology, to develop a shared understanding with allies about the risks of misuse and to encourage the development of common levers to mitigate misuse.”
In addition to AI’s potential impact on the deterrence of nuclear weapons and the development of biological and chemical weapons, the assessment said the growing use of AI technologies and other emerging tools raised broader questions about sensitive export controls, including “what commodities, software and technology should be on the U.S. Munitions List.”
ISAB’s assessment also noted that current controls to prevent hostile nations from procuring advanced AI technologies “have left gaps,” including when it comes to “accessing AI computational power or resources (AI compute) through the cloud.”
These risks, the review said, require federal agencies to prepare for a shifting threat environment that is likely to be further altered by the use of more powerful AI tools and emerging technologies moving forward.
“The Department of State and its U.S. government partners will need to make fundamental changes in order to address the risks and benefits of these rapidly developing technologies,” ISAB Chair Edwin Dorn said in a letter, dated Oct. 31, that accompanied the review.