Current and former government tech leaders also stressed the need for high-level standards to ensure the global AI industry grows in line with democratic values
The White House will soon issue guidance on how agencies should approach regulating specific applications of artificial intelligence under their purview, according to the federal chief technology officer.
The memorandum, mandated under the Trump administration’s AI executive order, is intended to help agency leaders strike a balance between fostering the tech’s growth and defending against its potentially harmful applications, according to federal Chief Technology Officer Michael Kratsios.
“This will be the first document that has legal force around the way that agencies should be looking at regulating artificial intelligence technologies,” he said Tuesday at the Politico AI Summit. “I think it will set the tone globally on the way that we can be pro-innovation while also protecting American safety.”
During the panel, Kratsios suggested the guidance would be released in the near future, though a spokesperson for the Office of Science and Technology Policy wouldn’t provide Nextgov any specifics on the timeline. According to the executive order, agency leaders were supposed to receive the guidance by mid-August.
Artificial intelligence is already being used in countless ways across a wide array of fields, and as such, tech experts have urged regulators to avoid one-size-fits-all, prescriptive approaches to the technology. While the government must create baseline standards that all AI tools must abide by, individual agencies will also be responsible for directing the development of the tech in specific areas, according to Kratsios.
“Whether you’re the [Federal Aviation Administration] trying to regulate drones, whether you’re [the Transportation Department] trying to regulate autonomous vehicles, whether you’re the [Food and Drug Administration] trying to regulate AI-powered diagnostics, there are certain questions now that are fundamentally different around your regulatory approach because of the use of this particular technology,” Kratsios said. The forthcoming memorandum will offer agencies a blueprint for answering those questions, he added.
During the event, current and former government tech leaders also stressed the need for high-level standards to ensure the global AI industry grows in line with democratic values, as opposed to authoritarian tendencies of technological competitors like China. In May, the U.S. and 41 other countries agreed to a set of high-level principles to promote transparent and trustworthy AI applications, and already American scientists and government officials are working together to draft more comprehensive standards.
Last month, the National Institute of Standards and Technology released its own guidance for how the government should approach developing those standards. During the event, NIST Director Walter Copan said federal leaders should also look beyond the country’s borders to build a consensus on how AI should advance in the years ahead.
“Standardization is indeed a global effort,” he said. “It’s a process that’s totally inclusive, and so may the best ideas globally win. The United States needs to be a player, needs to commit to the standardization effort, and we as NIST are proud partners in this journey … to develop the kind of consensus approaches that are necessary for ongoing leadership.”
And beyond engaging with global leaders, it’s important that federal officials also dedicate more funds to those efforts, according to Jason Matheny, the former director of the Intelligence Advanced Research Projects Activity who currently leads Georgetown University’s Center for Security and Emerging Technology. While government spending on AI has grown substantially in recent years, there’s still more room to invest in developing standards and creating frameworks for evaluating AI tools, he said.
“I think that’s one of the most important [areas] where government R&D matters,” Matheny said in a conversation with Nextgov after his panel. “The commercial sector and academia are just by nature going to underinvest in testing and evaluation; it’s sort of a public goods problem. NIST and other organizations … have historically played an important role in being that testbed. We need to do the same thing for AI.”