Agency leaders see data analytics and customer service uses for AI

OSTP Director Arati Prabhakar, shown here at a May 2023 conference in Los Angeles. Prabhakar and other senior Biden administration tech officials testified before a House panel about AI on Thursday. (Jerod Harris/Getty Images)

Tech officials told lawmakers they are focused on a socio-technical approach to AI use cases in the government, saying it is not “one size fits all.”

Federal technology policy leaders testified before the House Subcommittee on Cybersecurity, Information Technology and Government Innovation on how the government is continuing to adopt more advanced artificial intelligence systems for agency operations and what steps they are taking to install guardrails on the emerging technology.

Officials from the White House Office of Science and Technology Policy, the Department of Homeland Security and Department of Defense discussed AI’s future in federal work. Specific applications varied across agencies, but a common theme was prioritizing responsible AI deployment.

“Used responsibly, AI can help us deliver better outcomes and create new possibilities for the American people,” said OSTP Director Arati Prabhakar on Thursday.

Prabhakar further described OSTP’s role as a nerve center that coordinates federal AI policy and research and development across agencies.

“When a massive new shift like this great acceleration in AI happens, one of our important roles is to be clear with our colleagues in the White House, with the president, with our colleagues in departments and agencies about how the technology is progressing, what issues they will need to contend with, what the big opportunities are,” she said. 

Witnesses noted opportunities to leverage AI for advanced data analytics and to improve how government agencies handle administrative tasks and customer service. 

“If you step back and you think about how much the government does that is about interacting with citizens, providing information, taking information, those are areas where this new generation of language-based AI of course, can have tremendous benefits, but it has to be used thoughtfully and carefully,” Prabhakar said. 

Craig Martell, the chief digital and artificial intelligence officer at the Department of Defense, said that incorporating high quality data for Defense operations and supporting the national defense strategy are arenas where AI can come into play. 

“We are focused on holistically improving the quality of the data that enables most DOD use cases,” he said. “We've been focusing a lot not only on the data sharing, and how do we do that effectively, according to regulations, but also how do we build models together and evaluate the effectiveness of those models together.”

Within DHS, combating crime has proven an effective use case for AI. DHS Chief Information Officer Eric Hysen said his agency has used AI systems to rescue victims in abusive living situations and to resolve cold cases. On the national security front, he said DHS has been prioritizing the use of large language models to combat cyberattacks.

“We are working with the Cybersecurity and Infrastructure Security Agency to look at how we can effectively partner with critical infrastructure organizations on safeguarding their uses of AI and strengthening their cybersecurity practices writ large to defend against evolving threats,” Hysen said.

Across the diverse types of use cases and applications for AI solutions in the federal government, Martell emphasized that AI is not a “monolithic technology” that can function as a blanket solution for every operational problem. He said that his agency is focusing on how to accurately gauge an AI system’s capabilities and success in one use case versus another to prevent misuse.

“We need different algorithms, different success criteria, and different data to train the different models underpinning each of our different use cases,” Martell said. “It's really important to us to build systems that aren't simply dependent upon that algorithm, but that have humans wrapped around it. It's really human-machine teaming so that a human can say ‘oh, no, they got it wrong.’”