Safe AI pathfinding is essential for government adoption, officials say

GSA's architecture lead for AI services said standardized evaluation helps users identify the right models for their needs.

Meaningful adoption of artificial intelligence across federal agencies should be viewed as a continuing journey that starts with experimentation, officials said during the Elastic Public Sector Summit on Thursday. 

Agencies have increasingly been deploying, piloting and exploring uses of AI capabilities in recent years, with these technologies being used for everything from veteran suicide prevention to training new asylum and refugee officers on how to interview asylum seekers. 

But while AI use cases across the U.S. government often differ in scope, the need to prioritize fit-for-purpose deployments is universal. 

Ryan Palmer, architecture lead for AI services at the General Services Administration, said a lot of the work that needs to be done today “is to help people pathfind [AI tools] in a safe way.”

GSA, in particular, has been working to provide federal employees with the opportunity to experiment with and use AI capabilities through the rollout of its USAi.gov platform. The website serves as a centralized testing suite of AI capabilities to help drive more governmentwide use of the technologies. A GSA official previously said that the agency plans to analyze data from the site to get a better understanding of how these tools are being effectively used. 

Palmer said USAi helps to standardize the government’s evaluation of AI models so federal users can have a better understanding of the ways the tools actually work, and then identify the right models for their needs. 

Maj. Christopher Clark, the AI lead for the U.S. Marine Corps, also said this type of experimentation-based approach helps with determining if an AI solution is actually needed. 

He noted that the Marine Corps hosted a four-day GenAI Workshop earlier this month to explore uses of the capabilities and said “one of the briefs that was my favorite presentations was, ‘it's not AI, and that's okay,’ because a lot of things that we need to do are not AI problems.”

Clark said the focus on AI has “raised the water for everything else,” including by prioritizing the need for agencies to collect and share high-quality data. But he added that agencies also have to be somewhat AI-agnostic when it comes to troubleshooting challenges that could potentially benefit from emerging capabilities. 

“I think the key that we’re seeing is, you know, it has to start from a mission problem first and then work its way up the stack, rather than starting with the AI as the solution and then work our way back to what was the problem, because then we end up with a very expensive solution that may not even solve the problem,” Clark said.