Survey: More than half of federal agencies now planning agentic AI pilots


A Market Connections survey of more than 200 IT executives across civilian and defense agencies, sponsored by ServiceNow, suggests there is significant momentum for agentic AI adoption.

Agentic AI has entered the chat for federal agencies.

According to a March survey of more than 200 technology executives across government, more than half (53%) said their agencies are exploring agentic AI or actively planning pilots of the technology. Another 15% are currently implementing agentic AI systems or have already done so, compared to 6% who said they were not yet considering the technology.

However, even as agencies race toward agentic AI, respondents identified potential barriers to adoption, including inadequate oversight policies and a gap between the perceived necessity of governance frameworks and their actual implementation. The findings were released today by Market Connections on behalf of ServiceNow.

“This research confirms what we’re hearing from agencies every day — the appetite for agentic AI is real, but oversight hasn’t kept pace,” said Mike Hurt, global vice president of U.S. public sector at ServiceNow. “Seventy-seven percent of federal leaders say oversight frameworks are essential, yet fewer than a third have actually implemented them. Agencies that build accountability into their AI workflows from the start, not as an afterthought, will be the ones delivering strong results for citizens.”

The findings come after a major uptick in AI use across the federal government in 2025 despite a significant decrease in the total number of federal employees. 

In April, the Office of Management and Budget unveiled its 2025 Federal Agency Artificial Intelligence Use Case Inventory, which indicated AI use more than doubled across federal agencies from 2024. In total, agencies reported more than 3,000 AI use cases, with significant jumps in AI use at NASA and the departments of Health and Human Services, Veterans Affairs, Justice and Energy.

Agentic AI is generally defined as autonomous systems capable of pursuing complex goals and reasoning, with the ability to take independent actions across software systems with minimal human oversight. That capacity to perform some tasks without human intervention has made agentic AI an attractive option, and it has been touted by some of President Donald Trump’s top tech officials as a key way to do more with less.

But per the findings, not every agency is ready for agentic AI, even as a growing number of companies offer agentic solutions. Only 20% of respondents said their agencies have defined policies for pre-deployment testing or general agentic AI use, and only 8% have a defined framework for incident response. Even fewer (6%) have a framework for third-party or vendor governance.

"Agentic AI has definitely entered the chat. Over half of those we surveyed are currently in the planning stages or have actively launched a pilot effort,” said Aaron Heffron, president of Insights and Research at GovExec. “The main question remains, however, if the current infrastructure, both human and technical, is up to the task.”

Findings also indicated some agencies struggle to move agentic AI pilots from the sandbox to production environments. Those challenges extend beyond policy and into data readiness.

“One of the very big questions that you have to ask is, ‘How am I getting my data ready for AI consumption?’ That governance piece becomes critical [to] making sure that your data within your organization and your AI are working together,” one unnamed IT director said in response.

ServiceNow Federal Chief Technology Officer Jon Alboum said one way to address the data problem is to bring data and workflows together in a single environment.

“The current environment of fragmented, siloed systems and disconnected workflows only increases complexity and hinders adoption,” Alboum said. “To move forward, AI adoption should focus on bringing everything together in an AI control tower so that policies can be applied, controls enforced, and results delivered efficiently across the organization.”

Research findings indicated that while governance frameworks tended to lack maturity, consistent oversight was a near-universal requirement. Almost 90% of respondents said they required logging and audit trails for all actions, and more than 80% required automated policy checks and guardrails.

The findings further point to a strong correlation between the demand for human oversight and the criticality of data. For national security, critical infrastructure and emergency response data, 79% of respondents said their agencies mandated “human-in-the-loop” oversight, with approval needed for every action performed by AI. For high-risk data, like benefits claims or agency financial data, 78% of respondents said their agency requires formal human approval before high-risk actions are taken by AI, but not before every action. Conversely, more than 90% of those surveyed said they favored reduced direct involvement for low- and moderate-risk data, requiring only periodic check-ins.

To ensure accountability for agentic AI solutions, 84% of respondents said their agencies had documented escalation policies, while 78% had structured post-incident review processes. Fewer than half (44%) said their agencies included liability or responsibility clauses for AI vendors in contracts, and fewer than one-third (29%) had documented “kill switch” procedures.

“Federal leaders say they want human control over high-risk AI — but less than a third have a kill switch to enforce it, leaving a dangerous gap between intent and capability where trust is won or lost,” the report states. “Agencies must act now to define intervention triggers, ensure data readiness, and unify oversight into a single platform — or risk losing control as systems scale. When failures happen, they cannot be crises; they must be contained, repeatable workflows. Built-in accountability is no longer optional — it is the prerequisite for any agency serious about deploying agentic AI at mission speed.”

Editor’s note: Market Connections is a business division of GovExec, the parent company of Nextgov/FCW.