The government is buying AI faster than it is assigning authority

COMMENTARY | Federal AI governance will become credible when agencies can answer a simple question before deployment, not after failure.
The federal government is spending a great deal of time talking about responsible AI. It should spend more time deciding who gets to stop it.
That is not a rhetorical point. It is a practical one.
A great deal of federal AI governance still turns on familiar language: fairness, transparency, explainability, risk management, accountability. Those concepts matter. But they do not answer the harder operational question that agencies are already facing as they test, buy and deploy AI-enabled tools: who, exactly, has the authority to pause, question, override, or suspend a system when conditions change?
That gap matters because government does not encounter risk at the level of principles. It encounters risk at the point of use.
A model may pass legal review. It may satisfy procurement requirements. It may come with documentation, guardrails, vendor assurances and internal approvals. But the real test comes later, when a tool is being relied upon in a benefits workflow, an enforcement setting, a contracting environment or a high-speed operational process, and someone begins to suspect that the system is being used outside the assumptions that justified its approval.
At that point, a policy framework is not enough. An agency needs clearly assigned authority.
That means at least three things.
First, agencies need explicit override rights inside the organization. Someone should be able to halt or narrow the use of a system without having to navigate an internal maze of legal, procurement, technical and managerial reviews after the risk has already materialized.
Second, agencies need auditable decision trails. If government cannot reconstruct who approved a system, what limits were attached to that approval, what changed over time and who chose to continue relying on the tool, then accountability becomes largely performative.
Third, federal procurement needs to be treated as a governance instrument, not just a purchasing function. Contracts determine whether agencies retain access to meaningful logs, change notices, testing information, intervention rights and usable explanations. If those terms are weak, the government may be buying functionality while surrendering control.
This is one reason current federal AI debates can feel oddly incomplete. They often ask whether a tool is safe enough, accurate enough, or compliant enough to field. They spend less time asking whether the institution has built the internal authority structure necessary to govern reliance once the tool is live.
That is a serious omission.
Government does not need more elegant statements of principle nearly as much as it needs clearer lines of authority. Agencies are already under pressure to modernize, automate and demonstrate results. That pressure is not going away. But speed without decision control is not modernization. It is exposure.
Federal AI governance will become credible when agencies can answer a simple question before deployment, not after failure: when this system starts to drift, misfire, or exceed its intended role, who has the power to stop it?
Until that question is answered, the government is not really governing AI.
It is buying it.
Arthur D. Sidney is an attorney and public policy strategist, a former congressional chief of staff, and an advisor on technology governance, public sector oversight and institutional accountability.