Trump admin floats policy language limiting contractor say on agency uses of technology

sakchai vongsasiripat/Getty Images

Ongoing drafts of policy documents feature language that would limit the private sector’s ability to dictate how its artificial intelligence models are used in government missions, according to sources familiar with their development.

The federal government is circulating draft policy documents that contain language clarifying the government’s ability to use private sector technology without outside stipulations on how it does so, two sources familiar with their development told Nextgov/FCW.

While it remains unclear whether the language being passed between various government entities — namely the Department of Defense and components of the Trump administration — will become an executive order or finalized policy, it centers on ensuring the government has control over how the technology products it acquires are used.

One source familiar with the ongoing development told Nextgov/FCW that the goal of the language is to clarify that “it is for that democratically elected government to determine what is a lawful and appropriate use of a particular technology, not solely a company.”

The White House is mulling an executive order that would create a working group to review AI models before they are deployed, according to a person familiar with the matter. The New York Times first reported the administration’s consideration of the order. It’s not clear if the contracting language is a separate initiative or would be a provision embedded in a forthcoming directive.

Other language featured in the draft documents examines how the government can manage emerging cybersecurity threats posed by AI models like Anthropic’s Mythos Preview and OpenAI’s GPT 5.5, according to the same source and another person familiar with the matter.

The discussions and draft documents highlight how the Trump administration is looking to take a more hands-on approach to the AI sector, despite prior policy positions that signaled a more permissive environment for the evolving technology.

“There is likely going to be another wave of AI government statements,” the first source said. 

When asked to confirm the existence of these documents, a White House official told Nextgov/FCW that “any policy announcement will come directly from the President. Discussion about potential executive orders or policy directives are pure speculation.” The Department of Defense referred questions to the White House. 

The Trump administration’s efforts to refine the government’s rights when licensing private sector AI models and systems follow a dispute between Anthropic and the Department of Defense over using the company’s AI products in autonomous weaponry and domestic surveillance. 

The Pentagon designated Anthropic a supply chain risk, and federal agencies were subsequently required to offload the company’s products from federal workloads. Some lawmakers took issue with the perceived retaliation by the administration, and Rep. Sam Liccardo, D-Calif., attempted to amend the Defense Production Act in March to prevent government blacklisting.

The administration has telegraphed some of its expectations for the relationship between vendors and the government since that debacle, with Emil Michael, the undersecretary of defense for research and engineering, saying on a March episode of the All-In podcast that “all lawful use seems like a good thing” to benchmark against.

Anthropic’s model capabilities piqued government officials’ interest, however, when the company announced the release of its new high-powered Mythos Preview model and associated Project Glasswing for select companies to test in their digital networks.

Leadership at the Pentagon has not been wholly opposed to guardrails on tech use in defense and warfighter operations. Michael told CNBC on May 1 that the Pentagon wants guardrails “in some ways,” but maintained that these guardrails have to align with the government’s needs. 

“When they deploy on our networks, they’re deploying models that are tuned for national security purposes,” Michael said. “And that's why the partnership with the executive team and the management is so important, because things are evolving in the threat landscape. And whatever guardrails, whatever principles they want to develop against, has to be consistent with our values, our mandate, our restrictions, even, and that’s where the guardrails come in.”