Vendors struggle to navigate the Anthropic ban’s fallout


Tech contractors say ambiguity over how companies working with the federal government may use Anthropic’s products is creating “traps” they could unknowingly fall into.

The artificial intelligence vendor landscape in Washington, D.C., has been rocked by the ongoing dispute between Anthropic and the administration, with the business community searching for clarity on contracting requirements amid growing anxiety over how technology contracts with the government will be handled.

The Pentagon declared Anthropic a supply chain risk after the company refused to allow its products to be used for surveillance of Americans or in lethal autonomous warfare. President Donald Trump subsequently ordered that all federal agencies stop using Anthropic products. A judge on Friday temporarily barred the government from enforcing either the supply chain risk designation or the governmentwide ban, though the government has until April 2 to seek an emergency stay on the injunction.

Multiple sources within the federal tech industry spoke with Nextgov/FCW on the condition of anonymity to candidly describe how they are adjusting to the procurement and contracting landscape in the wake of the Anthropic ban.

One industry source said their organization is closely monitoring the Trump administration’s “evolving approach” to AI procurement strategies, but remains uncertain about the future.

“Some of the requirements under consideration for government acquisition of AI tools and services are prompting more questions than answers, and industry is actively engaging to help policymakers understand how certain far-reaching proposals could unintentionally undermine the White House's goal of furthering American AI dominance,” they said. 

The governmentwide ban raises concerns for companies that have Anthropic products — like its generative AI, Claude — embedded in different parts of their software stack. 

Harold Schultz Neto, head of product and AI at Labrynth — a company that builds AI platforms to expedite permitting, documentation and compliance processes — told Nextgov/FCW that his firm continues using Claude internally, mainly for coding, product design and prototyping. He said, however, that he has had to stop using it for customers in order to comply with Trump’s new mandate.

“Our solutions are built on top of Claude,” Schultz Neto said. “And when the federal mandate came, we had to not hire Claude for [customers] directly, and also stop our development on top of Claude for [customers].” 

Schultz Neto said Google’s Gemini is now the core component of Labrynth’s customer-facing products. Although none of the company’s products that go to market use Claude, he said the loss of the Anthropic tool’s powerful programming abilities remains a major concern, particularly for Labrynth’s internal operations.

“If we serve the government, we can use other models inside our tools, but not being able to code with Claude because there's some federal mandate, that's a big concern that we have right now, and I think that should also be a concern from the federal agencies,” he said.

“Our member companies … might use Anthropic, not in the final product that they're delivering to the government, but to test the security of it, to validate or review the code,” a second industry source said, noting that companies are struggling to interpret the scope of both the supply chain risk designation and the government ban. 

“We've been sharing what we know from the government with our member companies,” they said. “Everybody doing business with the government is used to and expects fully that things will be documented, that there will be clear requirements, that things will be spelled out, that there will be terms and conditions in the contract, and that they can follow along with those. In this case, a lot of that is missing.”

That clarity has been missing for current and prospective contractors since the Anthropic ban, deepening their anxiety over how government contracts will be handled as new policies take shape.

“There's a fear that the standard administrative requirements are being transformed into legal traps by making specific policy mandates material,” a source within technology contracting told Nextgov/FCW. “Companies are increasingly worried about contractual retribution, and the concern that the administration may use its power of suspension and debarment, traditionally reserved for the most bad-faith actors, against firms that aren't in lockstep with the latest executive priorities.”

That source also said new questions are emerging about how the government will decide whether a company is a fit partner, and whether that determination has become more subjective.

“Contractors fear their past corporate decisions or public stances are scrutinized through new lenses from the administration, the fear that you can be disqualified for lack of integrity or lack of compliance with those views, not because you've done a poor job,” they said. 

That concern highlights the lack of clear guidance and resulting fears reverberating through the tech industry. 

“The most common question that we've gotten is just: ‘Have you seen any official guidance on this? Have you seen anything officially posted anywhere? Have you seen anything that would pass for actual policy that could be held up?’ And, unfortunately, so far, the answer to that has been no,” the second tech industry source said.

Beyond the uncertainty surrounding federal contracting guidance, companies share Labrynth’s concern about whether internal use of Anthropic products will affect their ability to work with the government.

New draft guidance from the General Services Administration offers some clarity, suggesting the government is seeking more latitude in how it uses procured technology systems.

Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, said the draft terms and conditions serve as the most recent resource to help guide companies and AI developers in working with the government, and that it centers on “unbiased AI principles” for large language models. Anex-Ries said these latest updates to previous GSA memos on AI acquisition may offer clarity but include terms that would undermine “key safety measures in AI systems.”

“Implementing GSA's draft terms and conditions across all AI solicitations and contracts in the federal government could result in systems with fewer safety protections and worse outcomes, and a vendor community that is reticent to push back when their products are used unsafely,” Anex-Ries said. 

The second tech industry source told Nextgov/FCW that the draft GSA language will weed out vendors — potentially more than the administration anticipates — because of how broadly it is written.

They added that the GSA draft guidelines use verbiage atypical of contracting language, which offers limited clarity.

“There are a lot of terms used in [the GSA draft guidelines] that are not defined,” they said. “The whole point of having contract language is to give clarity on the terms of the contract so that there's something that's enforceable.” 

One senior government official told Nextgov/FCW that, following the Anthropic ban, the government is trying to send a message to the technology sector about being a “disciplined buyer” of advanced systems. 

“For a long time, the balance in government technology procurement has favored vendors, particularly in emerging areas like AI where the market is moving quickly,” the official said. “What you’re seeing now is a willingness to set clearer expectations with industry that access to federal markets requires transparency, fair terms, and a genuine partnership with government. Most companies understand that and are adapting to it.”

Earlier executive policy memos offer insight into the Trump administration’s priorities in contracting and procurement, particularly within the AI landscape. Pursuant to Trump’s July 2025 executive orders, the Office of Management and Budget issued a memo instructing agencies to evaluate whether the large language models they use comply with the White House’s “unbiased AI principles.”

Other documents, including the April 2025 OMB memo outlining AI procurement management guidance, left industry wondering how their stipulations would be carried out.