It would take the Pentagon months to replace Anthropic’s AI tools: sources

Anthropic CEO Dario Amodei speaks at the Viva Technology show at Parc des Expositions Porte de Versailles on May 22, 2024. Chesnot/Getty Images
AI maker digs in with Thursday statement rejecting DOD pleas for unfettered use.
If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker.
On Thursday, Anthropic CEO Dario Amodei reiterated his refusal to allow Claude to be used for mass surveillance of U.S. citizens or to guide fully autonomous weapons, rejecting Pentagon requests to make unfettered use of the model.
Claude is one of just two large generative-AI models that the Pentagon has made available on classified networks, and it is the only one that belongs to the cutting-edge group of frontier models. The Defense Department isn’t saying just how it uses such models. But Emil Michael, defense undersecretary for research and engineering, has suggested that their uses include intelligence (“to synthesize a lot more intelligence using a machine than a human analyst”) and warfighting (“How do you predict what might happen in the conflict, what things you might need in the conflict?”).
Earlier on Thursday, Pentagon spokesperson Sean Parnell said that DOD only seeks the ability to “use Anthropic's model for all lawful purposes,” adding that the idea that the Pentagon wants fully autonomous weapons or mass surveillance is a false narrative “peddled by leftists in the media.”
But Amodei said those are the only two limits he insists on.
In “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said in his statement.
Pentagon officials have threatened various reprisals should Anthropic insist on its limits, including invoking the Defense Production Act to use the company’s product without the company’s permission.
On Wednesday, a defense official told Defense One, “The Secretary will not hesitate to invoke the DPA if an agreement cannot be reached.”
Parnell’s post on Thursday made no mention of the DPA. The company, he said, has “until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOD.”
In his statement, Amodei pointed to a contradiction in the Pentagon’s threats. “They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Easier said than done
If the Pentagon does designate the San Francisco-based AI startup as a supply-chain risk, it would touch off a lengthy and likely expensive series of protective measures, the people familiar said.
Operators would have to reconfigure the data inputs they feed into models, re-examine how to share data in real time with the intelligence community, which also uses Claude widely, and re-validate that replacement models function as the military expects them to, they said.
In July, Anthropic received a $200 million contract to provide its frontier-model tools to the Pentagon, as did the other three U.S. makers of such products: OpenAI, Google, and xAI.
Department leaders have urged their people to use the new tools, though they have declined to say publicly how. And even the Pentagon doesn’t really know; it is reportedly asking various commands to describe how much they use Anthropic. (Michael, however, has described U.S. INDOPACOM as “probably one of the premier users.”)
So why is Claude the only one deployed on classified networks? One key reason, according to a defense official: Anthropic’s tools were the easiest to deploy on cloud networks powered by AWS, which contributes the largest chunk of the Pentagon’s Joint Warfighting Cloud Capability.
The two companies are especially close. AWS is the leading cloud-service provider to Anthropic, which trains its models using Amazon’s proprietary Trainium chips.
By contrast, Google runs Gemini on its own cloud and trains it on TPU v5p chips. xAI is partnered with Oracle and does most of its Grok training on NVIDIA H100 GPUs. OpenAI has a “primary” relationship with Microsoft Azure, though it recently announced a “strategic training” partnership with AWS.
None of these relationships are static. Anthropic trained its first models on NVIDIA chips. But as demand grew, the various frontier AI companies inked long-term strategic contracts that mean migrating from one environment to another would undo months of work.
The individuals said it could take three months or longer to replace the capability. However, a Defense Department official said that he expected additional frontier AI models to be widely available on the Pentagon’s GenAi.mil interface before summer.
AWS did not respond to requests for comment.
Breaking up for the wrong reasons
Michael has said that his objection to Anthropic’s stance is that it creates unpredictability. What if, he said last week, operators were using Claude during a mission, and “then the model itself learns what you're trying to do… and it stops working. That’s a risk I cannot take.”
But Anthropic executives counter that they must draw lines precisely because of AI’s unpredictability. They say there’s no way to guarantee that their models can perform safely in scenarios that involve lethal autonomy—at least not without meaningful human supervision—and they don’t believe the model is safe in situations that might involve AI for mass surveillance, according to sources familiar with the discussions.
And they agree with Michael’s contention that some of the Pentagon’s frontier models might perform better at various tasks than others.
The sources also said the conversations between the Pentagon and the company had been proceeding along more or less normal lines. Anthropic, they say, had been willing to make various accommodations. But the tone changed after the discussions became public.
On Tuesday, the company released a new version of its safety policies, which many saw as an abandonment of its core safety promise.
In the blog post announcing the change, the company said that it would be moving toward “nonbinding but publicly declared targets” for safety. “Rather than being hard commitments, these are public goals that we will openly grade our progress towards.”
Lawmakers are dipping a toe into the debate. Sen. Mark Warner, D-Va., called the fight “another indication that the Department of Defense seeks to completely ignore AI governance–something the Administration’s own Office of Management and Budget and Office of Science and Technology Policy have described as fundamental enablers of effective AI usage,” in a statement. He called the episode further evidence of “the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”
The Pentagon has in the past placed policy limits on the use of autonomous weapons, but Congress has passed no legislative limits.