State drops Claude as underpinning model in flagship StateChat

Hu Yousong/Xinhua via Getty Images

The agency moved its chatbot to operate on OpenAI’s GPT-4.1, an internal document shows.

The State Department has shifted the model underpinning its internal chatbot, StateChat, from Anthropic’s Claude Sonnet 4.5 to OpenAI’s GPT-4.1, following President Donald Trump’s Feb. 27 directive that all government agencies take steps to shed Anthropic tools from their systems. 

An internal document obtained by Nextgov/FCW confirms that Anthropic’s software is no longer powering StateChat. The switch to a new LLM contractor has also affected the chatbot’s underlying data, which has been rolled back to information available as of May 2024.

The earlier, Claude-powered version of StateChat was trained on more recent data, current as of June 2025, a source familiar with the situation told Nextgov/FCW. The person spoke on the condition of anonymity to provide details about the model’s training data.

State employees using custom GPT setups running on Claude were also asked to transition by March 6 to another government-approved model not made by Anthropic, the document says.

“In line with the president’s direction to cancel Anthropic contracts, Anthropic’s Claude models are no longer available on the Department’s enterprise generative AI platform,” a State Department spokesperson told Nextgov/FCW. “The department is taking all necessary steps to implement the directive and bring our programs into full compliance.”

Reuters previously reported directives in multiple agencies, including State, requiring switches from Claude to ChatGPT.

Claude was initially made available for federal agency operations through the General Services Administration’s OneGov deal, which brokered favorable software rates for the government, many for a limited time.

Following the governmentwide ban on Anthropic technology, the company filed two lawsuits Monday. One, filed in the D.C. circuit court, invokes provisions of the Federal Acquisition Supply Chain Security Act of 2018 to challenge the government’s designation of Anthropic as a supply chain risk.

The second, filed in the U.S. District Court for the Northern District of California, names dozens of federal agencies and officials as defendants, alleging improper retaliation against Anthropic and seeking an injunction.