OpenAI makes frontier model available to critical cyber defenders

The announcement of GPT-5.5-Cyber follows Anthropic’s debut of its own Mythos AI, which the company said has already uncovered thousands of cyber vulnerabilities.
OpenAI is beginning to make its cybersecurity-focused artificial intelligence model, GPT-5.5-Cyber, available to the federal government and its “critical cyber defenders.”
First confirmed to CNN, the availability of OpenAI’s frontier cybersecurity model follows the announcement of Anthropic’s newly released Mythos AI, a powerful frontier model distributed to select companies for testing via Anthropic’s Project Glasswing.
Alongside the GPT-5.5-Cyber release, OpenAI published a new Cybersecurity Action Plan aimed at leveraging AI as an asset in national defense cybersecurity operations. The plan's pillars include democratizing access to cyber-capable AI models, coordinating responses between government and industry, and ensuring visibility and control of AI models during and after deployment.
“AI is reshaping cybersecurity,” OpenAI told Nextgov/FCW in a statement. “The same capabilities that help defenders are also being used by malicious actors. Some believe the answer is to tightly restrict advanced cyber capabilities to a very small group of approved partners. We believe the better path is responsible, trusted access for defenders so they can move faster than adversaries can adapt.”
OpenAI added that its Cybersecurity Action Plan is focused on working alongside government and private sector entities to support the diffusion of AI-powered defensive tools across trusted partners.
On X, OpenAI CEO Sam Altman added that the company wants GPT-5.5-Cyber to focus on securing critical infrastructure.
OpenAI’s diffusion of GPT-5.5-Cyber follows the company updating its contracts with Microsoft and engaging with Amazon Web Services to offer its agentic tools and AI models across various cloud providers.
The release of cyber-focused AI models from both OpenAI and Anthropic comes as the latter is wading through the fallout of a Pentagon supply chain risk designation after the company refused to allow its products to be used in autonomous weaponry or in the surveillance of Americans. A judge recently issued an injunction on that designation and a White House directive to cease agency use of Anthropic products, though the government has said it plans to appeal.
At least one agency — the State Department — has pivoted from using Claude Sonnet 4.5 to OpenAI’s GPT-4.1 model for its internal AI system.