Anthropic’s Glasswing initiative raises questions for US cyber operations

(Image: Samuel Boivin/NurPhoto via Getty Images)
Intelligence officials and industry are weighing how Claude Mythos Preview could reshape hacking and cyberdefense. The company has also briefed senior officials on the AI model it says has already uncovered thousands of cyber vulnerabilities.
Anthropic’s decision to hold back a powerful frontier AI model over cybersecurity risks, paired with a new initiative to study its effects on global networks, is prompting discussion within the U.S. intelligence community about how such tools could reshape hacking operations, including how they might be used to identify and exploit weaknesses in adversary systems.
The company unveiled Project Glasswing on Tuesday, an effort to help secure critical software against AI-driven attacks, with partners including Amazon Web Services, Apple, Cisco, Google and Microsoft. Those participants will gain access to Claude Mythos Preview, an unreleased model the company says has already uncovered thousands of vulnerabilities, as Anthropic looks to steer its tools toward defensive cybersecurity uses.
“The fallout — for economies, public safety, and national security — could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes,” the AI company said in a blog post Tuesday. The Mythos Preview model “has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser,” the post said.
News of the model is rippling through the intelligence community, according to a person familiar with the thinking of multiple IC agencies.
“They want secure code and to use AI to find network vulnerabilities as well,” said the person, who, like some others in this story, spoke on the condition of anonymity to describe sensitive internal deliberations.
Prior to any external release, Anthropic briefed senior officials across the U.S. government on Mythos Preview’s full capabilities, including both offensive and defensive cyber applications, said an Anthropic official. That engagement has included discussions with the Cybersecurity and Infrastructure Security Agency and NIST’s Center for AI Standards and Innovation, among others.
“Bringing government into the loop early — on what the model can do, where the risks are, and how we’re managing them — was a priority from the start,” the company official said.
Analysts inside the National Security Agency have also been informally discussing the Mythos model’s release, another person familiar with the matter told Nextgov/FCW.
Multiple intelligence agencies and Defense Department components play roles in both offensive cyber operations and defending U.S. networks. Because offensive missions often depend on understanding a target’s defenses, tools like the Mythos model in the wrong hands could help adversaries identify and exploit weaknesses in critical systems. Agencies are already known to stockpile hacking exploits for future use.
The development is also drawing major attention, and in some cases concern, from cyber-focused firms that engage with the intelligence community.
“How is anyone supposed to defend against all of this at once?” said one executive at a cyber investment firm, alarmed by the scale at which the Anthropic model was able to identify vulnerabilities.
The Glasswing news is “scary and ominous” because it isn’t clear how Mythos Preview could be used offensively, especially if it falls into the hands of a foreign adversary, said Hayden Smith, a co-founder at Hunted Labs, a company focused on software supply chain risks.
It’s very possible the model could land in the possession of governments considered hostile to the U.S., he said, explaining that “even with deep vetting, the odds of Mythos flowing into the wrong hands is barely a hypothetical given the landscape of current attacks on the open source ecosystem and software supply chain.”
Because much of the internet runs on widely used open-source software maintained by developers around the world, tools like Mythos could uncover weaknesses in code that underpins large parts of the digital ecosystem.
That dynamic has come into sharper focus following recent software supply chain incidents that had widespread repercussions — including a compromise of the Axios JavaScript library disclosed last week — and amid concerns that some developers behind critical open-source projects are affiliated with companies the U.S. government considers tied to foreign adversaries.
Capitol Hill is also paying attention to the Anthropic development.
“We are already seeing cyber threat actors using AI tools to improve their capabilities, putting government, businesses and consumers’ security and personal information at risk,” said Sen. Mark Warner, D-Va., the vice chairman of the Senate Intelligence Committee. “As AI dramatically accelerates the discovery of new vulnerabilities, I hope industry will correspondingly accelerate and reprioritize patching.”
Observers have long anticipated the release of a model like Mythos Preview that could identify and exploit cyber vulnerabilities at scale, said Morgan Adamski, the former executive director at U.S. Cyber Command and lead for PwC’s Cyber, Data & Technology Risk services.
“For those in the offensive cyber community, for the U.S. government, there’s obviously a huge potential there from an adversarial perspective,” she said in an interview.
But offense and defense are, in many ways, one and the same. If cyberintelligence analysts find a novel vulnerability in an enemy computer network, it’s possible a U.S. system might have the same vulnerability, too.
“There’s going to be a real equity conversation that occurs,” Adamski said. “If we exploit something in an adversarial network, we’re going to have to be able to defend against it in our own critical infrastructure.”
She also said to expect more of these innovations in the AI space, as “typically, when these types of models come out, other models aren’t far behind.”
In an interview, Gary DePreta, the senior vice president of Cisco’s U.S. Public Sector Organization, told Nextgov/FCW that the company’s participation in Project Glasswing is part of its larger aim to address cybersecurity threats while bringing the benefits of AI to its customer base.
“We’re going from an age of detect-and-respond — and as we automate with AI — to predict-and-prevent threats,” DePreta said on Wednesday. “We keep saying this phrase at Cisco: ‘there is a paradox of progress as it relates to AI and the enterprise.’ And what it simply means is the capabilities of AI are far exceeding the enterprise’s ability to implement it in a safe and secure way.”
Anthropic has become a major voice in debates over where AI companies should draw ethical lines around uses of their technology, though that stance has created friction with the U.S. military. Earlier this year, the company declined to ease restrictions on its tools being used for domestic surveillance or fully autonomous weapons for Pentagon use, triggering a “supply chain risk” designation from the Defense Department and a White House order that all federal agencies phase out their use of Anthropic tools. The company has challenged the move in court.
The Mythos announcement may reshape how the Defense Department interacts with the company.
The government “needs to make amends with Anthropic and help them and Glasswing members maintain the American lead on AI by preventing Chinese model theft,” said Leah Siskind, an AI research fellow at the Foundation for Defense of Democracies think tank.
“Anthropic is making the responsible call — but adversaries won’t,” she said. “China is already exploiting U.S. AI models to accelerate its own capabilities, and when they reach Mythos-level performance, they will weaponize it.”