NIST aims for summer release of AI cyber guidelines

Draft cybersecurity guidance for AI-driven threats across different types of emerging systems is in development as the federal government wades into AI model risk assessments.
The National Institute of Standards and Technology is slated to debut new guidance on artificial intelligence-specific cybersecurity to help mitigate AI-enabled digital threats while maximizing the benefits of safe AI adoption.
Speaking at a Qualys conference on Thursday, Victoria Pillitteri, the manager of the Security Engineering and Risk Management Group at NIST, said she expects a cybersecurity framework profile for AI to debut “sometime this summer” pending agency approval.
The forthcoming draft AI cybersecurity framework is slated to be accompanied by guidance on control overlays — or sets of tailored cybersecurity baselines to manage risks unique to different AI systems — with the help of NIST’s Center for AI Standards and Innovation.
Pillitteri told Nextgov/FCW that her team and CAISI have started to develop a series of overlay guidance focused on cyber threats targeting agentic, predictive and generative AI systems.
“This [administration’s] priority is speed; being innovative and scaling at speed,” Pillitteri said. “So that means everything does have to move faster. We're trying to evolve the way that we develop, maintain and engage with our stakeholders. For our standards and guidelines, we're trying to ensure that we are addressing these critical areas where cybersecurity intersects AI on multiple fronts.”
She said the draft of overlay guidance for predictive AI is expected to arrive this summer, while the overlay guidance on agentic systems is due in late summer to early fall. NIST plans to finalize the guidance by 2027.
“The intention is to issue all of these guidelines sequentially in draft,” Pillitteri said. “This way we can take lessons learned, improvements, revise everything, but yet still get something out quickly, because we realize adoption is happening now.”
As the federal government mulls how best to understand the digital risks posed by increasingly powerful AI models — such as Anthropic’s Mythos Preview model, unveiled in its Project Glasswing testing initiative — CAISI will play a key role.
Last week, the Department of Commerce announced it had secured new agreements with leading AI developers Google DeepMind, Microsoft and xAI to submit their models to CAISI for testing. Researchers at CAISI will specifically examine the national security ramifications of these companies’ frontier AI models.
As of Thursday, the press release announcing the private sector partnership with CAISI had been removed. The Washington Post reported on Monday that, according to a person familiar with the decision, the page was taken down because of sensitivities within the White House.
Pillitteri said that while standards disseminated by NIST can’t keep pace with rapidly evolving technologies like AI, the forthcoming guidance aims to offer a fundamental playbook for vendor and supply chain partners to constantly fortify their data security infrastructure.
“It's not a rip and replace,” she said. “It's, ‘How do we augment what we have? How do we adapt the technologies, the architectures, our products, solutions and services, and how do we augment our workforce … to support the future needs that we're building for ahead?’”



