Is artificial intelligence a friend, foe or frenemy? NIST wants to find out

The standards agency will be hosting a working session to discuss how AI-empowered attacks can sometimes slip past traditional defenses.
Artificial intelligence is fast becoming cybersecurity’s ultimate double agent. The same tools that help defenders spot anomalies, detect intrusions and speed up response times are also making it easier for adversaries to scale and automate their attacks. AI can generate code to patch vulnerabilities, or it can exploit those same flaws whenever it finds them. It can help identify phishing attempts or draft realistic lures designed to trick even the savviest recipients. That dual-use nature of AI is exactly what the National Institute of Standards and Technology is trying to untangle in a series of virtual working sessions that bring government and industry experts together.
The first two NIST Cybersecurity Artificial Intelligence Profile working sessions were held in August and were extremely interesting and informative, with lots of experts from government, industry and education participating. The first broke down the various ways that AIs function, as well as how to secure them. The second was all about how to build up AI-empowered cyber defenses. The entire series is virtual and open to anyone.
The final session is scheduled for September 2, and this time, they are addressing the elephant in the room: how AI-empowered attacks can sometimes get around traditional defenses. According to NIST, this final working session will also cover how agencies and organizations can build resilience in the face of AI-enabled attacks. The topic actually came up in both of the previous sessions, so it’s clearly a concern for government and the private sector alike.
In their post-session blogs, NIST researchers noted that while AI is improving the speed and scale of defensive tools, it’s also doing the same for attacks. Typical attacks like phishing campaigns, data poisoning and model inversions are no longer labor-intensive operations when AI can do most of the work. Adversaries have also started experimenting with agentic AI, which can autonomously adapt and execute multi-step campaigns with little human intervention. And its skill in executing those kinds of advanced attacks is increasing.
Agentic AI, with its ability to autonomously perform complex, multi-step operations and adapt its tactics in the middle of an attack, could prove to be particularly dangerous in the near future. And it’s already being used for offensive operations. A recent report by Palo Alto Networks’ Unit 42 detailed how agentic AI can be used to increase the speed, scale and sophistication of attacks. Unit 42 simulated ransomware and data exfiltration attacks for its study, letting the agentic AI learn about and adapt to whatever defenses it encountered. In many cases, the AI was able to evade or trick most traditional defenses. The average time it took an agentic AI to break in and start exfiltrating data was just 25 minutes, 100 times faster than a normal, non-AI-enhanced attack.
So, how can defenses protect against AI-empowered attacks? Hopefully that will be answered or at least discussed at the next workshop. But in the meantime, NIST and other experts say that the key to stopping AI-enhanced attacks is a proactive defense. Automated red teaming, zero-trust principles, better identity controls and tight privilege management all become critical in a world where AI adversaries don’t need to rest or sleep. But the technology itself isn’t enough. The human factor matters too.
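To make those principles a bit more concrete, here is a minimal, hypothetical sketch of a deny-by-default access check in the zero-trust, least-privilege spirit. The policy table, identities and resource names are invented for illustration; this is not a NIST-specified control or any particular product’s API.

```python
# Hypothetical sketch: deny-by-default access decisions in a zero-trust style.
# Every request is evaluated on identity, device posture and the specific
# resource/action, and every decision is logged for later review.
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("access")

@dataclass(frozen=True)
class Request:
    identity: str         # authenticated principal (e.g., from an SSO token)
    device_trusted: bool  # device posture check passed
    resource: str
    action: str

# Illustrative least-privilege policy: each identity gets only the
# (resource, action) pairs it needs; everything else is denied.
POLICY = {
    "svc-backup": {("db/finance", "read")},
    "alice":      {("db/finance", "read"), ("db/finance", "write")},
}

def authorize(req: Request) -> bool:
    allowed = (
        req.device_trusted
        and (req.resource, req.action) in POLICY.get(req.identity, set())
    )
    log.info("identity=%s resource=%s action=%s allowed=%s",
             req.identity, req.resource, req.action, allowed)
    return allowed

# Example: an untrusted device is refused even for an otherwise-permitted user.
print(authorize(Request("alice", device_trusted=False,
                        resource="db/finance", action="write")))  # False
```

The point of the sketch is the posture, not the code: nothing is trusted by default, every request is re-verified, and every decision leaves an audit trail that monitoring tools can watch for unusual patterns.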
Consider AI in software development. A recent study conducted by researchers from the University of San Francisco, the Vector Institute for Artificial Intelligence in Toronto and the University of Massachusetts Boston analyzed 400 code samples across 40 rounds of improvements using four prompting strategies. One even explicitly asked the test AIs to improve the security of existing code by eliminating vulnerabilities. The results were not good. After just five rounds of AI changes, there was a 37.6% increase in critical vulnerabilities that other AIs could easily exploit. And the problems compounded with each additional iteration, that is, each time an AI was asked to rework the source code.
That study’s recommendations came down to one recurring theme: human oversight. Developers should review all code iterations, use automated tools as supplements, limit consecutive AI-driven refinements and keep an eye on code complexity. That means organizations should invest in training so that developers are equipped with the security skills needed in today’s AI-driven environments. Otherwise, it just makes it too easy for agentic AI to exploit code created by its AI brethren.
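As an illustration of what limiting consecutive AI-driven refinements and watching code complexity could look like in practice, here is a minimal, hypothetical pre-merge guard. The cap of three AI rounds, the "ai-assistant" authorship marker and the rough complexity heuristic are all assumptions for the sketch, not the study’s own tooling.

```python
# Hypothetical pre-merge guard: caps consecutive AI-authored refinements of a
# file and flags rising complexity so a human review is forced in between.
import ast

MAX_CONSECUTIVE_AI_EDITS = 3  # assumed threshold for this sketch

def consecutive_ai_edits(commit_authors: list[str]) -> int:
    """Count trailing commits tagged as AI-generated (most recent last)."""
    count = 0
    for author in reversed(commit_authors):
        if author != "ai-assistant":   # hypothetical marker for AI commits
            break
        count += 1
    return count

def branch_complexity(source: str) -> int:
    """Rough cyclomatic-style score: 1 + number of branching constructs."""
    tree = ast.parse(source)
    branches = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(tree))

def needs_human_review(commit_authors: list[str],
                       old_source: str, new_source: str) -> bool:
    too_many_ai_rounds = (
        consecutive_ai_edits(commit_authors) >= MAX_CONSECUTIVE_AI_EDITS
    )
    complexity_grew = branch_complexity(new_source) > branch_complexity(old_source)
    return too_many_ai_rounds or complexity_grew

# Example: three AI rounds in a row plus a more branch-heavy rewrite trips the check.
history = ["alice", "ai-assistant", "ai-assistant", "ai-assistant"]
print(needs_human_review(history,
                         "def f(x):\n    return x\n",
                         "def f(x):\n    if x:\n        return x\n    return 0\n"))
```

In a real pipeline a check like this would sit alongside dedicated security scanners rather than replace them, in line with the study’s advice that automated tools supplement human review instead of standing in for it.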
The Open Worldwide Application Security Project also recently warned about a new concern: the growing presence of agentic variations of existing threats. These include dangerous and effective attack techniques like memory poisoning, misuse of integrated tools and privilege escalation whenever an AI agent acts on behalf of a human user. When agentic AI is asked to execute those kinds of attacks, it does so extremely quickly and with great skill. According to OWASP, the best defense right now is to employ strong identity controls and strict privilege management, along with constant monitoring to detect unusual behavior.
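To show what strict privilege management around an agent can look like, here is a minimal, hypothetical tool-call wrapper. The allowlist, the rate threshold and the tool names are assumptions made for this sketch; they are not OWASP-specified controls or a real agent framework’s API.

```python
# Hypothetical sketch: least-privilege tool gating for an AI agent, plus a
# crude rate check so bursts of unusual tool use get surfaced for review.
import time
from collections import deque

# Tools this particular agent may invoke (allowlist, deny by default).
ALLOWED_TOOLS = {"search_tickets", "read_knowledge_base"}
MAX_CALLS_PER_MINUTE = 20  # assumed monitoring threshold for this sketch

_recent_calls: deque[float] = deque()

def guarded_tool_call(tool_name: str, run_tool, *args, **kwargs):
    """Run a tool on the agent's behalf only if policy allows it."""
    now = time.monotonic()
    # Drop call timestamps that fall outside the one-minute monitoring window.
    while _recent_calls and now - _recent_calls[0] > 60:
        _recent_calls.popleft()

    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not permitted for this agent")
    if len(_recent_calls) >= MAX_CALLS_PER_MINUTE:
        raise RuntimeError("unusual burst of tool calls; pausing for human review")

    _recent_calls.append(now)
    return run_tool(*args, **kwargs)

# Example: a permitted lookup succeeds, an attempted shell command is refused.
print(guarded_tool_call("search_tickets", lambda q: f"results for {q}", "vpn outage"))
try:
    guarded_tool_call("run_shell", lambda cmd: cmd, "rm -rf /")
except PermissionError as exc:
    print(exc)
```

The design choice mirrors OWASP’s advice: the agent only ever holds the narrow permissions its task requires, and anything outside that envelope is blocked and flagged rather than silently allowed.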
The challenge with AI right now seems to be trying to tip the scales toward the defender’s side. And that is what makes NIST’s upcoming Thwarting AI Enabled Cyber Attacks workshop so important. It’s not just about patching today’s vulnerabilities, but also about preparing for tomorrow’s threats. And in the fast-moving world of AI, tomorrow might already be here.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys