Lawmaker calls for GAO review of threat actors weaponizing AI

Rep. August Pfluger (R-TX) speaks during a House Homeland Security Committee hearing on February 10, 2026 in Washington, DC. Samuel Corum/Getty Images
In a letter to the federal watchdog, Rep. August Pfluger, R-Texas, warned that “the nature and scope of the challenges associated with countering these AI-enabled threats are not well understood.”
A top GOP member of the House Homeland Security Committee is calling for a federal watchdog to assess how malicious actors are using agentic and generative artificial intelligence technologies to amplify their activities.
In a letter dated March 17 to the Government Accountability Office, Texas Republican Rep. August Pfluger — chair of the panel’s Counterterrorism and Intelligence Subcommittee — warned that illicit uses of the emerging capabilities pose “a broad and evolving” national security risk.
“GenAI enables violent extremists and other illicit actors to create large volumes of tailored propaganda, misinformation, and recruitment content at low cost and machine-level speed, significantly lowering barriers to radicalization, manipulation, and online influence,” Pfluger wrote. “Agentic AI further amplifies these activities by autonomously planning, adapting, and executing tasks over time, allowing terrorist-related efforts to persist, scale, and adjust in real time with minimal human oversight.”
The letter, which was publicly released by Pfluger’s office on Saturday, warned that “the nature and scope of the challenges associated with countering these AI-enabled threats are not well understood.”
Cyber threat actors, international terrorist groups and violent domestic extremists have successfully weaponized social media platforms over the past decade-plus to spread propaganda and recruit members. With the rapid expansion of more effective AI capabilities, these groups have increasingly relied on AI to create realistic deepfakes, hone their cyber scams and sow mis- and disinformation at minimal cost.
The lawmaker’s letter called for GAO to review what is currently known about how AI has changed bad actors’ “ability to conduct terrorist activities,” how federal law enforcement and intelligence agencies have changed their deterrence efforts in response to uses of the technologies, and how these agencies are working with tech firms “to detect, disrupt, and limit the use of AI by violent extremists and other illicit actors to conduct terrorist activities.”
Pfluger has been particularly vocal about how AI can further the aims of terrorists and other bad actors, including holding a subcommittee hearing last March to discuss how terrorists are using the internet and technology to further their efforts.
The congressman also introduced legislation in February 2025 to require the Department of Homeland Security “to conduct annual assessments on terrorism threats to the United States posed by terrorist organizations utilizing generative artificial intelligence applications.”
Although that measure passed the House in November, the Senate has not yet held a vote on the bill.