Watch for GenAI browsers, purple teaming and evolving AI policy in 2026

Olga Kaya/Getty Images

COMMENTARY | Government IT leaders cannot afford inaction, especially as AI adoption accelerates.

Artificial intelligence will continue to drive federal modernization in 2026, but with this rapid adoption comes increased risk.   

Last May, the FBI publicly warned that cybercriminals and state-linked threat actors were using AI-generated text and voice deepfakes to target U.S. federal and state government officials in sophisticated phishing campaigns. Similarly, in September, Anthropic security researchers discovered one of the first documented cases where AI, with minimal human intervention, operated a large-scale cyberespionage campaign.  

It’s clear that as more organizations begin to embrace AI in all levels of government, bad actors are also turning to the technology.  

U.S. agencies must proactively address these unique agentic AI threats before they impact mission-critical operations. But it's not easy; mitigating these risks means investing in new security approaches, such as autonomous purple teaming and staying informed about evolving AI policy and regulations.

New security strategies, including autonomous purple teaming, will emerge

As federal agencies adopt AI, new security approaches are essential to national protection. Agencies like DARPA have implemented dedicated programs and turned to approaches like red teaming to assess the battlefield systems using AI.  

While this is a good start, traditional red-and-blue teaming cannot match the speed and complexity of modern adoption and AI-driven systems. Instead, agencies should look to combine continuous attack simulations with automated defense adjustments, enabling an automated purple teaming approach.  

Purple teaming shifts the paradigm from one-off testing to continuous, autonomous GenAI security by allowing agents to simulate AI-specific attacks and initiate immediate remediation within the same platform. This AI-driven approach allows agencies to quickly identify and address vulnerabilities in pace with evolving threats, closing the gap between AI offense and defense.  

However, new threats, such as AI browsers, are continuously emerging.

2026 will be the year of AI browsers

AI is now embedded in the core browsing experience, transforming browsers from passive tools into active decision interfaces. 

These AI-powered browsers fundamentally change how users interact with the web. Through the use of LLMs, they can summarize documents, interpret context, autofill forms and execute multi-step workflows through language commands. While this evolution does increase productivity, it also introduces a new attack surface. 

As browsers integrate AI through extensions and native AI browsers, such as Perplexity’s Comet and OpenAI’s Atlas, become more common, security challenges increase. Traditional browser security models are designed for static scripts and predictable input, not equipped to monitor or control risks such as prompt injection, data leakage and hallucination-driven actions. 

Most standard monitoring systems depend on factors like network telemetry and known-compromise indicators. Since AI agent interactions occur within the browser process and are often triggered by natural language inputs, attacks often remain undetected because malicious prompts resemble benign ones, and resulting actions appear as legitimate user events.

Federal agencies deploying AI must quickly address these risks, especially since Perplexity and OpenAI have both partnered with the GSA’s OneGov program to advance federal AI adoption. Without proper security guardrails, these threats could jeopardize national security.  

To protect against AI-based exploitation, agencies should implement runtime policy enforcement, context-aware monitoring, identity isolation and guardrail injection. Ultimately, the key challenge and goal is ensuring AI browsers’ intelligence remains accountable, observable and secure for federal use.

Policy and regulatory implications for AI adoption 

The U.S. has entered a new phase of AI regulation. Federal oversight of AI is advancing through executive orders and sector-specific guidance from agencies in healthcare, finance and employment.  

Specific frameworks such as NIST’s AI Risk Management Framework (AI RMF) and ISO/IEC 42001 are establishing standardized expectations for AI governance. For Atlas and similar systems, these frameworks require operational transparency, risk-based assessment and continuous monitoring, which agencies must integrate into procurement and implementation. 

State initiatives also often coincide with global frameworks such as the EU AI Act, which categorizes AI systems by risk and imposes obligations on high-risk and systemic models, and the UK’s principles-based approach, which emphasizes safety and accountability without binding rules. This has resulted in a patchwork of frameworks at national and international levels.  

However, federal efforts to limit states’ ability to regulate AI through recent executive orders and potential inclusion in the NDAA may affect this landscape. 

For agencies deploying GenAI, compliance must be integrated into security measures and governance from the outset. As the nation determines how to regulate and safely deploy AI, government IT leaders in 2026 cannot afford inaction, especially as AI adoption accelerates.

Elad Schulman is the CEO and co-founder of Lasso Security. He is a seasoned tech entrepreneur, with experience in both enterprises and startups.