The cyberwarfare landscape is changing — here’s how to prepare


COMMENTARY | Entering 2026, cyberattack targets and tactics are poised to shift from perpetrating fraud and theft to causing societal disruption.

To date, damaging cyberattacks have largely focused on financial gain or network access, resulting in major business impacts and some disruption to daily life, rather than life-threatening consequences. 

Even incidents like the Colonial Pipeline ransomware attack, which showed us how the cyber world and our physical lives intersect, stopped far short of societal disruption. Recovery was relatively quick once Colonial Pipeline paid its ransom.

However, the threat of cyberwar has been building, driven by advances in AI and the increased presence of hostile actors in U.S. systems and telecommunications networks. A military conflict could escalate these attacks at scale, crippling critical infrastructure and public safety systems like power grids, transportation networks and emergency response, and even disrupting military communications and undermining the nation's ability to respond.

When attacks are waged as cyberwarfare, adversaries are unlikely to offer the option of paying a ransom and resuming normal operations; their objective is strategic advantage, not payment. Even more alarming, the barrier to entry is low and getting lower as AI allows cybercriminals to scale.

Government entities and critical infrastructure providers need to prepare by recognizing the realistic capabilities of bad actors and staying ahead of them.

Security from the start

Creating the right baseline for cybersecurity is always crucial, but especially so when bad actors have both the means and the motivation to access the systems we depend on in our daily lives.

Security baked in from the start must become the standard as government and critical infrastructure partners prepare for the threat of cyberwarfare.

Baked-in security needs to extend across the entire software supply chain. This means scanning all code, especially open-source components, for vulnerabilities and leaning on emerging AI capabilities for discovery and remediation at scale. Federal bodies like CISA and NIST increasingly recognize this need and are providing agencies with guidance for maintaining secure software supply chains, such as NIST's Secure Software Development Framework.
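To make this concrete, here is a minimal sketch, in Python, of the kind of automated dependency check a supply chain pipeline might run, querying the public OSV.dev vulnerability database. The package name and version are illustrative placeholders, and production pipelines would typically rely on purpose-built scanners rather than a hand-rolled script like this.

```python
# Minimal sketch: check one open-source dependency against the public
# OSV.dev vulnerability database. Package name and version are placeholders.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs recorded for a specific package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # Example dependency pinned to an old release; in practice this loop
    # would walk the entries of a lock file or software bill of materials.
    for advisory in known_vulnerabilities("jinja2", "2.4.1"):
        print("Known advisory:", advisory)
```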

Cloud environments can further complicate this security baseline. Agencies operating in the cloud should engineer those environments to rely on native services to harden security, rather than bolting on tools after the fact.

A criminal mindset = secure systems

Technological advancements like AI have lowered the barrier to entry for sophisticated cyber threats, which are advancing and scaling more rapidly than traditional cybersecurity tools can keep pace with. Government agencies and organizations responsible for critical infrastructure need to be strategic in how they identify threats, and that requires testing systems using the same tactics and techniques as the criminal organizations and nation-states executing the attacks.

Penetration testing and enlisting red teams to test security controls allow agencies to identify unseen vulnerabilities and protect against well-resourced, AI-empowered bad actors.
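As a deliberately simplified illustration of one reconnaissance step a red team might automate during an authorized penetration test, the Python sketch below checks whether a few common service ports accept connections on a host. The host and port list are placeholders, and real engagements use purpose-built tooling under strict rules of engagement.

```python
# Toy reconnaissance sketch for an AUTHORIZED penetration test: check which
# common service ports answer on a host you have written permission to test.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def open_ports(host: str, timeout: float = 1.0) -> dict[int, str]:
    """Return the subset of COMMON_PORTS that accept a TCP connection."""
    found = {}
    for port, service in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found[port] = service
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    # Scan only systems you are explicitly authorized to test.
    print(open_ports("127.0.0.1"))
```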

Testing should include a mix of realistic tactics, techniques and procedures across the virtual and physical domains, imitating advanced persistent threats and mimicking social engineering, so agencies know where to patch vulnerabilities, reconfigure systems and educate their workforces on cyberthreats.

To compete with bad actors using AI to find vulnerabilities, defenders also need to lean on AI, coupled with human creativity.

Take action against invisible threats

The reality is that cybercriminals are likely to have already infiltrated critical networks. To be effective, cybersecurity efforts need to operate based on this assumption. 

One of the biggest challenges is that while we can assume adversaries are on federal networks, it is harder to predict what they plan to do. This requires casting a broad net, making AI-enabled threat hunting a critical tool for fighting invisible cyber threats at scale.

Hunt activities should use AI to correlate data from intelligence sources, previous security assessments, and current log activity to discover suspicious behavior at scale. Strategies should strike a balance between immediate remediation and continued observation of the actor, so that detection mechanisms stay current with adversary tactics.
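As a rough illustration of what correlation at scale means in practice, the toy Python sketch below joins simplified log records against an intelligence-supplied indicator list and flags hosts whose activity spikes above the fleet baseline. The hosts, addresses and thresholds are invented for the example; production hunting stacks apply machine learning models to far richer telemetry.

```python
# Toy sketch of the correlation step in threat hunting: join log records
# against a threat-intelligence indicator list and flag hosts whose outbound
# activity sits well above the fleet baseline. All values are illustrative.
from collections import Counter
from statistics import mean, pstdev

# Indicators of compromise supplied by an intelligence feed (placeholders).
IOC_ADDRESSES = {"203.0.113.7", "198.51.100.23"}

# Simplified log records: (source_host, destination_address).
LOG_RECORDS = [
    ("hr-laptop-14", "203.0.113.7"),
    ("ops-server-02", "192.0.2.10"), ("ops-server-02", "192.0.2.10"),
    ("ops-server-02", "192.0.2.10"), ("ops-server-02", "192.0.2.10"),
    ("ops-server-02", "192.0.2.10"), ("ops-server-02", "192.0.2.10"),
    ("web-kiosk-01", "192.0.2.44"),
]

def ioc_hits(records):
    """Records whose destination matches a known indicator of compromise."""
    return [r for r in records if r[1] in IOC_ADDRESSES]

def volume_outliers(records, z_threshold=1.0):
    """Hosts whose connection counts sit far above the fleet average."""
    counts = Counter(host for host, _ in records)
    values = list(counts.values())
    baseline, spread = mean(values), (pstdev(values) or 1.0)
    return [h for h, c in counts.items() if (c - baseline) / spread > z_threshold]

if __name__ == "__main__":
    print("Intel matches:", ioc_hits(LOG_RECORDS))
    print("Volume outliers:", volume_outliers(LOG_RECORDS))
```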

The threat of real, damaging cyberwarfare is one that agencies need to prepare for in the coming year. Baseline cybersecurity solutions won't be sufficient to protect against the impact these threats could have. Agencies and critical infrastructure providers need to build security into their systems from the start, and pair penetration testing with AI-powered threat hunting that lets them think like the cybercriminals they are defending against.