Unleashing AI across the US government: The data security challenge holding back decision advantage


COMMENTARY | Agencies are making significant investments in AI, but those efforts will be wasted if they can't secure the sensitive data that the emerging capabilities need to provide real value.

During my years leading IT strategy at the Department of Defense and the Navy, I witnessed firsthand the frustrating paradox that continues to plague government artificial intelligence initiatives: we're sitting on mountains of valuable data that could revolutionize mission outcomes, yet we can't actually use most of it with AI systems.

The problem isn't technology adoption, since federal agencies are rapidly deploying AI and machine learning capabilities. The challenge is that our most sensitive data — the information that could provide genuine decision advantage — remains locked away because our current security architectures can't protect it at scale once AI systems begin processing it.

The promise of augmented intelligence

Let's be clear about what's at stake. When properly implemented, AI — or what I prefer to call "augmented intelligence" — represents a crucial advancement in how government operates. From predictive maintenance on weapons systems to accelerated threat detection in cybersecurity, from streamlined acquisition processes to improved resource allocation, AI has the potential to enhance every aspect of federal operations.

The Pentagon's emphasis on responsible AI — built on principles of equitable, traceable, reliable, governable and transparent usage — provides the right ethical framework. We understand that humans must remain in the loop for critical decisions, particularly those involving national security or individual rights. We've established governance structures and invested in quality, auditable data pipelines.

But here's what keeps CIOs and CISOs awake at night: all these careful preparations become meaningless if we can't secure the data during AI processing.

The decrypt-to-use vulnerability

Today's AI systems, including the increasingly popular Retrieval-Augmented Generation (RAG) models that federal agencies are deploying, have a fundamental security limitation. To analyze data, they must decrypt it first. This creates a vulnerability window where sensitive information sits exposed in memory and processing systems.

For classified defense data, this is often a complete showstopper. Legal teams won't approve AI analysis of intelligence data when the architecture requires decryption during processing. The same applies to healthcare data protected by HIPAA, financial records subject to compliance requirements or personally identifiable information covered by privacy laws.

Consider the implications: the Department of Defense collects vast amounts of operational data, but by my estimates, we use only about 5% of it on our best days. Of that small fraction, only about 25% reaches commanders in time to inform mission-critical decisions. If I told corporate executives they were throwing away 95% of their competitive advantage, they'd be terminated that day. Yet we are forced to slow down our AI usage because we lack secure methods to process sensitive data with AI.

This isn't just inefficiency; it's a national security risk. Our adversaries are aggressively deploying AI without the same ethical constraints or security concerns that rightfully slow our adoption. The country that can most effectively harness AI for intelligence analysis, operational planning and strategic decision-making will have significant advantages in future conflicts.

The RAG security challenge

Retrieval-Augmented Generation models present particularly acute security challenges. These systems combine large language models with organizational knowledge bases, allowing AI to provide contextually relevant responses based on proprietary or classified information. They're powerful tools for everything from policy analysis to technical support.

But RAG architectures require constant interaction between the AI model and data repositories. Every query triggers retrieval operations that pull sensitive information from storage, decrypt it for processing, generate results and ideally re-encrypt everything. Each step creates potential exposure points. Each handoff between systems represents a possible vulnerability.
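The decrypt-to-use cycle described above can be sketched in a few lines. This is a deliberately simplified illustration, not a real RAG stack or a real cipher: the toy stream cipher, the document store and the function names are all hypothetical, and the point is only to show where plaintext becomes exposed in process memory during retrieval.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a repeatable keystream from the key (toy stream cipher,
    # for illustration only -- not a real encryption scheme).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Hypothetical encrypted knowledge base backing a RAG system.
KEY = b"demo-key"
store = {
    "doc-1": encrypt(KEY, b"Readiness report: 3 vessels delayed."),
}

def rag_retrieve(doc_id: str) -> str:
    ciphertext = store[doc_id]
    # --- vulnerability window opens: plaintext now sits in memory ---
    plaintext = decrypt(KEY, ciphertext)
    answer = plaintext.decode()   # the model "processes" the cleartext
    # --- window closes only once the plaintext is discarded ---
    return answer
```

Every call to `rag_retrieve` reopens that window, which is why the exposure scales with query volume rather than with how the data is stored.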

Traditional encryption approaches — encrypt data at rest, decrypt for processing, re-encrypt for storage — simply don't work when AI systems need continuous, high-speed access to sensitive information. The decrypt-to-use model creates too many windows of vulnerability, and the performance overhead of constant encryption and decryption operations becomes prohibitive.

Federal agencies attempting to deploy RAG systems on classified or sensitive data face an impossible choice: either accept security risks that breach their compliance obligations, or forgo AI capabilities on the data where AI would provide the most value.

What's needed: processing without decryption

The solution requires a fundamentally different security architecture: one that enables AI processing on encrypted data without ever decrypting it. This isn't theoretical cryptography research; the underlying mathematics, such as homomorphic encryption, already exists. What's needed is engineering that makes continuous encryption practical for real-world AI operations.

Such an architecture would allow RAG models to query encrypted databases, retrieve encrypted information, process it while encrypted and return results, all without creating the vulnerability windows that make security officers reject AI deployments. The encryption never comes off. The data is never exposed, even to the AI system itself.
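The mathematics behind computing on encrypted data can be shown with a toy example. The sketch below implements the Paillier cryptosystem, which is additively homomorphic: two ciphertexts can be multiplied to produce an encryption of the sum of their plaintexts, without decrypting either input. This is an assumption-laden illustration, not an operational design; the primes are far too small for real security, and production systems would use vetted libraries, much larger keys and fully homomorphic schemes that support richer computation.

```python
import math
import random

# Toy Paillier keypair (illustration only -- these primes are insecure).
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1)      # phi(n), valid here because we use g = n + 1
mu = pow(lam, -1, n)         # modular inverse of lam mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Homomorphic addition: multiply ciphertexts, never decrypt the inputs.
c1, c2 = encrypt(12), encrypt(30)
total = (c1 * c2) % n2
print(decrypt(total))   # 42
```

The key property is that the sum is computed entirely in ciphertext space; only the holder of the private key ever sees a plaintext, and only for the final result.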

This would unlock AI capabilities across government operations that are currently off-limits:

In defense and intelligence: Analysts could use AI to identify patterns across classified datasets, generate intelligence assessments and provide decision support, all while the underlying data remains protected at its required classification level.

In healthcare: VA hospitals could deploy AI diagnostics and treatment recommendations using complete patient records, improving care quality without HIPAA violations.

In law enforcement: Investigators could leverage AI for case analysis and threat detection using sensitive criminal justice information that currently can't be processed by AI systems.

In financial oversight: Regulators could use AI to detect fraud patterns and compliance violations across sensitive financial data that institutions legally cannot decrypt for AI analysis.

Beyond technical solutions: cultural change

Technology alone won't solve this challenge. As I emphasized throughout my time at DOD, we need fundamental shifts in how government approaches risk management. Too often, "risk management" becomes a sophisticated way of saying "no" to innovation. We need to break that reflex and focus instead on mission outcomes.

The right question isn't "How do we eliminate all risk?" It's "How do we enable critical capabilities while managing risks to acceptable levels?" For AI security, that means demanding encryption architectures that protect sensitive data throughout the AI processing lifecycle, not just when data sits idle in storage.

We also need acquisition reform that allows government to rapidly adopt innovative security solutions. It's absurd that a $500,000 contract costs the same to execute as a $2 billion program. Small companies developing breakthrough security technologies can't navigate our procurement bureaucracy. Meanwhile, adversaries move faster because they're unburdened by our process constraints.

The path forward

Federal agencies are making significant investments in AI infrastructure, training programs and governance frameworks. These investments will be wasted if we can't secure the sensitive data that AI needs to provide real value.

The good news is that solutions are emerging. Continuous encryption technologies that enable AI processing without decryption are moving from research labs to commercial availability. Forward-thinking agencies should be evaluating these capabilities now, running pilot programs and preparing their organizations for a security architecture that finally allows sensitive data and AI to work together.

The stakes are too high for incremental progress. Every month we wait to solve the RAG security challenge is another month where America's adversaries gain ground in the AI race. Every sensitive dataset that remains off-limits to AI is a missed opportunity for better decisions, faster response times and more effective government operations.

We built the responsible AI framework. We established governance structures. We invested in data quality. Now we need the security architecture that allows us to actually use AI on the data that matters most.

The technology to unleash AI across government operations exists. What we need now is the leadership commitment to demand it, the acquisition agility to procure it, and the cultural shift to deploy it. Our missions, and our national security, depend on it.

Terry Halvorsen served as Chief Information Officer at the U.S. Department of Defense from 2015-2017 and as CIO of the Department of the Navy. He currently serves as Vice President of Federal Client Development at IBM. He is a veteran Army intelligence officer who served during Operation Just Cause and Operation Desert Storm.