How AI fights fraud to enhance healthcare delivery

Presented by GDIT

Health care spending represents nearly $5 trillion of the United States economy, making it a prime target for scammers. Meanwhile, surging national medical costs and increasingly sophisticated attacks are making it difficult for traditional oversight tools to keep fraudsters out and protect taxpayer dollars. The boom in AI-powered solutions is being harnessed to deliver efficient, accurate services while reducing fraud, waste and abuse. AI tools flag anomalies and identify suspicious billing patterns with greater speed and accuracy. At the same time, they enhance service delivery by processing legitimate claims faster than ever and reducing backlogs.
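To make the anomaly-flagging idea concrete, here is a minimal sketch of one common screening technique: comparing each provider's total billed amount against its peers with a z-score. The data, provider IDs and threshold are hypothetical, and production systems use far richer models; this only illustrates the kind of statistical screen such tools apply.

```python
from statistics import mean, stdev

def flag_billing_anomalies(claims_by_provider, threshold=2.5):
    """Flag providers whose total billed amount is a statistical
    outlier relative to their peers (a simple z-score screen)."""
    totals = {p: sum(amounts) for p, amounts in claims_by_provider.items()}
    mu = mean(totals.values())
    sigma = stdev(totals.values())
    if sigma == 0:
        return []
    return [p for p, total in totals.items()
            if abs(total - mu) / sigma > threshold]

# Hypothetical monthly claim amounts per provider ID
claims = {f"P{i}": [10_000] for i in range(11)}
claims["P11"] = [500_000]  # one provider billing far above peers
print(flag_billing_anomalies(claims))  # → ['P11']
```

A rule this simple runs in milliseconds over millions of rows, which is why statistical screens are typically the first pass before costlier model-based review.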

Preventing healthcare fraud, waste and abuse speeds service delivery

Federal health agencies like the Centers for Medicare and Medicaid Services (CMS) and the Veterans Health Administration (VHA) are attractive targets for scammers, largely due to volume and complexity: the volume of claims processed and paid, the volume of patient and provider records, and the complexity of healthcare programs, benefits and policies. The result is an ecosystem rife with opportunities for fraud, whether via identity theft, false claims or other constantly evolving attacks.

“Bad actors are always finding new approaches as new services are approved, or new policies come out,” said Karen Painter, senior program director for fraud, waste and abuse at General Dynamics Information Technology (GDIT). “We see a lot of adaptation among the bad actors in how they're targeting the programs.”

By leveraging AI tools, agencies can strike the right balance between comprehensive vetting of claims and services and speed of care — particularly crucial for vulnerable populations. The goal is to stop fraud before it happens and usher legitimate claims through the pipeline to benefit recipients as quickly as possible.

“A lot of AI capabilities have been able to help us streamline detection, quickly triage and prioritize risk and alert agencies to the most egregious behavior more quickly,” Painter said. Fraud detection and prevention are intrinsically tied to improved service delivery, she added. When agencies are able to more efficiently identify and mitigate bad behavior, it becomes easier for legitimate claims to be resolved without excessive burden on providers or beneficiaries.

Case study: AI for FWA at the Centers for Medicare and Medicaid Services

When it comes to volume, CMS tops the list. It’s the largest single health payer in the U.S. and maintains the nation’s largest collection of health data. CMS and related contractors process more than 1 billion Medicare claims per year, according to the agency. Improper payments alone have reached billions of dollars of wasted federal funds. 

To combat fraud, waste and abuse, CMS partnered with GDIT to develop the agency’s first AI and machine learning model.

“We have been supporting CMS and their program integrity efforts for decades,” said Dave Merrill, vice president of CMS programs at GDIT. “Our partnership is grounded in a shared commitment to safeguarding public trust within every aspect of program execution across the agency. By tailoring our AI solutions to their needs, we can detect and combat fraud faster and more accurately, supporting their mission at every level.”

The solutions GDIT developed with CMS leverage AI and machine learning to identify fraud schemes with more complex patterns than humans could detect. Within two years, GDIT’s solution outperformed manual methods, demonstrated more than 90% accuracy in detection, and cut model development time from months down to minutes. The result: GDIT’s AI for FWA solution for CMS identifies more than $1 billion in suspect claims per year.

Approach to trustworthy AI

“We’ve found machine learning is very efficient at identifying fraud,” Painter said, “while generative AI and large language models have been helpful in incorporating the large policy manuals, health care records and other unstructured data, making it easier to more accurately target fraud and make processes more efficient.”

In other words, rather than requiring people to manually search through policies to identify violations or assess whether something is clinically indicated — processes that are tedious, time-consuming and prone to error — generative AI-powered solutions can perform such tasks in minutes.
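The policy-lookup step described above usually starts with retrieval: finding the passages of a manual relevant to a reviewer's question before any language model reasons over them. The sketch below is a deliberately toy stand-in for that retrieval step, ranking passages by simple word overlap; the sample manual text and query are invented for illustration, and real systems use embeddings or an LLM rather than this keyword match.

```python
def rank_policy_passages(query, passages):
    """Rank policy passages by word overlap with a reviewer's query —
    a toy stand-in for the retrieval step LLM pipelines automate."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(p.lower().split())), p) for p in passages]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0]

# Hypothetical policy-manual excerpts
manual = [
    "Coverage for home oxygen therapy requires documented hypoxemia.",
    "Ambulance transport is covered when medically necessary.",
    "Routine dental care is excluded from coverage.",
]
print(rank_policy_passages("home oxygen therapy", manual))
```

Even this crude ranking shows the shape of the workflow: narrow thousands of pages to a handful of candidate passages, then let a human or a model judge whether the claim matches policy.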

According to Painter, no AI solution will be successful without a solid foundation and framework of security, trustworthiness and responsible development, especially in the government space. Explainability is critical to AI, and the results it produces should not be simply accepted at face value. Solutions and models must have a level of transparency to keep humans in the loop and demonstrate how and why decisions are made.
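One concrete way systems achieve the transparency described above is to report not just a risk score but each input's contribution to it. The sketch below assumes a simple linear scoring model with made-up feature names and weights, purely to illustrate the pattern; it is not GDIT's method.

```python
def explain_risk_score(features, weights):
    """Score a claim with a linear model and return each feature's
    contribution, so a reviewer can see why it was flagged."""
    contributions = {name: value * weights.get(name, 0.0)
                     for name, value in features.items()}
    total = sum(contributions.values())
    # Largest-magnitude contributions first, for human review
    return total, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical features and weights for illustration only
score, reasons = explain_risk_score(
    {"claims_per_day": 40, "first_year_provider": 1},
    {"claims_per_day": 0.02, "first_year_provider": 0.5},
)
print(score)    # → 1.3
print(reasons)  # → [('claims_per_day', 0.8), ('first_year_provider', 0.5)]
```

Surfacing the ranked contributions alongside the score is what keeps the human in the loop: a reviewer can accept, question or override the flag based on visible evidence rather than an opaque number.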

“As much as we are excited about the technology advancements, we understand the importance of making sure that we're taking that right due diligence,” Painter said, “to ensure that we're using the technology properly, in a safe way that will protect beneficiaries and providers.”

What’s next for AI in the health space

Just as adversaries evolve, so must the solutions to thwart them. To that end, GDIT’s work with CMS will continue to advance as the mission grows in complexity and sophistication. Painter noted that the team continually explores emerging technologies, looking for ways to improve the solutions in place today.

AI success also depends on tapping into the right partners. As the Trump administration highlights the criticality of AI to the nation’s future and emphasizes the importance of industry partnerships, collaboration is more important than ever. GDIT works with leaders like Snowflake and others to ensure it’s leveraging the best available tools and platforms in the data space to help manage such data-heavy projects as those with CMS.

“We’re seeing an all-of-America approach to addressing technology, with providers, users and government hand in hand,” said Matthew Rose, head of corporate and government affairs at Snowflake. “What we’ll see in the near term are key use cases specifically focusing on reducing fraud or reducing legacy cost of current solutions, all while thinking about new innovations and the foundation to build them.”

Merrill also highlighted how rapid AI development has been over just the past year, with new capabilities emerging on a regular basis. The next evolution, he predicted, will be more deeply integrating AI solutions into the mission to further enhance results and speed.

“Agentic AI and the evolution of standards like model context protocol is going to help us knit this together in a way that will drive down the length of time it takes to close gaps quickly,” Merrill said. “I think you're going to see a continued evolution over the next six to 12 months that's going to help solve problems we've had a hard time solving for a very long time, and it truly is because the technology is here to enable it.”

Learn more about how GDIT is leveraging AI to move government missions forward.

This content is made possible by our sponsor GDIT; it is not written by and does not necessarily reflect the views of Nextgov's editorial staff.
