The Pentagon Wants to Automate Software Assurance

The Defense Advanced Research Projects Agency is working to build tools that determine what security criteria a given system must meet and check whether it meets them.

The Pentagon’s research office is working to build tools that automatically check military software applications for potential vulnerabilities.

Today, the department relies heavily on human analysts to dissect IT systems and determine whether they meet security standards set out by Defense officials. But by handing over much of that process to machines, the Pentagon could get more consistent, speedy and trustworthy results, according to the Defense Advanced Research Projects Agency.

“Current certification practices are antiquated and unable to scale with the amount of software deployed by the Department of Defense,” DARPA officials said. “The use of humans to evaluate the quantities of assurance evidence that support software systems results in superficial, incomplete, and/or unacceptably long evaluations.”

DARPA recently started recruiting researchers for the Automated Rapid Certification of Software (ARCOS) program, which aims to create tech that both automatically determines what security criteria a given system must meet and checks whether it meets them. Ultimately, the tools could help the Pentagon reduce software certification costs and deploy new systems in the field faster, officials said.

According to the solicitation, the tech would follow a “compositional certification” process when checking whether a given system is safe to use. Essentially, that means determining that individual software components are secure and then making sure they remain safe when combined into a bigger application.
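To make that concrete, here is a minimal sketch of how a compositional check might look. The Component type, its guarantee and assumption sets, and both checks are illustrative assumptions, not details from the solicitation:

```python
# A minimal sketch of compositional certification. The Component type,
# its guarantee/assumption sets, and both checks are illustrative
# assumptions, not details from DARPA's solicitation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Component:
    name: str
    guarantees: frozenset   # properties this component is certified to provide
    assumptions: frozenset  # properties it requires from its environment


def certify_component(component: Component, evidence: dict) -> bool:
    """Per-component step: accept a component only if the assurance
    evidence supports every guarantee it claims."""
    return all(evidence.get(prop, False) for prop in component.guarantees)


def certify_composition(components: list[Component]) -> bool:
    """Composition step: every component's assumptions must be
    discharged by guarantees that other components in the system provide."""
    provided = frozenset().union(*(c.guarantees for c in components))
    return all(c.assumptions <= provided for c in components)
```

In this toy model, a system passes only if every component passes on its own and the assembled set leaves no assumption unmet, mirroring the two-step process the solicitation describes.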

Different software systems often share the same components, so this compositional approach would allow the tools to carry what they learn about a certain chunk of code from one analysis to the next, DARPA said. By applying what it already knows about a given component to every system that contains it, the tech wouldn’t need to reinvent the wheel with every analysis, something human analysts often find themselves doing today, officials said.
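Under the assumption that a chunk of code can be identified by a hash, that reuse might look like the following sketch; the cache and the analyze() callback are hypothetical stand-ins for the real analysis:

```python
import hashlib

# Hypothetical cache of prior verdicts, keyed by a digest of the
# component's code so the same chunk is recognized across systems.
_certified: dict[str, bool] = {}


def certify_with_reuse(source: bytes, analyze) -> bool:
    """Reuse an earlier verdict when the same component shows up in
    another system; analyze() stands in for the expensive certification."""
    digest = hashlib.sha256(source).hexdigest()
    if digest not in _certified:
        _certified[digest] = analyze(source)
    return _certified[digest]
```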

At the end of the process, the tools would rate the security of each system and justify their assessments in a format human analysts can read and review.
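The solicitation leaves that format open, but a rating paired with a reviewable justification could be as simple as this sketch, whose layout is purely illustrative:

```python
def assurance_report(system: str, results: dict[str, bool]) -> str:
    """Render per-component verdicts as a rating plus a justification a
    human analyst can scan; the format here is purely illustrative."""
    passed = sum(results.values())
    lines = [
        f"System: {system}",
        f"Rating: {passed}/{len(results)} components certified",
    ]
    lines += [f"  {name}: {'PASS' if ok else 'FAIL'}"
              for name, ok in sorted(results.items())]
    return "\n".join(lines)
```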

The program will be divided into four main research tracks and run for roughly four years. DARPA will host a proposers day on May 14, and interested teams must submit proposals by July 9.