The JAIC’s new director recently offered fresh details on the joint common foundation.
Two years into its existence, the Pentagon’s Joint Artificial Intelligence Center is metamorphosing into what top defense officials are calling “JAIC 2.0” to readjust focus across its extensive user base.
At the heart of potential success in this next phase and those to come is its in-the-making data and AI development platform: the joint common foundation.
On top of providing the department’s less tech-savvy insiders with an accessible technical base to leverage data and put AI to work, the JCF will also be infused with software, or “soft services,” that can keep users aware of ethical principles and other considerations relevant to using AI in warfare, Marine Corps Lt. Gen. Michael Groen confirmed Tuesday.
“I think this is so important—and I tell you, I didn't always think that way. When I came into the JAIC job, I had my own epiphany about the role of an AI ethical foundation to everything that we do,” Groen, the center’s new director, told Nextgov during a press briefing. “It just jumps right out at you.”
Since assuming the role of the JAIC’s second director last month, Groen has joined other defense leaders in spreading the message about the move from JAIC 1.0 to 2.0. While the first stage jump-started AI projects in various realms of the department and military, overall utilization across the enterprise remains uneven. To address the challenge, the center aims in its next phase to “shift from a transformational perspective to start looking at that broad base of customers and enable them,” Groen explained recently.
The joint common foundation—a cloud-enabled platform to drive forward the development, assessment and fielding of new defense-centered AI capabilities—will essentially be a centerpiece of that enablement, helping the department’s breadth of users harness the technology. Defense Department Chief Information Officer Dana Deasy recently noted that the history books of the future will say the JAIC was all about the joint common foundation.
The JCF is set to reach initial operating capability in 2021. During the briefing Tuesday, Groen said that the center aims to update its tools and capabilities monthly, and “rapidly change it” beyond that point.
“That platform now provides a technical basis for especially disadvantaged users who don't have access to data scientists, who don't have access to algorithms, who are not sure how to leverage their data,” Groen said. “We can bring those folks to a place where now they can store their data, they might be able to leverage training data from some other program, [and] we might be able to identify algorithms that can be repurposed and reused in similar problem sets.”
While that makes up JCF’s technical advantage, another edge it’ll offer to users is support to ensure they’re putting AI to use in ways that align with the Pentagon’s ethical priorities.
“There's also the software, called the ‘soft services’ side of it, which is, now we help them with AI testing and evaluation for verification and validation—those critical AI functions—and we help them with best practice in that regard,” Groen said. “We help them with AI ethics and how to build an ethically grounded AI development program, and then we create an environment for sharing all of that through best practice.”
Prompted in part by the 2019 National Defense Authorization Act, the Defense Department adopted its own AI Ethical Principles early last year. The JAIC is charged with leading the practice and implementation of those guiding principles.
“Many people might think, ‘Well yeah, of course, we do things ethically. So when we use AI, we'll do them ethically, as well.’ But I think of it through the lens of, just like, the law of war,” Groen explained.
The law of war is an element of international law that governs the conditions for war and warring parties’ conduct. Groen noted that all of the components of the law of war drive the military’s decision making, and have a significant impact on the way the force is organized and fights today. A mature targeting doctrine and a targeting process full of checks and balances help ensure that those involved are complying with the law of war.
“This process is unprecedented and it is thoroughly ingrained in the way we do things—it changes the way we do business in the targeting world,” Groen said. “We believe that there's a similar approach for AI and ethical considerations.”
Pointing back to the department’s ethical principles for AI, Groen noted that when building AI, the defense enterprise intends to ensure “its requirements or outcomes are traceable,” and that the systems are ultimately reliable and equitable. “We do that through tests and evaluation in a very rigorous way,” he said. Further, it’s crucial to ensure that insiders use AI responsibly, within the boundaries in which it’s been tested—and that it can be turned off as needed.
“Honestly, we and the nations that we're working with in our AI partnership for defense really are kind of breaking ground here for establishing that ethical foundation,” Groen said. “And it will be just as important and just as impactful as application of the law of war is on our targeting doctrine, for example.”
The new director added that there aren’t yet all that many experts or ethicists who truly understand the art of this work and can effectively communicate it in a way that helps designers design systems, testers test them, and implementers implement them.
The good news, according to Groen, is that the JAIC employs some of them.
“They're fantastic people, and they punch way above their weight,” Groen said. “We're really hoping to give access to their expertise across the department by linking it to the joint common foundation.”