The acquisition of AI technologies and large language models across DOD will bolster the department’s cyber operations and analytical capabilities, officials with the Defense Information Systems Agency said.
Artificial intelligence technologies have the potential to transform the Pentagon’s operational capabilities in cyberspace, cloud environments and commercially sourced solutions, Defense Department officials said during a media roundtable at the Defense Information Systems Agency's Forecast to Industry 2023 event on Monday.
Steve Wallace, DISA’s chief technology officer and the director of the agency’s emerging technology directorate, said DOD is “making significant investments in AI” and added that the department’s efforts run the gamut from predictive analysis in operations to the adoption of large language models.
“And we're seeing now some of these models moving into the government cloud environments and potentially up to the classified environments,” Wallace said. “And I think that will be the inflection point where you see quite a bit more adoption across the board.”
Wallace noted that DOD vendors are also “integrating more and more and more [AI] into their platforms, so as we adopt commercial solutions, we're seeing a lot of integration of AI capabilities in there and a lot of excitement.”
These emerging technologies could also transform the agency’s internal processes, particularly where employees must analyze large troves of data.
The agency’s strategic plan for fiscal years 2022-2024 includes a focus on empowering the agency’s workforce, with the goal of addressing “institutional friction”: bureaucratic red tape, outdated technologies and counterproductive policies. Officials cited several examples where new tools, such as large language models, could overhaul previously inefficient systems and processes.
Wallace referred to large language models as a “digital concierge” that can help employees across the department “in all aspects of their job.”
Brian Hermann, the director of DISA's cybersecurity and analysis directorate, said defensive cyberspace operations, in particular, were “a really good target for where the data can be used.”
Ideally, around 80% of the information that analysts review “on a day-to-day basis would be automatable, and then their brains can be applied to those really high-end problems,” Hermann said.
“I would argue that that’s really the only way we can react with speed to appear competitive,” he added.
While Wallace said DOD is “seeing adoption of AI in many different places,” he added that “the ethical use is something that we’re quite interested in and we've spent a fair amount of time trying to better understand.”
This process includes reviewing “the datasets that are all ultimately consumed by the AI model, and making sure that they're not polluted, and they're not corrupted [and] that, you know, an adversary hasn't found their way in to start to do things,” he said.
DOD has moved to establish guardrails around its use of new technologies, including drafting a set of ethical AI principles in 2020 and launching a new initiative in August to oversee the implementation of generative AI capabilities.
Last week, the Pentagon released a new AI and data strategy that positioned “quality of data” as the base of the department’s “hierarchy of needs” when it comes to adopting emerging technologies. The strategy’s release came after President Joe Biden issued an executive order at the end of October outlining steps for the development of trustworthy AI.