DHS Tech Directorate Sets Goals to Guide Risk-Aware Artificial Intelligence Use


It’s meant to complement the department’s broader enterprise strategy and could prove helpful to other agencies.

Officials in the Homeland Security Department’s Science and Technology Directorate are being intentional about how their hub supports the entire enterprise in pursuing artificial intelligence and machine learning capabilities.

In an 18-page AI/ML Strategic Plan, released by the research and development arm on Friday, they point to inherent risks associated with advancing those novel technological capabilities and discuss how S&T aims to move forward responsibly.

“We spent probably 10 months developing this document, and in the first two or three months, it was more of a brainstorming session to determine how we were going to move forward and how we were going to frame this out,” Acting Deputy Director of DHS Technology Centers Division John Merrill told Nextgov in an interview on Tuesday. “Over the course of those early months, we had numerous sessions and discussions—looking at actual AI/ML capabilities and use cases, then the operational components coming back and giving us the input in terms of their expectations or what they wanted to do.”

Merrill said he’s been with DHS “since the beginning.” He served in the Coast Guard for many years, and when the department was created after 9/11 he fell under its purview. Upon retiring from the Coast Guard in 2007, Merrill was hired to work on GPS- and navigation-related issues for DHS, where he’s moved across two offices and remained for over a decade.

He detailed what went into this new plan’s making and how it fits into the department's broader technology-pushing vision.

To meet its leadership’s objectives, DHS published a department-wide AI Strategy in December, meant to prioritize the responsible use of the technology by all its personnel. S&T’s new strategic plan is meant to complement that, according to Merrill, and ultimately charts a path based on the directorate’s unique position to support the department’s overall aims.

AI and ML capabilities are increasingly present across many aspects of everyday life, though experts aren’t completely united on how to describe the technologies. In the new plan, officials explicitly define the terms ‘artificial intelligence’ and ‘machine learning.’ 

But it wasn’t simple. Merrill said more than 60% of those first few months was spent discussing how to describe those terms and ensuring the definitions matched the concepts in DHS’ strategy.

“We had people that were extremely passionate about this in their views, and it took us I want to say over two months to settle on a definition to potentially work from,” he explained. “That was probably the biggest challenge.”

Officials then started considering how and where S&T could make the most impact, from an AI-aligned perspective, to assist the department’s operational components and offices, as well as the broader communities they serve.

They arrived at three strategic goals, which are outlined in the plan: Drive next-generation AI/ML technologies for cross-cutting homeland security capabilities; facilitate the use of proven AI/ML capabilities in homeland security missions; and build an interdisciplinary AI/ML-trained workforce.

Throughout the document, officials place a strong emphasis on operating thoughtfully and responsibly as the emerging tech is deployed. They note that “DHS must develop and field this new technology in a way that protects privacy, civil rights and civil liberties, and protects against bias, both to ensure effectiveness and to maintain public trust,” and that insiders must respond effectively when problems arise. 

“We need to ensure that, as we move forward, anything that we push for especially with AI and ML that's going to be public-facing or associated with the public—the collection of any type of data—that it is going to be transparent to them that we are not violating their privacy,” Merrill said. “Because all it takes is one incident and it will come crashing down. It will have residual effects across every program within the Department of Homeland Security.”

In the plan, officials express intent to grow the tech workforce. They mention plans to assist the Office of Personnel Management with developing criteria for evaluating the technical expertise of potential hires. Beyond that, they intend to partner with external agencies, building on previous collaborations. “Prior to doing this strategic plan, we canvassed other federal agencies to find out what they were doing,” Merrill explained, adding, “what we want to do is we want to complement each other.” Specifically, he said DHS works closely with the National Institute of Standards and Technology and the National Science Foundation as those agencies develop guides in this realm.

“We know by just going public with the document that we have that the expectation is that other federal agencies will start looking at this, potentially use it within their own departments, and it could possibly influence some of their decisions,” he noted.

Merrill is involved in DHS’ exploration of first responder-related technologies, so many of the AI use cases he sees within the agency are law-enforcement sensitive. Still, he shed some light on an in-the-works AI and ML opportunity that the new plan could inform. Next Generation 911, for example, is a broad and complex initiative to move America’s go-to dial-in service for emergencies from legacy systems to a digital or Internet Protocol-based 911 system. The infrastructure set to start coming online will leverage 5G in public safety answering points, Merrill noted. 

Because it’s extremely difficult for call takers and dispatchers to distill all the information those calls capture, he said DHS is investigating how AI and ML capabilities on the backend can improve the process.

“We've developed these certain algorithms associated with the types of calls that are coming in to distill that information to actionable information that the dispatchers can push back out to the responders,” Merrill said. “However, the challenge is when we receive that information, how do you teach this AI capability to be able to extract the information that's particular to that particular incident?”