Tech officials caution on data security in public sector AI applications

State, Commerce and Defense department officials detailed AI use cases already in play and where they might go next.

A group of tech policy leads across federal agencies discussed the current and future state of play of artificial intelligence in government systems.

Officials from the State, Commerce and Defense departments previewed some of the AI use cases each agency is researching at an event hosted by the Advanced Technology Academic Research Center Wednesday. 

The applications under discussion are diverse, but a common theme is the use of AI algorithms to facilitate predictive analytics while still ensuring data privacy. 

André Mendes, the chief information officer at Commerce, and Ola Olude-Afolabi, a data scientist with Commerce’s AI Center of Excellence, highlighted how the National Oceanic and Atmospheric Administration has long used predictive AI capabilities to forecast weather conditions, but noted that the rapid advancement of AI systems brings new opportunities.

“That predictive analytics will be a part of every single thing in our lives from now until the end of time,” Mendes said. “The potential for good is fantastic, and the potential for evil is just as fantastic, and so we need to make sure that we do it appropriately.”

Within the State Department, officials are looking to deploy predictive AI to expedite various passport and visa processes. Landon Van Dyke, the chief technology officer at State’s Office of Management Strategy & Solutions, said the department already uses predictive analytics to combat fraud in government-issued documents, but that there is always a human reviewer.

“Because of our comfort within the department of predictive analytics, no matter what we're using it for, there's always a human eye that immediately looks at it afterwards,” Van Dyke said, citing a departmentwide policy enforcing a human review step following an AI review.

“From our perspective, we're still very much, ‘Let's put a human eye on it,’ even though we have the predictive analytics here and we have a level of confidence,” he said. State is also employing machine learning systems to sift through data stored in emails, documents and other reports to gauge the level of risk each document would pose if made public. 

On the Defense side, AI systems are currently helping predict required maintenance and repairs as well as other logistics, but more predictive analytics use cases will come into play as officials look to scale technologies across the DOD.

“A lot of times that's where our predictive analytics will start to come in play is when we actually need to scale and we need to go past that initial prototype,” Stephanie Wilson, a contracting and agreements officer with the Army, said.

These potential and ongoing use cases aren’t without challenges. Several officials mentioned the complications stemming from predictive algorithms' need to analyze large datasets to learn and function precisely. Mendes said that ensuring good data privacy practices in AI models can be daunting given the sheer volume of the data being collected and used. 

“I think that that's going to be a constant challenge, especially as we marry the predictive analytics and the amount of data that we're collecting on citizens on a constant basis with the desire to have an open government where a lot of that data is published,” he said. “And so the intersection of that is that privacy that needs to be proved to be protected at all times.”

Van Dyke discussed how understanding the risk of the specific data stored in networks — such as emails and servers — can help reinforce security measures.

“We're coming up with different ways to look at our data, different lenses, national security, foreign policy,” he said. “And we want to understand what its value is so that we can protect it better. And I think we're going to continue to do work like that as AI continues to grow in its capability.”

Editor's note: The Advanced Technology Academic Research Center is owned by the Government Executive Media Group, the parent company of Nextgov/FCW.
