Agencies Need Tech-Savvy Feds to Address AI Challenges

Algorithmic bias, security and explainability are among the top issues the government must address as it hands over more responsibility to AI, according to a recent report.

The government will need to overcome a number of ethical and technical challenges as it adopts more artificial intelligence tools, and its success will hinge largely on building a workforce that can grapple with the tech, according to a recent report.

Algorithmic bias, security and explainability are among the top issues that agencies, and society more broadly, must address as they lean more heavily on AI to accomplish their work, the Partnership for Public Service said in a report published Thursday. Federal leaders have long seen AI as a way to derive better insights from government data and relieve employees of tedious tasks, but unless critical challenges are addressed, the tech could do more harm than good, according to the report.

The report, drafted jointly by the Partnership and IBM, lays out the primary issues facing the government as it adopts AI and offers potential solutions. It builds on a previous report that examined the tech’s potential impact on government, in which researchers found an estimated 130,000 feds could see their jobs affected by AI in the coming years.

As the government turns to AI to make more consequential decisions, it will be critical to ensure the systems’ results are free from bias, researchers said, and the government should create a framework for assessing the fairness of different tools. Similarly, agencies will need to address issues around AI transparency and explainability so they can understand how algorithms arrive at their conclusions. Today, explainable AI is a major focus of the Defense Department’s research efforts.

Agencies will also face an uphill battle securing AI tools against digital attacks, especially given the complexity of the systems. By altering training data or reverse-engineering the systems, adversaries can manipulate the tools to behave in ways that developers never intended and potentially cause “enormous damage,” researchers wrote.

Improving the trustworthiness and security of AI will require the government to build a workforce that’s comfortable with the tech, researchers said, but today that’s still a work in progress. Agencies have historically struggled to hire young technologists who are more likely to have experience with AI, and many current feds don’t think their agencies are doing enough to prepare them to use the tech.

“We need to work to improve people’s understanding of what AI does so they can trust it generically and also what their data contains so they can understand it specifically,” Dorothy Aronson, chief information officer of the National Science Foundation, said Thursday at the report launch event. “We have a tremendous, powerful workforce, and we’re not really using them to the best of their abilities because we haven’t invested in this way.”

At the same time, because many employees are hesitant to learn new technologies and processes, agency leaders need to help them come to terms with these changes, Aronson told Nextgov after the panel. AI and analytics are becoming an integral part of the economy, she said, and navigating the 21st-century job market will require tech and data literacy from everyone.

“What you say to people is ‘here’s the world, let’s look at [how] it’s been changing,’” she said. “It doesn’t matter how much anyone resists. There’s a tidal wave, and it’s already struck.”