Report: Agency Leaders Must Embrace Risk to Adopt Artificial Intelligence


Not every AI pilot is going to pan out, and government leaders need to make sure their organizations know that’s OK.

If government leaders want to bring more artificial intelligence tools into their agencies, they need to make sure their employees are comfortable with failure, a government tech trade group found.

The government is still in the early days of AI adoption, and agencies are bound to hit some obstacles as they figure out how to fit the tech into their IT ecosystems, according to the Professional Services Council Foundation. Some early efforts won’t pan out, but it’s critical that agency leaders push people to learn from their mistakes and continue trying new approaches, PSCF said in a report published Wednesday.

“When it comes to shifting an organization’s culture from being risk-averse to more risk-tolerant, strong leadership can be helpful,” the federal tech trade group wrote. “Many of the federal AI practitioners we interviewed agreed that having a champion for AI at a high level is perhaps the most critical ingredient to success.”

Beyond encouraging employees to take risks, agency leaders could also play a big role in driving other cultural changes that would promote wider experimentation with AI.

In the report, PSCF said promoting the use of analytics and data-driven policymaking could help leaders overcome much of the internal resistance to the tech. Clearly communicating the organization’s goals for AI and discussing how the tech will affect the day-to-day work of employees could make people more comfortable with the changes, the group said.

“Leaders can be effective if they convey the message that AI is not something to be feared, but rather something that will empower the workforce to deliver better results,” PSCF wrote in the report.

PSCF also recommended agencies create robust policies for improving data literacy and technical skills across their workforces and for ensuring the responsible use of any AI tools within their organizations. As the Defense Department works to draft principles for AI ethics, other agencies must also start thinking about ways to guarantee the decisions their tools make “are consistently fair, ethical, accurate, transparent, accountable, and safe,” researchers said.

Editor's note: This article was updated to correct the organization that conducted the study.