Government Can Tackle Ethics in Artificial Intelligence Better than Industry, Experts Say


The government can invest in some of the big picture problems in AI.

The government needs to make targeted investments in artificial intelligence to continue the country’s global leadership in the burgeoning industry and prepare Americans for a changing workforce, according to technology experts.

Policymakers must limit regulations on artificial intelligence, create more job retraining programs and invest more heavily in research the private sector may not have incentives to pursue, panelists said Thursday at a forum hosted by the Center for Strategic and International Studies.

The discussion coincided with the release of a report from CSIS and Booz Allen Hamilton detailing key factors the government should consider when creating a national strategy for artificial intelligence innovation.

“As much as other countries are going to be a part of the development of this technology, the U.S. needs to lead and needs to invest [in artificial intelligence] because this technology has huge potential implications for us,” said Will Carter, deputy director and fellow at the CSIS Technology Policy Program, who co-authored the report.

Artificial intelligence research and procurement constitutes a “relatively small” portion of the federal budget, Carter said, but it’s expected “to generate the bulk of our next generation of capabilities.” Given the value the government sees in the technology, it should devote more resources to its development and adoption.

Despite the Trump administration’s “deep commitment” to artificial intelligence, the White House cut civilian research and development funds by more than 19 percent in its fiscal 2019 budget proposal. Still, panelists highlighted areas of artificial intelligence research where the government can get the most bang for its buck.

While private companies direct their dollars to areas with the highest financial payoff, the government is in a position to tackle some of the big picture problems related to artificial intelligence, said Josh Elliot, director of machine intelligence and data science solutions at Booz Allen Hamilton, in a conversation with Nextgov.

“Government needs to play a significant role” in addressing the safety, ethics and transparency issues surrounding artificial intelligence that private companies don’t have the incentives to fix, he said. Validating algorithms and making them more transparent could lift the curtain on an otherwise opaque technology, which Elliot said would help citizens engage more positively with AI tools.

Another key component to changing the narrative surrounding AI is giving American workers a leg up in the changing economy. The country’s technological talent gap “is one of the biggest fears that political leaders and government officials are going to have to manage” in the years ahead, Carter said, but increasing federal investments in STEM education and digital literacy programs will train the next generation for the 21st-century workforce.

Panelists also underscored the important role federal job training programs could play in retooling the current workforce for positions created by AI, and the ways government can incentivize companies to create tech-focused apprenticeships and learning positions.

They also echoed many of the tech industry’s arguments for keeping federal regulators away from artificial intelligence, at least in its early stages.

Policymakers should focus their efforts on curbing the negative effects of the technology instead of picking winners and losers in the industry, said Ken Anderson, a professor of business law at American University. Such an approach would protect public safety and civil rights while allowing industry to experiment with and innovate new AI tools and applications.