Analysts pushed back on that assessment, calling the EU white paper a good start toward a risk-based approach to regulating artificial intelligence.
The European Union released a white paper Wednesday outlining principles for regulating artificial intelligence, similar to a set of principles issued by the U.S. government in January. However, at least one aspect of the EU’s document falls short, according to U.S. Chief Technology Officer Michael Kratsios.
The EU white paper mirrors many of the primary focal areas outlined in the U.S. principles released in January, including the need for transparency in development and trustworthiness of any AI technologies put into wide use. But, during an interview at an event Thursday hosted by the Hudson Institute, Kratsios said the EU document was too blunt when it comes to making determinations of risk.
“We found, what they actually put out yesterday, really, I think, in some ways clumsily attempts to bucket AI-powered technologies as either ‘high-risk’ or ‘not high-risk,’” he said.
The white paper, as Kratsios explained it, suggests Europe will develop a regulatory structure in which a designated body will determine whether a certain technology is high-risk or not, and impose strict regulations on the former while allowing the latter to develop unhindered.
“We believe this all-or-nothing approach is not necessarily the best way to approach regulating AI technologies. We think that AI regulations best serves on a spectrum, of sorts,” Kratsios said. “There are certain kinds of AI-powered technologies that will require heavy regulatory scrutiny, and we in the United States are prepared to do that. But there are quite a few that need just a little or not at all, and I think creating this spectrum is important.”
“There’s a lot we have in common,” he added. “But I think this approach of bluntly bifurcating the entire AI tech ecosystem into two buckets is a little bit harsh.”
Kratsios said the U.S. approach will assess risk with more gradation, leading to a lighter regulatory touch.
“The second key [of the U.S. principles] is really around limited regulatory overreach, which we believe we need to create a model that is risk-based and use-based and sector-specific,” he said. “The types of regulations that you may have for an autonomous vehicle or for a drone is very different for the type of regulation you’ll have for AI-powered medical diagnostics. Rather than bucketing the technologies as all, ‘Well, these are all high-risk, so you have to do these 12 things,’ or, ‘These are not high-risk, you don’t have to do anything,’ there has to be a sort of spectrum and flexibility in the model so that you’re actually able to regulate appropriately for the risk each of those technologies provide.”
That assessment might be reading too much into a white paper that, while it will inform future regulatory policies, is not itself a regulatory framework, according to Aaron Cooper, vice president of global policy at BSA | The Software Alliance.
“It’s too early in both the development of AI policy in the EU and the U.S. to know for sure whether they’re really going to diverge,” Cooper told Nextgov Thursday. “The white paper itself doesn’t create a regulatory framework. It is laying the foundation for more conversations around what the right framework would be: where there’s significant risk, where there’s less risk. I think we’re doing that in the U.S., as well.”
Cooper noted that both the EU and U.S. documents focus on a risk-based approach, and while the European white paper divides risk into only two buckets, “those aren’t unusual buckets,” he said.
“I think we have similar issues in the U.S., where if you’re going to have software—including AI-enabled software—in a medical device, for instance, the FDA has specific procedures because of the risks involved,” he said. “In the EU white paper, they’re starting to move down the track toward trying to figure out what’s the appropriate style of regulation.”
Agustin Huerta, vice president of technology, AI and process automation studios at Globant, agreed, and added further criticism of the U.S.’s regulatory efforts.
“What Europe is doing now with this latest release of digital strategy proposals is taking the next steps after its guidelines were released last year,” he told Nextgov. “I think the EU guidelines released last year are far more developed than the most recent guidelines provided by the U.S. and … the EU has far more developed data regulations than the U.S.—which is a great baseline for achieving AI regulation. From the U.S. side, there seems to be too much focus on leading the industry compared to actually protecting citizens from biases or unethical uses of AI.”
Huerta argued that the gap in data regulations will continue to hamper the U.S.’s ability to properly regulate AI.
“I think what Kratsios is trying to showcase is that the U.S. is still generally leading when it comes to AI, from development to regulation,” he said. “However… Data regulation is the basis for strong AI regulation, and the U.S. hasn’t progressed further in this space, with the exception of health-related data regulations. There is a need for the U.S. to catch up when it comes to data regulation since some major flaws have occurred within the U.S. where several biases have been detected in AI algorithms.”