Developing Standards For AI Won’t Go Like Past Technologies

Top experts and officials working on artificial intelligence say global competition has set the stage for a very different standards discussion than with past technologies.

As with any technology, artificial intelligence will need near-universal standards in order to meet its potential. But those standards will be developed very differently than those of past advancements, according to experts.

While some countries across the globe came together to establish a set of high-level principles for artificial intelligence technologies, U.S. scientists and officials are working on more comprehensive standards.

The standards set forth by the Organisation for Economic Co-operation and Development “begin to answer the question of how we should think about developing and deploying AI technologies,” Walter Copan, director of the National Institute of Standards and Technology, said Thursday at the start of the Federal Engagement in Artificial Intelligence Standards Workshop. “We now need to work on how we implement these principles in the real world, hence this workshop. We need science- and technology-based standards that we can use and then translate these aspirational principles into actionable and measurable solutions.”

But the global marketplace for technology and the shift away from American corporate dominance will mean a very different standards-setting process than we’ve seen in the past, according to Lynne Parker, assistant director for AI in the White House Office of Science and Technology Policy.

“Because most of the inventions and innovations were coming from American companies, for the most part, the folks that were at the table conversing about technology—technical standards—were primarily American companies,” she said. “But, we can’t assume that because it’s worked out well for American companies in the past it will continue to work out well going forward.”

Unlike with early advancements in computing, mobility and cloud, as examples, the global race to develop stronger AI—and, in turn, stronger AI standards—is extremely competitive, Parker said. The goal this time around should be to infuse American principles into those international discussions.

“I think the process that has worked well in the past for the era of the information age, we want to foster that going forward—the open, transparent, consensus-driven approach, voluntary approach—but we have to recognize that we’re in a new climate now,” Parker said. “That global competitiveness requires us to be more intentionally proactive in promoting that open, transparent process so that we can make sure all of our good ideas that are coming out of the United States have an equal footing.”

Jason Matusow, general manager of Microsoft’s Corporate Standards Group, agreed the landscape has changed, though he noted tools like open source and the global market of ideas will mean much faster advancements than we’ve seen in the past.

“That does not then take away from the role that standardization is going to play,” he said. “But it means that things are fundamentally different, not in a way of saying that other nations will beat us by standardizing first. But it’s going to be a function of how engineers and contributors come to the table and lead with innovation and lead with ideas that bring about a foundation of understanding and a splaying out of issues so that things are transparent, so that you can understand what the technologies are and policies can be built on them.”

Matusow also urged engineers working on standards to bring non-technical people to the table, particularly when it comes to ethics.

“If you talk to an engineer about deontological ethic or utilitarian ethic or different ethical models, they’re going to look at you really blankly because it’s not something they’ve studied. We have a need to have people who are truly trained ethicists, people who have a basis for the discussion,” he said.

Matusow said there are things engineers can do, such as building statistical models to track inherent bias in systems, including bias in how data is collected and categorized. But the underlying ethical questions, such as fairness in AI systems, have to start with trained ethicists.
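As one illustration of the kind of statistical check Matusow describes, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between two groups in a dataset. This is a minimal, hypothetical example; the group labels, records, and function names are assumptions for illustration, not anything described in the article.

```python
# Minimal sketch of one statistical bias check engineers can run:
# the demographic parity gap between two groups in a dataset.
# All data and names here are hypothetical.

def positive_rate(records, group):
    """Share of records in `group` with a positive (1) outcome."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return 0.0
    return sum(r["outcome"] for r in in_group) / len(in_group)

def demographic_parity_gap(records, group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Hypothetical records: outcome 1 = approved, 0 = denied.
data = [
    {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0},
    {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0},
]

# Group A is approved at 2/3, group B at 1/3, so the gap is about 0.333.
print(round(demographic_parity_gap(data, "A", "B"), 3))
```

A check like this only flags a disparity in outcomes; deciding whether that disparity is unfair, and what to do about it, is the ethical question Matusow argues belongs with trained ethicists.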

“And then, the reaction from the engineering community is one of responsibility for what gets built and how they build it against those models,” he said.

This need becomes more apparent when trying to export ethical standards from one country and culture to another. In that instance, Matusow said, the goal is to build a global consensus rather than to be prescriptive.

“We have to find a joining of the conversation to say that it’s not some magical standard that gets written that defines ethics. Because I guarantee you the French see it differently than the Saudis who see it differently than the Australians,” he said. “And you don’t want to have a standard that exports one ethic because it will get ignored by everybody. But good engineering practices that support the application of strong ethics no matter where you’re employing them is an entirely different discussion.”