US Must Proactively Participate in International AI Standards-Setting, Officials Warn


Maintaining the country’s leadership in artificial intelligence hinges on contributions to international technology standards, experts say, as they advocate for diverse stakeholder feedback.

A House hearing on Thursday examined the steps needed to ensure continued U.S. leadership and trust in artificial intelligence systems, as the technologies become more ubiquitous in everyday devices. 

Speaking before the House Science, Space and Technology Subcommittee on Research and Technology, several witnesses—including sitting government officials—discussed how to promote trust and reduce bias in AI systems that handle sensitive human data.

Elham Tabassi, who leads the National Institute of Standards and Technology’s Trustworthy and Responsible AI Program, testified about the agency’s forthcoming AI Risk Management Framework, part of its socio-technical approach to helping industry and government entities navigate and use AI safely. 

Tabassi noted that in addition to the public-private partnerships the Biden administration has sought to foster for emerging technologies, NIST is also looking to craft standards through international cooperation.

NIST’s recommendations emphasize “technically solid, scientifically valid standards” as well as the “importance of international cooperations on development of standards that are technically sound and correct but also reflect our shared democratic values.”

Jordan Crenshaw, the vice president of the Chamber Technology Engagement Center at the Chamber of Commerce, also emphasized the need to foster international guidance on AI technology, with U.S. needs in mind. 

“I would note it's also important to remember standards bodies internationally as well, in order for us to maintain our leadership in this front,” Crenshaw said. “We need to make sure that we have American interests represented with American businesses, and American policymakers.”

He specifically noted that competing foreign technology companies are working to participate more in the development of AI technical and operating standards abroad. As regulatory bodies, such as those within the European Union, issue stricter operating laws for major tech firms, large American companies like Google and Apple could have trouble competing in those markets. 

“When it comes to international standards bodies, we need to make sure the CHIPS and Science Act actually help provide funding to ensure we can participate in that space,” Crenshaw added.

Tabassi further confirmed that NIST’s pending AI standards will recommend what the U.S. needs in order to maintain leadership in technical and scientific standards development, partially to build foundations for international standards. 

The goal of this participation is to bring diverse input into the operating standards AI technologies must abide by to be as ethical and fair as possible. Tabassi told Rep. Randy Feenstra, R-Iowa, that NIST’s forthcoming RMF will work to answer questions about security, data protection and bias in AI models. 

“AI RMF is trying to provide a shared lexicon, [an] interoperable way to address all of these questions, but also provide a measurable process metrics and methodology to measure them and manage these risks,” she testified. 

Crenshaw added that building trust that AI technologies will not discriminate when reviewing and analyzing broad swaths of data demands a collective approach. He cited the diversity of feedback given to NIST and the Department of Commerce during the RMF’s open comment period as critical to developing standards that are applicable to all sectors. 

“If you look at the comment record, it's comments from across the board, everyone from civil society all the way to industry and developers,” he said. “The partnership has been excellent.”

Tabassi agreed, adding that because definitions of trustworthiness vary, NIST has employed a transparent and collaborative approach to formulating national AI standards.

“There has [sic] been many definitions and proposals for definition for trustworthiness,” she said. “So we ran a stakeholder-driven effort to converge to the extent possible on the definition of trustworthiness.”