Building the Public’s Trust in AI Is Key to Coming Guidance, White House Official Says


The administration’s assistant director for artificial intelligence shared details about an in-the-works memo to modernize agencies’ regulatory approaches to the emerging tech.

The White House Office of Science and Technology Policy’s assistant director for artificial intelligence offered fresh details Wednesday on a memo being developed to help foster public trust and build agencies’ confidence in regulating artificial intelligence technologies.

“This is a memo directed to agencies that suggests regulatory and non-regulatory principles for how you oversee the use of AI in the private sector,” Lynne Parker, OSTP’s assistant director for artificial intelligence, said. “So these will establish some common principles [and] some predictability across agencies in terms of how they think about regulatory and non-regulatory approaches to the use of AI.”

In February, President Trump issued an executive order to accelerate American advancements in AI. One of the key priorities of the order, Parker noted, is to “foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.”

“The question is—how do you implement trust and confidence?” she asked. 

Late last month, Federal Chief Technology Officer Michael Kratsios first announced the memo. Implementing advanced technological solutions will require modern regulatory approaches, and Kratsios noted the memo will be the first document with “legal force” governing how agencies regulate AI tech. The memorandum is being developed in “close coordination” with the Office of Management and Budget’s Office of Information and Regulatory Affairs.

The memo in the making marks one of the administration’s key efforts to address the general public’s worries about adopting AI. Parker said the memo’s crafters are taking a risk-based approach, treating AI not as a single monolithic concept but as technology whose workings are unique to each application domain. She added that some application domains need less regulatory attention than others, depending on the concerns their deployments raise.

Though Kratsios and Parker did not offer a timeline, Parker said that once the memo is crafted in draft form, it will be released for the public to weigh in. She called that element critical, as the team wholeheartedly aims to “get it right.” Once the input is taken into account, a final memo will be released.

“After that, agencies will be directed to come up with their own plans for their own regulatory space, for how they want to ensure the appropriate regulatory and non-regulatory approaches for AI within the user application domains that they have oversight in,” she said. 

The order also calls for the establishment of AI technical standards and for efforts to reduce barriers to testing the technology and accelerating its adoption. Technical standards enable interoperability across AI systems, and Parker noted that putting them in place would support measurement of systems’ performance, accuracy, robustness and trustworthiness. In support of the administration’s initiative, the National Institute of Standards and Technology recently released a plan for federal engagement in developing the necessary guidelines.

“On the one hand, we say we want AI that’s trustworthy, but on the other hand, we have no way of knowing how to achieve it—because we don’t know the standard for trustworthiness,” Parker said. “So these technical standards are critically important.”

Both the memo and the establishment of standards will help those deploying AI on the front lines measure bias and address concerns around it. Parker said the memo’s approach will allow agencies to weigh use cases that carry implications of bias. She added that proper tools must be developed to determine whether the training data for machine-learning systems is appropriately representative for specific use cases.
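To make that last point concrete, one hypothetical form such a tool could take is a simple statistical check comparing the makeup of a training set against a reference population. The sketch below is purely illustrative, with invented categories and thresholds; it is not drawn from the memo or from any NIST guidance.

    # Illustrative sketch only: checks whether a training set's mix of some
    # categorical attribute matches a reference population. All names here
    # (REFERENCE_SHARES, the 0.05 tolerance) are hypothetical.
    from collections import Counter

    # Assumed reference shares for a categorical attribute (e.g., region).
    REFERENCE_SHARES = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}

    def representativeness_gap(training_labels, reference_shares):
        """Total variation distance between the training data's category
        shares and the reference population's shares (0 = identical mix,
        1 = completely disjoint)."""
        counts = Counter(training_labels)
        total = len(training_labels)
        training_shares = {k: counts.get(k, 0) / total for k in reference_shares}
        return 0.5 * sum(abs(training_shares[k] - reference_shares[k])
                         for k in reference_shares)

    if __name__ == "__main__":
        # A sample heavily skewed toward one category.
        sample = ["north"] * 70 + ["south"] * 10 + ["east"] * 10 + ["west"] * 10
        gap = representativeness_gap(sample, REFERENCE_SHARES)
        print(f"Representativeness gap: {gap:.2f}")  # 0.45 for this sample
        if gap > 0.05:  # Hypothetical tolerance; real thresholds are use-case specific.
            print("Training data may not be representative for this use case.")

A real tool along these lines would, as Parker suggests, have to be tailored to the specific use case; the acceptable gap for one application domain may be far too loose or too strict for another.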

“We also have to make sure that we are comparing systems to the current state, and the current state is that people are making decisions, and often, people are biased,” Parker said. “So, we don’t want to hold AI systems to an unreasonable level of perfection when we know that the AI systems can do better than that current state.”