White House Proposes 'Light-Touch Regulatory Approach' for Artificial Intelligence


The guidelines are intended to help govern the FDA's approval process for AI-powered medical devices, as well as other regulations around private sector AI use.

Federal agencies will soon have to demonstrate that any proposed regulations for artificial intelligence technologies in the private sector abide by a new, “first-of-its-kind” series of 10 principles set forth by the Trump administration this week. 

In a preview call with reporters Monday and a subsequent op-ed in Bloomberg Tuesday morning, U.S. Chief Technology Officer Michael Kratsios and other senior administration officials from the White House Office of Science and Technology Policy detailed the principles proposed to govern the future development and private sector use of AI technologies. The guidelines, published in a draft memorandum Tuesday afternoon, are "a first of their kind, from any government," insiders said, though they also emphasized that the U.S. government's own use of the budding technology is outside the purview of the document.

“On its face, the guidance we describe provides agencies with a common sense, pro-innovation approach to deal with various AI regulatory issues,” Kratsios said on the call. “As countries around the world grapple with similar questions around the appropriate regulation of AI, [the principles] demonstrate America is leading the way to shape the evolution of AI in a way that reflects our values of freedom, human rights and civil liberties.”

The principles were initially called for in the American AI Initiative, the administration's national AI strategy created through an executive order early last year. Kratsios noted that the "light-touch regulatory approach" was designed to achieve three goals: ensuring public engagement, limiting regulatory overreach and promoting trustworthy technology. The ultimate intent is to ensure that agencies avoid "regulatory or non-regulatory actions that needlessly hamper AI innovation and growth." Further, the memorandum requires agencies to conduct risk assessments and cost-benefit analyses prior to any regulatory actions to ensure that they adequately evaluate the potential tradeoffs. While the guidelines do not apply directly to independent agencies, officials said all other federal agencies must comply.

“This is the first actually binding document created by any country in the world,” one official noted. 

Essentially, when any agency with regulatory authority considers regulations to govern industry use of AI in the future, the White House Office of Information and Regulatory Affairs will determine whether the proposed rules adhere to the guidelines. "So agencies will be bound to these principles through that process," an official said.

The move also kicks off a larger process within the White House and across federal agencies. Now that the draft version of the principles is published, the public will have 60 days to comment. The White House will then issue a final, official memorandum, and agencies will be expected to submit their own relevant implementation plans "on achieving consistency with [the memorandum]" 180 days after it is issued.

Deputy U.S. Chief Technology Officer Lynne Parker outlined and offered a short description of each of the 10 principles:

  • The government's regulatory and nonregulatory approaches to AI must promote reliable, robust and trustworthy AI applications.
  • Agencies should provide ample opportunities for the public to provide information and participate in all stages of the rulemaking process, especially in instances when AI uses information about individuals.
  • Agencies should develop technical information about AI through an open and objective pursuit of verifiable evidence that both informs policy decisions and fosters public trust in AI.
  • A risk-based approach should be used to determine which risks are acceptable, and which risks present the possibility of unacceptable harm or harm that has expected costs greater than expected benefits.
  • Agencies should carefully consider the full societal costs, benefits and distributional effects before considering regulations related to the development and deployment of AI applications.
  • Agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications.
  • Agencies should consider issues of fairness and nondiscrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful discrimination compared to existing processes.
  • In addition to improving the rulemaking process, transparency and disclosure can increase public trust and confidence in AI applications. At times, such disclosures may include identifying when AI is in use, for instance, if appropriate in addressing questions about how an application impacts human end users.
  • Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity and availability of information processed, stored and transmitted by AI systems.
  • Agencies should coordinate with each other to share experiences and ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI, while appropriately protecting privacy, civil liberties and American values, and while allowing for sector- and application-specific approaches when appropriate.

The memorandum also encourages agencies to consider other nonregulatory approaches and offers up examples of actions outside of regulations that can reduce barriers to AI innovation, such as releasing public datasets. It also calls for agencies to follow the National Institute of Standards and Technology’s plan for federal engagement in AI technical tools and standards, which was born out of the executive order. 

Officials stressed that agencies should avoid "top-down, one-size-fits-all blanket regulation," as different uses of the technology warrant different policies. Still, in highlighting some of the principles' expected applications, officials added that they will help "ensure consistency across how all the agencies that have regulatory authority over use cases of AI should proceed." Though they are not relevant to the government's own AI deployments, including its expanding use of facial recognition, officials said the principles will help govern things like the Food and Drug Administration's approval process for AI-powered medical devices, or the Transportation Department's rules around AI that's embedded in automated vehicles or commercial drones.

“Those are examples where we would want to see some consistency,” an official said. 

Insiders also detailed how the administration engaged like-minded international partners ahead of the document’s release. Though some European countries have opined on how they may govern AI technologies going forward, the administration officials repeatedly stressed that internationally, the principles mark a first from any government. 

“That’s why we believe this is important and momentous globally for the AI regulatory environment, because these will serve as an example for other Western democracies to think about how to incorporate these rules,” an official said.