Sweeping White House AI order includes mandate for commercial developers

President Joe Biden speaks about artificial intelligence in the White House alongside tech executives (L to R) Adam Selipsky, CEO of Amazon Web Services; Greg Brockman, President of OpenAI; Nick Clegg, President of Global Affairs at Meta; and Mustafa Suleyman, CEO of Inflection AI, on July 21, 2023. ANDREW CABALLERO-REYNOLDS/AFP via Getty Images

The administration is leveraging the Defense Production Act to gain access to safety testing results for high-risk artificial intelligence use cases.

The Biden administration’s long-awaited executive order on artificial intelligence builds upon previous executive actions shaping the direction of U.S. AI policy and introduces new oversight measures, while calling on Congress to support the White House’s regulatory efforts through legislation. 

The 100-plus-page order, scheduled to be signed Monday afternoon, focuses on protecting Americans from the potential dangers of autonomous systems, especially in health and national security, while also leveraging the benefits of the emerging technology in delivering government services.

“The president several months ago directed his team to pull every lever and that's what this order does,” a senior administration official said on a call with reporters on Sunday. “Bringing the power of the federal government to bear in a wide range of areas to manage AI's risks and harness its benefits.”

On the safety front, the Biden administration announced plans to leverage the Defense Production Act to require that commercial developers creating potentially risky AI models share details on testing and training with the government.

The official added that forthcoming safety testing standards from the National Institute of Standards and Technology will address what makes an AI system safe before it is released into the world.

The Biden administration also pledged to develop an advanced federal cybersecurity program that leverages AI software to patch vulnerabilities in digital networks, to set new standards for life sciences research that protect against the engineering of dangerous biological materials, and to develop standards for federal agencies to use in labeling AI-generated content.

To support federal adoption of AI, the administration plans to issue new guidance covering agency procurement and deployment of the technology and to surge hiring efforts for AI professionals, with an effort led by the Office of Personnel Management, the U.S. Digital Service and the U.S. Digital Corps.

Other actions focus on protecting workers from the labor market disruptions that AI systems can cause, specifying the need for collective bargaining and workforce upskilling to prevent major worker displacement. The executive order also continues the Biden administration's efforts to protect civil rights and equity, in keeping with the previously issued AI Bill of Rights.

“One of the administration's top priorities is advancing equity and civil rights and protecting Americans from discrimination,” the administration official said. Pursuant to the executive order, new guidance to prevent algorithmic discrimination will be disseminated to federal contractors and benefit program managers, and the Department of Justice will be directed to investigate violations of civil liberties related to automated systems.

“In each of these areas, the executive order works to set high standards, and then to prepare the federal government to live up to those standards and to — where it is appropriate — enforce them,” the official said. 

The order also looks to foster U.S. leadership in AI through advanced research and investments in businesses working to leverage AI solutions for issues such as climate change and health care.

“The EO will also help promote a fair, open and competitive AI ecosystem by providing small developers and entrepreneurs access to technological assistance and resources, helping small businesses commercialize AI breakthroughs and encouraging the Federal Trade Commission to exercise its authorities as appropriate,” the official said.

The timeline for these action items is variable; some will launch within 90 days, while other reporting and oversight efforts have deadlines closer to one year.

At the same time, the administration is looking to Congress to support the aims of the executive order with legislation.

“To better protect Americans' privacy from the risks posed by artificial intelligence, the president is reiterating his continued call to Congress to pass bipartisan data privacy legislation,” the official said. “We are not at all suggesting this is the end of the road on AI governance, and we look forward to engaging with the Congress to go further.”

Industry experts also note that the fast-moving nature of AI systems demands updated legislation to offer deeper reforms for the increasingly automated future.

Divyansh Kaushik, the associate director for Emerging Technologies and National Security at the Federation of American Scientists, said that Congress needs to take the next steps to ensure the U.S.’s legal and institutional landscape is hospitable to emerging AI systems.

“If we are to truly capitalize on the potential of AI, we must legislate holistically and quickly, from privacy to immigration,” Kaushik told Nextgov/FCW. “We can't afford to operate on laws that haven't changed since the 1990s, especially considering the rapid growth of both our economy and the field of AI. This task of updating and reforming falls squarely on the shoulders of Congress.”