Commerce launches new AI safety center

Commerce Secretary Gina Raimondo speaks at the UK's AI Safety Summit in Bletchley Park on Nov. 1, 2023. Her agency is taking a leading role in implementing the Biden administration's executive order on AI. Leon Neal/Getty Images

The National Institute of Standards and Technology is slated to lead work on many of the new AI executive order's guidance documents for safe and responsible development.

The U.S. Department of Commerce and its subcomponent, the National Institute of Standards and Technology, will open a new center dedicated to the safety of AI systems, part of their updated roles and responsibilities in support of President Joe Biden's executive order on artificial intelligence.

Announced on Wednesday, the U.S. Artificial Intelligence Safety Institute (USAISI) will lead government efforts to evaluate the safety of advanced AI models, develop standards for recognizing and authenticating AI-generated content, and provide test beds for researchers to explore and probe AI models' risks.

“Through the establishment of the U.S. AI Safety Institute, we at the Department of Commerce will build on NIST’s long history of developing standards to inform domestic and international technological progress for the common good,” said Commerce Secretary Gina Raimondo in a statement. “As President Biden takes swift action to ensure the U.S. is a global leader on innovative, safe and responsible AI, the Department of Commerce is eager to play its part.”

The USAISI will also work with external stakeholders and experts from academia and industry to develop guidance for AI safety.

These new tasks for Commerce and NIST were stipulated in Biden's executive order, signed on Monday. The order requires NIST, the Department of Energy and the Department of Homeland Security to develop, within 270 days of its signing, guidelines that promote consensus industry standards for designing and deploying safe, responsible AI models.

NIST is taking a leading role in developing standards and safety measures for the adoption, oversight and risk management of AI. The agency will launch initiatives to create benchmarks for AI auditing and develop a companion resource to the Secure Software Development Framework for safely incorporating generative systems and dual-use foundation models. NIST is also charged with making recommendations to the White House on detecting, authenticating and watermarking AI-generated content and with developing guidelines to support AI risk management across federal agencies.

“I am thrilled that NIST is taking on this critical role for the United States that will expand on our wide-ranging efforts in AI. The U.S. AI Safety Institute will harness talent and expertise from NIST and the broader AI community to build the foundation for trustworthy AI systems,” said NIST Director Laurie Locascio. “It will bring industry, civil society and government agencies together to work on managing the risks of AI systems and to build guidelines, tools and test environments for AI safety and trust.”