The Federal Artificial Intelligence Risk Management Act is designed to give statutory support to many of the efforts outlined in the Biden administration's AI executive order.
Legislation seeking to bolster federal guidelines for artificial intelligence technologies was introduced in the Senate on Thursday, reinforcing ongoing executive actions to bring regulation to bear on the rapidly expanding AI industry.
Backed by Sens. Jerry Moran, R-Kan., and Mark Warner, D-Va., the Federal Artificial Intelligence Risk Management Act would codify several of the measures that were outlined in President Joe Biden’s sweeping executive order on artificial intelligence signed on Monday. Moran had previously introduced a similar measure as an amendment to the Senate version of the National Defense Authorization Act, but it was not adopted.
Companion legislation is expected to be introduced into the House of Representatives by Rep. Ted Lieu, D-Calif.
Key provisions of the bill would task agencies such as the Office of Management and Budget and the National Institute of Standards and Technology with codified responsibilities for developing more specific AI regulations.
News of the bill was first reported by Politico.
The legislation asks NIST to develop test and evaluation capabilities for federal AI technology acquisitions, and it tasks OMB with issuing corresponding guidance to help federal agencies leverage NIST’s AI Risk Management Framework in their own risk management efforts.
OMB would also have to establish a workforce initiative for federal agencies. Separately, the administrator of the Office of Federal Procurement Policy and the Federal Acquisition Regulatory Council would be required to ensure federal offices utilize NIST's framework in procurement operations.
The bill’s provisions look to safeguard sensitive data passing through AI-based systems and mitigate other risks inherent to automated systems.
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” said Moran in a statement. “The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”
The AI Risk Management Act is the latest of several bipartisan efforts to place greater safeguards around the development and use of artificial intelligence. Legislation introduced in the Senate in late October would require AI developers to include notices identifying content created using their systems.
In September, Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., proposed an AI legislative framework that aims to serve as a “blueprint for real, enforceable AI protections,” according to Blumenthal. That framework’s imperatives include establishing a licensing regime for AI models, ensuring legal liability for harms caused by such technology, preventing AI technology from falling into the hands of countries like Russia and China, promoting transparency and protecting consumers.