“The federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era,” the lawmakers argue.
A group of 16 Democratic lawmakers led by Sen. Ed Markey, D-Mass., and Rep. Pramila Jayapal, D-Wash., is urging the White House to bake its AI Bill of Rights into the administration’s forthcoming executive order on artificial intelligence.
The lawmakers want federal agencies to be bound to use the principles and best practices from the existing blueprint for an AI Bill of Rights, which was released in 2022 and is currently followed on a voluntary basis. They emphasized that they are echoing a similar request made by over 60 civil society, tech, labor and human rights organizations last month.
“Your forthcoming AI executive order is an important opportunity to establish an ethical framework for the federal government’s role in AI,” the group of Democrats wrote in a Wednesday letter to President Joe Biden. “This moment calls for the adoption of strong safeguards on algorithmic discrimination, data privacy and other fundamental rights.”
Applying “whenever automated systems can meaningfully impact the public’s rights, opportunities or access to critical needs,” the AI Bill of Rights outlines five principles: data privacy, system safety and efficacy, user notice, human alternatives and protections from algorithmic discrimination.
“These principles should apply when a federal agency develops, deploys, purchases, funds or regulates the use of automated systems that could meaningfully impact the public’s rights,” the lawmakers wrote.
The AI Bill of Rights also includes a technical companion with specific to-do items like pre-deployment testing, ongoing system monitoring, ensuring representative training data and more.
Although the government currently has over 1,100 AI use cases, federal chief information officer Clare Martorana said in May, there is relatively little binding guidance for agencies to follow on AI.
“The reason we aren’t looking at, ‘Hey, are agencies meeting the requirements of that law or not, or that guidance?’ is because there are no specific requirements. It’s all aspirational,” Kevin Walsh, a director in the Government Accountability Office’s Information Technology and Cybersecurity team, previously told Nextgov/FCW.
President Biden said last month the executive action on AI was on the horizon for this fall, as lawmakers continue to grapple with how to regulate the technology from Capitol Hill. Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, said that the forthcoming executive order will be “broad,” reflecting “everything that… the president really sees as possible under existing law to get better at managing risks and using the technology.”
Suresh Venkatasubramanian, a co-author of the AI Bill of Rights and a computer science and data science professor at Brown University, told Nextgov/FCW via email that requiring agencies to use the AI Bill of Rights in the forthcoming executive order is a “great idea,” one that would require detailed guidance from the Office of Management and Budget on what is expected of agencies.
He argued for a similar course of action in a July op-ed for Wired, urging the White House to draw upon the AI Bill of Rights and the National Institute of Standards and Technology AI Risk Management Framework to issue the promised executive order with requirements for federal agencies, recipients of government funding and contractors supplying systems to follow established best practices.
The group of Democrats co-signing similar demands on Wednesday wrote that “as a substantial purchaser, user and regulator of AI tools, as well as a significant funder of state-level programs, the federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era.”