A new report recommends that the federal CIO Council set up a group to work through what exactly AI accountability is and how to achieve it.
Government agencies lack a "precise definition" of artificial intelligence accountability in federal guidance, according to a new report from the emerging technology group of the nonprofit American Council for Technology and Industry Advisory Council (ACT-IAC).
Although non-binding risk management and accountability frameworks exist, the only governing document that currently requires action on AI accountability is a 2020 executive order, leaving agencies with "a lot of latitude," the report states.
As for what accountability actually means, the report offers this definition: "being accountable for behavior is to answer to individuals who are affected by the behavior."
The Trump-era executive order laid out a principle that AI systems should have "appropriate safeguards" that limit systems to their intended uses and ensure they work correctly. The order required agencies to inventory their AI systems and create plans to bring all of them up to its standards, including agency accountability for those safeguards on the use and functioning of AI systems, as well as principles like accuracy and transparency.
One recommendation in ACT-IAC's report on accountability is that the Federal Chief Information Officers Council create an "AI accountability discussion group" to discuss accountability definitions and implementation.
The new report advises that agencies not only take the actions required by the 2020 executive order, but also adopt accountability practices from existing frameworks published by the National Institute of Standards and Technology, which issued a draft AI risk management framework in early 2022, and the Government Accountability Office.
"Artificial Intelligence Accountability is an important opportunity for agencies because AI capabilities present risks not anticipated by other federal regulations and policies," the report states. "Failure to plan for accountability in the adoption of AI may result in greater risks."
Since that executive order, the Biden administration also issued a non-binding "AI Bill of Rights" in 2022 that outlined principles like notice and explanation, data privacy and algorithmic discrimination protections. At the time, the White House also committed to forthcoming actions, like new procurement policies around AI.
Rob Portman, formerly the top Republican on the Senate Homeland Security and Governmental Affairs Committee, also recently pressed the Office of Management and Budget on its implementation of the AI in Government Act. OMB has not yet issued guidance on AI use as required by the law, putting the "value of those [AI] systems… in doubt," he wrote in a letter to OMB Director Shalanda Young. OMB declined to comment to FCW on the status of the required guidance.
As for the stakes, a December report by the Partnership for Public Service and Microsoft on the use of AI in government service delivery called out the potential for AI to automate biased or inaccurate results at scale.
Among the considerations that report offered for the use of AI in public benefits in particular are due process and appeals mechanisms for customers. It also recommended that agencies define metrics to evaluate AI systems and their outputs, audit and evaluate those systems regularly, and build transparency mechanisms around them.
"Public sector organizations must put responsible AI principles at the center of their decision-making," that report states. "But to successfully apply these principles, agencies need to have in place the building blocks that create an environment that fosters responsible AI use: data, talent and governance structures."