Make AI Accountable by Adding Redress Tools, Researchers Say


The white paper from the University of California, Berkeley also called for regulators to appoint an ombudsman to investigate complaints and monitor issues with the technology.

Governments should protect residents from the harms of artificial intelligence and provide means of redress, according to researchers at the University of California, Berkeley.

In a white paper from Berkeley’s Center for Long-Term Cybersecurity, the researchers said that anyone harmed by the deployment of an AI-driven system should have the right to complain to the agency or department that oversees that system, or to take legal action.

The report comes as many government entities have started to use AI to help manage programs, make operations more efficient and combat fraud. It also notes issues with AI, such as algorithmic bias or errors that could discriminate against some members of society.

Researchers said the appointment of an ombudsman, who could investigate and resolve complaints, could play a key role as an “independent arbiter of disputes or complaints.” They also suggested that the ombudsman serve as a public repository of AI-related incidents, which would facilitate the monitoring of trends and risks from the technology and provide “an opportunity for developers and deployers of AI systems to learn from each other’s mistakes.”

The report cites the European Union’s General Data Protection Regulation and the California Privacy Rights Act as examples of technology-focused laws that include appropriate redress.

The report also urged governments and regulators to empower civil society organizations, such as consumer advocacy groups, academic institutions and research organizations, to represent consumers.

“[Most] of the harmful or biased applications of AI systems discovered in recent years were uncovered by civil society organizations conducting their own investigations or research,” the report reads. “Such organizations should therefore be empowered to bring actions against systems that cause harm by encouraging and/or requiring stakeholder engagement or resource and intelligence sharing.”

Government interest in AI has accelerated in recent years. New York City last year released a strategy for its use that called for “approaches, tools, collaborations, and governance to ensure that the use of this technology is appropriate.” Congress passed the National AI Initiative Act as part of a nationwide push to accelerate research into the technology, while the EU is debating its own AI Act.