Agencies Could Struggle to Remove Bias From Algorithms


It’s a regulatory challenge to ensure that artificially intelligent systems aren’t making decisions based on biased information.

Federal agencies are still figuring out how to approach artificial intelligence, but they’ll soon come up against a major challenge: eliminating biases in the algorithms underlying those systems.

This is especially important if AI systems are used to make decisions that directly affect citizens. For example, algorithmic bias might cause a creditor to assign a lower score to someone living in a certain neighborhood because residence there is statistically associated with an inability to pay off credit cards. In the criminal justice system, an algorithm might be used to lengthen a person’s sentence based on predictions about how likely that person is to commit another crime.

The Obama administration began investigating the risks big data poses to civil rights, but there’s still work to be done, Michael Garris, a scientist at the National Institute of Standards and Technology, said Thursday during a panel hosted by GovernmentCIO Magazine.

NIST has “heard a lot of concern” about algorithmic bias, but “how do you even define that, quantify it?” he said. And “once you quantify it, how do you mitigate it?”
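Garris’s question does have candidate answers in the research literature. One of the simplest measures is the demographic parity gap: the difference in favorable-outcome rates between two groups. The Python sketch below is purely illustrative; the loan decisions and neighborhood labels are hypothetical, not drawn from any agency data or NIST guidance.

```python
# A minimal sketch of one common way researchers quantify algorithmic bias:
# the demographic parity gap, i.e., the difference in favorable-outcome
# rates between two groups. All data here is hypothetical.

def demographic_parity_gap(decisions, groups, favorable=1):
    """Return the absolute difference in favorable-decision rates.
    Assumes exactly two distinct group labels appear in `groups`."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(1 for d in outcomes if d == favorable) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical loan approvals (1 = approved) for two neighborhoods.
decisions = [1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5 -> a large gap
```

A single number like this is only a starting point; real audits weigh several competing measures, which is part of why the standardization Garris describes is hard.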

Machine learning is not always interpretable by nontechnical employees, and it’s often “very difficult to derive an explanation” for how algorithms make decisions, he explained. “These are all areas very ripe for standardization.”
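To make the interpretability gap concrete: a linear scoring model can be decomposed into per-feature contributions and reported directly, while a deep neural network generally cannot. The sketch below is illustrative only; the feature names and weights are hypothetical.

```python
# A minimal sketch of why simple models are easier to explain: a linear
# scorer's contribution per feature is just weight * value, which can be
# reported directly. Feature names and weights are hypothetical.

FEATURES = ["income", "debt_ratio", "zip_code_risk"]
WEIGHTS  = [0.6, -0.8, -0.4]  # learned weights (illustrative)

def explain(applicant):
    """Break a linear score into per-feature contributions."""
    contributions = {
        name: w * applicant[name] for name, w in zip(FEATURES, WEIGHTS)
    }
    return sum(contributions.values()), contributions

score, why = explain({"income": 0.7, "debt_ratio": 0.5, "zip_code_risk": 0.9})
print(round(score, 2))  # overall score
print(why)              # zip_code_risk contributes -0.36: a visible proxy for neighborhood
```

A deep network offers no comparably direct decomposition, which is why deriving an explanation for its decisions is so difficult.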

One of NIST’s goals is to “responsibly work with the community ... so that when [products are] brought to market, they are reliable,” he added.

It could be a while before agencies really start tackling the ethical implications of algorithmic bias, especially when many of them are still learning how the technology could be applied at all. The General Services Administration is coordinating a working group that will brief leaders on the policies, acquisition strategies and interagency programs related to artificial intelligence, according to Justin Herman, who heads GSA’s Emerging Citizen Technology office.

Currently, there’s “no fully defined path” toward AI implementation, but “the worst thing is for you to walk away thinking AI is in some intangible future state,” he said.