The Case Against Understanding Why AI Makes Decisions

Not everyone wants to open the "black box" of artificial intelligence.

As deep-learning algorithms begin to set our life insurance rates and predict when we’ll die, many AI experts are calling for more accountability around why those algorithms make the decisions they do. After all, if a self-driving car kills someone, we’d want to know what happened.

But not everyone is sold on opening the “black box” of artificial intelligence. In a Medium post for Harvard’s Berkman Klein Center, author and senior researcher David Weinberger writes that simplifying how deep-learning systems reach their decisions—a necessary step for humans to understand them—would actually undermine the reason we use these algorithms in the first place: their complexity and nuance.

“Human-constructed models aim at reducing the variables to a set small enough for our intellects to understand,” Weinberger writes. “Machine learning models can construct models that work — for example, they accurately predict the probability of medical conditions — but that cannot be reduced enough for humans to understand or to explain them.”

Rather than deconstructing individual AI errors, Weinberger suggests focusing on what an algorithm is and is not optimized to do. This approach, he argues, takes the discussion out of a case-by-case realm—in reality, the system will never be perfect—and allows us to look at how an entire AI system works to produce the results we want. Is a self-driving car optimized for speed or safety? Should it save one life at the cost of two? These are problems that can be regulated and decided without expert knowledge of the internal workings of a deep neural network. Once societal expectations are set for a new technology, either through regulation or public influence, companies can optimize for those outcomes.
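To make the idea of “optimizing for an outcome” concrete, here is a minimal sketch in Python. It is not drawn from Weinberger’s post, and the function, weights, and numbers are purely illustrative; it only shows how a weighting choice made up front can steer a system toward safety or speed without anyone inspecting the network’s internals.

```python
# Hypothetical illustration only: Weinberger's post contains no code.
# The trade-off between objectives is fixed by weights chosen in advance,
# which is the kind of societal choice the article describes.

def policy_score(normalized_delay: float, collision_risk: float,
                 safety_weight: float = 0.9, speed_weight: float = 0.1) -> float:
    """Score a candidate driving behavior; lower is better.

    The weights encode the policy question ("speed or safety?") as numbers
    a training process can act on, with no need to explain the model's
    internal reasoning.
    """
    return safety_weight * collision_risk + speed_weight * normalized_delay

# Two hypothetical behaviors for the same route (all numbers invented):
cautious = policy_score(normalized_delay=0.5, collision_risk=0.01)    # 0.059
aggressive = policy_score(normalized_delay=0.1, collision_risk=0.10)  # 0.100

# With safety weighted heavily, the cautious behavior scores better;
# shifting the weights would steer the system toward speed instead.
print(f"cautious={cautious:.3f}, aggressive={aggressive:.3f}")
```

In this framing, regulators or the public debate the weights themselves; the system’s inner workings never enter the conversation.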

Weinberger’s argument represents a fundamental shift in how we think about machine-learning systems. Those arguing we need to interpret exactly why each decision is made see algorithms as capable of being perfected—with the right data and engineering, the errors will be negligible. But those who have left the idea of interpretability behind, including Weinberger and even Facebook chief AI scientist Yann LeCun, say that machines will inevitably make mistakes, and suggest looking at trends in what decisions the machines are making in order to rebuild them in the way we want.

Of course, there will always be AI failures that people want to understand in detail. Consider NASA trying to figure out why a satellite was lost, or a scientist whose algorithm predicts the composition of a new material but offers no hint as to why that material could exist. Weinberger’s approach works far better for public-facing products and services than for cases like these.

“By treating the governance of AI as a question of optimizations,” Weinberger writes, “we can focus the necessary argument about them on what truly matters: What is it that we as a society want from a system, and what are we willing to give up to get it?”