Powerful Algorithms Need a New Formula, Not More Transparency, To Be Kept in Check

Microsoft CEO Satya Nadella. (Photo: Rafiq Maqbool/AP)

Algorithms today dictate our credit scores, our favorite music playlists, and, of course, what shows up in our social media feeds. So shouldn’t the public know what code and mathematical equations go into making them work?

“No,” says Microsoft. In a submission to a UK parliamentary inquiry into the use of algorithms in government and business, the software giant—which of course writes lots of algorithms—argues that transparency alone won’t help society hold such math constructs accountable for automated decisions. The submission was published today along with 46 others from academics, think tanks, and regulatory bodies.

Algorithms are complicated, so exposing the code behind them won’t make them more understandable, says Microsoft. The Heartbleed bug, for example, which made swathes of the internet vulnerable to password and other data theft, was found in one of the most widely used pieces of open-source encryption software on the web. Anyone could have scrutinized it, yet it took more than two years for the bug to be discovered.

“In this instance, there was total transparency regarding a publicly available algorithmic code, and yet it still took two-plus years to identify an algorithmic vulnerability,” Microsoft’s submission reads. It’s an example that Microsoft Research’s Danah Boyd has also used before the European Parliament to make the same point.

Knowing how an algorithm is coded can be useless without knowing what data has been fed into it. Microsoft points to the obvious example of “a social media newsfeed,” where the items each user sees may depend on their clicks or other interactions. Facebook, of course, is famously opaque about the algorithm powering its News Feed. Without knowing what data the algorithm weighed, simply knowing the equations behind it doesn’t help users understand it any better, Microsoft argues.

Microsoft proposes that regulators focus on an algorithm’s inputs and work out a definition of fairness that can be reflected in software. It’s pushing for a focus on the “overall fairness” of algorithmic results.

Some experts aren’t convinced. Frank Pasquale, a professor of law at the University of Maryland, thinks regulators should make transparency “essential” in many cases, and that algorithms which can’t be made transparent should be barred. A submission by the human rights group Liberty also demands transparency from an algorithm’s creator.

With so much riding on automated decisions made by math, the question of how to ensure algorithms deal with humans fairly is more relevant than ever.