Tech’s Fractal Irresponsibility Problem


Each new scandal reflects in miniature the shape of the industry’s big problems.

It’s Thursday, so there’s another small scandal in the tech world. Hate groups that Facebook had booted from its platform after the murder of Heather Heyer have slithered back into the blue-and-white universe, The Guardian reported. The Southern Poverty Law Center gave an exasperated quote; Facebook forcefully averred, “As organized hate groups, they have no place on our platform.” But the weird thing is: Before the Guardian reporter Julia Carrie Wong contacted the company—which is worth around $600 billion, has roughly 17,000 employees dedicated to content moderation, and has been talking about working on these problems for two and a half years—the pages were very much on the platform.

It recalled so many other minor tech scandals, any one of which has been written off as an “edge case,” a “mistake,” an “error,” as just “playful,” or as part of an acceptable error rate. There are countless more of these situations, only some of which even rise to newsworthiness. A banned activist here, a person trolled off a platform there, a local business hurt, a small country’s news ecosystem thrown into disarray, an image-recognition algorithm labeling black people as gorillas.

But these problems are all of a piece, and they feel that way too, to people experiencing them. Tech has a fractal irresponsibility problem: Each small worry reflects in miniature the shape of the industry’s big problems. The scale changes, but the substance doesn’t.

Tech companies behave carelessly, but retreat immediately into virtue when they are subjected to scrutiny. They see their problems through a statistical lens that blinds them to the human particulars of situations. They foreclose or lobby against reforms that would hurt their profitability or efficiency. They have displayed towering overconfidence that when they “change the world,” it will be for good. They have a deep aversion to making public decisions that might be seen as value judgments, but they execute complex algorithmic management to shape people’s experiences of their platforms. They maximize their profits—becoming the most valuable companies in the world, and pushing out established small and large businesses—while externalizing the social costs of the new problems that their services introduce. They are becoming unimaginably rich and powerful by sucking up money, attention, and business opportunities that used to be more evenly distributed.

Individual employees, or even large teams, cannot manage these platforms. They fight the rearguard battles, fixing things as they arise, trying desperately to come up with principles that machines can apply to human problems to the two-billionth power, and knowing, mostly, that they will fail. It’d be easy to write off Facebook’s failure to permanently ban hate groups as a small oversight on a massive platform—and that is surely what Facebook wants us to do—but for the fact that this keeps happening, and every time it does, the arc is the same.

On Tuesday, BuzzFeed published a memo from the outgoing Facebook chief security officer, Alex Stamos, in which he summarizes what the company needs to do to “win back the world’s trust.” And what needs to change is … well, just about everything. Facebook needs to revise “the metrics we measure” and “the goals.” It needs to ship code less often. It needs to think in new ways “in every process, product, and engineering decision.” It needs to make the user experience more honest and respectful, to collect less data, to keep less data. It needs to “listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” It needs to deprioritize growth and change its relationship with its investors. And finally, Stamos wrote, “We need to be willing to pick sides when there are clear moral or humanitarian issues.” YouTube (and its parent company, Alphabet), Twitter, Snapchat, Instagram, Uber, and every other tech company could probably build a list that contains many of the same critiques and some others.

People encountering problems online probably don’t think of every single one of these institutional issues when something happens. But they sense that the pattern they are seeing is linked to the fact that these are the most valuable companies in the world, and that they don’t like the world they see through those services or IRL around them. That’s what I mean by fractal irresponsibility: Each problem isn’t just one in a sequence, but part of the same whole.