Why Facebook Wants to Give You the Benefit of the Doubt

Facebook CEO Mark Zuckerberg arrives to testify before Congress (Andrew Harnik / AP)

Mark Zuckerberg’s remarks about Holocaust denial once again showed Facebook’s optimism about human nature.

In an unusually revealing moment for Facebook’s CEO, Mark Zuckerberg told Recode’s Kara Swisher on Wednesday that he didn’t support taking down Holocaust-denial content on Facebook. Zuckerberg is Jewish, and he finds such denials “deeply offensive,” he said. But he didn’t believe Holocaust deniers were “intentionally getting it wrong.”

When Swisher followed up that “in the case of Holocaust deniers, they might be,” Zuckerberg retreated to a stance he’d never quite made explicit before. “It’s hard to impugn intent and to understand the intent,” he said.

In place of “understanding” the intent, this statement makes clear that Facebook takes a default stance of assuming users act in good faith, or at least without bad intention. Zuckerberg and Facebook have been repeatedly criticized for being too willing to ignore the ways the platform can be put to harmful use, and they have largely accepted that criticism as true. And yet here, one of the basic principles of how they moderate speech is to be so optimistic as to give Holocaust deniers the benefit of the doubt.

Zuckerberg seems to be imagining a circumstance in which somebody watches a YouTube video arguing that the (real, documented, horrifying) Holocaust never happened and ignorantly posts it to Facebook. Under the rules the platform has established, there is no penalty for that (in countries where Holocaust denial is not illegal). The person is not technically harassing any one individual with the post, nor is the post what Facebook would call “inauthentic,” in the sense that it was shared by a real person who genuinely believes what they’ve posted. Holocaust denial is a dangerous thing with deep roots in ongoing anti-Semitism, but it’s not against any Facebook rule.

Zuckerberg sees Facebook’s mission as “giving people a voice,” and, along with that, the benefit of the doubt. He’s maintained this stance despite understanding that people say terrible things, and that if you’re the platform where people say stuff, a lot of those terrible things are going to end up on your platform. Maybe Facebook could create a list of awful things that are unsayable on the platform, then train moderators and AI to proactively search out and destroy those posts. Maybe compiling that list, with all its dark variations, and building the system to enforce it is a tractable problem.

But there are a lot of ways to be terrible in this world. Millions of Americans “authentically” hold racist views. If someone posits that black people were better off under slavery, what should be done? Or if someone denies the Reconstruction-era campaign of white racial terrorism against black people in the American South, what should be done? Should Facebook toss every racist assertion off the platform?

Or take very different people and situations that might nonetheless pose difficult challenges in moderating speech on Facebook. Assata Shakur “authentically” believed herself to be fighting for black liberation, and many people in America agree that she was. If someone praises her role in the death of a policeman, what should be done with that post? What about approving of the armed resistance Nelson (and Winnie) Mandela employed to help defeat apartheid in South Africa?

Policing what people say on Facebook is not a problem with easy solutions. This private company has deeply enmeshed itself in society’s information flows, which makes it one of the most important arbiters of what people know about the world. Is it ideal for a private company to define its own standards for speech and propagate them across the world? No. But here we are.

The stance that the company is evolving toward seems to be a kind of sliding scale of distribution. “What we will do is we’ll say, ‘Okay, you have your page, and if you’re not trying to organize harm against someone, or attacking someone, then you can put up that content on your page, even if people might disagree with it or find it offensive,’” Zuckerberg said. “But that doesn’t mean that we have a responsibility to make it widely distributed in News Feed.”

Facebook can gently stop posts from being seen without actually taking them down. Call it “sort-of censorship.” We don’t know precisely how the downgrading system works, but it’s reasonable to assume that it is quite sophisticated, not a simple toggle. Think about how that applies to the old adage that you can’t yell fire in a crowded theater. Facebook can decide to let you yell fire with as many exclamation points as you like, but only let a small fraction of its users hear you.

You don’t need to be a free-speech absolutist to imagine how this unprecedented, opaque, and increasingly sophisticated system could have unintended consequences or be used to (intentionally or not) squelch minority viewpoints. Everyone, Facebook included, wants to find a way out of the mess generated by every voice having a publishing platform. But what if there is no way out of it?