Facebook’s Fight Against Bad Content Is a Mess


The company has been saying for a while that misinformation doesn’t intrinsically violate the platform’s standards.

Facebook is trying to have it both ways.

As a private company, it can ban whatever bad content it wants from its site, so it outlaws nudity and hate speech. But it also says that it doesn’t want to be the arbiter of truth, so it doesn’t remove patently false information that plagues its platform.

Its convoluted—often seemingly arbitrary—policies leave Facebook performing mental gymnastics to decide what should be banned, and what should remain. On a day-to-day level, the confusing rules—in addition to the sheer amount of content uploaded to the platform—mean that a lot of illegal or harmful content lingers, for countless more eyes to see.

Its battle against bad content is as chaotic and muddled as its policies on what is allowed on the platform, and what’s banned.

What does Facebook do with fake news?

Facebook likes to talk about its efforts to limit misinformation on its platform. But the war against fake news doesn’t actually include taking down fake news—except in rare instances.

This isn’t new: Facebook has been saying for a while that misinformation doesn’t intrinsically violate the platform’s standards. The issue came up recently when Facebook executives were pressed by reporters over why a website such as Alex Jones’ InfoWars, which routinely peddles conspiracy theories, including that the September 11 terrorist attacks were carried out by the U.S. government, is allowed on the platform.

The answer was the same as always: We’re open to all views, and just because something is false, it doesn’t mean it warrants being censored. This is also the philosophy followed by YouTube and Twitter.

If something is clearly untrue, the platform used by 2.2 billion people each month will demote the content so it shows up in users’ feeds less often. For repeat offenders, Facebook will remove their page’s ability to advertise on the platform and make money off their content, the company recently told Quartz.
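In practice, what Facebook describes amounts to a ranking penalty plus a bookkeeping rule for repeat offenders. The following is a minimal sketch of that logic in Python, assuming a hypothetical demotion factor, strike limit, and data model; it illustrates the described policy and is not Facebook’s actual system.

```python
# A rough sketch (not Facebook's actual code) of demotion plus a per-page
# strike count. The demotion factor, strike limit, and all names here are
# hypothetical assumptions for illustration.

DEMOTION_FACTOR = 0.2      # assumed: a false-rated post keeps only a fraction of its reach
REPEAT_OFFENDER_LIMIT = 3  # assumed: strikes before a page loses ads and monetization

def rank_score(base_score: float, rated_false: bool) -> float:
    """Demote a post's ranking score if fact-checkers rated it false."""
    return base_score * DEMOTION_FACTOR if rated_false else base_score

def update_page_penalties(page: dict, rated_false: bool) -> dict:
    """Track strikes per page; repeat offenders lose the ability to advertise and monetize."""
    strikes = page.get("strikes", 0) + (1 if rated_false else 0)
    page["strikes"] = strikes
    if strikes >= REPEAT_OFFENDER_LIMIT:
        page["can_advertise"] = False
        page["can_monetize"] = False
    return page
```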

On July 17, Facebook announced that it will take down some falsified content, if it’s identified as something that could potentially cause physical harm to real people. For now, the policy is aimed at countries where there is ongoing conflict, Facebook told Quartz, and will later be rolled out globally.

The legions of moderators that Facebook contracts—whom executives like to mention when questioned about the company’s efforts to fight bad content—do not, generally, handle posts flagged as fake news. This task falls to the external fact-checkers that Facebook partners with, like the Associated Press and Snopes.

Since it doesn’t remove the content, Facebook has tried many solutions for flagging to users that a link, video, or photo is disputed. Several of them backfired. Currently, under every disputed piece of content, Facebook adds links from legitimate sources on the same topic. Every publisher is also supposed to be marked with a special “i” symbol that directs the reader to its Wikipedia page.

Why not just take down fake news?

Facebook says it doesn’t remove fake news. Its rationale: “The fundamental thing here is that we created Facebook to be a place where different people can have a voice,” John Hegeman, the head of News Feed, told reporters at a press event earlier this month.

The company is trying to do everything it can to appear non-partisan and unbiased. But a recent study from the University of Oxford showed that extremist, sensationalist, and fake content is disproportionately circulated by right-wing and Republican social media accounts.

In the U.S. and abroad, the company faces intense pressure from the right, largely because of a 2016 scandal, in which reports revealed that the team curating the now-defunct “trending news” section of the site was suppressing conservative sites.

As a result, lawmakers have pressed Facebook, YouTube, and Twitter on anti-conservative bias in various hearings, repeatedly bringing up the same examples and anecdotes. The favorite example was the case of Diamond and Silk, African-American pro-Trump social media personalities who were labeled by Facebook as “unsafe.”

During a recent hearing called specifically to discuss bias on social media platforms, Monika Bickert, Facebook’s global head of policy, devoted an entire section of her prepared remarks to a public apology to the duo.

Democratic lawmakers in the U.S. have repeatedly called out these claims of anti-conservative bias as unfounded. “It is a made-up narrative pushed by the conservative propaganda machine to convince voters of a conspiracy that does not exist,” representative David Cicilline said during the hearing.

And right-wing content is doing just fine on Facebook. A study by the social-media tracking firm NewsWhip shows that during the 2016 presidential election, top conservative publishers had higher user engagement than liberal ones. Research from the left-leaning ThinkProgress has shown Facebook’s recent algorithm changes affected everyone, regardless of political stripe. And you can just as easily bring up anecdotal evidence of social media censorship all along the ideological spectrum.

Cicilline accused Facebook of “bending over backwards” to placate conservative accusations.

This shouldn’t be a surprise.

Popular pages, especially those with engaged users, are valuable customers for Facebook. And the top spender on political ads on Facebook is… Donald Trump.

When does content cross the line?

Here’s what Facebook says it does remove:

  • Spam
  • Fake accounts—taking these down, it says, helps curb the spread of false news
  • Everything else that violates its “Community Standards,” including hate speech, nudity, and violent content. (In April, Facebook published its standards in excruciating detail.)

The line between what the platform determines to be permissible fake news and a violation of its rules is not always clear. For example, it told Quartz in February that it was removing false claims that the survivors of the Parkland shooting were “crisis actors,” labeling them as attacks against the survivors.

Claims that the Sandy Hook elementary school shooting was a hoax are allowed to stay on the platform, but, as CEO Mark Zuckerberg said himself in an interview with Recode last week, a claim that a grieving parent of one of the victims was lying would be classified as harassment, and taken down (even this, however, seems to be a new policy, NBC reported).

Zuckerberg got himself into hot water by trying to explain Facebook’s reasoning on conspiracy theories further, bringing up the example of Holocaust deniers, which he said he found offensive. “But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong,” he said, adding that he didn’t think it was right to take people off the platform “if they get things wrong, even multiple times.”

After backlash, Zuckerberg clarified in an email to Recode that he “absolutely didn’t intend to defend the intent of people who deny that.” But he repeated his belief that Facebook should not be taking down fake news.

Content moderators hired by firms that Facebook contracts make similarly head-scratching distinctions on a daily basis. Recently leaked training documents revealed that Facebook distinguishes between white supremacy, white nationalism, and white separatism, for example. This essentially means the company bans blatant racism, but allows it if it is even slightly veiled.

A documentary from the UK’s Channel 4, released last week, in which a reporter went undercover as a content moderator hired by a Facebook contractor, reveals even more perplexing distinctions. According to Facebook’s rules, “Muslims” are a “protected” group, which cannot be attacked, but “Muslim immigrants” are not, one of the moderators says. Gruesome images of self-harm, which is not allowed on the platform, are left up if the moderator determines that the image is an “admission” of self-harm, but are taken down if the post praises the action. A video of a child being brutally beaten is allowed to stay on the platform, and is only marked as “disturbing.”

It’s also unclear what it takes to get a page banned for violating Facebook’s community standards. During the House hearing, Facebook’s Bickert told lawmakers that the threshold of violating posts varies—which raised some eyebrows about the company’s transparency. Facebook told Quartz that “the consequences for violating our Community Standards vary depending on the severity of the violation and a person’s history on the platform.” For example, if someone shares an image of child exploitation, they will be removed without a warning, but if it’s just a nude photo, they will get more chances.

A document leaked to Motherboard revealed that for these lesser violations—in the case of hate speech and sexual content—the company does in fact have a hard-and-fast rule. It takes down pages if they’ve exceeded five offending posts in 90 days, or if 30% of the content posted on the page by others violates community standards.
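As described, that rule reduces to two simple checks. Here is a minimal sketch in Python; only the five-posts-in-90-days and 30% thresholds come from the reported document, while every function and field name is a hypothetical assumption for illustration.

```python
from datetime import datetime, timedelta

# Sketch of the leaked takedown thresholds described above. Only the
# five-offenses-in-90-days and 30% figures come from the reported document;
# the data structures and names here are hypothetical.

OFFENSE_LIMIT = 5
WINDOW = timedelta(days=90)
VISITOR_VIOLATION_RATIO = 0.30

def should_unpublish(offense_dates, visitor_posts, now=None):
    """Return True if a page crosses either reported takedown threshold."""
    now = now or datetime.utcnow()
    recent = [d for d in offense_dates if now - d <= WINDOW]
    if len(recent) > OFFENSE_LIMIT:  # "exceeded five offending posts in 90 days"
        return True
    if visitor_posts:
        # "30% of the content posted on the page by others violates community standards"
        violating = sum(1 for p in visitor_posts if p.get("violates"))
        if violating / len(visitor_posts) >= VISITOR_VIOLATION_RATIO:
            return True
    return False
```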

But the Channel 4 documentary showed that certain pages—specifically, it mentions popular UK far-right figure Tommy Robinson, and the now-defunct page of his organization, Britain First—are “shielded.” Instead of taking a page down after it passes the threshold, contracted content moderators send these popular pages to Facebook employees to deal with. And the reason why may be simple: “they have a lot of followers, so they’re generating a lot of revenue for Facebook,” one of the moderators says in the documentary.

Facebook vehemently disputes the claims that it considers revenue when making content-moderation decisions and that it has a policy of “shielded review.”

Zuckerberg admits that Facebook has mishandled many problems related to bad content. He says that it will take three years to deal with all the different issues the company created for itself—and that it’s already halfway through this process. But when it comes to policing content, it seems that no amount of one-off fixes will be enough unless the company fundamentally rethinks how to approach the unending flood of awfulness that the internet provides. That seems like a tall order for a man who frequently describes himself as an optimist and idealist.