Facebook Is Hiring 3,000 More People to Monitor Facebook Live for Murders, Suicides, Other Horrific Video

Facebook CEO Mark Zuckerberg. (Noah Berger/AP)

Facebook Live has a dark side

Imagine putting up with your commute, heading into work, and sitting in a cubicle watching murders, child abuse, drug use and suicides all day, then returning home.

Earlier today, CEO Mark Zuckerberg said on his Facebook page that the company will employ an additional 3,000 people (on top of the 4,500 it already has) to monitor videos posted to the social network that are reported for inappropriate content. Zuckerberg said recent videos—presumably including the horrific video of a man killing himself and his 11-month-old daughter in Thailand—were “heartbreaking,” and that the company is working on a system to make it easier to report videos and get inappropriate ones taken down more quickly.

“Just last week, we got a report that someone on [Facebook] Live was considering suicide,” Zuckerberg said. “We immediately reached out to law enforcement, and they were able to prevent him from hurting himself.”

Zuckerberg added that Facebook already works with local community groups and law enforcement to try to help those who have posted about harming themselves or others on the social network, and he plans to make it simpler to do so in the future.

“We’re also building better tools to keep our community safe,” he said. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards, and easier for them to contact law enforcement if someone needs help. As these become available they should help make our community safer.” (Zuckerberg did not provide specifics on what those tools would look like, and Facebook had no more information when Quartz reached out for comment.)

This may be welcome news for the nearly 2 billion Facebook users who likely don’t want to come across the darkest depths of the human condition while looking at pictures of their cousin’s bachelorette party in Schenectady or arguing about a Breitbart article. But what about the 7,500 people who will be trawling through live and prerecorded videos for this sort of content?

The psychological toll of watching this sort of content every day has been extensively documented in recent years. Wired profiled workers at companies contracted by social networks like Twitter to watch and moderate content, and noted that many quit after a short time; BuzzFeed spoke with people in similar positions at Google who needed therapy after traumatic exposure to things like child pornography and bestiality.

It’s also unclear whether Zuckerberg’s plan will really change much. As it stands, regular users need to flag something as inappropriate before a moderator watches it and decides whether it should be removed and whether further action is required, meaning at least two people have to watch a horrific video before it can be taken down.

Growing the moderation pool could decrease the number of people in the general public who see a traumatizing video, simply because more moderators will be available to review reports, but it won’t do much for the mental health of those thousands of moderators or of the people who originally came across the video. And hiring more people won’t fix the underlying problem of people posting videos like this in the first place.

As Quartz’s Hanna Kozlowska reported after the Thailand murders, “privacy advocates say there’s not all that much Facebook can do from a practical standpoint to prevent violent videos from being posted or broadcast.” She also noted that Facebook’s turnaround time on moderating a reported video was already quite fast.

Perhaps in the future, Facebook will develop automated tools that can detect what’s going on in a video and what’s being said, and potentially remove an offensive post before a human user or moderator ever has to see it. Then again, that could easily backfire.

Last year, Facebook fired all of the human editors who ran the Trending news section of its homepage, believing it could replace the contracted workers with an AI system that sourced and distributed content without human curation. Within days of the AI taking over, fake news had found its way into the Trending section.

Suffice it to say, Facebook has industry-leading groups in both AI research and deployment. Yet if its algorithms struggle even with written content, it will likely be a while before the company can build a system with the knowledge and computational power to understand and accurately block horrific video content in real time.

Companies are trying to figure this out, but even analyzing a pre-recorded video remains a challenge. Predicting what will happen in a live video, and deploying the computational resources to do that for millions of videos a day, will be a tall order even for a company as massive as Facebook.
