The Scariest Part of Facebook’s Safety Check in a Mass Shooting

Police officers stand along the Las Vegas Strip outside the Mandalay Bay resort and casino during a deadly shooting near the casino, Sunday, Oct. 1, 2017, in Las Vegas. (John Locher/AP)

Facebook’s algorithms do not favor original reporting.

A shooter opened fire Oct. 1 on a music festival in Las Vegas, Nevada, killing at least 58 people and injuring hundreds more. It's the deadliest mass shooting in US history, and Facebook has turned on its "safety check" feature for Las Vegas; the function is activated when a group of people is caught up in a crisis situation. People can use the page to check in on friends and family who may have been affected by the event, donate to fundraising efforts, and learn more about the situation as it unfolds.

Initially, the safety check feature for Las Vegas showed four news stories prominently on the page.

(Screenshot/Facebook)

Further down the page, there are links to news sites like ABC, Vice, and Sky News—more recognized news outlets. But before you reach them, you have to scroll past four blog-post links and a series of photos. The first link is to a site called mytodaytv.com, which appears to have uploaded someone else's video of the shooting to YouTube (the description says the video "may contain copyrighted material") and embedded it on the site. Lower down, the site asks its readers for bitcoin donations; as of this writing, no one has donated.

The second link was to a site called "Anti Media," which carries a byline from "Tyler Durden"—Brad Pitt's character in the 1999 film Fight Club, a name that has been co-opted by certain factions of the alt-right. The post appears to be a repost from ZeroHedge, a conservative finance blog.

The third was a link to a blog named after Dennis Michael Lynch, who focuses his coverage on US politics. The post is sourced entirely from a video posted by Fox 35 News in Florida. The site is also running a story that repeats, without skepticism, the claim that ISIS inspired today's attack. (The FBI says there is so far no reason to believe this is true.)

The fourth was a link to a site called "Red News Here," which says on its disclaimer page that it "does not make any warranties about the completeness, reliability and accuracy of this information" on the site. It has recently posted articles with headlines like "These May Be The Final Days For North Korea – Look What POTUS Just Tweeted!" and "Woman Spends $50,000 To Look Like First Lady Melania, Here's The Result." The post on the Facebook safety check page appears to be entirely lifted from a Daily Mail article.

In this case, none of the information the feature surfaced was particularly wrong (though some was outdated), but the sources show that Facebook's algorithms do not favor original reporting. Hopefully, in the next crisis, the feature won't surface misinformation that leads to innocent people being targeted. (This was an issue earlier today in Google's search results for the shooting.)

Facebook confirmed to Quartz that these stories were eventually caught and replaced:

“Our Global Security Operations Center spotted these posts this morning and we have removed them. However, their removal was delayed, allowing them to be screen captured and circulated online. We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.”

Facebook did not respond to a follow-up question asking whether the original stories were surfaced algorithmically and then had to be replaced by a human. The safety check page was updated around 1pm US Eastern Time—many hours after it went live—to include links from local news sources:

(Screenshot/Facebook)

By contrast, the information about the shooting surfaced in Facebook's Trending Topics section was always from more traditional news outlets, such as MSN, Yahoo, CNN, and NBC News.

The safety check activation comes as Facebook announced today that it would hire an additional 1,000 people to combat misinformation seeded through the social network's advertisements. The company is under scrutiny for the role it may have played in allowing Russian actors to affect the 2016 US election through paid advertisements promoting misleading or incorrect news stories about those running for office. It was also recently revealed that Facebook, along with other tech platforms, let potential advertisers target ads against specific hateful terms.

In his Sept. 21 statement about the Russian ad campaign, CEO Mark Zuckerberg said the company plans to double its current staff of 250 workers in the ambiguous area of “election integrity.” He did not elaborate on what departments they’d work in or what they would be doing.

The day before that, COO Sheryl Sandberg posted a statement about anti-Semitic ad targeting, saying the company would add "more human review" to the process. At the time, Quartz asked Facebook how many workers would be assigned to the issue, where they would work, and whether they'd be employed on a full-time or contract basis. Facebook declined to answer those questions, and referred us back to Sandberg's post.

In May, in response to a rash of violent videos users had posted to Facebook, Zuckerberg said the company was adding staff to its "community operations" team, taking it from 4,500 workers to 7,500 by the end of the year. The function of that team, according to Zuckerberg, is to review the offensive content that users have reported. Those 7,500 workers, whom Facebook is apparently still hiring, represent just 0.000375% of Facebook's 2 billion monthly users. There are "millions of reports" each week, Zuckerberg said, but he didn't specify how many user reports each worker would have to review each day.
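That percentage is simple arithmetic; a tiny, purely illustrative check using only the figures cited above looks like this:

```python
# Back-of-the-envelope check of the staffing figure cited above (illustrative only).
moderators = 7_500                 # planned "community operations" staff
monthly_users = 2_000_000_000      # roughly 2 billion monthly users

ratio_percent = moderators / monthly_users * 100
print(f"{ratio_percent:.6f}%")     # prints 0.000375%
```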

Even with all this moderation in place, it’s not clear whether these community operations people would be able to solve the issue of organic discovery of fake news on Facebook. Posts like those shared on the Las Vegas safety check page today, and the countless ones shared by regular people during the 2016 election, show up in feeds because people have been clicking on and sharing them. They are weighted by algorithms as important, partially because they are popular.
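To make that mechanism concrete, here is a minimal, hypothetical sketch of popularity-weighted ranking. It is not Facebook's actual algorithm; the signal names and weights are assumptions for illustration only, and the point is simply that nothing in the score reflects whether a post comes from original reporting.

```python
# Hypothetical sketch of popularity-weighted ranking; not Facebook's actual code.
from dataclasses import dataclass

@dataclass
class Post:
    url: str
    clicks: int
    shares: int
    original_reporting: bool  # assumed signal; deliberately unused by the score below

def engagement_score(post: Post) -> float:
    # Popularity alone drives the ranking; source quality is never consulted,
    # which is the gap the article describes.
    return post.clicks + 2.0 * post.shares

posts = [
    Post("https://obscure-blog.example/vegas", clicks=9_000, shares=4_000, original_reporting=False),
    Post("https://local-newsroom.example/report", clicks=3_000, shares=800, original_reporting=True),
]

# The repost outranks the original report purely on engagement.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(p.url, engagement_score(p))
```

Under this kind of scoring, a widely shared repost beats original reporting every time, regardless of accuracy.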

In August 2016, Facebook fired the team of human editors it contracted to run the Trending Topics section of its site, as Quartz originally reported. Within days, the algorithms it employed in their stead surfaced fake news stories. Facebook remains adamant that it is just a platform for people to share ideas, rather than a media company.

But as long as it continues to trust algorithms' editorial judgment, the danger of fake stories causing more damage remains entirely real.