The Workers Who Police Terrorist Content on Facebook Were Exposed to Terrorists by Facebook



Facebook exposed the identities of dozens of the moderators it employs to screen terrorist propaganda to the very groups that create those posts, The Guardian reported. The exposure was caused by a software bug discovered last year, and Facebook has confirmed the incident.

Forty workers in Facebook’s Dublin offices who focused on removing terrorist content from the social network had their profile pages and names shown to administrators of Facebook groups where this content was posted. Six of these workers were deemed “high-priority” because their information had been seen by possible terrorists. The data exposure affected 1,000 workers in total, who moderate terrorist propaganda and other content that Facebook bans, like sexually explicit material. The workers are contractors who are paid €13 ($14.50) an hour, The Guardian reported.

According to The Guardian, Facebook discovered the software bug in November 2016 and determined that it had been exposing the profiles of its moderators since August of that year.

One moderator whose information was compromised feared so seriously for his safety that he left Dublin, where he had been working at Facebook’s European headquarters, and stayed away for five months. The Iraqi-born Irish citizen told The Guardian that terrorists had kidnapped and executed members of his family in the past. He went into hiding in eastern Europe, living on his own savings, but had to return to Dublin when the money ran out. He is now seeking compensation for the psychological damage caused by the data exposure through a legal claim filed against Facebook and CPL, the contracting firm through which Facebook hired him.

The contractor’s account to The Guardian describes a company with a cavalier attitude towards the safety of its workers. He says he was given two weeks of training before being asked to investigate reports of terrorist content on Facebook’s network, work that included daily exposure to videos of beheadings and stonings.

During Facebook’s investigation into the security lapse, the company’s head of investigations told the contractor that there was “a good chance” the suspected terrorists would simply fail to notice the moderators’ names amid the clutter of their Facebook group’s activity log. “I’m not waiting for a pipe bomb to be mailed to my address until Facebook does something about it,” the contractor replied at the time.

Facebook says it has “no evidence” of any threat to the affected workers or their families because of the data exposure. It says it has “fixed” the issue and conducted a “thorough investigation” that includes a risk assessment for each affected worker. The company also says it is working on a way for moderators to use accounts that are not linked to their personal profiles, and is making changes to its infrastructure to prevent employees from being exposed inadvertently in the future.