Twitter Can Now Quickly Shut Down Trolls Who Rejoice at Tragedy


In the past, the company’s efforts have fallen short or backfired.

Within the first hour of the terror attack in Nice, which killed at least 84 people, at least 50 Twitter accounts, many of them created solely for that purpose, applauded the tragedy using Arabic hashtags that translate to #BlessedNiceAttack and #BattleOfNice.

But unlike in past crises, when critics complained that Twitter was slow to remove such tweets, many of those messages and accounts were removed almost as quickly as they went up.

“Twitter moved with swiftness we have not seen before to erase pro-attack tweets within minutes,” according to a report by the Counter Extremism Project. “It was the first time Twitter has reacted so efficiently, including Orlando last month.”

The swift takedowns came after Twitter expanded the teams responsible for identifying and removing such content, as the company explained in a blog post.

“We have increased the size of the teams that review reports, reducing our response time significantly,” the post says. “We have already seen results, including an increase in account suspensions and this type of activity shifting off of Twitter.”

The social network has struggled to find a middle ground between allowing free speech and blocking harassment, violence and abusive behavior. And in the past, the company’s efforts have fallen short or backfired: the platform failed to block rape threats against two female politicians in the U.K., for example, and a British journalist accused it of categorizing criticism as hate speech.

The platform has been dubbed a recruitment portal of sorts for extremists, including the Islamic State and its followers. At least 46,000 Twitter accounts were used by ISIL supporters, according to a March 2015 report from the Brookings Institution. Earlier this year, the microblogging site said it had deleted 125,000 Islamic State accounts and expanded its anti-terror teams to monitor extremist content. Co-founder Jack Dorsey has received death threats from ISIL for removing the group’s posts.

The problem, according to CEP, a nonprofit international policy organization that counters extremist narratives and online recruitment, is that “Twitter’s reporting protocol is a cumbersome, multistep process and does not have a streamlined reporting protocol specific to terrorist activity.”

Twitter has said in the past that it can take up to 24 hours to remove reported content. That’s plenty of time for extremist material to be seen and shared.

Following the Bastille Day attack, extremist accounts shared posters and infographics that used memes to mock the attack, threatened France and other European countries with further violence, and showed images of children killed in coalition strikes on ISIL.

Many of the accounts celebrating the Nice attack spread propaganda by hijacking the hashtags #Nice06 and #PrayForNice, which people were using to show solidarity and to search for missing friends and family members.

This time, Twitter ditched diplomacy and did not hesitate to purge extremist messages.

“We condemn the use of Twitter to promote terrorism and the Twitter Rules make it clear that this type of behavior, or any violent threat, is not permitted on our service,” a Twitter spokesperson told Quartz.

Twitter isn’t the only tech company looking to scrub such posts—talks with the U.S. government have led to a broader anti-extremism push. Along with Apple, Facebook and Microsoft, Twitter is working with the White House to identify and battle ISIL online.