Despite Progress, Social Platforms Still Not Terrorist-Proof


Lawmakers grilled Facebook, YouTube and Twitter over their efforts to block extremist content on their platforms.

Social media companies are making significant strides in weeding out content from extremists and other bad actors on their sites, but without major policy changes, “authoritarians and terrorists” will continue using the platforms to threaten the American public, said one national security expert.

For years, extremist groups have used social media to spread their ideology and recruit new members, and the 2016 presidential election highlighted the potential for foreign adversaries to use online platforms to manipulate voters.

Representatives from Facebook, YouTube and Twitter met with the Senate Commerce Committee on Wednesday to discuss the role social media companies should play in countering threatening activity on their sites. During the hearing, lawmakers asked a number of questions about the steps companies are taking to prevent foreign adversaries from using fake accounts or paid advertising on social media to disseminate misinformation to voters, as Russia did in 2016.

While officials from Facebook and Twitter noted the platforms’ efforts to block bot accounts and modest changes to political advertising policies, their testimonies didn’t assuage lawmakers’ fears that Russia and other bad actors would meddle in the 2018 midterm elections.

“You’re not where you need to be for us to be reassured that you’re securing our democracy,” said Sen. Brian Schatz, D-Hawaii.

Though advancements in artificial intelligence and greater investment in content monitoring are helping companies curb much of the malicious activity on their sites, they “fail to detect that which hasn’t been seen before,” Clint Watts, a cyber warfare specialist at the Foreign Policy Research Institute, told the Senate panel. Working more closely with national security and counterterrorism experts could help social media companies anticipate bad actors’ new strategies and stop them in their tracks, he said, and the government could help facilitate the partnership.

He added that increasing verification of user identities, eliminating bots and publicly displaying the sources behind political advertisements would also help social media companies combat misinformation campaigns.

When Sen. Amy Klobuchar, D-Minn., asked Watts if he supported her Honest Ads Act to increase transparency of online political ads, he said regulating social media ads is crucial because “it’s where all advertising is going in the future.”

“If it does not happen, the conversation will continue to get more polarized and more negative,” he said. “People won’t be able to trust information on the platform in the first place.”

Despite their struggles to fight misinformation campaigns online, the companies highlighted the vast improvements they have made in identifying and blocking extremist content in recent years.

Facebook’s Product Policy and Counterterrorism head Monika Bickert said the company’s filters catch 99 percent of pro-ISIS and al-Qaeda posts before they are reported by users, and 83 percent of content flagged by users is removed within one hour. More than 90 percent of the terrorist accounts Twitter suspended last year were flagged by internal filters, and three-quarters of those accounts were removed before they could send a single tweet, according to Carlos Monje, Twitter’s director of public policy and philanthropy for the U.S. and Canada.

Lawmakers encouraged social media platforms to continue collaborating in their efforts to curb online extremism through groups like the Global Internet Forum to Counter Terrorism. Such coalitions allow established companies to share best practices with newer, smaller platforms, whose encryption and privacy features Watts said make them particularly appealing to terrorists.