Artificial Intelligence is Too Dumb to Fully Police Online Extremism, Experts Say


Algorithms excel at routine tasks, but understanding a post’s context requires a human touch, former Facebook security chief Alex Stamos told lawmakers.

Artificial intelligence tools aren’t yet smart enough to root out all the extremist content posted on social media sites, despite tech executives’ claims to the contrary, experts told lawmakers on Tuesday.

Platforms like Facebook, Twitter and YouTube have spent years building algorithms to weed out posts promoting terrorism and other radical causes, but today’s tech is unable to grapple with the complexities of human communication, witnesses said before a House Homeland Security Committee subpanel. As a result, tools often misidentify extremist posts as legitimate content or vice versa.

“The world’s best machine-learning resembles a crowd of millions of preschoolers—there are certainly problems … which a humongous group of children can be taught to solve,” said Alex Stamos, the program director at Stanford University’s Internet Observatory and Facebook’s former chief security officer.

“Modern machine-learning can be incredibly powerful for accomplishing routine tasks at amazing speed and scale,” he continued, but “decision-making based on societal values and cultural context is completely beyond [its] current capabilities.” 

After terrorist groups like al-Qaida and ISIS started turning to social media to radicalize and recruit new members, the government pressured tech companies to do a better job monitoring their sites for extremist content. Since then, companies have handed over much of the moderation work to machine-learning tools, which can review billions of daily posts much faster than their human counterparts.

But while the tech catches a broad swath of nefarious content, its inability to understand the context of those videos, photos and messages presents “substantial limitations” to its moderating ability, according to Ben Buchanan, a senior faculty fellow at Georgetown University’s Center for Security and Emerging Technology. For example, a video of terrorist violence could be propaganda in one setting and legitimate news reporting in another, he said, but algorithms wouldn’t be able to tell the difference.
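As a rough illustration of that limitation, consider the sketch below: a hypothetical content-only classifier written in Python. Every name, score and threshold in it is invented for this example rather than drawn from any platform’s real system; the point is simply that a model which looks only at a video’s pixels and audio returns the same verdict whether the clip is uploaded as recruitment propaganda or as part of a news broadcast.

```python
# Hypothetical sketch: a content-only classifier cannot see context.
# The model, thresholds and names here are invented for illustration;
# they do not describe any platform's actual moderation system.

from dataclasses import dataclass


@dataclass
class Upload:
    video_id: str
    uploader: str   # e.g. a known news outlet vs. an anonymous account
    caption: str    # surrounding text the model never looks at


def score_video(video_bytes: bytes) -> float:
    """Stand-in for an ML model: estimates P(extremist content) from the media alone."""
    # A real classifier would analyze pixels and audio; either way, identical
    # bytes produce an identical score, regardless of who posted them or why.
    return 0.91


REMOVAL_THRESHOLD = 0.85


def moderate(upload: Upload, video_bytes: bytes) -> str:
    score = score_video(video_bytes)  # uploader and caption are ignored
    return "remove" if score >= REMOVAL_THRESHOLD else "keep"


# The same footage gets the same verdict in both settings:
footage = b"<identical video bytes>"
propaganda = Upload("abc123", "anonymous_recruiter", "Join us")
reporting = Upload("abc123", "evening_news", "Report: group releases attack footage")

print(moderate(propaganda, footage))  # -> "remove"
print(moderate(reporting, footage))   # -> "remove" (legitimate journalism swept up too)
```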

The panel was also quick to note that constitutional protections on free speech make it difficult for platforms to moderate content without getting themselves into hot water. Witnesses pointed to companies’ recent efforts to crack down on online hate speech and misinformation, which have led Republicans to accuse them of censoring conservative viewpoints.

While witnesses agreed platforms should be investing more in refining their automated moderation practices, they all said companies must look beyond technology to get a better grip on the problem.

“AI is a critical part of doing [moderation] at scale, but most of the hard decisions have to be made by humans,” said Stamos.
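One way such a division of labor is often described, sketched below purely as an illustration with invented thresholds and names rather than any company’s documented pipeline, is to let the model act on its own only when it is highly confident and to route every borderline post to a human reviewer.

```python
# Hypothetical hybrid moderation pipeline: automate the routine cases,
# escalate the hard ones to people. Thresholds and names are illustrative
# assumptions, not any platform's stated policy.

from typing import Literal

Decision = Literal["auto_remove", "auto_keep", "human_review"]

AUTO_REMOVE_THRESHOLD = 0.98  # act automatically only when the model is very sure
AUTO_KEEP_THRESHOLD = 0.05


def route(post_id: str, model_score: float) -> Decision:
    """Route a post based on the model's estimated probability of extremist content."""
    if model_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    if model_score <= AUTO_KEEP_THRESHOLD:
        return "auto_keep"
    # Everything in between -- the context-dependent "hard decisions" --
    # goes to a human moderator.
    return "human_review"


for post_id, score in [("a1", 0.99), ("b2", 0.01), ("c3", 0.60)]:
    print(post_id, route(post_id, score))
# a1 auto_remove
# b2 auto_keep
# c3 human_review
```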

Beyond more human moderation, Stamos recommended that companies stand up a third-party organization to defend against extremist campaigns coordinated across multiple sites and to keep blacklisted users from rejoining the platforms. Individual platforms should also be more transparent about the processes they use to flag and remove posts, he said, and Congress could play a role in ensuring companies abide by their own standards.

He and other witnesses also called on platforms to share data with academic researchers about the content they remove from their sites. The information could help researchers better understand misinformation campaigns and extremism, they said, while also allowing third parties to weigh in on allegations of social media bias.

“The companies are operating as quasi-governments,” he said. “They’re making decisions on a global scale that have normally been reserved for governments. The lack of transparency about what decisions they’re making and how they make them is a critical problem that they have.”