If it wasn’t obvious by now, Twitter has a major bot problem.
Last month, executives from the social network told the US Senate’s intelligence committee they had found evidence that Russia was behind some fake and automated accounts active during the 2016 presidential election. Studies have found that around 20% of election-related Twitter activity came from suspect bot accounts. Other highlights from the past year include the bots deployed to slander French presidential candidate Emmanuel Macron and to exacerbate the standoff between Qatar and its neighbors in the Gulf.
Several research projects, including the Computational Propaganda Project at the Oxford Internet Institute (where I am a researcher), the Atlantic Council’s Digital Forensics Lab, and the Observatory on Social Media at Indiana University have begun to document the central role of bot accounts in spreading hyper-partisan and misleading “news,” perpetrating various hoaxes, and generally being a nuisance, especially in the run-up to elections and other important political events.
These automated accounts have become a headache for Twitter not only because they’re drawing Congressional scrutiny, but because they’re driving Twitter users away, a problem then-CEO Dick Costolo acknowledged in 2015.
There have been a number of suggestions as to what Twitter could do to battle bots. David Carroll, an assistant professor at the New School in New York, suggests that Twitter deploy a bot detection tool to help users identify automated accounts. Similarly, scholars at Indiana University recently suggested that Twitter could require certain users to prove they’re human by passing a “captcha” test before posting. A third option would be for the social network to allow users to directly flag suspected bot accounts.
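To make the detection idea concrete, here is a minimal sketch of the kind of behavioral heuristic a bot detection tool might apply. The thresholds and features are illustrative assumptions, not Twitter’s actual criteria; real systems such as Indiana University’s Botometer score hundreds of features with a trained classifier rather than hand-set rules.

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float        # average posting rate
    default_profile_image: bool  # never set an avatar
    followers: int
    following: int

def looks_automated(acct: Account) -> bool:
    """Flag accounts matching common (illustrative) bot signatures."""
    signals = 0
    if acct.tweets_per_day > 72:  # roughly one tweet every 20 minutes, nonstop
        signals += 1
    if acct.default_profile_image:
        signals += 1
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        signals += 1  # follows thousands, followed by almost no one
    return signals >= 2

print(looks_automated(Account(150.0, True, 12, 5000)))  # → True
print(looks_automated(Account(3.0, False, 400, 350)))   # → False
```

A rule-based sketch like this is easy to evade, which is one reason researchers favor machine-learned classifiers; it does, however, show why user-facing detection is feasible at all: automated accounts leave measurable behavioral traces.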
But such undertakings are sure to fall flat without an appreciation for how Twitter created this problem in the first place.
Twitter has an open API, a system that lets third-party applications post tweets on a user’s behalf—from games that tweet a high score for you to tools such as Buffer or Hootsuite that many companies and organizations use to automate their Twitter feeds. Twitter doesn’t just allow automated apps to use this API; it encourages it. In its automation guidelines for developers, for example, it says:
- Build solutions that automatically broadcast helpful information in Tweets
- Run creative campaigns that auto-reply to users who engage with your content
- Build solutions that automatically respond to users in Direct Messages
Bots initially helped Twitter build up traffic. Not all bots are malicious; many organizational and institutional Twitter accounts, including Quartz’s, are in effect bots automatically tweeting the latest published stories.
But seven years ago, in one of the first conference papers about Twitter bots, researchers presciently predicted that this automation could become a double-edged sword.
Initially, the biggest problem was spam. However, as Twitter became an important tool for protest, political conversation, and mobilization, the possibilities for harm increased significantly. A few years ago, for example, ISIL (a.k.a. the Islamic State) built an Android app that allowed supporters to automate their accounts to help spread beheading videos and jihadist propaganda.
Today, the sheer scale of the problem is staggering. In a blog post that followed its recent congressional briefing, Twitter said it had suspended over 117,000 “malicious applications” in the previous four months alone, and was catching more than 450,000 suspicious logins per day. The most recent estimates suggest that up to 50% of Twitter traffic may be automated.
The reality is that Twitter is fighting a losing battle, and it is unwilling to deal with the possibility that state actors are using the platform for large-scale political interference. As senator Mark Warner recently put it (paywall), Twitter’s congressional testimony “showed an enormous lack of understanding from the Twitter team of how serious this issue is.”
So what should Twitter do?
A big step would be for it to clamp down on third-party applications and tweak the API to make automation much more difficult. It could require approval for new apps before they’re deployed. On Wikipedia, for example, bots have to identify themselves and adhere to a straightforward bot policy. Developers must demonstrate that their bot:
- does not consume resources unnecessarily
- performs only tasks for which there is consensus
- and carefully adheres to relevant policies and guidelines
Even these simple, Asimovian guidelines would be a huge step in the right direction. Twitter isn’t just dealing with spammers anymore, but with governments and others who have both the resources and the incentives to manipulate the platform for political ends. By clamping down on bots it may risk losing users and traffic; but as it stands, it runs the much bigger risk of being an unwitting accessory to geopolitical destabilization.