It's Too Easy to Troll Like a Russian

We're scholars, but amateurs at this, and we found it alarming how quickly we could sketch a personalized misinformation campaign using nothing but publicly available data.

Despite evidence that foreign actors are still manipulating social media users on platforms like Twitter, tech companies continue to leave personal data exposed in ways that facilitate the work of Russian intelligence operatives. As a hybrid warfare scholar and an artificial intelligence practitioner, we put on our Russian-troll hats to see for ourselves just how feasible it is to conduct an influence operation on Twitter. The results of our thought experiment as pseudo-Russian social media agents should send chills down the spines of social media administrators and users alike. They also demonstrate the need to eliminate obvious vulnerabilities such as Twitter's publicly visible “like histories.”

Our real-world counterparts at the Internet Research Agency in Russia understand the utility of hybrid warfare operations in our increasingly digitized and networked world. Artificial intelligence is among the many important technologies that promise to change the scope of conventional warfare for years to come, particularly when combined with personal data gleaned from individuals’ social media activity. Public social media profiles provide a treasure trove of data on individuals and, combined with a lack of accountability for anonymous actors, social media platforms create an ecosystem ripe for hybrid operations. We recognized this vulnerability and, even before working through the problem in detail, had strong reason to believe that access to even small expressions of preference on social media would let us build intricate profiles of our targets and tailor our messaging to them.

For example, a recent study showed that a surprisingly detailed psychological profile can be built simply by analyzing a person’s “likes” on social media posts. With data on just 70 posts that someone has liked, an algorithm can predict personal traits better than a friend, and with 300 likes it can predict better than one’s spouse. Similar data-crunching algorithms can be used to infer the political dispositions of individuals and make them targets of campaigns to influence their opinions. For instance, it is widely believed that Cambridge Analytica used data it collected on Facebook to sway millions of voters during the 2016 U.S. presidential election. There were similar attempts to sway voters’ opinions during the Brexit campaign, and information was even gathered using social media during conflicts in Libya, Syria, and Ukraine.
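
To give a sense of how ordinary the machinery behind such a predictor is, here is a minimal sketch in Python of the general approach the study describes: compress a user-by-item matrix of likes into a low-dimensional "taste" space and fit a linear model against trait scores. Everything in it, including the like matrix, the toy trait values, and the dimensions, is a hypothetical stand-in, not real data or the study's actual code.

```python
# Minimal sketch: predicting a personality trait from "likes".
# The like matrix and trait scores below are synthetic placeholders for the
# kind of data the study describes (users x liked items, plus self-reports).
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_items = 5000, 2000
likes = (rng.random((n_users, n_items)) < 0.02).astype(float)   # sparse-ish like matrix
trait = likes[:, :50].sum(axis=1) * 0.1 + rng.normal(0, 0.5, n_users)  # toy ground truth

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

# Compress the like matrix into a low-dimensional "taste" space, then fit a linear model.
svd = TruncatedSVD(n_components=40, random_state=0)
model = Ridge(alpha=1.0)
model.fit(svd.fit_transform(X_train), y_train)

print("correlation with held-out trait:",
      np.corrcoef(model.predict(svd.transform(X_test)), y_test)[0, 1])
```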

With this in mind, our hypothetical scenario is designed as a three-part “operation” to influence American social media users’ views on a piece of controversial legislation.

First, publicly available “like histories” can be used to identify impressionable users along with their topics of interest. Not only can we sort individuals by their easily accessible preferences, but we can also employ an army of tweet-producing bots to test how users interact with posts that are pro-, anti-, and neutral toward our piece of test legislation, as sketched below.
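
The analytics behind this step is nothing exotic. The sketch below, with entirely made-up users, topics, and reactions, shows the basic bookkeeping: bucket users by the topics they like, then tally how each bucket engages with the pro-, anti-, and neutral test posts.

```python
# Minimal sketch of step one: bucket users by the topics they "like" and
# measure how each bucket reacts to pro-, anti-, and neutral test posts.
# like_history and test_reactions are hypothetical stand-ins for data that
# would come from public like histories and bot-posted test tweets.
from collections import Counter, defaultdict

like_history = {          # user -> topics of tweets they have publicly liked
    "user_a": ["gun_rights", "taxes", "gun_rights"],
    "user_b": ["healthcare", "taxes"],
}
test_reactions = [        # (user, stance of test tweet, did the user engage?)
    ("user_a", "pro", True), ("user_a", "anti", False),
    ("user_b", "anti", True), ("user_b", "neutral", False),
]

# Dominant interest per user, taken from their like history.
interests = {u: Counter(t).most_common(1)[0][0] for u, t in like_history.items()}

# Engagement rate per (interest group, stance) pair.
counts = defaultdict(lambda: [0, 0])   # [engaged, shown]
for user, stance, engaged in test_reactions:
    key = (interests.get(user, "unknown"), stance)
    counts[key][1] += 1
    counts[key][0] += int(engaged)

for (group, stance), (hits, shown) in counts.items():
    print(f"{group:12s} {stance:8s} engagement {hits}/{shown}")
```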

Second, we can create the content most likely to influence particular users, based on the predictions from our model. You might wonder about the origins of the training data for such a model. The answer is…drum roll…Twitter itself. Using a model built with data easily accessed on Twitter, we can craft tweets with the maximum chance of influencing users in the desired way. The tweets can be further fine-tuned using the information about target audience groups identified in the first step. Finally, we can use the advertising services provided directly by social media platforms to deliver the tweets to our audiences. Using the audience characteristics identified in the prior steps, we can maximize each tweet’s reach through tactics routinely used by advertising agencies.
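
Again, the core of this step is off-the-shelf machine learning of the kind marketing teams use every day. The sketch below, with invented tweets, labels, and audience segments, shows the idea: train a simple engagement classifier on scraped (tweet, reaction) pairs, then score candidate wordings for each audience segment and keep the one predicted to perform best before handing it to the platform's ordinary ad-targeting tools.

```python
# Minimal sketch of steps two and three: score candidate tweet wordings with an
# engagement model trained on scraped (tweet text, engagement) pairs, then pick
# the highest-scoring wording for each audience segment. All training data and
# segment labels here are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical scraped training data: past tweets and whether they got traction.
past_tweets = ["this bill protects your family", "bureaucrats want control",
               "read the committee report", "they are lying to you again"]
got_traction = [1, 1, 0, 1]

engagement_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
engagement_model.fit(past_tweets, got_traction)

candidates = {
    "gun_rights": ["the bill guts your rights", "section 4 changes licensing rules"],
    "healthcare": ["your premiums will double", "the bill adjusts subsidy formulas"],
}

# For each segment, keep the wording the model predicts is most engaging.
for segment, options in candidates.items():
    scores = engagement_model.predict_proba(options)[:, 1]
    best = options[int(scores.argmax())]
    print(f"{segment}: '{best}' (predicted engagement {scores.max():.2f})")
```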

For our campaign to be effective, however, our tweets need to appear in the targeted users’ news feeds. Fortunately for us, Twitter’s algorithms are built to optimize user engagement on the platform, that is, to surface the tweets most likely to engage users. Because we design our tweets with an algorithm that predicts exactly that kind of engagement, our operation produces tweets that Twitter’s own ranking is primed to promote. Twitter’s algorithms then deliver our messages to audiences in a way that maximizes their impact.

This exposes the national security vulnerabilities created when user preferences are publicly available on social media platforms such as Twitter. Of course, conducting a large-scale, successful hybrid disinformation campaign that actually changes minds is not a simple task for amateurs. Nonetheless, it is alarming that, with publicly available data, one can quickly put together a personalized misinformation campaign. Worse, amateurs are not the ones conducting these operations. Troll factories like the Internet Research Agency have ample resources and years of practice that make their influence operations, conducted as part of the broader Russian hybrid warfare strategy, all the more effective and malign.

If open societies like the United States learn anything from the history of malicious actors using social media to interfere with democracies, it is that setting up an influence operation must be made more difficult. It is well past time to re-evaluate social media platforms from a national security perspective and lay out guidelines that both protect user privacy and address national vulnerability to malicious foreign attacks.

Ivana Stradner is a Jeane Kirkpatrick fellow at AEI. Pulkit Agrawal is an assistant professor in the Department of Electrical Engineering and Computer Science at MIT.