What, Exactly, Were Russians Trying to Do With Those Facebook Ads?


From what we know now, it was too small to seriously influence the election, but too big to be an afterthought.

Many questions remain about the ads purchased by Russian-linked accounts during the 2016 presidential election.

Earlier this month, Facebook announced that Russian-linked accounts had purchased $100,000 worth of advertising during the 2016 campaign.

The scale of this advertising buy is mysterious. In an election where billions of dollars were spent, why even bother to spend $100,000? It seems like a drop in the bucket, but also more than nothing. For comparison, in 2015 and 2016, all campaigns directly paid Facebook a collective $11,313,483.59 across all races, according to Federal Election Commission numbers. The Trump campaign paid Facebook $261,685 directly for ads. But those numbers are only lower bounds for the amount of money spent on Facebook because many campaigns pay consultants, who then purchase ads on their behalf. (For example, Cambridge Analytica, which worked with the Cruz and then Trump campaigns, took in $15.4 million during the cycle, including $5 million in one payment from the Trump campaign on September 1.)

So, the Russian ad buy is a significant Facebook purchase, but not one that seems scaled to the ambition of interfering with a national U.S. election.

That could be because:

1) Not all the ads have been discovered, so the $100,000 is a significant undercount.
2) That was the right number, and the ads worked to aid distribution of disinformation.
3) The ads were part of a message-testing protocol to improve the reach of posts published natively by other accounts. Think of it as a real-time focus group to test for the most viral content and framing.
4) That $100,000 was a test that didn’t work well, so it didn’t get more resources.
5) That $100,000 was merely a calling card, spent primarily to cause trouble for Facebook and the election system.

Let’s walk through these branching possibilities for what this advertising buy could mean.

We don’t know much about how Facebook conducted its investigation. We do know that the company repeatedly denied there was Russian influence during the election, and then copped to it in early September.

A Washington Post article fleshed out a few details, including that President Obama personally spoke with Mark Zuckerberg after the election to get him to take the misinformation campaign seriously.

The problem appears to have been that Facebook’s spam- and fraud-tuned machine-learning systems could not see any differences between the “legitimate” speech of Americans discussing the election and the work of Russian operatives.

Here’s the description of the process that eventually found the ad purchases:

Instead of searching through impossibly large batches of data, Facebook decided to focus on a subset of political ads. Technicians then searched for “indicators” that would link those ads to Russia. To narrow down the search further, Facebook zeroed in on a Russian entity known as the Internet Research Agency, which had been publicly identified as a troll farm. “They worked backward,” a U.S. official said of the process at Facebook.

I take this to mean that they identified known Internet Research Agency trolls, looked at the ads they posted, and then looked for similar ads being run, liked, or shared by other accounts.
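As a rough sketch of what that working-backward process might look like, consider the following Python fragment. Everything in it is an assumption for illustration: the data shapes, the similarity function, and the threshold are invented, and nothing here reflects Facebook’s actual tooling.

```python
from typing import Callable, Dict, Iterable, List, Set

def expand_from_seeds(
    seeds: Iterable[str],
    ads_by_account: Dict[str, List[str]],
    similarity: Callable[[str, str], float],
    threshold: float = 0.8,
) -> Set[str]:
    """Flag accounts whose ads resemble ads run by known seed accounts.

    A hypothetical seed-and-expand search: start from known Internet
    Research Agency accounts, collect the ads they ran, then flag any
    other account whose ads look sufficiently similar.
    """
    seed_set = set(seeds)
    # Gather every ad run by a known troll account.
    seed_ads = [ad for acct in seed_set for ad in ads_by_account.get(acct, [])]
    flagged: Set[str] = set()
    for acct, ads in ads_by_account.items():
        if acct in seed_set:
            continue  # already known
        # Flag the account if any of its ads resembles any seed ad.
        if any(similarity(ad, seed) >= threshold
               for ad in ads for seed in seed_ads):
            flagged.add(acct)
    return flagged
```

The hard part, of course, is the similarity function itself: as noted above, classifiers tuned to spam and fraud apparently had no signal that separated these ads from ordinary political speech.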

Why this would have taken several months is unclear. Journalist Adrian Chen was able to build out a network of Russian operative–run pages without any of the data that Facebook has. Given that the story he wrote ran in The New York Times Magazine in 2015, you’d think that particular agency would have been the first place Facebook would have looked.

That could be one reason a congressional investigator told The Washington Post that Facebook had only hit “the tip of the iceberg.”

But that’s only one possibility. The ads could have done exactly what the Russians intended, even at this limited scale, as part of a broader information campaign.

Some context: Facebook ads can do several different things. They can promote a piece of existing content somewhere on the internet. They can be used to try to drive “likes” to a page. They can be used to get people to watch a video.

With the right (salacious/truthy/fake) material, even a little money can go a long way. The Daily Beast had a Facebook ad specialist calculate how far $100,000 worth of Facebook spending would go and came up with a range of 23 to 70 million people, depending on how the ads were targeted.
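The arithmetic behind that range is simple cost-per-mille (CPM) math: impressions equal budget divided by CPM, times 1,000. A minimal sketch, using CPM figures that are illustrative assumptions rather than Facebook’s actual 2016 rates:

```python
# Back-of-the-envelope reach estimate: impressions = budget / CPM * 1000.
# The CPM values below are illustrative assumptions, not real rate cards.
BUDGET = 100_000  # dollars

for label, cpm in [("broad targeting", 1.50), ("narrow targeting", 4.35)]:
    impressions = BUDGET / cpm * 1000
    print(f"{label}: ~{impressions / 1e6:.0f} million impressions "
          f"at ${cpm:.2f} CPM")
```

Broadly targeted ads are cheap per impression; narrowly targeted ones cost more, which is why the specialist’s estimate spans such a wide range.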

Vice News talked with the owner of a right-wing Facebook page who uses Facebook ads to juice conservative content. After spending $34,100, the man controlled pages with 1.8 million likes. With that distribution base, he was able to push out content that could, on occasion, do serious numbers. “With a few advertising dollars, one April video ... received more than 27 million views and over 450,000 shares, spreading so pervasively into the conservative media universe that Donald Trump’s official Facebook page shared it two days later,” Vice wrote.

So maybe that’s it. The ads were simply a smallish part of growing the distribution network for disinformation and propaganda.

Looking back at Adrian Chen’s reporting on the Russian troll farm known most commonly as the Internet Research Agency, there’s a mix of skill and blundering. The Agency was smart enough to set up Chen with a neo-Nazi, surreptitiously photograph the meeting, get stories written about the encounter, and then promote those via social media. But it also struggled to find English speakers who could write with proper grammar. The Agency could orchestrate a very complicated hoax about a chemical plant, but also let a known activist and a journalist slip inside the company as new hires. The Agency was playing in international geopolitics, perhaps funded by a billionaire oligarch, but Chen reported that the agency’s rumored budget back in 2015 was a mere $400,000 a month, or $4.8 million a year.

Perhaps the best mental model is simply a digital-advertising agency. In that case, there are some other intriguing possibilities.

Regular digital agencies (and media companies) routinely use Facebook ad buys to test whether stories and their attached “packaging” will fly on the social network. You run a bunch of different variations and find the one that the most people share. If the Internet Research Agency is basically a small digital agency, it would be quite reasonable for it to have had a small testing budget to see what content its operatives should push. In that case, the buys wouldn’t have been about direct distribution of content, or about driving clicks or page likes, but about learning which messages work.
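To make that concrete, here is a hypothetical sketch of such a message test. The variants, budget, and engagement numbers are all invented, and the simulated function stands in for results a real operation would read off Facebook’s ad-reporting tools:

```python
import random

# Hypothetical message test: buy a cheap ad for each framing of the same
# story, measure engagement, and push the winner through organic pages.
VARIANTS = ["outrage framing", "fear framing", "patriotic framing"]

def run_test_ad(variant: str, budget: float) -> float:
    """Stand-in for a small ad buy: returns shares per dollar spent.

    The budget argument is unused here; in a real test it would set the
    size of the buy, and the result would come from ad-reporting data
    rather than a random draw.
    """
    return random.uniform(0.5, 5.0)  # simulated engagement rate

results = {v: run_test_ad(v, budget=50.0) for v in VARIANTS}
winner = max(results, key=results.get)
print(f"Promote '{winner}' organically: "
      f"{results[winner]:.1f} shares per dollar in testing")
```

The point of a protocol like this is that the ad spend is tiny relative to the payoff: once a framing proves viral, the distribution happens for free through shares.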

And there’s a variation on these two scenarios, too. It could be that only $100,000 got spent simply because the ads were ineffective. Facebook itself has a case study on the reelection bid of Senator Pat Toomey that showed substantial increases in “voter intent” for key demographic groups. But the Toomey campaign spent $2.8 million on digital strategy.

That said, there is certainly reasonable doubt that even millions of dollars of Facebook spending could change the outcome in even a single state in a U.S. presidential election. And perhaps the digital agency came to the conclusion that its budget was better spent elsewhere. Or maybe one group within the Internet Research Agency began buying ads (we do know the place is obsessed with metrics) to make itself look better to superiors for some period of time.

But it seems possible, from Chen’s description, that this was just a small thing for the Agency, which never gained institutional support.

And the last possibility is that the Internet Research Agency wanted to make a buy that it knew would get Facebook in trouble with the government once it was revealed. Think of it as corporate kompromat. Surely the Internet Research Agency would know that buying Facebook ads would look bad for Facebook, while also sowing the discord that seems to have been the primary motivation for the information campaign.

Some of these questions could be answered by simply seeing the ads. If the operatives were testing the same content with different headlines, or running a bunch of different videos or posts, that would tell us something about their operation. If the messaging in the ads later surfaced outside them, that might tell us something too. In all cases, seeing the ads will be a major part of figuring out what the known world of Russian influence was up to, which seems important if other social networks and political campaigns are to defend themselves against future operations.

Maybe all we’ll see is bungling and a scattershot, silly approach. That would be useful and interesting information, too.

In any case, the Russian effort remained hidden in plain sight, which could be due to sophistication. Or it could be that having a nonfinancial motivation essentially served as an exploit for Facebook’s security systems, which are tuned to fighting fraud.

“Various groups regularly attempt to use such techniques to further financial goals, and Facebook continues to innovate in this area to detect such inauthentic activity,” wrote Chief Security Officer Alex Stamos and two Facebook coauthors in a white paper on information operations. “The area of information operations does provide a unique challenge, however, in that those sponsoring such operations are often not constrained by per-unit economic realities in the same way as spammers and click fraudsters, which increases the complexity of deterrence.”

The bottom line is that Facebook was not prepared for the threat. And I fully expect a lot more to come out about Russian operations on Facebook.