“Why Am I Seeing This Ad” Explanations on Facebook Are Incomplete and Misleading, a Study Says
Some of the answers are vague at best.
Cambridge Analytica, the shadowy data firm at the center of the largest online privacy scandal in years, obtained data on up to 87 million Facebook users in order to “microtarget” political advertisements. Using precise information on the consumer—or in this case, voter—Facebook lets advertisers find the perfect audience, the optimal set of eyes for their message.
It’s not clear how effective Cambridge Analytica’s ads were at changing people’s voting behavior. What we do know is that Facebook ads can be incredibly accurate, if not invasive, so much so that many users have started to think the platform actually listens to their conversations through their phones’ microphones. As far as we know, this is not true. But the illusion is so strong because Facebook amasses copious information on all aspects of its users’ lives, from everything you do on the platform and elsewhere online to the store you last visited on your Sunday stroll.
The platform gives its users an option to find out why they’re seeing a particular ad. But the explanations Facebook offers have often not been satisfying. Unless the reason is clear-cut (you’ve “liked” the advertiser’s page, or you’ve visited its website), the answers are vague at best: you are in the age group the advertiser was interested in, or you live in the country it wanted to reach. A recent study by an international group of computer-science researchers, including from Northeastern University and Université Grenoble Alpes, reverse-engineered Facebook’s ad explanations and concluded that they can be incomplete or misleading.
The explanations have been available on Facebook since 2014, but they’ve gained new significance following Russian meddling in the 2016 US election, and revelations from ProPublica that it was possible to place discriminatory housing ads on the platform that excluded users who identified as racial minorities. The company has come under growing pressure, amplified by the Cambridge Analytica scandal, to be more open about its ad practices, and Facebook has in turn vowed to be more transparent.
Explaining explanations
The researchers built a browser extension that collected every ad shown to the users recruited for the study, along with the explanation Facebook gave for it. They also ran 135 ad campaigns of their own, targeted at the participants, so they could compare the explanations users saw against targeting criteria they knew precisely.
A common Facebook ad explanation has a two-part structure: the first part gives you one definitive reason you are seeing the ad, and the second says “there may be other reasons,” such as reason A or reason B.
The researchers found that the first part of the explanation was often incomplete: when the advertiser specified two attributes for the users it wanted to target (these could be anything from “new parent” to fan of “high-value goods”), the explanation would show only one. “Facebook obviously has the data, because they know how the advertiser chose to target,” says Alan Mislove, associate professor of computer science at Northeastern University and one of the study’s authors.
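That incompleteness is easy to state precisely. The minimal sketch below, in Python, shows the kind of check the study’s setup makes possible: for the campaigns the researchers ran themselves, the true targeting attributes are known and can be compared against what the explanation surfaces. The data structures and attribute strings here are our own illustrative assumptions, not the study’s actual code.

```python
# Hypothetical sketch: for a researcher-run campaign, the targeting is
# known exactly, so the explanation's completeness can be measured.

# What the "advertiser" (here, the researchers) actually specified:
targeted = {"new parent", "interested in high-value goods"}

# What the browser extension recorded from "Why am I seeing this?":
explained = {"new parent"}

omitted = targeted - explained
if omitted:
    print(f"Explanation shows {len(explained)} of {len(targeted)} "
          f"targeting attributes; omitted: {sorted(omitted)}")
```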
“The reason why you receive a particular ad is very complicated,” says Mislove; among many other factors, it involves a bidding process between advertisers. “We recognize that making an explanation is not a trivial thing,” he says. A minutely detailed explanation wouldn’t be useful to an individual user, and might be overwhelming. But while Facebook has taken a “step in the right direction,” showing just one vague reason is far from transparent, the researchers argue.
Since the study was published, Facebook has taken an important step to make targeting less invasive: it removed an option that let advertisers target users based on information provided by data brokers such as Acxiom and Experian. These brokers gave Facebook some of its most specific and sensitive data points, from how many credit lines a user has to their criminal record. In its ad explanations, Facebook wouldn’t tell users which broker-derived data point was used to target them, only that the information was provided by Experian or another firm.
The study also suggests, although does not conclusively prove, that the attribute Facebook chooses to show in its explanation is the most prevalent one (essentially, the broadest category). So even if an advertiser specified that it wanted to reach people who were between the ages of 18 and 35 and were interested in mortgage loans, the explanation would show only the first attribute, the broader and less sensitive one.
This is important because it “may allow malicious advertisers to easily obfuscate ad explanations from ad campaigns that are discriminatory or that target privacy-sensitive attributes,” the researchers write. If the findings hold for other advertisers, whoever places an ad can predict which explanation a user will see, Mislove said. So someone placing a discriminatory ad could presumably count on Facebook showing the user a generic category (lives in New York) rather than the prejudicial one (is not African-American) in the ad explanation, as the sketch below illustrates.
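Here is a minimal sketch of that selection rule, assuming, as the study suggests but does not prove, that the explanation surfaces whichever targeted attribute has the largest audience. The function and the audience figures are invented for illustration:

```python
# Hedged sketch of the rule the study hypothesizes: the explanation
# surfaces the most prevalent (broadest) targeted attribute.

def shown_attribute(targeting: dict) -> str:
    """Pick the targeted attribute with the largest estimated audience."""
    return max(targeting, key=targeting.get)

# The article's own example: a broad demographic paired with a narrower,
# more sensitive interest. Audience sizes are made-up placeholders.
campaign = {
    "age 18-35": 50_000_000,
    "interested in mortgage loans": 2_000_000,
}

print(shown_attribute(campaign))  # -> "age 18-35"

# If the rule is this predictable, a malicious advertiser could pair a
# discriminatory attribute with a deliberately broad one, confident that
# only the innocuous attribute would appear in the explanation.
```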
Facebook’s view
Facebook says it’s done user testing on ad explanations as recently as last year, and that people said they did not want overly detailed explanations. “In our research and testing, people have consistently told us they prefer a few reasons why an ad was delivered, so they can adjust their settings to better tailor the ads they see. We designed ‘Why am I seeing this?’ to do just that,” Matt Hural, a product manager at Facebook, said in a statement sent to Quartz.
Indeed, a small recent study on communicating the way algorithms work showed that people did not want to see explanations that reached a certain level of “creepiness” (for instance, they preferred to be told that they were seeing an ad because they were loyal to a brand, rather than exact purchases). However, they also said that they wanted to see specific and interpretable ad explanations.
Facebook also said that if the concern is obfuscation by malicious advertisers, it aims to catch such ads before they go up, during its review process: every ad must be approved to ensure it meets the platform’s policies.
The study shows that the second part of the ad explanation’s common format, “there may be other reasons,” is vaguely worded, and can also be flatly incorrect. Even when an advertiser (in this case, the researchers) did not specify a location, the explanation would say that a user’s location “may” have been the reason for targeting.
The Facebook spokesperson said the company was working to fix the location issue, but that the “may” construction is used because people might be seeing an ad for other reasons—products of Facebook’s complex advertising algorithm—than an advertiser’s targeting selections. Update 4/6: The company said it had fixed the location issue.
The political problem of algorithm opacity
Following election-related criticism of the opacity of the company’s advertising algorithms, Facebook has stepped up its transparency efforts. It is currently testing a feature in Canada that shows, in a separate tab, every ad a given page is running, political or not.
However, ProPublica examined the feature, and concluded that it was insufficient: “While the new approach makes ads more accessible, they’re only available temporarily, can be hard to find and can still mislead users about the advertiser’s identity, according to ProPublica’s review.”
As with many things Facebook, ad explanations are essentially uncharted territory. The advertising machine never had to explain itself in such a personalized way to an individual user. But bad actors have slipped in because of its virtually uncontrolled nature, and now companies like Facebook have to reveal some of their practices. The question is how much they are willing to show.