Apple’s Empty Grandstanding About Privacy

Apple CEO Tim Cook speaks during an event to announce new products Oct. 30 in the Brooklyn borough of New York. Bebeto Matthews/AP

The company enables the surveillance that supposedly offends its values.

“We at Apple believe that privacy is a fundamental human right,” Apple’s CEO, Tim Cook, said in a privacy-conference keynote last year in Brussels. “But we also recognize that not everyone sees things as we do.” Cook was making an impassioned plea to end the technology industry’s collection and sale of user data. “This is surveillance,” he continued. “And these stockpiles of personal data serve only to enrich the companies that collect them.” Cook called for a comprehensive U.S. data-privacy law focused on minimizing data collection, securing that data, and informing users about its nature and use.

The speech is worth revisiting in light of an emerging fight between Apple and Facebook. Earlier this week, TechCrunch reported that Facebook had been paying people, including teens 13 to 17 years old, to install a “research” app that extracted huge volumes of personal data from their iPhones—direct messages, photos, emails, and more. Facebook uses this information partly to improve its data profiles for advertisement, but also as a business-intelligence tool to help paint a picture of competitor behavior.

After the story broke, Facebook said it would shut down the iOS version of the program. That wasn’t enough for Apple, which canceled Facebook’s ability to distribute custom iPhone apps for internal use by Facebook employees. That might look like a severe punishment that will send a strong message to Facebook, and to other companies. But it’s mostly a slap on the wrist. It gives Apple moral cover while doing little to change the data economy the company claims to oppose.

Government regulation of the kind Cook called for in his Brussels speech is one way to improve personal-data privacy, and probably a necessary one. But to lean on policy as a prerequisite for action is to sidestep the moral quandary. If Apple really cared about personal data, the company could take any number of actions to keep privacy violators off its platforms and away from its customers. Until it does, it’s time to stop letting Apple off the hook as a more moral company than Google or Facebook. In fact, failing to take action while grandstanding about the urgency of doing so might make it even worse.

To distribute apps on the iPhone, developers pay Apple an annual fee in exchange for a “certificate” that authorizes the apps they create. The most common kind is used to list an app on the Apple App Store. Another allows an organization to make apps for internal use. A company might create a bespoke app for inventory tracking, for example; it wouldn’t make sense to release that to the public, so the company obtains an “enterprise” certificate and distributes the app within its own walls.

Facebook ran afoul of Apple because it used this system to distribute its data-collection app to consumers outside the company, which isn’t allowed. In response, Apple revoked Facebook’s enterprise developer certificate, saying distributing a data-collecting app to consumers “is a clear breach of their agreement with Apple.” Cheddar reporter Alex Heath called it “an aggressive flex on Apple’s part.”

But responses such as that one give Apple too much credit, in this case and in general. Apple didn’t take a position on Facebook’s creation of a paid “research” program to extract data from users. It enforced the terms of a licensing agreement; appearing to fight for user privacy is just a side effect. Apple is flexing its contract-law muscle, not its privacy muscle, and gaining a publicity win in the process. Crucially, Apple didn’t ban Facebook from the App Store or the iPhone platform: You can still download and use Messenger.

Facebook, for its part, maintains that the data-collection activity its Research app undertook was aboveboard and not at all duplicitous. Unlike in previous controversies about how Facebook shared user data with developers such as Cambridge Analytica and foreign governments, little about the research program was hidden. Users agreed to share extensive data from their devices—including emails, private messages, photos, and contacts—in exchange for a payment of $20 a month. If this controversy is a matter of contracts, then Facebook is making the same argument as Apple: We set up a legal agreement, which the parties entered into willingly and knowingly, and we’re sticking to our side of it. It’s just business.

Apple, too, has benefited from just doing business with the biggest privacy offenders in the tech sector. Despite Cook’s claim in Brussels that the “stockpiles of personal data serve only to enrich the companies that collect them,” Apple does lots of deals with those companies. Safari, the web browser that comes with every iPhone, is set up by default to route web searches through Google. For this privilege, Google reportedly paid Apple $9 billion in 2018, and as much as $12 billion this year. All those searches help funnel out enormous volumes of data on Apple’s users, from which Google extracts huge profits. Apple might not be directly responsible for the questionable use of that data by Google, but it facilitates the activity by making Google its default search engine, enriching itself substantially in the process.

The same could be said for the apps Apple distributes. Companies such as Google and Facebook get access to iPhone users by offering their apps—Messenger, Gmail, Google Maps, and so on—for download from the Apple App Store. Most cost consumers nothing, because they exist to trade software services, such as email or mapping, for data. That business model helped stimulate the data-privacy dystopia we now occupy.

It wasn’t so long ago that the kinds of certificate-signing arrangements at the heart of the latest dispute between Apple and Facebook didn’t exist at all. People and companies would make software and distribute it to customers, whether on disks or on CD-ROMs or online. The platform owners were uninvolved. Then the App Store came along, and with it app review, the process whereby Apple evaluates the fitness of each software program for its sales channel and platform. It can seem confusing and arbitrary, but the app-store model has overtaken digital software distribution. Now every software creator who wants access to the iPhone pays an annual tax to Apple, and all iPhone users are tied to the editorial choices Apple makes on their behalf.

The public might reasonably wonder what it has gotten in exchange for the loss of a direct line between software creators and customers. There are some clear security benefits, as malicious programs can get snared in review. But is that enough? If Apple really objected to data-hungry business models, it could take much more aggressive action during app review. Apple owns the platform and its tools. It is in the best position to enforce a set of values about data access and collection, if the company truly believes in them.

App review is just the start. Free services underwrite data-aggregation businesses by increasing their install bases. That’s changed consumers’ expectations—free apps are the norm, and charging even a dollar can dissuade downloads. That state of affairs encourages ad monetization, data brokering, and in-app purchases as alternative revenue models. Apple could regulate free apps more aggressively, which could encourage developers to charge for their apps instead. Done right, that could help turn the market away from data-driven business models, at least in part.

Apple could go further, too. Nothing requires Apple to distribute apps from data-hungry companies such as Google and Facebook at all. A truly aggressive flex would see Apple ban companies whose data-collection and usage practices are incompatible with its supposedly progressive position on the matter. A privacy-oriented rationale might even defend Apple against antitrust concerns in the services, such as digital maps, where it competes with those same companies.

Mapping software in particular exposes the impotence of Apple’s privacy posture. Recently, Gizmodo’s Kashmir Hill ran an experiment in which she blocked tech giants’ services at the network level, to see what it would be like to try to live without them. When Google’s turn came around, Hill discovered that the Uber and Lyft apps didn’t work because they rely on the Google Maps API. Location and activity data are particularly valuable these days, too. The Intercept recently reported that Google’s smart-cities division, Sidewalk Labs, is aggregating data extracted from users via apps and other services, then reselling it to urban planners.

Among users and developers alike, Google Maps is perceived to be better than the alternatives, including Apple’s. But there’s no reason that needs to be the case. Apple could invest more in its mapping services to make them more competitive, but it could also offer the third parties that use mapping services inside their apps better deals than Google does. If Apple wanted to put its (substantial) money where its mouth is, it could even subsidize developers’ use of its own mapping services, with the express purpose of reducing data leakage from location-oriented tools.

But Apple has done none of these things, and there’s no real indication that it ever will. (Apple didn’t respond to my request for comment.) Instead, the company makes public pronouncements about its values and wishes for data regulation. It tunes the operation of its own data-driven services, such as those offered in iCloud, but most of those are far less popular than their Google and Facebook competitors. (The same hypocrisy plagues Apple’s efforts to help iPhone owners use their devices less; instead of regulating or banning apps that produce obsessive activity, Apple offers a wet noodle of a tool for self-regulation, one that visualizes statistics about that use.)

Apple’s action this week did wreak some havoc inside Facebook, as employees of the social network found that the internal apps they use to check company bus schedules or chat with colleagues no longer worked. There’s something to be said for imposing the consequences of poor choices on tech workers themselves—after all, they are the ones who build services such as Facebook’s now infamous research app. But relying on workers to solve the woes of tech’s leaders is both insufficient and unreasonable. Like Cook’s appeal to regulatory action, it’s a cowardly move rather than a daring one. And it probably won’t change the way these businesses operate anyway. By happenstance, Facebook announced Q4 earnings the same day Apple banned its custom apps; the company beat its targets, and the stock soared 6.5 percent.

A while back, I wrote about Philippa Foot, the moral philosopher who introduced the thought experiment that has come to be known as the trolley problem. It turns out that the trolley was only a minor example in Foot’s original, 1967 paper. Its purpose wasn’t to prefigure robot cars, but to illustrate the difference between what prospective moral actors intend and what they can foresee. Foot concludes that there is a moral difference between what an agent does and what one allows. That’s why the trolley problem is so piquant as a quandary: not because it’s obviously better to let one person die instead of five, but because letting someone die is different from causing someone to die. For Foot, there is a substantial moral difference between taking an action to avoid injury and taking one to bring aid. People might not have a duty to help someone else. But they often do have a duty to refrain from harming them.

That’s one way to understand Apple’s empty message about privacy. Tim Cook talks a big game, but at the end of the day, his company is allowing the surveillance-capitalism atrocities it claims to oppose. It sometimes helps people find alternatives in its own services, but far more often, it fails to prevent its customers from being harmed by companies such as Google and Facebook—in part because it provides, endorses, and profits from their use.

No matter what it says, Apple is not a company committed to data privacy. It is a company that adopts considerably better policies than its more data-hungry competitors, but that does little to curtail the general problem. Meanwhile, Apple reaps huge profits selling the glass rectangles on which the more invasive apps run. On its own, Apple couldn’t end the data economy, which long predates smartphones. But if Apple really wanted to, it could offer a much more serious substitute, one that could bring about a whole different world of technological experience. The fact that the company claims to value that alternative while doing so little to bring it about is less righteous than just doing nothing.