The end of last week saw one of those tech stories that runs the cycle—from Twitter outrage to corporate chagrin—in less than 48 hours. Here’s what happened:
Late Thursday night, a nerd hero (in this case, Markus Persson, the developer of Minecraft) told the service on Twitter that he had cancelled his subscription and that the company should be ashamed of itself. (His words: “Please consider not being evil.”)
More corporate shaming from less high-profile nerds ensued Friday morning, as pundits more sympathetic to the needs of venture capital in turn condescended to users. By noon in California, it was all over: The company had clarified its policy and apologized for not communicating better. Fin.
Another privacy-policy scandal managed, another variant on corporate Big Brother avoided. (Though a Spotify-driven surveillance regime would look less like 1984’s Oceania and more like The Apple’s disco-dystopian West Berlin.) So privacy policies could nicely disappear from the tech news slate—at least, until the next scandal.
But beyond the cycle of discovery, outrage, and apologetic adjustment, there are deeper problems. The way lawyers, executives, and developers address user privacy just doesn’t work that well. Neither consumers nor corporations benefit from our current amend-then-freak-out regime.
That’s partly because, if we’re being honest, privacy policies are kinda boring. They’re the scaffolding of mundanity around new and shiny consumer software—just as no one reads the manual that comes with a new Cuisinart or Frigidaire.
This traps both consumers and companies in a cycle of bad faith. As the privacy consultant Jonathan Salem Baskin put it:
The fait accompli behind privacy practices is that businesses have the right to intrude massively into customers’ lives and, since the policies are legal agreements (often executed by nothing more than their tolerance), people have made consciously willing trades: their privacy for better playlists, or shoe ads.
This just isn’t true, since few people understand those transactions. It also violates every conceivable psychological or sociological model of how humans define and manage their privacy, which depend on two parties learning to trust and reveal information to one another over time.
“It doesn’t have to be like this,” Baskin wrote. “There’s nothing inherently wrong with customers sharing their personal information or behavior, nor with companies using it to improve their operations and, gasp, profiting from it.”
Right now, most consumer-tech companies notify users only when they make “material” changes to their privacy policies. (The last time the Federal Trade Commission issued guidance on privacy policies, in 2009, it recommended that process.) But that leaves it to tech companies to decide which changes are material and which are not—which sets up a potentially adversarial relationship between company and consumer.
* * *
If a regulatory change looks unlikely, though, there is a technical intervention that Apple and Google could make. To explain how, it’s worth getting into the nitty-gritty of the Spotify flap. One of the clauses that most worried users was the following:
Depending on the type of device that you use to interact with the Service and your settings, we may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g., Bluetooth).
The company says this access was needed for its new Running feature, announced in July, that monitored someone’s jogging pace and served them tunes appropriately. And when the company’s CEO, Daniel Ek, explained the new policy on Friday, he said that that type of data collection would always require a user’s permission:
We will never gather or use the location of your mobile device without your explicit permission. We would use it to help personalize recommendations or to keep you up to date about music trending in your area.
(Wired’s coverage, by the way, is revealing about how dissonant the tech industry’s thinking about privacy is right now: A feature described innocuously in Spotify’s “lovely” and “powerful” “hunt for the perfect playlist” on July 20 becomes, exactly a month later, something “eerie” that the user “can’t do squat about.”)
“The creepiness isn’t in using GPS for the running feature or whatever, it’s in asking for GPS access independent of a specific use,” wrote Nick Seaver, an anthropologist who studies music-recommendation systems. “In normal social interaction, you can tell someone where you are for a while without giving them ‘forever access’ to your location.”
To Seaver, this makes the case for “seamfulness” in design: Software should be asking for permission to use personal data more, not less. Currently, many designers aim for a “seamless” experience and take a set-it-and-forget-it approach to app privacy. The Facebook app on the iPhone, for instance, has to ask for permission the first time it tries to access the phone’s photos, camera, or GPS location—but once a user has given access, it never has to ask again. (“Seamlessness” as a goal resembles Facebook’s infamous “frictionless sharing.”)
This seamlessness is a software problem. On iOS, users can be specific about what kind of data an app can access: It can see their photos, for instance, but not their location. But once they’ve given it that access the first time, the app never asks again. A user has to go manually turn off permissions in the phone’s settings to restrict access. Android users, meanwhile, can’t even give apps permission with that level of granularity yet; that feature arrives in the next major update, Android 6.0 Marshmallow.
Imagine if, right before a run, Spotify asked for 60 minutes of access to your GPS location. If you still seemed on the move 55 minutes later, it would ask for another hour of access. That seems to me like a better trade: Not all the access, all the time, wherever; but access right now, for a little while, here. Apple or Google could encourage this practice simply by making that feature possible at the operating-system level. It would be more seamful, and it would be more trustworthy.
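To make the contrast concrete, here is a toy sketch of the two permission models—the persistent grant that iOS and Android actually use, and the expiring, renewable grant proposed above. Every name in it is hypothetical; no mobile operating system exposes an API like this today.

```python
import time


class LocationPermission:
    """Toy model of an app's location grant.

    duration_seconds=None mimics today's set-it-and-forget-it model:
    once granted, access persists until the user digs into Settings
    to revoke it. A finite duration models the "seamful" alternative:
    access right now, for a little while, here.
    """

    def __init__(self, duration_seconds=None, clock=time.monotonic):
        self._clock = clock  # injectable clock, so the example is testable
        self._granted_at = clock()
        self._duration = duration_seconds

    def is_active(self):
        if self._duration is None:
            return True  # persistent grant: never expires on its own
        return self._clock() - self._granted_at < self._duration

    def renew(self, duration_seconds):
        # The re-prompt Spotify could issue 55 minutes into a run.
        self._granted_at = self._clock()
        self._duration = duration_seconds


# Simulated clock, so the passage of time is deterministic.
now = [0.0]
clock = lambda: now[0]

persistent = LocationPermission(clock=clock)       # today's model
hourly = LocationPermission(60 * 60, clock=clock)  # the seamful model

now[0] = 2 * 60 * 60          # two hours pass
persistent.is_active()        # still True: no expiry, ever
hourly.is_active()            # False: the app must ask the user again
```

The design point is the injected expiry, not the bookkeeping: under the hourly model, silence from the user defaults to *no* access, which is the reverse of how app permissions default today.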
Such a technical advance would still require companies to communicate their privacy policies better to users—would require them to turn privacy policies from standalone, hedging, anxious land grabs into “living, breathing documents that represent a company’s culture,” as Koepke put it. But doing so would save both companies and users significant distress, taking us out of the cycle of wonder and shock.