2 Easy Steps to Improve Privacy Policies

The end of last week saw one of those tech stories that runs the cycle—from Twitter outrage to corporate chagrin—in less than 48 hours. Here’s what happened:

On Thursday, a major tech company (in this case, Spotify) debuted a new privacy policy. The policy permitted the software to access more information than seemed reasonable (in this case, a user’s photos, contacts, and GPS location), and people were upset. They had no easy way to opt out of the new contract: They either had to agree to it or leave the service.

Late that night, a nerd hero (in this case, Markus Persson, the developer of Minecraft) told the service on Twitter that he had cancelled his subscription and that the company should be ashamed of itself. (His words: “Please consider not being evil.”)

More corporate shaming from less high-profile nerds ensued Friday morning, as pundits more sympathetic to the needs of venture capital condescended to users in turn. By noon in California, it was all over: The company had clarified its policy and apologized for not communicating better. Fin.

Another privacy-policy scandal managed, another variant on corporate Big Brother avoided. (Though a Spotify-driven surveillance regime would look less like 1984’s Oceania and more like The Apple’s disco-dystopian West Berlin.) So privacy policies could nicely disappear from the tech news slate—at least, until the next scandal.

And there will be a next scandal. The agita over Spotify’s privacy policy resembled disputes just this year over other companies’ privacy policies—like Samsung’s and Uber’s—as well as the cyclical fretting over Facebook’s reach. These scandals have attained a degree of predictability: They are almost as formulaic as the legalese of the policies themselves.

But beyond the cycle of discovery, outrage, and apologetic adjustment, there are deeper problems. The way lawyers, executives, and developers address user privacy just doesn’t work that well. Neither consumers nor corporations benefit from our current amend-then-freak-out regime.

That’s partly because, if we’re being honest, privacy policies are kinda boring. They’re how new and shiny consumer software gets scaffolded in mundanity—just as no one reads the manual that comes with a new Cuisinart or Frigidaire.

The upshot is that no one wants to think about privacy policies except during a crisis—and corporations want to avoid a crisis. So when they do have to amend their policies, companies are reluctant to provide too much information lest they initiate a negative PR cycle. But that reluctance leads them to state their privacy policy expansively or in forbidding legalese—and then they invite that same crisis, as users interpret the policy in the worst way.

This traps both consumers and companies in a cycle of bad faith. As the privacy consultant Jonathan Salem Baskin put it:

The fait accompli behind privacy practices is that businesses have the right to intrude massively into customers’ lives and, since the policies are legal agreements (often executed by nothing more than their tolerance), people have made consciously willing trades: their privacy for better playlists, or shoe ads.

This just isn’t true, since few people understand those transactions. It also violates every conceivable psychological or sociological model of how humans define and manage their privacy, which depend on two parties learning to trust and reveal information to one another over time.

“It doesn’t have to be like this,” Baskin wrote. “There’s nothing inherently wrong with customers sharing their personal information or behavior, nor with companies using it to improve their operations and, gasp, profiting from it.”

The situation could also be improved with two specific adjustments, one legal and one technical. Legally, the tech-policy writer Logan Koepke (who, full disclosure, is a friend of mine) advocates that companies should announce a new privacy policy whenever it changes.

Right now, most consumer-tech companies announce changes only when they deem them “material.” (The last time the Federal Trade Commission issued guidance on privacy policies, in 2009, it recommended that process.) But that means it’s up to tech companies to decide which changes are material and which are not—which sets up a potentially adversarial relationship between company and consumer.

If companies instead notified users of every change to their privacy policy, they would be more skittish about making any change, Koepke believes, and more likely to frame the changes they do make in more public-facing language. The benefits would then accumulate over time: Once a more user-respecting privacy regime is mandatory, companies would compete on privacy.

Medium has since changed its privacy policy accordingly, and it now notifies users whenever it makes a change to its policies. The FTC should encourage other companies to do the same.

* * *

If a regulatory change looks unlikely, though, there is a technical intervention that Apple and Google could make. To explain how, it’s worth getting into the nitty-gritty of the Spotify flap. One of the clauses that most worried users was the following:

Depending on the type of device that you use to interact with the Service and your settings, we may also collect information about your location based on, for example, your phone’s GPS location or other forms of locating mobile devices (e.g., Bluetooth).

The company says this access was needed for its new Running feature, announced in July, that monitored someone’s jogging pace and served them tunes appropriately. And when the company’s CEO, Daniel Ek, explained the new policy on Friday, he said that that type of data collection would always require a user’s permission:

We will never gather or use the location of your mobile device without your explicit permission. We would use it to help personalize recommendations or to keep you up to date about music trending in your area.

(Wired’s coverage, by the way, is revealing about how dissonant the tech industry’s thinking about privacy is right now: A feature described innocuously in Spotify’s “lovely” and “powerful” “hunt for the perfect playlist” on July 20 becomes, exactly a month later, something “eerie” that the user “can’t do squat about.”)

On Twitter, Nick Seaver, an incoming anthropology professor at Tufts University, pointed out that the social costs of Spotify’s privacy policy emerged from social and technical limitations.

“The creepiness isn’t in using GPS for the running feature or whatever, it’s in asking for GPS access independent of a specific use,” he wrote. “In normal social interaction, you can tell someone where you are for a while without giving them ‘forever access’ to your location.”

To Seaver, this makes the case for “seamfulness” in design: Software should be asking for permission to use personal data more, not less. Currently, many designers aim for a “seamless” experience and take a set-it-and-forget-it approach to app privacy. The Facebook app on the iPhone, for instance, has to ask for permission the first time it tries to access the phone’s photos, camera, or GPS location—but once a user has given access, it never has to ask again. (“Seamlessness” as a goal resembles Facebook’s infamous “frictionless sharing.”)

This seamlessness is a software problem. On iOS, users can be specific about what kind of data an app can access: It can see their photos, for instance, but not their location. But once they’ve given it that access the first time, the app never asks again. A user has to go manually turn off permissions in the phone’s settings to restrict access. Android users, meanwhile, can’t even give apps permission with that level of granularity yet. It will be a feature in the next major update.

Imagine if, right before a run, Spotify asked for 60 minutes of access to your GPS location. If you still seemed on the move 55 minutes later, it would ask for another hour of access. That seems to me like a better trade: Not all the access, all the time, wherever; but access right now, for a little while, here. Apple or Google could encourage this practice simply by making that feature possible at the operating-system level. It would be more seamful, and it would be more trustworthy.
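This expiring-grant idea can be sketched in a few lines of code. The following is a minimal, hypothetical Python model of how an operating system might represent such a time-limited permission—every name here is invented for illustration; no current mobile OS exposes an API like this:

```python
import time


class PermissionGrant:
    """A hypothetical time-limited permission, e.g. 'GPS access for 60 minutes'.

    The clock is injectable (defaults to time.time) so the expiry logic
    can be exercised without waiting in real time.
    """

    def __init__(self, resource, duration_s, now=time.time):
        self.resource = resource        # e.g. "gps", "photos", "contacts"
        self._now = now
        self.expires_at = now() + duration_s

    def is_valid(self):
        # Access is allowed only while the grant has not expired.
        return self._now() < self.expires_at

    def extend(self, duration_s):
        # Re-asking the user ("still running? another hour?") maps to this.
        self.expires_at = self._now() + duration_s


# An app would check the grant before each location read, and the OS would
# re-prompt the user once is_valid() turns False.
grant = PermissionGrant("gps", duration_s=60 * 60)
print(grant.is_valid())  # True immediately after the user says yes
```

The design choice worth noting is that expiry is the default: the app never holds “forever access,” and continued use requires a fresh, explicit yes from the user—exactly the “access right now, for a little while, here” trade described above.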

Such a technical advance would still require companies to communicate their privacy policies better to users—would require them to turn privacy policies from standalone, hedging, anxious land grabs into “living, breathing documents that represent a company’s culture,” as Koepke put it. But doing so would save both companies and users significant distress, taking us out of the cycle of wonder and shock.
