The Strange Way People Perceive Privacy Online


Internet users make weaker passwords when government surveillance is on their minds.

If you knew someone was rifling through your things at work, or suspiciously idling near your house every night, you might go to extra lengths to secure yourself. Maybe you’d start locking important documents away in a cabinet, or invest in a home alarm system. And if you knew someone was trying to access your email accounts, you’d probably think to strengthen your password.

That’s what Stanislav Mamonov, a business-school professor at Montclair State University, assumed at the outset of an experiment whose results were published earlier this year. Mamonov recruited more than 400 people to complete surveys that asked them a few questions about their attitudes toward online privacy, and required them to secure their answers with a password.

Before answering the questions, about half of survey-takers read four news stories that had something to do with government. The other half read four articles specifically about government surveillance. Mamonov expected the latter group would be primed to make more complex and secure passwords, because the specter of government snooping would be on their minds.

But the opposite was true. The participants who had just read about electronic spying created significantly worse passwords than the others.

The reason, Mamonov thinks, has a lot to do with people’s perceptions of surveillance. He guessed study participants would have wanted to protect themselves against it; instead, he says, the magnitude of the threat seems to have instilled a sense of helplessness that made them less likely to put effort into securing themselves.

Most of Mamonov’s research focuses on the role of perception in the online world. He’s also spent a lot of time considering the role of perceived norms in people’s relationships with the social media platforms they use every day—and whether those perceived norms are, in fact, more important than the actual rules that govern use of the sites. I spoke with Mamonov about his research; a transcript of our conversation, lightly edited for clarity and concision, follows.


Kaveh Waddell: Why are perceptions so important in the online world?

Stanislav Mamonov: I don’t think it’s just in the online world; it’s in the offline world, too.

Let me give you a little bit of background on this story. When I started this research, Facebook, which was gaining popularity, was already facing four class-action lawsuits related to how it was appropriating information rights. Then, I came across a very interesting bit of research from an organizational context: something called psychological contract theory. It effectively said that if we want to understand what happens in an employment context, legal contracts really don’t tell us much about who’s going to stay and how hard people are going to work.

Drawing on that theory, I started developing a theory of psychological contracts in information exchanges, and thinking about what implicit perceived obligations people have in relation to technology providers whenever they share information through technology-mediated channels.

Facebook is obviously a poster child for these problems, but it’s also a very successful company. And there are many others—TripAdvisor, Yelp: They all monetize their user-shared content. So I thought it would be neat to examine the user expectations that, when breached, might affect the sustainability of a business model that relies on continued user information sharing.

Waddell: You found, essentially, that it’s more important to a user when their perceived contract with a company is violated than when the actual legal contract that they entered into with that company is violated. People know that they’re accepting terms of service—why is it that this perceived contract holds so much more importance?

Mamonov: I believe that the roots of this come from evolution: People respond emotionally to things they perceive as unfair or as infringing on their dignity or their person. Once the emotions get triggered—whether it’s disgust, anger, distrust—they have virtually instant consequences.

Waddell: If you were Facebook, what might you take away from this research?

Mamonov: I think if there’s anything to take away, it’s about managing user expectations in relation to information that’s been shared. Facebook is known for crossing the line: They routinely use things that people post online for social ads, and in some cases that can cause problems, especially when those ads misinterpret what people post. So user education and erring on the side of caution would be my recommendations.

Waddell: You found that even if a user’s trust is broken but a company says, “Oh, but you signed away your right to that information,” or “We were within our rights to use it in the way we did,” it doesn’t seem to matter much. Do you get the sense that people feel like a company’s being sneaky if they slide that permission into a long terms-of-service contract or in fine print somewhere?

Mamonov: Researchers have done several studies showing that effectively nobody reads the terms-of-use contracts that come with online services. Even people who say that they’ve read the contract can virtually never answer basic questions about its content.

So that’s the reality. And even though those actions might be defensible in a court of law, it doesn’t necessarily mean that they will sustain the user activity that’s critical for the company’s success.

Waddell: Does it matter if a data-gathering feature is opt-in rather than opt-out?

Mamonov: I have not done research in that area, but I would certainly expect that it does. There’s a basic perceived-control variable that has been shown to be extremely important to how users perceive subsequent actions. If you give users a sense of control, then they’re much less likely to react negatively to your subsequent actions.

Waddell: You’ve examined these psychological contracts in the context of social networking and mobile usage. Where else do you expect they exist online?

Mamonov: That was actually the original intention of my work: to develop a framework that would have applications across different contexts. Irrespective of where people share information, I would expect that user concerns about privacy and intellectual property rights are present.

Let me give you an example: Say two people meet, and one of them shares a business idea that he or she is working on. The other party goes ahead and starts that business, stealing the idea; ideas have no legal protections. If the two were friends, the person whose idea was stolen would likely feel violated.

That’s the expectation of the general framework. Wherever people share information, unless they’re explicitly sharing and waiving rights, they will always be expecting privacy and intellectual property rights.

Waddell: So that could be on Facebook or Twitter—how about email and text messaging? Any online communications platform?

Mamonov: That’s my suspicion, but the work that I’ve done so far has focused on social networking sites and mobile services. The next area of interest for me is the Internet of Things. I’m specifically interested in self-driving cars: In the two accidents that Tesla has had, the company has been able to reconstruct the events very precisely, which tells us that they’re collecting all the information that’s available. And I just saw in the news that Amazon Echo is now going to speak at random, without being asked for information. We don’t yet know the full scope of what the company wants to do with the device.

Waddell: So you see a potential for violations of psychological contracts in both of those situations?

Mamonov: Yes. I expect the device to respond when I ask, and not necessarily respond or collect information if I don’t ask for it.

Waddell: Some of your more recent work also touches on perceptions of government surveillance, and how that affects behavior online. One particularly surprising finding from a recent study was that people kind of “give up” when they’re reminded about government surveillance: When they’re primed with that information, they create weaker passwords. Why are they doing that?

Mamonov: That was a very unexpected, surprising finding for us, as well. Let me tell you a little about the background story: What we were actually working on was a study of how to get people to specify stronger passwords. The common method is automated screening of passwords. But people can get around that: They can specify “password1” or “password123,” which complies with the technical requirements of having letters and numbers but is ultimately an easy-to-defeat password.
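To see why that kind of screening is easy to defeat, consider a minimal sketch in Python. The policy rule and the tiny word list here are illustrative assumptions, not any real system’s requirements: a letters-plus-numbers check accepts “password123,” while even a trivial dictionary check rejects it.

```python
import re

# Toy word list, purely illustrative; real screeners use far larger
# corpora of common and previously breached passwords.
COMMON_WORDS = {"password", "qwerty", "letmein", "welcome"}

def passes_naive_policy(pw: str) -> bool:
    """The kind of screen described above: minimum length, letters, digits."""
    return (len(pw) >= 8
            and re.search(r"[a-zA-Z]", pw) is not None
            and re.search(r"\d", pw) is not None)

def passes_dictionary_check(pw: str) -> bool:
    """Reject a common word that is merely padded with trailing digits."""
    stripped = re.sub(r"\d+$", "", pw.lower())  # drop trailing digits
    return stripped not in COMMON_WORDS

for candidate in ["password123", "tr0ub4dor&3"]:
    print(candidate,
          "| naive policy:", passes_naive_policy(candidate),
          "| dictionary check:", passes_dictionary_check(candidate))
# password123 | naive policy: True | dictionary check: False
# tr0ub4dor&3 | naive policy: True | dictionary check: True
```

Production screeners layer on entropy estimates and breach-corpus lookups, but the underlying gap is the same: complexity rules measure a password’s form, not its guessability.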

Having done work on the psychological aspects of human-computer interaction, I thought maybe we could modify people’s behavior by exposing them to stimuli related to security and privacy threats. We did a series of studies where we exposed people to stories of corporate email break-ins. We didn’t tell people what the actual object of the study was, and we started asking them whether they’d ever driven drunk, whether they’d ever taken drugs, what their religious affiliations are—and offered to protect their responses with a password, which was the key dependent variable in the study.

Waddell: So it’s sensitive information. They have an incentive to protect it.

Mamonov: That’s a very interesting question, too: the relationship between the strength of a password and the level of information exposure. But in our study we did not find such a relationship; we did not see that people who made stronger passwords disclosed more.

But both were affected independently by the primes we used. If we showed people stories about corporate email hacks, they made stronger passwords and disclosed less than if we just gave them general company-related stories.

It was also during our research that the Edward Snowden story first came out, so I decided to use some primes related to government surveillance. I thought, well, maybe this is another way we can incentivize people to use stronger passwords. And then we saw the results, which we replicated: If we expose people to government surveillance stories, they make weaker passwords, not stronger.

To address this question of why people might be responding this way, I did a study, which hasn’t been published yet, in which I looked at what types of emotions were triggered by exposure to these stories, because we couldn’t find a rational explanation. Of the more than 20 emotions we assessed, the only one affected by exposure to government surveillance was the feeling of helplessness.

That can be understood quite well in the context of learned-helplessness research that dates back to the Seventies: People who are repeatedly exposed to uncontrollable stimuli learn that they can do nothing, and then, even once the stimulus is removed, they don’t try to do anything. This was the famous shock-the-dog experiment.

Waddell: Why was there a difference in behavior between when people saw a corporate-hack story and when people read about government surveillance?

Mamonov: That is a very good question. We would need to go back and look at the emotions that were triggered in relation to the corporate story. I would expect that they do not make people feel helpless. It would be quite easy to do, and once I find the time, I’ll do it.
