People of all ages care deeply about privacy. And they care just as much about privacy online as they do offline. But what privacy means may not be what you think.
Fundamentally, privacy is about having control over how information flows. It’s about being able to understand the social setting in order to behave appropriately. To do so, people must trust their interpretation of the context, including the people in the room and the architecture that defines the setting. When they feel as though control has been taken away from them or when they lack the control they need to do the right thing, they scream privacy foul.
To get at the challenges around privacy, let’s consider a recent privacy FAIL: Google Buzz. What the outrage around Google Buzz showed us is that people care deeply about privacy and control. Don’t get me wrong – plenty of people will use the service and it will be extremely popular, but this doesn’t mean Google didn’t screw up. They’re taking a hit in terms of trust, because not everyone benefited from what they did.
For the uninitiated, Google introduced a new service called Buzz that is basically a stream (à la Twitter or Facebook’s Feed) with content populated by the people that an individual chooses to follow. The service is situated within Gmail, requiring users to access it via the Gmail interface. When Buzz first launched, users were invited to check it out on the way into Gmail. If they agreed, they were prompted for information that would result in the creation of a publicly accessible profile, if they didn’t already have one. They were then shown a popup of users that Buzz calculated they’d most like to follow. While any suggested user could be unchecked, all of them were checked by default, so clicking through meant automatically following those people. The defaults also meant that a user’s list of followees would be listed on their publicly accessible profile, even though there was an option to uncheck this. Likewise, if the user used public features of other Google products – such as Reader – these too would be integrated into the user’s public profile, even though there was always a way to disconnect these services.
Nothing that the Buzz team did was technologically wrong. There were all sorts of opt-outs available – opt out of Buzz, opt out of the default lists, opt out of displaying the lists, etc. Yet, the service resulted in a PR disaster. Why? I’d argue that Google made a series of non-technical mistakes that resulted in a disruption of social expectations. While it’s easy to blame the users since the technology was fine, I think it’s important to deconstruct cases like this to understand what went wrong and what it tells us about privacy.
First, Google got themselves into trouble by launching a public-facing service inside a service that people understand as extremely private. Gmail seems like a logical integration point because people visit it regularly, but juxtaposing the two services created a cognitive disconnect in users’ minds. The result? Confused users believed that their emails were being made publicly accessible. While this was never the case, the integration gave people the wrong impression about the service, creating unnecessary panic among users and bad PR for Google – all rooted in a technical misunderstanding.
Second, Google assumed that people would opt out of Buzz if they didn’t want to participate. I’m going to give them the benefit of the doubt on this one, because a more insidious framing would be to say that they wanted to force people into opting in, since this makes the service more viral and more monetizable. While I’m trying not to let conspiracy theories cloud my analysis, I can’t help but notice that more and more companies are opting people in and waiting until they flip out to adjust privacy settings.
Many users jumped into Buzz to check it out, clicking through the various pages just to see what it was all about. They didn’t realize they had made their content public; they didn’t realize who they had connected to. They didn’t yet know the service. We know that most users accept the defaults, especially when they’re trying to log in to see what something new is all about. And we know that the defaults matter. When a user doesn’t know the value proposition, they’re going to just say yes.
But once you understand something, having to undo the setup is tricky. It’s easier to flip out. Many users were extremely confused, uncertain of what opting out would mean, especially since it was located in Gmail. I spoke with a few who were afraid that opting out would mean canceling their Gmail account. That, needless to say, made them even more worried.
While you want your services to go viral, help users walk through the value proposition first. Not through a video, but through an experience. Walk them through the steps to build out their network, inviting them to join you on this journey and helping them understand what they’ll get by doing it. Often, it’s easier to start with a blank page that shows an artificial experience and then invite them to replace the artificial content with content from people they know.
Another issue is that Google foolishly told users what they wanted rather than asking them. As technologists, it’s easy to assume that optimizing a situation is always best. Yet, this tends to break necessary social rituals that help acquaint people with a particular social setting. We don’t go through the niceties of “Hi, How are you?” because it’s optimal for communication; we do it because to do otherwise is rude. In digital worlds, people need to be eased into a situation, to understand how to make sense of the setting.
Years ago, a group of engineers realized that people frequently posted “A/S/L?” in chatrooms to elicit age, sex, and location. They noticed that most people responded to this query with information like 32/F/Austin. They thought they’d make people’s lives easier by inviting them to fill out a profile that included age, sex, and location. What they failed to realize was that A/S/L? wasn’t simply about information solicitation; it was an icebreaker. When I respond with 32/F/Austin, it’s entirely appropriate for you to ask, “Oh, do you happen to be at SXSW?” But there’s a big difference between this line of inquiry and you looking at my profile and saying, “So, I noticed you’re in Austin; do you happen to be at SXSW?” The latter feels really sketchy and my immediate thought is: “what are you doing looking at my profile?”
The norm on many sites at this point is to invite users to share their Twitter or Facebook account or to upload their contacts so as to populate their network. There is no doubt that Google has tremendous information about its users’ networks. But instead of asking new Buzz users whether they wanted to see which of the people they know on Google services might be using Buzz, Google pre-populated a list and provided it to them as their default list of friends. This made people feel downright creeped out.
This dynamic connects to my fourth issue: Google found the social equivalent of the uncanny valley. Graphics and AI folks know how eerie it is when an artificial human looks almost right but not quite. When Google gave people a list of the people they expected them to know, they were VERY close. This makes sense – they have lots of data about many users. But it wasn’t quite perfect.
To understand this, you need to know that there’s a difference between what sociologists understand as “personal social networks” and the two kinds of networks known to technologists: “behavioral social networks” and “articulated social networks.”
Articulated social networks are the lists of people that you indicate that you know, either privately (like in your addressbook) or publicly (like on Facebook). Behavioral social networks are the networks of people that you regularly communicate with or share space with, the kinds of networks that you can discern from email exchanges or mobile phone records. All of our theories about social networks – weak and strong ties, homophily, etc. – stem from studies of personal networks. While there’s a lot we don’t know about behavioral and articulated networks, we do know that they are NOT the same as personal networks. Google collapsed behavioral and articulated social networks and presented them in a way that indicated that they might be one’s personal network. And for many users, this wasn’t quite right. You may talk to your ex-husband frequently via email, but that doesn’t mean that you want to follow him on Buzz.
Finally, Google assumed that people wanted different pieces of public content integrated together. This causes two problems. First, just because people talk to certain people in one context doesn’t mean that they want to talk with them elsewhere. As Helen Nissenbaum has argued, “contextual integrity” is necessary for people to effectively manage privacy. Dismantling contextual integrity is experienced as a violation of privacy. And second, just because something is publicly accessible doesn’t mean people want it to be publicized. We’ll come back to this one in a second.
As usual, The Onion hit the nail on the head with its satirical article, “Google Responds to Privacy Concerns with Unsettlingly Specific Apology.” Just because people trusted Google with information about themselves doesn’t mean that they want it used in unexpected ways.
Some will tell you that privacy, for all intents and purposes, is dead – long live publicity. I still disagree.