What I’ve Learned about Privacy, Data & Digital Tools



Long before I got involved with the digital world, I learned an important lesson about privacy and anonymity, and about the kinds of security that are truly meaningful. This lesson had nothing to do with computers or the internet, but what I learned back then holds some valuable insights for the work I do today in digital security.

How my understanding of information security evolved.

In 1987, I got tested for HIV for the first time. I was a gay man coming of age in San Francisco during the early years of the AIDS crisis. The mainstream press was full of panic, talking about plague and containment. The political class were demonising gay men and suggesting putting them all in concentration camps. As a result, no one would get tested for the virus if it meant having their identity revealed in any way.

When I went to get tested, my blood was taken and I was handed a number. I was never asked to divulge my name, address, phone number, or anything that might reveal my identity. When I returned a few nervous weeks later, I provided my number and was given my results. I threw away my slip of paper, and my HIV status never got connected to my identity. This was well before the widespread use of the Internet, or even of computers to hold patient records. The likelihood that my confidentiality would be broken was very, very low.

Very soon after that, the world of data management changed. PCs became cheaper and found their way into every type of organization. We began to see the rise of the database. Personal data was already commonplace on the personal computer when I began working with activists in the early 1990s. Largely, though, these computers weren’t connected. By the end of the decade, however, a new type of database came into widespread use that was probably the biggest threat yet to digital privacy: the relational database. And these databases were connected through networks. Now you could combine data sets, and lots of information could be cross-referenced.

For the community organisers that I worked with in the US, this was an enormous leap forward in their ability to track and keep information about their constituents. You could actually match addresses to data on political districts and easily inform people who their elected officials were. This was huge. When these databases went online, that was an even bigger plus: you could now access that information ANYWHERE you had a computer connection. So if you had organised a group of people to go to the capitol and meet with their legislators, you could easily make sure each person met with their own representative.
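To make the mechanics concrete, here is a minimal sketch in Python using sqlite3, with entirely made-up table and column names. The same single join that matches a constituent's address to their representative is what lets unrelated data sets be combined to reveal far more about a person than either set does on its own.

```python
import sqlite3

# A sketch of the cross-referencing a relational database makes trivial.
# Table and column names are hypothetical, purely for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE constituents (name TEXT, address TEXT, district TEXT);
    CREATE TABLE districts (district TEXT, representative TEXT);
    INSERT INTO constituents VALUES ('A. Lee', '12 Oak St', 'D-5');
    INSERT INTO districts VALUES ('D-5', 'Rep. Jane Doe');
""")

# One join is enough to connect a person's address to their elected official;
# the same operation can just as easily connect a person to any other data set
# that shares an identifier with this one.
for name, address, representative in con.execute("""
    SELECT c.name, c.address, d.representative
    FROM constituents c JOIN districts d ON c.district = d.district
"""):
    print(name, address, representative)
```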

Our world of increasingly accessible personal data.

Online relational databases fueled the rise of social networks, and we’ve been completely captured by the convenience of social communication, keeping in touch with friends and family on a day-to-day basis.

Now privacy has gone to the other extreme, and our personal data is seemingly accessible to ANYBODY. It turns out that the companies who encouraged us to give up our personal information are hard pressed to protect it. We’ve given up our privacy for convenience, but in return we have become the product for these companies.

We have also seen the rise of complicated EULAs (End User License Agreements) that pop up when we go to install new software or sign up to use an online service. They are too burdensome to read, written in impenetrable legalese, and are presented to us when we most want instant gratification to start using something. We summarily click “Agree” because we want access to functionality, quickly moving past the several densely worded pages telling us that everything we put into the app will be the property of the company.

In the last year we’ve also learned how the NSA and GCHQ have found ways to get at our personal data and are actively accessing it. We’re also constantly learning about new vulnerabilities such as Heartbleed, which left usernames and passwords exposed for anyone to harvest and use.

We’ve also learned more about the values of the companies we’ve entrusted our data with, as Dropbox has appointed Condoleezza Rice to its board. It doesn’t seem to occur to Dropbox that putting someone on its board who has been an active proponent of mass surveillance might be a conflict of interest. So there you go: Condoleezza Rice is on the board of a company that has become synonymous with cloud storage. Values outed. Who knew?


The Drop Dropbox Campaign

We are now at a point where our information is out in the open. We are sending data from our computers and mobiles. Everything is being logged and recorded at some point along its travel from one place to another. And it’s seemingly accessible to anyone, all the time. And now we’ve seen the rise of the personal data economy.

Is the answer encryption and circumvention tools?

Well, maybe, in some cases, especially when risk is high and you need to use the Internet. The problem with digital security tools is that they are often cumbersome and difficult to use. Not only that: digital security is an arms race, as governments are constantly trying to find ways to exploit these tools. Also, the use of encryption and circumvention tools often arouses suspicion, and not just in repressive countries like Iran, but also in places like the United States. The NSA was actively looking for people using these tools as a way to identify possible ‘enemies of the state’.

So should we just stop using computers and revert to old-fashioned paper notebooks?

In doing some research for Article 19 on online security and Iran, I ran across a story about a notebook. In the 1990s, a woman working with a Dutch NGO travelled to Iran to meet with various activists in the country. She was trying to establish a network of people working on the inside that the NGO could support from the outside. She carried a paper notebook with her from meeting to meeting and wrote down the names and contact information of all the activists she met. On her way out of the country she was detained and the notebook was confiscated. In that moment she had put every single person she met with at risk AND closed the door on any chance anyone would work with that NGO in the future.

So no, it’s not just about using a notebook. What it is about is understanding the power of the information you are sharing. It’s about being aware of the value of the information, the risk if other people access it, and the likelihood of that threat occurring.

It’s about the information itself.

Don’t start by thinking about technology at all. Start by asking how the information you hold makes you, or people in your network, vulnerable. How is it valuable? To whom is it valuable? Who will be put at risk if someone accesses it? Then think about the repercussions of the wrong people getting hold of it.
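One way to keep yourself honest about these questions is to write the answers down before anything goes online. Here is a minimal, hypothetical worksheet sketched in Python; the fields and the simple likelihood-times-impact score are illustrative assumptions, not a formal methodology.

```python
from dataclasses import dataclass

# A hypothetical worksheet for the questions above. Field names and scoring
# are assumptions made for illustration only.
@dataclass
class InfoAsset:
    description: str      # what the information is
    valuable_to: str      # who would want it
    who_is_at_risk: str   # who gets hurt if it leaks
    likelihood: int       # 1 (unlikely) to 5 (near certain)
    impact: int           # 1 (nuisance) to 5 (severe harm)

    def priority(self) -> int:
        # A rough likelihood-times-impact score to decide what to protect first.
        return self.likelihood * self.impact

contact_list = InfoAsset(
    description="names and phone numbers of activists we work with",
    valuable_to="local authorities, hostile employers",
    who_is_at_risk="the activists themselves, not just the organisation",
    likelihood=3,
    impact=5,
)
print(contact_list.priority())  # 15: protect this before lower-scoring assets
```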

Often the key to this is thinking clearly about identifiers. Things like credit card numbers, addresses, mobile phone numbers and email addresses can quickly point to identity. In that same research I did for Article 19, I spoke to an Iranian activist who had fled Iran after a scare with the authorities. While she lived in Iran, she was involved in an activist network operating inside the country, and at the same time she was connected to an international network. She had established two email addresses and used one for each network.

The in-country network communicated via email for nearly two years before the authorities moved in and arrested them. The activist was shocked to find that the authorities had been monitoring the in-country network’s emails the entire time, and knew every detail of what had been communicated. Yet the authorities had no knowledge of her connections to the international network. Though the one email address had been enough of an identifier to match her to some very incriminating evidence, she had used a separate email address for the other network and had been careful not to connect any information to it that would have identified her, so she was eventually released with just a warning. She fled Iran soon after.


EHRN’s Women Against Violence Campaign

There are also random pieces of information that can be pieced together to reveal identity. Recently, working with the Eurasian Harm Reduction Network on a project about reducing police violence against women who use drugs, we realised that the data we were collecting, which puts a time and place to an act of violence, is enough information for the perpetrator to identify their victim. Having that awareness of who might want to get hold of the information, and what the repercussions might be, is undoubtedly the best defence. This has also led us to rethink the strategy of data collection. We don’t so much need data that reveals information about the victim; we actually need data that reveals the activities of the police.
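In practice, that rethink can mean deciding at collection time which fields to keep, which to coarsen, and which never to store. The sketch below, in Python with hypothetical field names, is one way such a rule might look: the police-activity fields are kept as-is, while the time and place are generalised so they no longer pinpoint a single victim.

```python
from datetime import datetime

# A rough sketch of data minimisation for an incident report. Field names
# and the choice of granularity (month, city) are illustrative assumptions.
def minimise_incident(report: dict) -> dict:
    when = datetime.fromisoformat(report["timestamp"])
    return {
        # Police-activity fields we actually need for advocacy
        "violence_type": report["violence_type"],
        "police_unit": report["police_unit"],
        # Identifying fields, coarsened: month and city support pattern
        # analysis without pointing to one victim.
        "month": when.strftime("%Y-%m"),
        "city": report["city"],
        # Exact time, street address and victim details are never stored.
    }

raw = {
    "timestamp": "2014-03-14T23:40:00",
    "violence_type": "unlawful detention",
    "police_unit": "district patrol",
    "city": "Vilnius",
    "street_address": "not retained",
}
print(minimise_incident(raw))
```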

There are lots of positive reasons to use the internet and social networking in advocacy and activism, but we need to keep an eye on the information itself and think through the risks we are taking when we make it accessible. I’ve been a huge fan of the Electronic Frontier Foundation’s Risk Management Guide, as it gives a practical, non-technical framework for thinking about securing information. By assessing risks and threats before putting information online, you can have a lot more security than by simply using digital security tools ALL THE TIME.

Back in the ’80s, someone in the medical community understood what the risks and implications were for people who got tested, and came up with a system to protect the confidentiality of the test results. This provided assurance and resulted in people seeking an HIV test while getting around the stigma that their results might have raised. In that example from nearly three decades ago, the security system was pretty low-tech, but the security thinking was creative and highly practical. Now that everything is high-tech, we need to be even more vigilant about how our medical information, and even our DNA, is digitized. Somehow we’ve gotten distracted by the new digital tools and forgotten to focus on the information itself, and on protecting that information based on our own capacities.

I’d be very interested in hearing about your own creative efforts to address information security. I’d also like to add to the resources below: who are your privacy heroes?

Here are some resources to help you think about keeping information safe:

Special thanks to Ric Mallamo for his help, guidance and copy editing skills!