This is an ongoing draft attempting to document the challenges high-risk users face.
I have a Patreon, here, where you can subscribe to support my security- and systems-focused writing. You sign up for a fixed amount per essay (with an optional monthly cap), and you'll be notified every time I publish something new. At higher support levels, you'll get early access, in-depth answers to your questions, and even more general consulting time.
© 2021 Eleanor Saitta.
Having empathy with people unlike oneself is hard — especially when you're trying to understand the world from their perspective well enough that the design choices you make will serve them well. Nowhere is this more true, or the stakes higher, than in the design of security systems. I've talked about shifting our thinking in security from a focus on assurance to a focus on outcomes, and empathy with the user and an understanding of what they're trying to do is a key part of that.
In this essay, I'm going to present a set of use cases, or user outcome scenarios. I'm going to try to make them as human as possible — this by @SwiftOnSecurity is an amazing example of the genre — but I'm going to look at some slightly more specific cases and put a bit more emphasis on how actual technical countermeasures may be used by real users. I'm also concentrating somewhat more on specifically-targeted users than she did. For some great thinking on how one understands a scenario like this and moves toward applying it practically, this piece from Andie Nordgren at Alibis for Interaction, on moving from user focus to participation design, is really excellent. I'm going to focus mostly on small adversaries here because, as Quinn Norton states in her talk on them, they're much more common, often much more practically dangerous, and heavily overlooked by the security community. Eventually, I'm interested in exploring how we can model adversaries, develop richer user personas that are easier to empathize with and understand, and integrate that kind of rich knowledge of the world into threat modeling efforts. For now, though, we'll jump straight to some stories.
I want to start with two stories about accessing Facebook via Tor, because when Alec Muffett announced that Facebook would be available as a Tor hidden service, I got a lot of pushback from folks in the security community for praising his work. While there are definitely real privacy worries about Facebook, it's simply not true that there is no benefit in accessing it via Tor. In general, it's especially interesting to look at cases where high-risk users have to interact with commercial systems, because those are places where, regardless of the efforts of the net freedom or open source communities, security outcomes are at best only partially under our control — and they're also where almost all the users live. Let's look at two cases where having the ability to talk to Facebook via Tor is critical:
Sarah is 31 and has two kids. She's close to her fairly large family and all of her friends back in Boston, but she moved to Phoenix three years ago with her husband, Bill, and she doesn't really know anyone there. She stays at home with the kids, because Bill doesn't want her to work. Bill works at a gun shop, and he's good friends with half a dozen guys on the police force. He works irregular hours, so he'll often help his friends out, and they're happy to return the favor. He's also an alcoholic, although he's made Sarah swear not to talk about it with her family. Bill has become abusive since they moved, and she's finally taken the kids to a shelter. Sarah doesn't have a phone — Bill didn't like it when she called people on her old one, so she got rid of it to try to keep him happy — but she does have a laptop she uses to chat with her family on Facebook.
Let's take a minute to look at how this situation stacks up. Our adversary, Bill, is very likely to kill Sarah if he finds her. To get to long-term safety, she must contact her network of friends on the other side of the country. She doesn't have contact information for them (how many of you have your family's phone numbers memorized?), but she has a Facebook account. That account has been her one way of reaching people, and not only is it crucial for her to be able to access it, it's been an emotional lifeline for her for years. Losing access to it radically lowers her chances not only of getting to safety, but of living a happy life later on. Bill, for his part, can get his buddies to abuse police powers and ask Facebook for the IP addresses the account has signed on from. There are some safeguards against this kind of abuse, but they're nothing a friendly judge can't instantly do away with, and in many cases even that may not be required. Sadly, this kind of thing happens all the time.
When Sarah arrives at a shelter in Phoenix, the staff there are happy she isn't bringing a phone in. Shelter locations are carefully protected information, and revealing them can put everyone in the shelter at risk. If she had a phone, they'd need to deal with location reports from apps running on it and even tracking malware, which is sadly very common and time-consuming to handle. They help her wipe and reinstall her laptop, just in case it has malware on it, and get her set up with the Tor Browser Bundle so she can access Facebook. Bill does have his friends add Sarah's account to a long list of accounts they're getting signed off for a pen register request in another investigation, but the only addresses that come up are Tor exit nodes. They get timestamps of logins but no useful activity, and as the standard of evidence required for a content warrant is higher, Bill can't get his friends to pull her messages for him. Sadly, Sarah has to keep using Tor for years, even once she's moved back to Boston and restarted her life. Careful use of privacy settings lets her keep using Facebook, and she lets everyone who can see her feed know that she shouldn't be tagged with a location. It's a difficult balance between keeping her location impossible to find online and living a normal life, but there's no alternative.
These kinds of long-term threat scenarios are sadly incredibly common for millions of women worldwide. They don't have the option of asking all of their friends to move to a different social network, because they need to be where their support community already is. A newly-single mother recovering from abuse not only needs the tools she depends on to be as simple and reliable as possible, she also has very little time or energy to spend on them. Even things as simple as having a memorable hidden service address can make a huge difference. In this case, it's not a problem that Sarah's Facebook account operates under her real name. Of the three things that unlinkability can do on Facebook, what she needs is unlinkability between her account and her physical location (as proxied by her IP address, given geolocation and subscriber information requests to ISPs). With Tor, Sarah can keep her location secret and herself safe. It doesn't matter that a given Tor session is tied to her real identity — she's logged in as herself — and if she needed to look at other information without giving away the link between that activity and her Facebook account, she could just log out, flush her cookies, and start a new Tor session. In this case, she doesn't need to, though.
Lilly is a graduate student. In the course of researching a paper, she found out about abuses within the prison system and started talking about them privately with other people. Eventually, demonstrations began to be held, and a movement slowly formed. When the demonstrations started, so did police attention. Lilly had heard about this happening to folks who spoke up, and she knew that infiltration of the community was also possible. Hoping to protect her studies, she made a point of using a different name for her activism work from the beginning. As the demonstrations gathered steam, she created a Facebook account under the name she was using every day in her community, and started a page administered by that account. Using Facebook, she could reach a much larger audience with the stories she had to tell. Facebook was also used for announcing protests, and without the page, organizing would be significantly harder. Lilly wasn't interested in violence, and she kept the contents of everything on the page strictly legal, which helped it stay up in the face of protracted campaigns to get it blocked as it gained importance. She accessed the page from public computer labs on her university campus, where she could easily hide among the other students. As police interest increased, she told her adviser she needed to take a trip to care for a sick relative and went into hiding. It was very important that she be able to keep accessing the page without the police discovering who or where she was.
Many activists use Facebook for outreach, even if they do their organizational work via more secure channels. It's a critical tool for them to be able to reach a large audience. In some repressive regimes, it's common for activists to have two or more Facebook accounts used for different purposes, some of them entirely as cover. Here, Lilly also needs the unlinkability properties of Tor to disguise her location. She has good operational security practices, but as police attention increases, if she can be tied to a specific location at a specific time, the police can use surveillance footage to identify her. Because she's been operating under an assumed name throughout her interactions with the community, it's critical that she be able to continue doing so and that she keep her real name separate from her political work.
Early on in her work, Lilly reached out to David, a well-known opposition writer and journalist. She told him what she had found, and they began working together. She found out about Tor, and they began to use it for updating the Facebook page. This meant they could work from a variety of places more easily, and during times when David believed he wasn't being followed, they could work together at each other's houses. David's postings on the page were under his real name, but when he eventually also went into hiding, Tor ensured that he couldn't be located, just as it ensured that Lilly could be neither located nor identified, even though she'd previously connected to that Facebook account from her home connection.
Many activists work under their real names, as they find that it's important to put a public face to their work, building trust over time with their audience. This doesn't mean that they don't need unlinkability. Activists who are just starting out, however, often need to work anonymously — especially when they're not yet sure if they're willing to make a life-long commitment around an issue. In both cases, they may need to interact with an audience via Facebook.
Let's return to Sarah from our example of using Tor with Facebook. We'll change a few of the details: what if Bill can't use his connections on the police force to get Sarah's login addresses, but Sarah also can't bring the computer she used to access Facebook with her?
When Sarah shows up at the shelter, someone finds an old netbook for her and shows her Cryptocat and how to use it to talk to her Facebook friends securely. She knows her husband might have access to her account, but the folks at the shelter don't have much in the way of time or resources, and she isn't sure what he can and can't get into, or what tools might be available to him. Cryptocat is easy enough to use that she can convince her family back in Boston to start using it with her, and she doesn't have to set up a new set of contacts, so it's fast for her to get started. Both the ease of getting people to use it and the speed of setup end up being very important in getting her out of Phoenix quickly, so she and her kids can be in a friendlier environment.
While there are a lot of other instances in which end-to-end messaging could be useful, this is an incredibly important one. While Sarah might eventually find the tools she can use to kick other computers out of her account (if you need this, it's in Settings->Security->Where You're Logged In->End Activity), what matters is whether she realizes this is an option in the moment. Understanding the operational control of an account (especially once you take into account others having passwords, password resets, control of other email accounts, etc.) takes a lot of operational security knowledge. When someone is fleeing for their life and trying to figure out how to rebuild that life, that kind of attention is thin on the ground. Many users don't have a sufficiently nuanced understanding of the relationship between their browser and their Facebook account to even begin this work. In this case, knowing timestamps and metadata doesn't help Bill much, as long as Sarah can keep message content private. One of the hardest parts of communication security is often getting correspondents who are personally at lower risk to take the same countermeasures to protect message confidentiality that the more motivated half of the conversation wants to use. This is especially true when care around secrecy needs to continue for a long time. Cryptocat over Facebook keeps people in the same social-graph context they're used to communicating in — they don't have to re-find all of their friends, and it's straightforward enough for people to use for everyday chats, not just the stuff that has to be secret. Both of these are critical to positive security outcomes.
Use of Cryptocat could compose nicely with the scenarios we talked about around using Tor with Facebook, too. While using Facebook for actual organizational work would be a poor choice for high-risk activists, there might well be times when an activist needs to talk privately with someone they only have a connection to on Facebook. Even if they didn't use Tor for this, Cryptocat would still make their adversary's job more difficult: the connection to Facebook is currently proxied through Cryptocat's servers, and no logs are kept at the Cryptocat end, making it very hard to connect IP addresses and accounts after the fact. While in theory any number of other applications could substitute for Cryptocat in this story, in practice there's absolutely nothing else out there that's half as easy to use for encrypted chat — and see my previous note on why that matters so much.
A (sadly not very) brief, more technical note: Facebook currently provides an external XMPP/Jabber interface for chat. Cryptocat implements the OTR protocol over Jabber to provide confidentiality and integrity for messages. Facebook is considering discontinuing the XMPP chat API in 2015, because it constrains what they can do with chat. XMPP, quite frankly, is a pretty horrid protocol, so I can't blame them for wanting to do this, but if they do, Cryptocat will stop working. Internally, Facebook apparently uses a version of MQTT for messaging; it's unclear how far their internal version has drifted from the public specification.

If Facebook thinks it's a good thing for at-risk users to be able to communicate with a stronger guarantee of confidentiality, especially in emergencies, it would be lovely if they'd help out in a few ways. For Cryptocat to continue to help people, it will need, at an absolute minimum, a public MQTT endpoint to talk to and an open spec for the version of the protocol Facebook implements. Two more things would be really useful: a reference implementation of the protocol, and a reference implementation (and spec) for OTR or something similar on top of it. It's actually not that important that it be exactly OTR. Currently, OTR jumps through a bunch of hoops to create a pretend security property called deniability (which deserves an entry of its own here, and will get one eventually). Deniability supposedly allows an adversary listening to your connection to fake a transcript of your conversation, but only after it's finished. The idea is that when the police bring a transcript to court, you can deny that it's real and claim they faked it. Unfortunately, any lawyer will tell you that while they might try to argue it if they were stuck in that situation, this is basically a giant joke thought up by people who've never talked to lawyers about it.
The complexities of deniability have meant that OTR is only implemented for one-on-one conversations, because the property can't easily be extended to group chats. Maintaining deniability while providing the properties real users actually care about in secure chats, like being able to kick someone out of a chat if they're being disruptive, or reasonable performance on low-bandwidth connections (something Facebook cares a great deal about, to their credit), turns out to be much harder still. So, if Facebook provided a specification and a reference implementation for something that wasn't OTR but did everything OTR does except deniability, and that also worked for the kind of multi-party conversations they support normally, it'd be amazing. A bunch of security folks who never talk to lawyers (sorry, guys) would almost certainly be annoyed, but a bunch of folks who work to protect high-risk users would be absolutely delighted. I know Facebook takes openness seriously, at least internally, and if folks there are interested in showing this to the world, this would be a great way.
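To make the mechanism behind deniability concrete, here's a minimal sketch using only the Python standard library. This is not real OTR (which also does authenticated key exchange, per-message key rotation, and MAC key publication); it just illustrates the core trick: messages are authenticated with a symmetric MAC key shared by both parties, so a valid tag proves a message came from *someone* holding the key, not from the sender specifically. The key and message values are invented for illustration.

```python
import hmac
import hashlib

def mac_message(key: bytes, message: bytes) -> bytes:
    # OTR-style authentication: both parties hold the SAME key, so a
    # valid tag shows the message came from one of them -- but not which.
    # Anyone who later learns the key could forge a "transcript" too;
    # that symmetry is what's meant by deniability.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(mac_message(key, message), tag)

# Alice sends a message; Bob verifies it with their shared session key.
shared_key = b"session key both parties derived"
msg = b"meet at the usual place"
tag = mac_message(shared_key, msg)
assert verify(shared_key, msg, tag)

# But Bob (or anyone holding the key) can produce an equally valid tag
# for a message Alice never sent -- so the transcript proves nothing.
forged_tag = mac_message(shared_key, b"a line Alice never wrote")
assert verify(shared_key, b"a line Alice never wrote", forged_tag)
```

Contrast this with a digital signature, where only the holder of the private key could have produced the tag: cryptographically airtight attribution, which is exactly what deniability is designed to avoid — and, as noted above, a distinction courts are unlikely to care about either way.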
Up next will be a write-up of subscribable blocklists and some of the effects they might have on conversation dynamics in political groups.
Deniability is popular, but it's at best a mistake and often actively dangerous, and it doesn't even always mean the same thing. I'll have a tragedy in two parts about it.