Coercion-Resistant Design

or some politics and an eleven-point technical plan for how not to be governed

A number of countries are currently trying to pass laws that will coerce development teams to destroy the security of their systems and build in backdoors.  This is a terrible idea, so here are a few ways that we may be able to design around it.


The Problem

It's a complicated time to be a software developer.  Software is eating the world, and many nation states are unhappy about encroachments from a bunch of upstart hoodie-wearing geeks into what were until recently their sovereign territories.  Organizations that do things in the world beyond just releasing code or running services — as much as companies like Uber try to pretend they're software companies — often find themselves subject to regulation or pressure on those AFK-centric activities.  Life has, relatively speaking and with the exception of a few minor intellectual property kerfuffles, been pretty easy for pure software folks.

As the impact of communication systems has become both clearer and larger, this has started to change.  When states want to censor certain things they often act directly, leaning on their relationships with heavily-regulated telco monopolies.  That sometimes doesn't work for surveillance, though, which is where our story starts.

In March, the Brazilian Federal Police briefly arrested a local Facebook vice president on obstruction of justice charges.  This came after fines of $250,000 a day failed to convince Facebook to alter the fundamental nature of reality so they could retroactively decrypt an end-to-end encrypted WhatsApp chat.  While he was released the next day, presumably after several battalions of lawyers taught the Brazilian police basic math, not everyone is going to have it so easy.

The issue in Brazil was that what the police were asking was impossible — while either WhatsApp or (via a telco wiretap) the police may have had encrypted copies of the user's traffic, neither party had the decryption keys.  The system worked as designed.  Of course, Facebook could choose to attack the security of their users by building a mechanism that would let the company selectively force users to give up their keys, or even one that would force all users to escrow their keys with the company before using them.  The Axolotl (since renamed Signal) protocol now used in WhatsApp (at the time, it was only deployed on Android) makes this difficult, however, because it's designed such that giving up the keys you have today won't reveal the content of messages you sent yesterday, a property called forward secrecy.  To give the police what they wanted, Facebook would have to have effectively been storing the unencrypted form of every message.  While they could do this, doing so would increase the risk for their users, defeating the entire point of such a system.  Like Chekhov's gun, a database of billions of unencrypted messages, some very sensitive, will always eventually turn into a tragedy.

Despite this, some states have decided that the ability of their spooks to see into all possible private spaces trumps the security of the infrastructure their populations depend on.  They've made this choice without any evidence that such extraordinary powers lead to better security results and in the face of evidence that their data obsession is undermining and distracting from traditional policing and intelligence work.  In the UK, the government is currently trying to ram through a new Investigatory Powers Act, which would give a number of police and intelligence agencies the right to force software developers to re-architect their systems to build in weaknesses which would allow large-scale surveillance, regardless of the security harm caused.  While there's still a chance the UK law will fail to pass or be fought to a standstill in the courts (albeit a slim one, as the law mostly renders official already-ongoing extralegal tactics by GCHQ), there are pushes ongoing in the US and many other (mostly rich world) nations for similar laws.  If we don't want to just give up on security and privacy, it's good to have a plan — and that plan takes the form of coercion-resistant design.

The Tasks

If you want to protect the privacy and security of your users in the face of a malicious state, you have two tasks.  First, doing it, and second, getting away with it.  These tasks are closely related, and exactly what your options are will depend on what you're doing, where you're doing it, and what the law where you are is interpreted to mean on any given day, along with who you've annoyed and how lucky you are.  I'm not a lawyer, and I'm definitely not your lawyer — if this isn't a theoretical exercise for you, get a damn lawyer already.  I can, however, be your security architect, and you're definitely going to need one of those too.

As a note, in this essay I'm mostly talking about state coercion, but many of the same defenses apply to coercion by organized crime or other non-state actors.  However, many of the organizational defenses are more useful against state actors and I'm primarily talking about how law and technology can work together here.

The most obvious part of not giving up user information is not having user information.  If you have information in a format you can read, you have much less recourse when you get a court order.  If you normally keep information for a little while, you also have much less recourse when they want you to keep it for longer.  Delete that stuff as soon as you can, and better yet, just never store it.  You should account for every single bit of information from users or about their behavior you store or pass on to a third party, and see how few bits you can operate with — you may be surprised.

A big part of getting rid of user data is making sure you can't see the unencrypted form — or the encryption keys — for data you do need to store for your users.  Getting end-to-end data encryption and client key management right is both hard and out of scope for this essay; see here for some pointers.  If servers under your direct control can exert centralized control over user data or over behavior that law enforcement may want to coerce, it will be much more difficult for you to resist state coercion.  Do the work, get it right, and call experts when you need them.  Assuming you've done this, congratulations!  You now have a system you need to protect against coercion.

It's worth noting here that even if you aren't going to build a completely decentralized system and you're simply accepting the risk that the user data you hold will be subpoenaed, you still have a moral responsibility to your users to take steps to avoid becoming a vector for states to implant backdoors on their devices.  Any installed application can attack other parts of the system, and if you don't do your part, you may be responsible for negative outcomes for your users.

At an organizational level, there are three tactics for fighting coercion: evasion, segmentation, and visibility.  Let's talk through them.

Evasion

All of the ways of not having to respond to or comply with a demand directly fall under evasion.  If at all possible, don't have any offices, staff, or formal relationships with hostile states — and this includes things like domain names, SSL keys, and vendor relationships.  Then, when demands come in, ignore them.  You'll need to warn staff to be careful about their personal travel plans, and in some cases you'll need to pay careful attention to extradition laws, but if you can afford to just ignore demands, great!  Likewise, if you get a demand you can't ignore, fight it if you can, whether that's on procedural and bureaucratic grounds or on more substantive matters.  Remember that jurisdictions are also not static — somewhere that's safe today may not be safe tomorrow.  Pay attention to the politics of jurisdictions you're exposed to and work to support good law in those places.  Be aware that you might need to pull up stakes in some country on fairly short notice and get your employees and servers out of there if the situation looks like it may turn hostile.  At an absolute minimum, you must have legal funds set aside to get employees out of jail.  Your organization should also have a “dawn raid” plan, developed with advice of counsel, regardless of how safe you think your jurisdiction is.  This covers not just what you do when some authority is carrying your servers out the door, but also making sure that you've got a phone tree to get e.g. executives and lawyers on-site immediately.

Segmentation

With many of the laws states are pushing for, organizations will have limited ability to fight and are often under gag orders.  If you know you are at risk, you need more options.  The combination of segmentation and visibility can be your friend here.  The process of creating an intentional backdoor and deploying it into production in a well-managed software development lifecycle is complex.  We can make it much more complex, and we may be able to make it legally impractical.  Backdoor demand laws are relatively new, and there is definitely no cross-jurisdiction agreement on what is and isn't reasonable, which is useful.  If releasing a backdoor requires the cooperation of staff from multiple legal entities in many different jurisdictions, the requesting state is much less likely to be able to force all staff to comply directly.  This is the basic principle of segmentation.

Segmentation requires visibility, because if the parts of the organization expected to provide checks and balances can't see what's happening, they can't block the process.  Segmentation is also more likely to work if it's both functionally relevant and legally complete.  Having the equivalent, as a UK company, of one guy sitting in France whose only job is to say no to backdoors is unlikely to impress a judge.  However, having a completely separate company in France which acts as your general-purpose security auditor and which both cryptographically signs new code and provides a financial guarantee of its security and which will refuse to sign backdoors as a matter of fiduciary duty may be a different story.

Visibility

The kind of visibility we need for segmentation to work is mostly technical visibility inside an organization, which we'll talk about below.  That said, there are times when external, non-technical visibility can be useful too.

If you receive a surveillance demand, tell people about it if you're allowed to.  Use the Lumen database (previously known as Chilling Effects) to publicize and archive orders.  In particular, though, tell your users, especially the directly affected users, and fight for the right to notify at least the affected users if you've received a gag order.  They deserve to know, and they may be able to take legal actions you can't.  Warrant canaries and transparency reports are potentially useful tools here.  The former are affirmative statements you make declaring that you have not been coerced, operating on the legal theory that compelled speech, especially when done outside of any fixed and regular pattern, is not legal in many jurisdictions.  You may not be able to say you've been coerced, but you can stop saying you haven't been and trust folks to figure it out — like they did with Reddit recently.  The latter assume that you will be compelled to respond to some warrants, but that the response doesn't represent an existential risk to your security architecture, and consist of reporting how many warrants and orders you've received.  They're useful for political fights, but they're not going to stop the kind of coercion we're talking about here.

External visibility may also be useful as a direct bargaining chip in a couple of different ways.  First, if there is a risk that a target of surveillance can directly determine that they're under suspicion, many intelligence teams may choose to use other means to avoid compromising their investigation.  Every time you make this risk calculation worse for the surveillance team, you increase the likelihood that you'll be able to continue protecting all of your users.  Second, many draft laws for backdoor mandates include payments from the state to the vendor building the system, to cover costs.  For organizations where end-to-end security is a direct marketing point, building systems that guarantee that customers will learn that a backdoor was released — preferably messily and in the newspaper — may increase the effective “cost” to create the backdoor.  In this case, the organization might demand that the state pay up to the total current and future value of the company to account for lost market share, breach of trust, reputation loss for staff, customer lawsuits, etc.  It is vanishingly unlikely that this will work, but that's not the same as not being tactically useful, especially if the case becomes a public political issue.

One final note on visibility — it is currently unclear where it will be considered legal to attempt to prevent state coercion at all.  If you attempt to resist coercion, you need to start by acknowledging that not only may you fail, but that failure may force your organization or service to close down or may put people in jail.  Again, consult counsel first.

Technical Tools

Let's say we get lucky, though, and it looks like we can use a combination of segmentation, visibility, and evasion to fight state coercion without anyone going to jail.  What technical tools are useful here?  We have a multi-stage process with a bunch of different natural interests, and we want to enforce separation of duties and visibility of compromise.

SDLC

Developers write code, check it into source control, and other developers review that code before it's approved for release.  The code is then built and packaged for production, and then distributed to the systems which will run it.  Those systems check the package and then install the new code.  Along the way, there are at least eleven different control points where technical countermeasures may help organizations resist coercion.

Cross-Team Feature Splits

First, ensure that features and work which could directly compromise end user control over keys or data are, where possible, split across multiple jurisdictions.  For instance, if the system uses two different processes to receive and decrypt incoming data and to encrypt and send outgoing data, ensure that the processes trust each other as little as possible, that they have formally-specified interactions by which they agree on how to use keys, and that they're written in different countries — maybe even in mutually antagonistic countries unlikely to cooperate.  The goal of this exercise is to ensure that high-level architectural conversations are necessary to enable backdoors, and that those conversations happen across jurisdictional lines.  You want your backdoor to have a large and well-documented paper trail.
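
As a rough sketch of what "trusting each other as little as possible" can mean in code, the boundary between the two components can be a narrow, versioned, strictly validated message format.  The field names and format below are purely illustrative assumptions, not a specification:

    # Hypothetical sketch: the only channel between the receive/decrypt component
    # (written by team A) and the encrypt/send component (written by team B) is a
    # narrow, versioned, fully validated message format.  Neither side accepts
    # anything it did not expect.

    import json

    PROTOCOL_VERSION = 1
    ALLOWED_FIELDS = {"version", "key_id", "ciphertext"}

    def validate_handoff(raw: bytes) -> dict:
        """Parse and strictly validate a message crossing the team boundary."""
        msg = json.loads(raw.decode("utf-8"))
        if set(msg) != ALLOWED_FIELDS:
            raise ValueError("unexpected or missing fields: %r" % sorted(set(msg)))
        if msg["version"] != PROTOCOL_VERSION:
            raise ValueError("unsupported protocol version")
        if not isinstance(msg["key_id"], str) or not isinstance(msg["ciphertext"], str):
            raise ValueError("malformed field types")
        return msg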

Signed Commits

Second, require every developer to sign every change to either code or build system configurations when they go into source control.  This protects your infrastructure by removing much of the utility for an attacker to compromise the source control system, as any modifications made there will be noticed.  Attempts at legal coercion tend to go hand-in-hand with extralegal attacks, and if you are successful at frustrating legal avenues, you paint a target on yourself for other tactics.  Change signing also means that every change can be attributed to an individual developer.  This provides a stronger place from which to verify that developers aren't changing code they shouldn't be, and to uniquely attribute unexpected code or changes.  For change signing to be useful, developer keys must be well-protected, preferably on hardware tokens, and their workstations must be patched and hardened.  This latter shouldn't be left to individual developers; you need proper operations support and effective policy.
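
As one illustration, a server-side pre-receive hook can refuse pushes containing unsigned or unverifiable commits using git's own signature checking.  This sketch assumes all developer public keys are already in the server's keyring and omits details like branch creation and deletion:

    #!/usr/bin/env python3
    # Sketch of a server-side git pre-receive hook that refuses pushes containing
    # commits without a valid developer signature.  Assumes all developer public
    # keys are already in the server's GPG keyring; a real hook would also need
    # to handle branch creation/deletion, where old or new is the zero hash.

    import subprocess
    import sys

    def commits_in_range(old: str, new: str) -> list:
        out = subprocess.run(
            ["git", "rev-list", f"{old}..{new}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.split()

    for line in sys.stdin:
        old, new, ref = line.split()
        for commit in commits_in_range(old, new):
            # 'git verify-commit' exits non-zero if the signature is missing or bad.
            result = subprocess.run(["git", "verify-commit", commit],
                                    capture_output=True)
            if result.returncode != 0:
                print(f"rejecting {ref}: commit {commit} is not validly signed")
                sys.exit(1)
    sys.exit(0)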

Signed Reviews

In order to provide a check against individual coerced or compromised developers, code reviews should also include a signature.  System rules should prevent code from being included in a master branch without review signatures.  Splitting code review responsibilities across jurisdictions and across multiple corporate entities may be particularly useful.  If the development team is in one country, the QA team in another, and the security team in a third and pushing to master requires signatures from all three, one can't skip code review.  This could be ensured by source control systems in all three countries which must agree.  The rules for such an agreement should also be signed and in source control.  This way, skipping code review to hide a backdoor will require breaking the build system configuration and forcing the creation of a backdoor will require significant international cooperation.
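
A minimal sketch of such a merge gate, assuming signatures have already been cryptographically verified and mapped to keys; how signatures are attached and the role-to-key mapping are illustrative assumptions, not any particular tool's behavior:

    # Hypothetical sketch: a merge gate that refuses to land a change on master
    # unless it carries valid signatures from all three organizational roles.

    REQUIRED_ROLES = {"development", "qa", "security"}

    def roles_that_signed(change_id: str, signatures: dict, role_of_key: dict) -> set:
        """Map the verified signing keys attached to a change to organizational roles."""
        return {role_of_key[key] for key in signatures.get(change_id, ())
                if key in role_of_key}

    def may_merge(change_id: str, signatures: dict, role_of_key: dict) -> bool:
        return REQUIRED_ROLES <= roles_that_signed(change_id, signatures, role_of_key)

    # Example: only two of three roles signed, so the merge is refused.
    sigs = {"change-42": ["key-dev-1", "key-qa-3"]}
    roles = {"key-dev-1": "development", "key-qa-3": "qa", "key-sec-2": "security"}
    assert may_merge("change-42", sigs, roles) is False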

Reproducible Builds

Assuming we have effective review-time code signing, we need to be able to make strong statements about how a particular binary was created.  If we can't, a developer could just maintain a separate backdoored source tree and swap the binary out at the last minute.  In some cases, a difference of a single bit in a binary can be sufficient to enable a vulnerability, and proving whether or not a difference is significant can be time-consuming and expensive.  What we need here are deterministic, repeatable builds, which let us tell if a given binary came from a certain code tree.  How difficult these are to set up depends on the language and build environment.  Many build systems include time stamps and directory paths in the binary, but these are easy to standardize or replace.  Differences in library version are another source of non-reproducibility, but this should be fixed as a reliability issue regardless.  The build environment, including exact versions of all libraries, must be controlled by source control and replicable.  Finally, with some languages and toolchains, such as C in some gcc versions, code arrangement and optimization choices themselves are nondeterministic between compiler invocations.  This can only be fixed at the toolchain level, and is critical to get right.
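
As an illustration, a minimal reproducibility check might build the same tree twice with timestamps pinned via the SOURCE_DATE_EPOCH convention (from reproducible-builds.org) and compare output hashes.  The build script and artifact path below are placeholders for your real entry point:

    # Sketch: build two identical checkouts with timestamps pinned via
    # SOURCE_DATE_EPOCH and fail if the outputs differ.  "./build.sh" and
    # "dist/app.bin" are placeholders.

    import hashlib
    import os
    import subprocess

    def build_and_hash(tree: str, epoch: str = "1") -> str:
        env = dict(os.environ, SOURCE_DATE_EPOCH=epoch, TZ="UTC", LC_ALL="C")
        subprocess.run(["./build.sh"], cwd=tree, env=env, check=True)
        with open(os.path.join(tree, "dist/app.bin"), "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    first = build_and_hash("/srv/builds/tree-a")
    second = build_and_hash("/srv/builds/tree-b")  # an identical checkout
    if first != second:
        raise SystemExit("build is not reproducible: %s != %s" % (first, second))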

Split-Binary Signing

Once different groups can each reproduce the build we can put our next control in place.  Responsible development teams sign their binaries so users can verify that the binary they're getting is the real thing, and many install systems and almost all update systems require signatures (somewhere) to work.  Every team involved in writing code or countersigning the review process should be involved in signing the binary.  First, every team sets up an independent release build environment, per the source-controlled instructions.  Next, they confirm all signatures in the source tree when they load it into the build environment, and ensure there are no unsigned modifications, that no changes are missing, that all files have the versions they expect, and that developer and reviewer role assignments, and, if relevant, area of responsibility rules have all been respected.  Once the checks are successful, each team builds the binary.  Care must be taken to secure the build environment against malicious content included in the source tree; reproducibly compromising the build system from e.g. an image file included in the source could present another avenue by which a binary could be compromised.  Finally, all the teams together create a multi-party signature for the binary, via a protocol that ensures that the signature is only valid once all parties have signed.  There are a variety of protocols for accomplishing this; which one makes sense will depend on the requirements of the toolchain which will be verifying the signature.  The details are, of course, critical, but also beyond the scope of this essay.
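
As a sketch of the agreement step that would precede the joint signature, each team contributes the hash of the binary it built independently, and the signing ceremony only begins if every expected team saw identical bits.  The team names here are assumptions, and the threshold-signature protocol itself stays out of scope, as above:

    # Sketch of the agreement check before the multi-party signature: every team
    # builds the release independently and contributes the hash it observed.
    # Only if all teams saw the same bits does the joint signing ceremony begin.

    def ready_to_sign(team_hashes: dict) -> bool:
        """team_hashes maps team name -> sha256 hex digest of that team's own build."""
        expected_teams = {"development", "qa", "security"}   # assumed team names
        if set(team_hashes) != expected_teams:
            return False
        return len(set(team_hashes.values())) == 1

    print(ready_to_sign({
        "development": "ab12cd",
        "qa": "ab12cd",
        "security": "ab12cd",
    }))  # True only because all three digests agree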

Distributing Initial Installs

Having created a binary with a strongly traceable provenance, you have to get it to users and enable them to verify it.  If they're downloading the binary from the web, this is straightforward — the download site and all pages linking to it under your control must be TLS-only, with HSTS, certificate pinning, modern hardening applied both to the TLS stack and the system, and with no JavaScript loaded from sites you don't control.  It's worth offering downloads via a TLS-protected Tor hidden service as well, to make it harder for a state that has taken coercive control over your site to target individual users.  When the state cannot target malware, it increases the reputation cost you may be able to claim or recover and lowers the value of the attack vector due to the increased chance of discovery.

Verifying Initial Installs

This covers getting them the bits; verification is harder.  All binaries should have hashes listed on the download site, and the set of keys that can sign builds should be available to users.  It's not unreasonable to have keys tied into the web of trust, too, but it's important to realize that, to a first approximation, none of your users will ever verify hashes or keys.  Don't make your users jump through hoops to try to force them to change their behavior.  If you can't make the process of maintaining security smooth and seamless for users, they simply won't use your system or they will work around your process.  Operate download sites from your safest jurisdiction and hope for the best — initial downloads are hard to secure in this scenario, and we all live in a state of sin here.  There are no good answers that work for even the vast majority of technical users, let alone non-technical users.  That said, while there is a point of vulnerability here, the adversary has to actively exploit it in that moment.  Betting they won't isn't a great bet, but statistically, it will work out for most users in most situations.
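
For the minority of users who do verify, the check itself is simple.  A sketch, with the file name and published digest as placeholders:

    # Sketch of manual hash verification for the few users who will do it.
    # The file name and published digest are placeholders.

    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    published = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
    if sha256_of("app-installer.bin") != published:
        raise SystemExit("download does not match the published hash; do not install")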

App Stores

On the bright side, if you're distributing your system in an app store, you don't have to worry about either the initial install process or the update story.  Your users already have a trust relationship with the app store operator, and that trust is used to provide them with your application.  On the less bright side, you're constrained by the tools you're given by that vendor, you have no idea when the trust relationship your users have with the app store will be used against you, and you have neither recourse nor even the ability to detect when it is.  If you cannot get your updates and installs to your users except through an app store and you manage to successfully evade a state backdoor demand, it is safe to assume that that backdoor will still happen, but at the app store level, beyond your control.  In many app store systems, the key used to sign the release binary is your only tool to control update and code integrity (assuming the operator is not coerced), so splitting the keys used to sign release binaries can still help even if you have less control otherwise.

Secure Updates

Once you've got code running on your users' systems, and assuming you aren't stuck in the update system of an app store, it's time to provide updates securely.  You have some interesting options here, but getting secure updates right is hard even without coercion resistance.  If you're building an update system for a small number of self-contained packages, your job is merely very difficult.  If you're building a large-scale package repository, you have a lot of additional and harder problems like dependency resolution and key scoping; good luck.

When building a coercion-resistant update system, we first have to handle the traditional update concerns.  We need to provide for integrity of the update files, we can't allow parts of one update to be substituted for parts of another update, we must prevent an old update from being passed off as new, and we don't want an adversary to be able to block clients from updating.  The first three issues are elevation of privilege concerns, but the last is a denial of service problem and thus the best we can do is make it more difficult, detect the attack, and notify the user.  We also need to handle key revocation, both for more general concerns and for coercion-specific cases.  On top of these general concerns, we also need to make it hard for the update system to target specific users, we want to ensure that as many users as possible have access to a full list of every possible update we create, even if they don't install them, and we want to make sure that updates themselves “leak” to other users, even if only a couple of users install them initially.  All of these latter properties will improve the visibility of any backdoors created.

Managing Update Signatures

In practice, a system like this picks up from the segmented binary signing we already discussed.  First, instead of a single key per team, we'd like to split each team's signature into an n-of-m threshold signature scheme.  This ensures that no single user can provide their team-role's approval for an update.  Handling key revocation is also important, and in a 2-of-3 key system, the remaining two keys can generate a trustable revocation for a compromised key and add a replacement key.  Without this, you're stuck having a known-unsafe key sign its own replacement, a process that cannot happen securely.  It may be advisable to have at least four keys, so that even if both keys involved in one step of a signing operation are judged compromised the system can still recover.
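
A minimal sketch of the client-side acceptance rule under those assumptions (three parties, four keys each, threshold two); signature verification itself is abstracted away, since the concrete scheme depends on the toolchain:

    # Sketch of the client-side threshold check: each of the three parties holds
    # four keys, and an update is accepted only when at least two keys from every
    # party have validly signed it.  valid_signers is assumed to be the set of
    # key ids whose signatures already verified cryptographically.

    THRESHOLD = 2

    def party_approved(party_keys: set, valid_signers: set) -> bool:
        return len(party_keys & valid_signers) >= THRESHOLD

    def update_approved(parties: dict, valid_signers: set) -> bool:
        """parties maps party name -> set of that party's key ids."""
        return all(party_approved(keys, valid_signers) for keys in parties.values())

    parties = {
        "development": {"d1", "d2", "d3", "d4"},
        "qa":          {"q1", "q2", "q3", "q4"},
        "security":    {"s1", "s2", "s3", "s4"},
    }
    print(update_approved(parties, {"d1", "d3", "q2", "q4", "s1", "s2"}))  # True
    print(update_approved(parties, {"d1", "d3", "q2", "s1", "s2"}))        # False: qa below threshold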

This set of keys (three parties, each with a 2-of-4 threshold signature key set) are used to sign both the update itself and a metadata file for the update (the contents of which are duplicated inside the update package) which contains a timestamp, along with file hashes and download locations.  Clients will refuse to install an update with a timestamp older than their current version, preventing replay attacks, and updates can also be expired if this proves necessary.  The metadata file is signed offline by all parties, like the build, but the parties also operate a chained secure timestamping service with lower-value, online keys.  The timestamping service regularly re-signs the metadata file with a current timestamp and an additional set of signatures, ensuring that clients can determine if the update offer they're presented with is recent and allowing them to detect cases where an adversary is blocking their access to the update service but has not compromised the timestamp keys.  The continued availability of the file also acts as an indicator that all three segments of the organization are willing to work together, and functions as a form of warrant canary, although an explicit canary statement could also be added.  In some cases, there is precedent for forcing service providers to continue “regular” or automatic operations that, if stopped, could reveal a warrant.  Given this, generating some of the contents of the timestamp file by hand and on an irregular but still at least daily basis might be useful — call it, say, the organization's blog.  The timestamp portion of this scheme does depend on client clock accuracy.  A variation is possible which does not require this, but it loses the ability to detect some “freeze” attacks against update systems.
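
As a sketch of the client-side checks this enables, the fragment below shows an illustrative metadata file and the two freshness rules: refuse anything not newer than the installed version (anti-rollback), and refuse anything whose online timestamp re-signature is stale (freeze detection).  The field names, limits, and URL are assumptions for illustration, not a specification:

    # Sketch of the update metadata a client checks before fetching anything.
    # Two checks shown: no rollback (metadata older than the installed version is
    # rejected) and freshness (the online timestamp re-signature must be recent,
    # to detect an adversary freezing or blocking updates).

    import time

    MAX_TIMESTAMP_AGE = 48 * 3600  # accept re-signatures up to two days old (assumed policy)

    def check_metadata(meta: dict, installed_version_time: int, now: float = None) -> None:
        now = time.time() if now is None else now
        if meta["release_time"] <= installed_version_time:
            raise ValueError("rollback or replayed update refused")
        if now - meta["timestamp_resigned_at"] > MAX_TIMESTAMP_AGE:
            raise ValueError("stale timestamp: updates may be being blocked")

    meta = {
        "release_time": 1_459_382_400,                # when the release was signed
        "timestamp_resigned_at": time.time() - 3600,  # last online re-signature
        "files": {"app.bin": {"sha256": "<hex digest>",
                              "urls": ["https://updates.example/app.bin"]}},
    }
    check_metadata(meta, installed_version_time=1_456_790_400)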

Metadata files are among other things attestations to the existence of signed updates.  To this end, it's important that all users see all of them so the set of metadata files forms a record of every piece of code the organization has shipped.  Every metadata file should contain the hash of the previous file, and clients should refuse to trust a metadata file that refers to a previous file they haven't seen (the current metadata file chain will need to ship with initial downloads).  If a client sees multiple metadata files that point to the same previous version, they should distrust all of them, preventing forks.  To ensure there is a global consensus among clients as to the set of metadata files in existence, instead of putting them out on a mirror the development team should upload files into a distributed hash table maintained by the clients for this purpose.  This ensures all clients see all attested signatures, and will only install updates they know other clients have also seen.  Clients can even refuse to install updates until a quorum of their neighbors have already trusted the update, although some clients will have to go first, possibly randomly.  Clients may connect to their DHT neighbors via Tor, to make targeting of Sybil attacks more difficult.  If this is done correctly, it will be impossible for the organization to make the update system hide the existence of a signed update.
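
A minimal sketch of the hash-chain logic, assuming each metadata file simply carries the hash of its predecessor: a client refuses a file whose predecessor it has never seen, and treats two different successors of the same predecessor as a fork, distrusting both.  The class and method names are hypothetical:

    # Sketch of the metadata hash chain a client maintains.  A file pointing at
    # an unknown predecessor is refused, and two files claiming the same
    # predecessor are treated as a fork, so neither is trusted.

    import hashlib

    def file_hash(blob: bytes) -> str:
        return hashlib.sha256(blob).hexdigest()

    class MetadataChain:
        def __init__(self, genesis_hash: str):
            self.seen = {genesis_hash}          # hashes of accepted metadata files
            self.children = {}                  # predecessor hash -> successor hash

        def accept(self, blob: bytes, predecessor: str) -> bool:
            if predecessor not in self.seen:
                return False                    # refers to a file we've never seen
            new = file_hash(blob)
            if predecessor in self.children and self.children[predecessor] != new:
                # Two different successors for one predecessor: a fork.  Distrust both.
                self.seen.discard(self.children[predecessor])
                return False
            self.children[predecessor] = new
            self.seen.add(new)
            return True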

Distributing Update Files

It's reasonable, once they've verified the metadata, for clients to just download updates directly from sites hosted by the organization.  However, some percentage of the clients should fetch the update from peers instead, via the DHT, and they should maintain the update package unadulterated after installation.  This ensures that the contents of any update can be found and reverse engineered later, if needed, providing a direct exposure route for potential backdoors.  The mirrors are not trusted in this design, but they can still perform denial of service attacks against clients.  This may be important if an adversary has found a vulnerability in the current version of the system and is attempting to coercively prevent the organization from patching it in a timely manner.  Having each segment of the organization run its own set of update mirrors will increase the difficulty of compromising all of them.  The client should programmatically notice slow, invalid, or overly large files served by a mirror and switch mirrors, helping to fight update denial of service attacks.
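
A sketch of that mirror-fallback behavior, with placeholder limits and using only the standard library: the client bounds download time and size, verifies the hash promised by the signed metadata, and rotates to the next mirror on any failure.

    # Sketch of mirror fallback.  Mirror URLs, size cap, and timeout are placeholders.

    import hashlib
    import urllib.request

    MAX_SIZE = 64 * 1024 * 1024   # refuse anything larger than the metadata implies
    TIMEOUT = 60                  # seconds per mirror before giving up

    def fetch_update(mirrors: list, expected_sha256: str) -> bytes:
        for url in mirrors:
            try:
                with urllib.request.urlopen(url, timeout=TIMEOUT) as resp:
                    data = resp.read(MAX_SIZE + 1)
                if len(data) > MAX_SIZE:
                    continue                    # oversized: try the next mirror
                if hashlib.sha256(data).hexdigest() != expected_sha256:
                    continue                    # corrupted or substituted file
                return data
            except OSError:
                continue                        # slow or unreachable mirror
        raise RuntimeError("no mirror served a valid update; possible denial of service")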

At this point, we have a secure and coercion-resistant chain involving every step from the moment a line of code is written and checked into source control to when it executes on an end-user machine.  It's not perfect — we still have some dependencies on the CA system for TLS during the initial installation — but it's mostly solid.  It's useful to take further advantage of the other natural segmentation of interests that every system has, as represented by the user base, and enable your users to replicate this entire chain and check for abnormalities.  Your application doesn't have to be open source to do this, but all components do have to be source available with a license that permits testing and auditing.  The system must be buildable in a manner indistinguishable from production with the code provided, or users won't be able to replicate the build and check hashes.  Unlike most of the previous steps, you can get away without this.  However, if you don't allow this you've reduced the likelihood that a backdoor will be detected, which was the entire point of this exercise.

Coercion-Resistant Coding

Beyond how you move code around, creating application architecture with coercion-resistance in mind, as we mentioned at the start of our technical countermeasures section, can make deploying backdoors both more difficult and harder to hide, and may improve system safety and reliability in other ways at the same time.  Depending on your platform and what your system does, you may be able to separate functionality and privileges in useful ways.  For instance, a messaging application might be able to split itself into three components — a network component that sends, receives, and parses encrypted traffic, a general-purpose UI and business logic component, and a component running in a secure enclave that handles key management, encryption, and the entry and display of messages.  The network and UI components can both run in sandboxes that restrict their system call interactions to those things they're expected to do.  The network layer, where a backdoor is more likely to be effective, can be kept small, mostly auto-generated, and easily audited, and the UI layer can be messier but given no ability to touch anything outside itself.  Running all the core cryptographic code and anything with access to message content in an enclave means that even if a backdoor is placed in other parts of the system, it won't be able to compromise the most critical system security guarantees.  Architectural design for attack resilience is a complex subject and much larger than just update systems.  If you're going to spend the time on the update chain, look at this too.

Protection Across Patches

It may be possible in some situations for an application to maintain restrictions that are built into its protection architecture across multiple updates.  For instance, if the update system is built such that it always uses the sandbox rules that shipped with the previous version of the system (on the theory that sandbox rules change infrequently), creating a backdoor that requires a change in sandbox rules would take at least two separate updates.  This again increases the detectability of the backdoor.
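
A sketch of one way to implement this lag, assuming sandbox rules live in files the updater controls: version N always promotes the rules staged by version N-1 and merely stages its own rules for the future.  The paths are assumptions for illustration:

    # Sketch of lagged sandbox policy: version N runs under the sandbox rules that
    # shipped with version N-1, so widening the sandbox for a backdoor takes two
    # separate, publicly visible releases.

    import json
    import shutil

    def install_update(new_rules_path: str,
                       pending_rules_path: str = "/etc/app/sandbox-rules.next.json",
                       active_rules_path: str = "/etc/app/sandbox-rules.json") -> None:
        # Promote the rules that arrived with the *previous* update...
        shutil.copyfile(pending_rules_path, active_rules_path)
        # ...and stage the rules that came with this update for the *next* one.
        shutil.copyfile(new_rules_path, pending_rules_path)

    def load_active_rules(active_rules_path: str = "/etc/app/sandbox-rules.json") -> dict:
        with open(active_rules_path) as f:
            return json.load(f)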

Services

If the system involves the development organization running services — for instance, an introduction server or synchronization and backup servers for encrypted client data — similar tactics of splitting different parts of services across jurisdictions and requiring them to cooperate may be useful.  This is especially true if, for customer experience reasons, key escrow or account recovery features are necessary.  Tools like remote code attestation, via the Intel SGX extensions or zkSNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge, a relatively new cryptographic primitive), may help when services in different jurisdictions need to maintain information about the software operation and the compromised or uncompromised state of another part of the system.  In some cases, these may even be useful for hosted services to attest their state to users.  Care must be taken with enclave systems like Intel SGX.  While the system may be useful in some scenarios, we do not have a good characterization of the likelihood that that system cannot be breached by a well-funded and well-connected state adversary, nor the likelihood that it ships already-compromised by USG.  Wherever possible, decentralize systems entirely and ensure that users are not trusting devices outside their personal control.

Conclusions

Taking coercion-resistant design seriously is a lot of work.  That said, in addition to the direct security benefits that more-hardened systems bring, structuring your team and its work around the security needs of users has cultural benefits that can sometimes outweigh the organizational friction.  Doing so pushes teams to keep user needs at the front of their minds and demonstrates the priorities of the organization's management team.  That prioritization and the ethical sense that comes with it can even be a last-ditch defense against state coercion itself.  The law can force organizations to tell teams to build backdoors, but it's much less clear if it can compel individuals to use their knowledge.  If everyone on your engineering team with the knowledge to circumvent design countermeasures makes it clear that they'll quit if forced to create a backdoor, this presents both an economic and a practical obstacle to warrant compliance.

If you try to do everything I've talked about in this essay, it will add a nontrivial amount of complexity to your organizational structure and add at least some headcount to most teams.  However, it's not all or nothing.  Developing a scaled and progressive approach to coercion resistance is a long-term process, but one I highly encourage teams to take on.  Most of the tactics I describe here have not been tested, although they broadly represent what folks looking at this challenge would consider best practice.  To take steps like these implies that you may end up as a test case in a fight you won't necessarily win.  Make sure you know the risks and costs going in and try to be a good test case if you end up there, for all of the rest of our sakes.  This means, among other things, choosing your battles wisely; consult with counsel and make a smart decision about whether a particular dispute with a government is the hill you want to die on, lest you risk making bad law for everyone.

Just as importantly, if you build infrastructure to make coercion-resistant development and deployment processes easier, release it openly and write about it, if possible.  If you do, you'll lower the bar for the next team that takes on the challenge.  Parts of the update structure I talked about are based on the excellent work by the team behind The Update Framework, who have built a great tool for hardening the update processes of collective community package repositories against covert compromise.  Extending their work with mechanisms for public signature records and update gossiping, if done properly, would mean future projects could just integrate a library instead of doing a lot of complex and error-prone development work.  The larger process of verifying the authenticity of binaries and proving that a specific binary traces back to a specific source tree is called binary transparency, and there's an increasing body of work around it.  Find those folks and collaborate with them when you can.  Similarly, share the legal structures you build to manage segmentation on sites like Docracy or Legalese.io, and document how those legal structures support and work with technical tools.  There is a risk from working in public like this, but it seems likely that the common good outweighs individual tactical risks.

Regardless of the tactics you choose, don't give up the fight.  States have no right to surveil by default, and definitely no right to damage collective infrastructure and harm the already-tenuous security of the people who depend on that infrastructure.  If you have legal or technical feedback on these tactics or specific experience in (or tales about) trying to implement them, I'd love to hear about it.

Thanks to Riana Pfefferkorn (@Riana_Crypto) for reviewing this (in a purely personal capacity), and to Daniel Kahn Gillmor for bouncing some ideas around with me, and for his ongoing work on binary transparency in Debian.


If you liked this essay, you can sponsor me writing more.  I've started a Patreon where you can pledge to support each essay I write.  I'm hoping to put out one or two a month, and if I can reach my goal of having a day of writing work funded for every essay, it will make it much easier for me to find the time.  In my queue right now are a piece on avoiding malware in email, something about what security strategy is and why you care, more updates to my piece on real world use cases for high-risk users, and a multi-part series on deniability and security invariants.  I'd much rather do work that helps the community than concentrate on narrow commercial work that never sees the light of day, and you can help me do just that.

Thanks again!
Eleanor Saitta
2016.03.31
Malmö