Ethics and Power in the Long War


I gave a talk on surveillance, ethics, economics, the balance of power, and our responses to all this at OHM (Observe Hack Make), the 2013 Dutch Hacker camp.  A number of political issues came up during the run-up to the camp around several of the sponsors who sold or supported government surveillance, and also around the responses of the organizing committee to public censure there.  Originally, my intent had been to skip the camp entirely, but I'd ended up in Amsterdam during it anyway, so I came and gave this talk.

I'd like to thank Quinn Norton for many conversations over the years that inspired a lot of what I said here; much of the credit is hers, all of the blame is mine.  I'd also like to thank the folks at Noisy Square, the political sub-camp within OHM that worked to collectively resist surveillance culture there, and who were my hosts for this talk and arranged both the talk and this transcription.  The transcribing was done by Lunar, hdez, kali, KheOps, pabs, zilog, and others, and they had it done, proofread, and online six hours after I stopped talking — a great collaborative effort.  I've edited their transcript lightly (here's the original) so it makes more sense in writing, but it's still a bit rougher than what I'd write as an essay.

I have a Patreon, here, where you can subscribe to support my security and systems-focused writing.  You sign up for a fixed amount per essay (with an optional monthly cap), and you'll be notified every time I publish something new.  At higher support levels, you'll get early access, a chance to get in-depth answers to your questions, and even some more general consulting time.

Dymaxion.org is me.  Along with public speaking, I consult, write, make art, take photographs, and work on a number of public projects.  You can hire me to do all of these things.

© 2017 Eleanor Saitta.


Listen to the talk

Why do people spy?  What is the economic structure of international mass surveillance?  How do the revelations of the past months interact with the tools we build and the mindset from which we build them?  What does all this mean for hacker culture, and how does this change the debate that was already happening around OHM?  What, really, do we need the Internet for anyway?  I'll try to leave you with more answers than questions, and with luck, hope for the future too.

For those of you who don't know me, my name is Eleanor Saitta.  I wear a bunch of different hats: I'm Technical Director at the International Modern Media Institute, located in Iceland.  I'm Principal Security Engineer at the Open Internet Tools Project.  I work on Briar, a decentralized, delay-tolerant messaging platform, and on the Trike threat modeling tools, and I do a bunch of other random things in my spare time.

Today, I'm here to talk about the kind of stuff that has been floating in the air around surveillance, and the interaction between surveillance, the state, and the people.  I want to start by saying that this is a lot bigger than our current moment and the worries about “oh, this government is doing this to these people, and what are the technical capabilities”, and all the stuff that isn't part of the story that has now turned into a soap opera at a Russian airport.  It's worth understanding, as we go into this, why the different sides think the way they do, and what mindsets are playing into this battle, because that's where we start seeing the long-term structure.  What we are actually looking at right now is something you could call a battle between “good” and “evil”.  But it's not a battle between good and evil in the sense of a titanic clash between the evil spies and the good freedom fighters.  What it actually is, is a battle over how we understand humanity.  Do we understand people as fundamentally good, or do we understand people as fundamentally evil?  This is one of those epic philosophical debates that we normally think of as not having much relevance to the real world, to the way it affects our everyday lives.  But it turns out to have a lot of relevance.

So: you believe in democratic free societies; you think they are very important, things that absolutely must be preserved; you believe that our ability to be humane to each other depends on the existence of a democratic society; but you also believe that people are fundamentally evil.  What do you do?  There's an entirely consistent and coherent mindset that says “people are fundamentally evil, and democratic society is an amazing and beautiful and scarce and rare flower that must be preserved no matter what.  However, democratic society cannot be exposed to the evils of people in the real world, because if it is, it will fail, it will fall apart, and it will die.”  If you believe that, you should get a job at the CIA, because that is the mindset that drives mass surveillance — that drives mass surveillance in the kind of Western democratic regimes where there is this understanding that democracy is important.  That democracy is too important to be left undefended.  Democracy requires “rough men doing evil deeds in the night” to preserve the space in which it can happen.  And the people who are practicing this democracy can never find out what was done in the name of that democracy, or they won't be able to do the things that they are doing.  Their innocence has to be retained, lest they be tainted, lest they be corrupted.

If you think that people are fundamentally good, then your life is in some ways much simpler, because you don't have to worry about people being tainted by the reality of things.  You can simply assume that if you build all these structures that let people coordinate and collaborate, it will probably work out OK.  They may go a bit off course sometimes, but we don't need the structures in the dark, and we can probably even go further and say that the democratic structures, our ability to have this kind of peaceful, beautiful flower of democracy or what have you, depend on those things not happening.

This is the battle that we're looking at right now.  Now, let's look a bit more at what good and evil actually are.  I don't think that anyone who is approaching this, like the folks at the CIA, thinks that North Koreans are an elemental evil.  What they do believe goes something like this: if you believe that most people are like you (and most people do), and you are selfish and want the people who are more like you to do better in the world, then you assume that everyone else is doing the same thing.  All “evil” means, then, is “oh, you are someone over there who is coming from a very different position than I am, [therefore] you want different things that are mutually hostile [to what I want]”.  Whereas if you believe that people are fundamentally altruistic, that they aren't just out to make their own group do better at everyone else's expense (which is in some ways the same thing as saying people are fundamentally good), then you don't need to fight and undermine those people.

This economic balance, this economic understanding of altruism and differential benefit, is going to underlie a lot of what I have to say today, so I'm going to come back to this theme in a bit.  However, another reason these people surveil is that we asked them to.  It's not like they just said, “Well, you know, this has to happen in the world, so therefore I'm going to do surveillance on behalf of the state.”  We did actually ask them to do this surveillance.  There are budget items that say, “Oh, please run a surveillance agency; here is however much money to do it.”  The reason we all collectively ended up asking those people to do it, in many cases many years ago, with it just kind of continuing ever since, is geopolitics.  If you are trying to run a nation state as a nation state, geopolitics is the law of the strong.  There is this notion that there is international law and that you can somehow say, “Oh, you invaded me, so I'm going to sue you.”  No, it doesn't work that way, or at least historically it hasn't worked that way.  This isn't to say that we can't have structures that aren't the law of the strong at the geopolitical level, but we don't have them right now.  So nation states require this kind of surveillance in order to function.

The modern state depends on having an intelligence function.  I should be clear that saying this is not to excuse the intelligence function; it's to understand what the set of power relationships here is.  The state depends on having an intelligence function because the state depends on being able to deny others access to territory.  Now, if you're in a globalized situation where state borders are porous, where people flow back and forth, where ideas flow back and forth, where trade flows back and forth, you can't simply build a wall.  Certainly before World War One, and even in the lead-up to World War Two, there was this notion at the diplomatic level that “gentlemen do not open each other's mail”, that you didn't and shouldn't need to do this kind of deep, dirty intelligence work.  But as the world has become more globalized, as the world has become more porous, the need perceived from the state's side to do surveillance has grown, because they don't have any other way of understanding and controlling the state's structure.  As the state's structure becomes in some ways flimsier, it has to hang on tighter to be able to maintain the same centralizing governance structure.

One of the things that I hear a lot as people are trying to deal, sometimes psychologically but also technically, with the outcome of the massive pile of revelations that has been dumped in our lap… well, revelations and confirmations, which doesn't seem likely to end any time soon, thankfully.  One of the things that I hear people saying is “we need to make policy, we need to ensure that these things don't happen any more without oversight”, which I think is a great idea, but it is sadly kind of ridiculous.  Policy doesn't matter around surveillance, for a few reasons.  If you look at the historical record of surveillance structures, we've never seen a modern state, without going through a revolution or something similar, roll back deployed operational and technical [surveillance] capabilities.

Once something is fielded by an intelligence-gathering team, and assuming it stays funded, if it is in the field, if it is working, if it is actively producing useful intelligence, it stays there.  Pretty much no matter what, as far as we can tell.  The NSA did, very politely, turn off their telegram surveillance program in 1975.  It had never in its entire history produced anything useful.  So that's our one example of a technological capability being rolled back.  So much for history.

If you build a capability, limiting it similarly does not work, because of a notion you might call pernicious efficiency.  If you have a functional system that is deployed and is useful for one thing, then “oh, we can use that for that other thing too; oh, we can use that for that third thing too”, and the capability naturally expands over time.  It is very difficult to stop, because policy does not have much of a hold on things.  As soon as policy weakens for a moment, there's an “oh, we need that over here too”.  There's always something else you could do with the capability.

Policy relies on political enforcement.  If you can't politically enforce your policy, then there's very little likelihood of it being functional in the long term.  Now, I don't know about GCHQ and the Germans and the Dutch, but the NSA taps every politician's email, and even that of anybody who might become a politician.  They tap their phones, they tap their email.  Partially they do this for national security reasons: they need to know if those people have sold out to somebody, or any number of other justifications.  But it also means that they have all the dirt on all the politicians.  Now, how easy do you think it is to keep a politician bought if their career can be destroyed at any moment?  How strong do you think their long-term political will is going to be when they know that if they stand up too much, they just get destroyed?

Clean politicians don't win elections.  Clean politicians aren't allowed to win elections.  And besides, there just aren't any clean people in the world.  Everybody has something that's embarrassing enough to ruin their career, if the NSA decides they want to ruin someone's career over it.

And really, policy doesn't matter because policy isn't the level at which these decisions are made.  These decisions are made on purely milito-economic terms.  The economics of spying is the structure that controls whether or not spying is done.  The notion of return on investment is very germane here: how much intelligence product are you going to get for a given investment?  That is what determines which intelligence methods are used.

Compliance with legal structures is a cost in this calculus.  If I decide that I don't want to comply with a legal structure, I may have to budget so much extra for getting the right politician elected, or so much extra for getting the law changed, or for buying the creation of a secret court, or whatever the thing is that I need to swing my way to do this.  But if you are looking from a purely economic perspective, it's really just a cost of rolling out some new spying technology.  You can assign a dollar value to how much it's going to cost to push that through.  If you decide the cost/benefit analysis is worth it, then maybe you change the societal structure, the legal structure, to allow you to do the piece of intelligence work that you think is necessary.  Political fall-out, similarly, is just a minimizable cost structure over time.

Your primary target is intelligence product per dollar.  But you're also interested in coverage: having really deep intelligence in just one area isn't particularly interesting.  If you are an intelligence agency, you need deep intelligence in specific areas, but also an understanding of a broad spectrum.  You also need flexibility: you don't know what your intelligence needs are going to be next week.  So this leads you to look very aggressively at things like full-take surveillance, because full-take surveillance gives you the maximum amount of flexibility.  You can decide at any point where you want to go deep.  You can decide five years later that you need to go deep somewhere else.

Cost is an interesting structure here, because of [how it relates to existential] risk.  There was a mindset shift that happened, at least in the US, and really throughout the black state, throughout the intelligence world, during the Cold War: all of a sudden, intelligence failure became an existential risk.

This was a thing that was literally intolerable, because if you failed, everyone in the country might die.  At that level there is no cost which is too great.  There is the question of what cost you can actually afford to spare, versus the other things you need to do with that money, but there's never a level where you've overspent on intelligence if it's useful in preventing that existential failure.

9/11 in the US was another instance of this.  It was another case where things suddenly became an existential crisis, and it completely reoriented the intelligence community in the US.  In the early nineties, the director of the NSA and the director of the CIA refused to talk to each other.  They literally could only communicate through intermediaries, because the inter-organizational hatred was that deep.  That vanished in a few months.  This is very interesting if you're looking at what this [cost] balance looks like.  The fact is, though, that it's still fundamentally about return on investment.  This is really good for us, it turns out, because we can shape the market of their return on investment, or rather, we can shape their cost structure for different kinds of intelligence work.

There are a few reasons why they do take some actions which don't make any economic sense.  Graft is one: “oh, my buddy has a company that does that” — “oh yeah, we totally need that”.  Political stability is another interesting one, even if it's entirely internal: “oh, they're activists and we don't like what they're doing, because the government might change on us”.  If you are the NSA, you don't want the government to change.  Not because it's necessarily going to be an existential event for the country, but because any kind of political instability is just bad for business.  So you end up going after a whole range of dissidents who are maybe not so important for deep intelligence reasons but who matter in terms of political stability.  Finally, the NSA's first priority isn't the survival of America, it's the survival of the NSA, which produces all sorts of non-economic decisions.  If you look at it purely in terms of external cost per intelligence product, as return on investment, they do things that don't make sense, because those things are aimed not at the survival of the US but at the survival of the NSA.  The same applies to any other intelligence organization, but I'm using the NSA as the example because it's the most topical one.  Regardless of all these non-economic choices, though, ROI is still what fundamentally rules decisions.

The security community, if you evaluate it from this ROI perspective, has been doing a lot of really weird stuff over the last twenty or thirty years, much of which doesn't make any sense.  We have some really great maths, very little of which ends up keeping people secure in the field.  There is this sort of truism in the community that “everything is broken”: if you have an Android phone, you cannot secure how the code is delivered to the phone, and SSL is broken, and we know there are all sorts of bugs in the operating system, and the baseband is completely owned, and the SIM toolkit too, and the hardware is manufactured on untrusted assembly lines, so there is really nothing we can do; “everything is owned”.  This turns out not to be that relevant in the real world, because what we actually care about isn't whether something is secure theoretically; it's: did someone get away with the thing they were trying to do?  I spent nine years as a commercial consultant, thinking about security mostly from that mindset of “what is the theoretical structure of ‘can I trust this thing?’”.  Then I started to spend more time in the field and very quickly realized: no, I don't really care what the theory is.  I care: did my friend get away from the police?  And sometimes: were they able to maintain some sort of long-term subterfuge?  And sometimes: did they have the five minutes they needed to make it to the airport?  When you look at the world from this kind of outcome-oriented perspective, it, among other things, plays very well with this understanding of intelligence's return on investment, because it's the same game.  It's not “could this dragnet theoretically have picked up some piece of intelligence”; it's “did it? was it actually useful?”

The things we need to focus on to shift the structure of intelligence gathering, the things that we can actually achieve, are these: stopping full-take (“we're just going to surveil everything coming across these cables”) surveillance, and protecting the social graph.  These are the two things that are most deleterious to free society, and also where we have the most leverage in the current situation.  SMTP, the protocol for email between mail servers, is an interesting example of this.  By default, SMTP between two servers is not encrypted.  You may have an encrypted connection from your mail client, whether that's webmail or a thick client running on your laptop, to your mail server; but then that mail server talks to another mail server, the one where your friend has their mail service, somewhere across the Internet, and that connection across the backbone is completely unencrypted.  This is why the NSA is able to just scoop up most of that mail — it's just sitting there on the wire.

There are two modes of encryption between email servers: you can either require that connections between two mail servers always be encrypted, or you can encrypt opportunistically.  The former would be difficult to roll out, because most mail servers don't support encryption right now.  However, if everyone starts saying “I'll ask first whether the other server supports encryption, and if it does, we'll upgrade to that”, it buys you a massive amount.  If an intelligence agency (or anyone else who can get between those servers) wants to mount an active attack, they can still get the mail: it's very easy to just pretend that the other server said “no”.  But that's noticeable, and the attacker has to do something on the wire.  It's no longer a passive attack, and it's easy for mail server operators to build monitoring infrastructure that notices when a server they normally speak to encrypted suddenly claims not to support it.  Revealing active intelligence capability in a context like that is very expensive for the agency.
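
To make that monitoring idea concrete, here's a minimal sketch in Python of what a mail operator could run, using only the standard library.  The peer list and baseline filename are placeholder assumptions, and a real deployment would more likely watch its own mail logs than actively probe, but the shape is the same: remember which peers normally offer STARTTLS, and alert when one of them suddenly stops.

    #!/usr/bin/env python3
    # Sketch: detect possible STARTTLS downgrade attacks by comparing
    # each peer's advertised capabilities against a recorded baseline.
    import json
    import smtplib
    import socket

    BASELINE_FILE = "starttls-baseline.json"   # {"mx.example.org": true, ...}
    PEERS = ["mx.example.org", "mail.example.net"]  # servers we routinely talk to

    def offers_starttls(host):
        """Connect on port 25 and check whether EHLO advertises STARTTLS."""
        server = smtplib.SMTP(host, 25, timeout=10)
        try:
            server.ehlo()
            return server.has_extn("starttls")
        finally:
            server.quit()

    def main():
        try:
            with open(BASELINE_FILE) as f:
                baseline = json.load(f)
        except (IOError, ValueError):
            baseline = {}  # first run: no history yet

        for host in PEERS:
            try:
                now = offers_starttls(host)
            except (socket.error, smtplib.SMTPException):
                continue  # unreachable; no downgrade evidence either way
            if baseline.get(host) and not now:
                # A peer that used to encrypt suddenly says it can't.
                # That's either breakage or someone active on the wire.
                print("ALERT: %s no longer advertises STARTTLS" % host)
            baseline[host] = now

        with open(BASELINE_FILE, "w") as f:
            json.dump(baseline, f)

    if __name__ == "__main__":
        main()

If that alert ever fires for a healthy peer, either their operator broke something or someone on the wire is pretending to be them; either way, it's worth a phone call, and that's exactly the visibility a passive attacker can't afford to trigger.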

If the NSA decided to man-in-the-middle every mail connection at once, people would notice.  It would be a diplomatic problem globally.  It would mean leaving the quiet, background surveillance position they need to operate from.  Now, in theory, according to the security community, you haven't actually provided any real additional security, because it's so trivial to downgrade; from a theoretical perspective, there's nothing there.  However, we've completely shifted the real-world outcome, and we've completely shifted the return on investment for doing that kind of passive snooping.

There are a lot of different places where we can make the same kind of shift.  This is interesting when you're looking at designing security systems: “did you get caught?”, “could you perform the mission?”, and “did you see the failures and correct for them?” are equally important questions.  The only one of those that's directly influenced by the maths of traditional security is “did you get caught?”; the rest are seen as usability questions.  And even “did you get caught?” is informed as much by operational security practices as by theoretical system security: “does the user understand how the tool is supposed to work?”, “can they actually use the tool correctly?”, all of these other questions.  Security is in many ways mostly a usability problem.  Certainly, the usability part of it is at least as hard as all the rest of it.  We in the security community have concentrated on theory to such a degree that it has warped our ability to understand what the actual problems we're trying to solve are.

Part of this is because the security community doesn't have any real problems.  It's a bunch of white guys who can afford to spend time hanging out in tents in the middle of the very nice Dutch countryside.  I wouldn't want to wish real problems on the community, but this means you don't understand the position of someone who's in the field, someone who's actually depending on this tool working to determine whether they, or their cousin who barely knows how to use a computer, survives the night.  Once you put yourself in their shoes, you start looking at the tools we build in a very different way.  I don't blame people for the standpoint they come from; it just happens.  But it's time to start expanding our viewpoints on the world.

I want to try to give you some hope about the future even though everything is broken.

Encrypting all the things isn't enough.  Encrypting all the things will be hard, but it isn't actually enough.  However, there are things we can do, in addition to encrypting all the things, that will actually make a difference.  If we start decentralizing all the things, that makes a real difference.  One of the reasons the NSA has been so successful is that, when it's too inconvenient to break the security of a protocol or capture information off the wire, they can just show up with a letter and force companies to comply.  There are lots of other places where this can happen.  We don't know that much yet about who else is trying to compel companies to do things, but I'd guarantee that if the NSA is doing it, lots of other national intelligence organizations are doing it too.

Let's say I've got a hardened, encrypted system where you have to physically take a hard drive out of the machine to get at my data, and my data is on a server somewhere in a rack who knows where, run by another company.  You don't have to talk to me to get at that machine, and I never have to find out about it when you take that hard drive.  If I own the rack it's running in, then maybe you have to talk to me, or maybe you can just talk to the people who run the colo, but still, I'm a little better off: at least someone I trust is going to come and ask why the machine is down.  I have some visibility in there.  And if that hard drive is in my pocket, you are going to have a relatively difficult time taking it without my knowledge.

So, decentralization, when you do not have the rule of law as a protective structure, which we don't, is an incredibly, incredibly critical tool.  This means that we need to stop using an Internet that is built out of services: APIs are counter-revolutionary.  It's over.  We need to stop relying on centralized services.  We just can't do it anymore.  It's impossible to build a free Internet that is centralized.

However, we can't just say “well OK, we're going to build a bunch of protocols and then everybody will decide to use them”.  Because, if it hasn't worked so far, why should it work now?  We do have a moral high ground, but that's not enough.

We're doing a lot better this week than we were a few months ago at being able to say “you know, really, guys, it is important, it matters, you need to do this”, but it's still not enough.  We don't just need protocols, we need protocols with business models.  We need some way to be able to say “yes, this development pays for itself; we have the money to hire a real UX team and hire designers and a marketing team, and all of the other things that you do in the commercial world if you're actually serious about building software.”  It turns out that programming is the easy part.  Everything else is the hard part.  Adoption is harder than development.  Design is harder than feature-completeness, and that's the part we do not have right now, that we have to learn, and learn in a hurry.

Honestly, the user model is the thing that needs to come first.  Let's say I release a web chat client that is really easy to use, and people love it, and the security really sucks.  You know what?  It turns out that we can fix that; we can totally fix that.  It doesn't matter where the system's security started, because people are actually fucking using it, and that's the part that actually matters: that they can use it.  If we start with a user model and a set of security properties we're trying to provide, and we know the properties are achievable and, much more importantly, we know how to explain them to the user, we can build a tool that is actually interesting to users, even if it doesn't implement those properties as well as it should yet.  If we've got something that people actually care about, we can fix the implementation later.  Now, I don't think we should do this — but we can.  We know how to take an insecure implementation of something that makes sense to users and makes sense at the requirements level, something that can ship and work in the real world, and get it to the point it needs to be at to deploy it in the real world.  We at OpenITP are working on a few projects to make this process easier, which I can tell you about later if you're interested.  However, the tool has to make sense first.  You have to start with design.

I know there are a bunch of people here who work in NGO land.  I'm one of them.  I'm going to speak to a few different groups here, starting with the NGOs.  NGO land, we have a problem.  We are concerned about building careers, we are concerned about our space, we are concerned about the security scene…  Fuck all that!  What we're supposed to be doing is trying to solve the problems.  Different organizations get this and different organizations don't — this isn't universal, but I see a lot of people in this field (which I just kind of walked into, and I'm going to be the bull in the china shop here for a while) who are doing things that don't look like they are trying to solve the problem.  I know what trying to solve problems looks like, and it's not “well, you know, this isn't really the project that we should be doing, but it's what we think we can get funded, maybe”, and it's not “well, I actually don't know if this is going to have any real impact…”.  Come on!

Yes, I understand that we have to deal with funders, but if the funders will not fund the projects that are going to make a real impact, then let's fucking talk to the funders and get them to change what the hell they fund, because we can do that.  This is ridiculous.  We cannot keep wasting time and money on dumb projects that look really cool and get us respect within the community but don't actually help people on the ground.  I don't want to see help desks run by people who don't have the depth of skill to actually help the people they're helping — that doesn't do anyone any good at all.  There are a billion other examples; that's just one off the top of my head.

Security people, hackers: we also have a problem.  We have massive fucking egos, and they are getting in the way.  I don't care about your egos.  My friend Zineb, who is trying to keep her seven-year-old daughter from getting the shit beaten out of her by Moroccan riot cops, does not care about your egos.  When I don't have any tools because I get told “oh yeah, GPG is totally something that you can teach in the field”, fuck you!  Egos don't matter.  If you do a cool thing, everybody is going to say, “Hey! You did a cool thing.  Really awesome!  Let's go and do other cool things together.”  There are enough of us that if we work together, we can do some pretty amazing shit.  If we all dick-wave, we can't do anything useful.  I have been around the security community for a while, and I can do the whole “oh, your idea sucks, my idea is better!” dynamic of so-called collaboration just as well as anyone.  It's not useful.  Let's try to stop harassing each other, and instead have polite technical conversations like professionals do in the real world.  This is a crisis, not a career, and we don't have time to play those games anymore.

Another thing, and this is interesting for a lot of projects, is that I see a lot of stuff getting built without a theory of change.  A lot of “this is a cool tool that I can build”, without an understanding of “hey, there's a larger battle going on; how does this change the overall landscape?  Where is this going?  What am I actually trying to do?”  Here's an interesting little example.  Right now we have systems that are incredibly brittle.  When they get compromised, they just get owned.  We're very bad at notifying users about failures in useful ways, and our systems fail atomically — an entire system fails at once.  Because of this, the user generally doesn't have a way of noticing and understanding what is going on.  Maybe they notice that their phone is acting funny, or maybe they don't notice anything, because the guy who wrote the malware knew what he was doing for once.

If you take this as one of the fundamental problems in security, which it is, then one of the interesting things is that in most situations you have an incredibly complex and very powerful pattern-matching CPU hooked up to your system that you're not using.  It's called the user.  Let's look at a sample system and see how we can segment the problem.  Say you have a phone.  Instead of it just being a phone, split it into a 3G Wi-Fi dongle and a tablet.  Say the dongle has some LEDs on it that aren't being used for anything, and you pack a firewall and a deep packet inspection tool onto it.  If it sees anything that it thinks looks weird, it lights up an LED.  If it sees something that looks like voice traffic, it lights up an LED.  You see an LED, you look at your phone, and you think, “wait, I'm not on a call right now; why does this dongle have its voice traffic LED on?”  Now you can do something.  Maybe you're still owned, and you don't know any techies and can't really do anything about the compromise, but you can say, “my phone is acting weird, and I think it's either owned or the dongle is broken.”  You know something in this set of things is no longer acting correctly.  Now you can take some kind of corrective action.  Maybe you're going to put your phone in the fridge and have this conversation in another room, or maybe you're going to leave your phone on the bus so it rides around town for a few hours while you go to the airport, or whatever you need to do.  What you've done here is take an understanding about the state of the world — our systems fail atomically and fail to notify users — and drawn it all the way through a bunch of tech development to something that changes user outcomes, in a way that's consistent with that high-level understanding of the shape of the world.
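
As a rough illustration, here's what the dongle's voice-traffic check could look like, sketched in Python with scapy.  The RTP test is a deliberately crude heuristic (real voice detection would be messier), and set_led() is a hypothetical stand-in for whatever would actually drive an LED on real hardware.

    #!/usr/bin/env python3
    # Sketch: watch passing packets and light a "voice" LED when
    # something RTP-shaped (the usual bearer for voice calls) goes by.
    from scapy.all import sniff, UDP

    def looks_like_rtp(pkt):
        """Crude heuristic: RTP rides on UDP, its header is at least
        12 bytes, and its first byte carries version bits '10'
        (i.e. 0x80 after masking the top two bits)."""
        if UDP not in pkt:
            return False
        payload = bytes(pkt[UDP].payload)
        return len(payload) >= 12 and (payload[0] & 0xC0) == 0x80

    def set_led(name, on):
        # Hypothetical hardware hook; here we just log to stdout.
        print("[LED %s] %s" % (name, "ON" if on else "off"))

    def inspect(pkt):
        if looks_like_rtp(pkt):
            # Voice-shaped traffic on the wire: light the LED so the
            # user can ask "am I actually on a call right now?"
            set_led("voice", True)

    # Needs root to capture; runs until interrupted.
    sniff(prn=inspect, store=False)

The point isn't the detection accuracy; it's that even a crude signal, surfaced where the user can see it, turns an invisible compromise into something a non-technical person can notice and act on.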

This is what we actually need to do.  We must put users in a place where they can affect the outcome for themselves.  This means starting with an understanding of the world and a theory of change, looking at a security problem, and driving that theory all the way through.  If you're not doing that and you're designing tools, you're probably wasting everyone's time.

Hacker culture is at a crossroads.  For a long time it was totally cool to say, “you know what, I don't really want to be political, because I just like to reverse code and it's a lot of fun, and I don't really have time for politics because I spend thirteen hours a day looking at shellcode and socialism takes too long.”  That was great for a while, but we don't get to be apolitical anymore.  Because if you're doing security work, if you're doing development work, and you're apolitical, then you are aiding the existing centralizing structure.  If you're doing security work and you are apolitical, you are almost certainly working for an organization that exists in great part to prop up existing companies and existing power structures.  Who here has worked for a security consultancy?  Not that many people.  OK.  I don't know anyone who's worked for a security consultancy where that consultancy has not done work for someone in the defense industry.  There are probably a few, and I guarantee you that the consultancies that have done no defense-industry-related work have taken an active political position that they will not touch anything that is remotely fishy.  If you're apolitical, you're aiding the enemy.

No neutral ground means that we have to have the culture war, that you have to say “you're either with us or against us”.  Not “well, I guess that if democracy happens to occur, I'm not against it, but…”  No.  No, we're done.  There is this notion, talked about in the context of harassment, that you accept what you walk by, that your standard for conduct is what you walk by and ignore.  So if you see somebody putting someone down, but it's not like it's a big deal, it's not enough to write them up over, and you walk by it, well, that's your standard for the behavior you accept.  That is your baseline, and if that happens everywhere, then it's totally fine by you.  And it will happen everywhere, because the standard is what you will ignore.  The standard is where you'll say, “oh, I don't really want to have that fight right now; it's not such a big deal; I'm just going to go get a beer.”  Yes, that's your standard for what you'll accept in the world.  So if you don't want to do that, fucking speak up.

This is why I was boycotting.  I am still boycotting this event.  I was not going to be here, and then I had a free day in Amsterdam and DrWhax said, “I'll give you a stage and you can show up and rant.”  So I'm here and I'm ranting.

Yes, we have a culture war going on.  This isn't about hacker in-group politics, though.  The culture war is the big culture war.  It's the fight for the narrative of what humans are.  Do you believe that people are fundamentally altruistic, or do you believe that we will stab each other in the back for a loaf of bread at a moment's notice, and that if you are not kin and kind, then fuck you, I'll stick your head on a pike?  Whichever world you want to live in, you get to choose which of those is true.  This isn't just about what is true, but about what we want to be true.  You get to build the world you want to live in, and your actions decide which of these is true.  That's what the culture war we're having right now in the hacker scene is actually about.

This war has real costs.  This is not free; it is not easy.  It has costs at many, many levels.  It has costs within our community.  You're talking about maybe turning Noisy Square into OHM next time, about running the Dutch camp in four years.  If that happens, it will probably split the Dutch scene for real.  There will be two camps, and one of them might or might not happen, depending on how the funding goes.  That has a real cost, and it means that there will be real fights; there will be people who won't come, friendships that will be broken.  There were a lot of people I wanted to see this year who decided that they really couldn't come after the behavior of the organizing committee.  I miss those people.  I would have loved to see them here.  I would have loved to be here myself the whole time.  This has real costs, because it is a real fight.  People get hurt.  It matters, though, and we don't get to say no.

Back to that bit about the power of the state.  Back to that bit where surveillance is inevitable as long as you have a state.  Which it is, as long as you have a state that wants to centralize all the power, that wants to hold on, that is going to be the single centralizing entity with a monopoly on violence.  As long as you have empire, yes, you are going to have surveillance.  It is an inherent problem in the Westphalian compromise.  For a long time the Westphalian compromise sort of, kind of, maybe worked, or we could at least pretend it worked a little more easily.  But the state has been captured by a lot of other centralizing structures.  There really is no such thing as an independent state anymore.  The closest you get is North Korea, which is really just a client state of the Chinese anyway.  You don't have independent states, because money has gotten into states, because global corporations and the global rich have bought those states.  And now the states also buy the global rich.  It's a mutual cancer of centralizing structures.

In this context, surveillance is going to continue to be maximally deployed, and a functional public, a functional democracy, a functional dialog can't happen under surveillance.  As soon as you start organizing to publicly express an unpopular opinion, you get banned, you get jackboots at your door, because they have heard every word you have said.  Or you never have the idea of organizing dissent in the first place, because you live in Singapore and the brutality is right there out in the open.  As long as you stay in the mall, it's cool.  But if you like independent media… “Why would we have an independent media?  The media exists to serve the state.”

So if we want to have something that resembles democracy, given that the tactics of power and the tactics of the rich and the technological and topological structures that we exist within have made that impossible, then we have to deal with this centralizing function.  As with the Internet, so the world.  We have to take it all apart.  We have to replace these structures.  This isn't going to happen overnight; this is a decades-long project.  We need to go build something else.  We need to go build collective structures for discussion and decision-making and governance that don't rely on centralized power anymore.  If we want to have democracy, and I'm not even talking about digital democracy, if we want to have democratic states that are actually meaningfully democratic, this is simply a requirement now.

We cannot build a free world on an unfree Internet.  You cannot build functionally decentralized Internet-centric democratic structures on an unfree Internet.  It's like the CIA trying to build a free democracy on a legacy of treachery and murder.  It just doesn't work.

So let's fight.

Eleanor Saitta
3 August, 2013
OHM, Netherlands