Terrorism and crime, in the modern age, are inevitable.
A perhaps hyperbolic and somewhat unfortunate statement to open with, but true nonetheless. I’ve been sitting on this blog post for some time, trying to find the right words, but I keep coming back to that realisation.
It’s no wonder, then, that when some of the most influential security officials in the world stand at lecterns, or scribble column inches about how “our privacy has never been an absolute right” and how technology is turning the Internet into a “command-and-control network” for terrorists and criminals, I get more than a little frustrated.
There is a wealth of areas I could call out in the debate around privacy and how it balances with security, but I’m going to try to be selective. Specifically, I want to challenge the ideas that:
- just because we know how to better surveil people in the modern age, we have a duty to do so
- the technology sector is in denial about its role in protecting the interests of the public at large
- through greater surveillance, we can protect people
Predominantly, these are challenges to statements allegedly made by the Director of GCHQ, Robert Hannigan, quoted in the Guardian late last year, but they apply equally to comments made by security officials of various guises around the world.
Just because we can, doesn’t mean we should
Every day, people generate an inordinate amount of data and use the Internet to transmit it. Increasingly, we live our lives online, and the more we use the Internet, the more data we’re putting out there.
Much of that data is willingly and deliberately put into the open. When you post a tweet or a Facebook status, you’re making that data public. Even when you’re making these posts from “private” accounts, for all intents and purposes you’re putting information freely into the public domain.
But our use of the Internet is more than just posting public statuses to the gorging masses for our collective social media vanity projects. We’re also (despite many of us wishing people wouldn’t) still sending millions of emails every day. We’re writing documents, creating slide decks, buying our weekly food shop and much more besides; all on the Internet. And we believe, perhaps mistakenly, that the vast majority of that data is private. We’re not deliberately or consciously putting that information into the public domain.
So, for the sake of argument here, there are two kinds of data: public and private.
Your public data is fair-game
Security services have been able to interrogate your public data for a long time, assessing it for anything they might consider untoward behaviour using whatever technologies they keep under lock and key in places like GCHQ in Cheltenham.
And in many ways, that’s no big deal. You chose to put it out in the open, so you can’t really make a principled stance against it being used to help James Bond stop MI6 being blown to smithereens. If you’re willing to let Facebook use public data to sell you crap you don’t need, then catching a bad guy as a by-product doesn’t seem so bad. (I’m yet to hear about a crime that’s been squashed because the baddies tweeted their plans, but hey, you never know!)
So using public data? No biggie.
Your private data is apparently fair-game too
The leaks made by Edward Snowden exposed the fact that security agencies have, for some time, been able to access data that we might ordinarily have considered “private”. I’m talking about your email, your instant messages, the documents you store online; basically anything that you’d normally expect only you, and maybe an intended recipient, to see.
They’re able to collect, log, analyse and use it for whatever purpose they deem necessary. In principle, they’re using this data to find needles in haystacks. In practice, that means treating everyone as a suspect of wrongdoing with no prior evidence.
Some of this data is believed to have been collected by so-called “man in the middle” attacks, where data transmitted from point A to point B is intercepted on the way by security services. Other data is believed to have been collected through direct access to the infrastructure of the companies holding it.
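To make the idea concrete, here’s a minimal, hypothetical sketch in Python (not modelled on any real protocol) of why unencrypted traffic is so easy to intercept: a relay sitting between sender and recipient sees everything, and neither end is any the wiser.

```python
# Toy illustration: when data travels between two parties in the clear,
# anyone relaying it can read and log the content without either end noticing.
def send_via_relay(message: str, relay_log: list) -> str:
    relay_log.append(message)  # the "man in the middle" keeps a copy
    return message             # ...and passes it on unchanged

intercepted = []
delivered = send_via_relay("see you at the station at 6", intercepted)

assert delivered == "see you at the station at 6"      # the recipient gets the message as sent
assert intercepted == ["see you at the station at 6"]  # but so did the relay
```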
Then there’s metadata
There’s also a third kind of data: metadata. Metadata is data about data, such as call records and browsing history, and sometimes it’s private, and sometimes it’s public.
I’ll cover metadata, and its shortcomings for things like crime prevention, in another post. The important takeaway is that this information is both useless and useful at once. Metadata can prove or disprove almost anything you want it to, because it lacks context.
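A hypothetical example makes the point. Sketched in Python, with every value invented, a call record tells you plenty about the shape of a conversation and nothing about its substance:

```python
# A hypothetical call-detail record (all values invented): it captures who
# called whom, when, and for how long - but not a word of what was said.
call_record = {
    "caller": "+44 7700 900123",  # fictional numbers from the UK's reserved drama range
    "callee": "+44 7700 900456",
    "timestamp": "2015-01-10T02:14:00Z",
    "duration_seconds": 95,
}

# The same record fits very different stories - a call to a helpline, a family
# emergency, or something sinister - and nothing in the metadata itself can
# distinguish between them.
assert "content" not in call_record
```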
Context is key
Context is the crux of my issue with the idea that, because we can access and analyse the data made available through technology, we have a duty to do so to prevent terrorism and crime.
Almost all data lacks context. Through the analysis of our public, private and meta data, intelligence agencies can build a pretty invasive picture of what we do on a daily basis. The problem is, there’s no way for law enforcement to know whether that data is accurate.
Worse, without that accuracy and certainty, guilt before innocence is the assumption. And it’s at that point that your own right to privacy is being used against you. Just because we can access all of this data, doesn’t mean we should - because it doesn’t necessarily lead us to the right outcome.
It’s the presumption of security agencies that they have an obligation to interrogate this data, that they can do so without consent, that they can do so without a full understanding of the context, and that they can do so en masse without reasonable suspicion, that is driving different kinds of behaviour. And that behaviour is coming not just from criminals who are trying to hide from detection, but from the technology sector itself.
The tech sector is protecting your interests
There is a seemingly common belief amongst senior officials across the globe that technology companies are doing the wrong thing. Officials believe that the likes of Facebook, Google and Apple are complicit in creating safe havens in which terrorists can hide, and that they’re doing so through things such as end-to-end encryption. Ipso facto, technology companies are helping terrorists, endangering people’s lives and don’t have the public’s interest at heart.
Robert Hannigan said as much himself:
“To those of us who have to tackle the depressing end of human behaviour on the internet, it can seem that some technology companies are in denial about its misuse.
“I suspect most ordinary users of the internet are ahead of them: they have strong views on the ethics of companies, whether on taxation, child protection or privacy; they do not want the media platforms they use with their friends and families to facilitate murder or child abuse.”
Putting aside the deliberately emotive examples and scare-mongering, which to my mind shows there is no decent basis or evidence to support the argument, the implication here is that tech companies that use encryption harm the public.
This is utter nonsense. The use of encryption technologies is - by its very nature - designed to protect the public. It protects their money, their intellectual property, their privacy, and their identity. Yes, a by-product of that choice is that it makes bad things more difficult to find, but frankly, that’s kind of the point. The whole reason for using encryption is so that a third party doesn’t get access to your data; it doesn’t matter whether that’s a cyber criminal trying to hack into your bank account, or MI5 having a snoop in your inbox. If the security services have such a big issue dealing with encryption, then that’s probably a good thing - because it means that when criminals target the public directly through online methods, the public are probably safe.
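To illustrate that point, here’s a toy sketch in Python. It uses a one-time-pad XOR purely for demonstration - real systems use vetted ciphers such as AES, not this - but the principle is the same: an interceptor without the key sees only noise, while the intended recipient recovers the message.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte of data with the matching key byte.

    For demonstration only - real products use vetted ciphers (e.g. AES).
    The principle is the same: without the key, intercepted data is unreadable.
    """
    return bytes(d ^ k for d, k in zip(data, key))

message = b"transfer my savings on friday"
key = secrets.token_bytes(len(message))  # shared only by sender and recipient

ciphertext = xor_cipher(message, key)    # this is all an interceptor sees
plaintext = xor_cipher(ciphertext, key)  # the key holder recovers the message

assert plaintext == message
```

Applying the same XOR twice with the same key cancels out, which is why decryption here is just a second call to `xor_cipher`.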
The greater encryption of our data is being painted as a nuisance that’s stopping us from stopping crime when, in fact, it’s doing the opposite. If anything, using encryption technologies is a kind of public service.
Technology companies aren’t in denial about how technology is misused. They’re more than aware of it. That’s why they do things like encrypt your data.
More surveillance doesn’t mean we’re safe
It should be obvious, then, that weaker protections of our data will, in at least some ways, make us less safe. But that’s not where I started my argument. I put my central argument right in the first sentence of this blog.
Terrorism and crime, in the modern age, are inevitable.
I chose to start with that sentence on purpose, because it runs through each of the challenges I’m making. To recap:
- without context, access to data doesn’t guarantee safety
- using weaker technology protections won’t make us safer
And my final assertion is this:
- more surveillance will not make us safe
Why am I so sure? Because surveillance is tackling the issue of crime and terrorism from the wrong angle. It’s trying to fix symptoms without trying to resolve the root causes; like treating a hangover when you could have consumed less wine the night before.
Surveilling all of our communications, interrogating all of our data, and treating everyone as a potential suspect is not going to stop the world’s next major terror attack. It didn’t stop 9/11. It didn’t stop 7/7. It didn’t stop the Madrid bombings, and it didn’t stop the Charlie Hebdo attacks.
That’s because focussing on surveillance is putting all our eggs in the wrong basket. We’re creating technological solutions to human problems and focussing on effect, not cause. Sure, we’ll get lucky at some point and catch the odd crime through laziness or sloppiness, but, if anything, greater surveillance and greater suspicion is probably making the situation worse. By claiming that “if there is nothing to hide, there is nothing to fear”, you set up a situation where the state and the citizen are in constant conflict. That’s not how you fix the cause; it’s how you exacerbate it.
We need a new narrative on privacy in the digital age
Security agencies are predisposed to ignore our privacy. They’re using any tools and any data they can lay their hands on to create a culture of mistrust and fear. And ultimately, for what? A wild goose chase that probably won’t keep us safe from the very things they claim to want to protect us from.
People like Robert Hannigan are right when they call this out as the time when we have to rebalance the relationship between the technology we use and the way we protect the public. But not in the way he thinks. Where GCHQ would see greater surveillance and greater distrust, the public won’t be so naïve. The real question facing us right now is how much privacy we are willing to give up in order to achieve a little more safety. The pendulum is swinging towards less and less privacy, despite the fact that no one can promise the level of safety we want.
The sad thing is that our privacy is not the real enemy - but it is the victim.