So yesterday, we were greeted with another bombshell from the Snowden archives.

After finding out the day before that GCHQ had spied on lawyers, we now find out that GCHQ and the NSA conspired to steal the encryption keys to pretty much every SIM card in the world, meaning that they can easily break the (admittedly weak) encryption used to protect your phone calls and text messages.

Personally, I’m not terribly concerned about this, because the idea that your mobile phone is insecure is hardly news. What is of concern to me is how they went about getting those keys.

It seems that in order to get these keys, the intelligence agencies hunted down and placed under invasive surveillance ordinary innocent people, who just happened to be employed by or have dealings with the companies they were interested in.

The full capabilities of the global surveillance architecture they command were deployed against entirely unremarkable and innocent individuals. People like you and me, whose entire private lives were sifted through, just in case they exposed some information that could be used against the companies for which they worked.

Nothing to hide, nothing to fear

If there is a silver lining in all this, with any luck it will go some way towards shattering the idea that because you have nothing to hide, you have nothing to fear.

This is, primarily, a coping strategy. It’s a lie people tell themselves so they can avoid confronting an awkward or terrifying fact, a bit like saying climate change isn’t real, or that smoking won’t kill you.

Generally, it is taken to mean that you’ve done nothing wrong, i.e. nothing illegal (and of course, that’s not what privacy is about, and what you consider “wrong” has typically not been the same as what those in power consider “wrong”).

Fundamentally, it misses the point that you don’t get to decide what others are going to find interesting, or suspect you of knowing. In this instance, innocent people had their privacy invaded purely because they were suspected of having access to information that the intelligence agencies found interesting. Were I to do something similar, I’d go to jail for a very long time.

Now consider that one of the NSA’s core missions is to advance US economic interests, spying on Brazilian oil companies and European trade negotiations, etc. If I worked at a competitor of a US company, I’d be very careful what I said in any insecure form of communication.

You do have something to hide.

If 2014 can be remembered for anything, it’ll be, in tech circles at least, the year of the Internet Troll. Online abuse, particularly abuse of women and minorities, has always been there, and has always been a massive problem, but last year it finally broke into the mainstream.

Suffice it to say I’m sick of seeing the people I know and respect get abuse for simply being who they are and daring to use the Internet.

This is primarily a social issue, rather than a technical one. But as a technologist it’s the technology that I know, and as someone who helps build platforms that help people communicate, I can’t help wondering what technological approaches could give victims of abuse some extra tools – as a sticking plaster – while we address the much trickier root social issues (which I think largely revolve around good people not remaining silent about this stuff).

So, I’m wondering: if we were to build a tool like Twitter, or WordPress for that matter, today, what could we do, technically, to help?

Something important to stress as we begin to discuss tools we might be able to provide to a victim, is that in no way should this be interpreted as trying to shift the responsibility for abuse to them. More tools are only ever a sticking plaster to deal with the state of the world as it currently is, and it shouldn’t distract us from trying to make a better world where those tools aren’t needed.

Anyway, here are some rough musings.

A better block button

When someone reports a user for abuse, the target should obviously receive no further messages from the abuser.

The abuser shouldn’t be explicitly told that they’ve been blocked (although it wouldn’t be hard for them to find out), and every subsequent message should automatically be redirected and logged, with as much detail as possible, into an evidence package for law enforcement.

The fact that this is done automatically is important, because it means the victim won’t have to manually process abusive messages in order to gather evidence, which itself can be an upsetting experience.
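As a minimal sketch of this idea (the class and function names here are my own invention, not any existing API), the delivery path silently diverts blocked senders into an append-only log the victim never has to look at:

```python
import json
from datetime import datetime, timezone

class EvidenceLog:
    """Collects blocked messages into an evidence package the victim
    never has to read, but could hand to law enforcement if needed."""

    def __init__(self):
        self.entries = []

    def record(self, sender, recipient, message, metadata=None):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "sender": sender,
            "recipient": recipient,
            "message": message,
            "metadata": metadata or {},  # e.g. IP address, client, headers
        })

    def export(self):
        """Serialise the package, e.g. for handing over as a report."""
        return json.dumps(self.entries, indent=2)


def deliver(sender, recipient, message, blocklist, log):
    """Return True if the message should reach the recipient.

    `blocklist` maps each user to the set of senders they've blocked.
    Blocked messages are logged, not delivered, and the sender gets no
    error back - so they aren't explicitly told they're blocked.
    """
    if sender in blocklist.get(recipient, set()):
        log.record(sender, recipient, message)
        return False  # never reaches the victim's inbox
    return True
```

The key property is that the victim's client never renders the diverted messages; only the exported package does.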

Shared block lists

This is a concept mooted by a bunch of people, and is something that certain existing services (looking at you, Twitter) would be smart to implement.

Basically, a user can share their block list and make it available for others to subscribe to. This would allow people to quickly pre-empt some of the orchestrated attacks we’ve started to see emerging, since it would be a very quick way of distributing lists of trolls and their sock puppets, especially if there are one or two users who are the primary focus of an attack.

The downside of this is that someone has to maintain these lists. However, if users share the lists with each other, you can easily see a black mark propagating quite quickly.
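The subscription model above might look something like this sketch (again, hypothetical names, not a real service’s API): each user keeps a personal list plus a set of subscribed feeds, and their effective block list is the union, so updates to a shared feed propagate automatically.

```python
class BlockListFeed:
    """A block list one user publishes for others to subscribe to."""

    def __init__(self, owner):
        self.owner = owner
        self.blocked = set()

    def block(self, user):
        # Maintained by the feed owner; subscribers pick this up
        # automatically the next time their list is computed.
        self.blocked.add(user)


class Subscriber:
    def __init__(self):
        self.personal = set()   # blocks the user added themselves
        self.feeds = []         # shared lists they've subscribed to

    def subscribe(self, feed):
        self.feeds.append(feed)

    def effective_blocklist(self):
        """Personal blocks plus everything from subscribed feeds."""
        combined = set(self.personal)
        for feed in self.feeds:
            combined |= feed.blocked
        return combined
```

Because the union is recomputed on demand, a sock puppet added to one well-followed feed is pre-emptively blocked for every subscriber without any of them lifting a finger.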

Graphing the network, degrading performance

Randi Harper created a pretty powerful tool, the GGAutoblocker, which works by mapping the social graph of a few key accounts and pre-emptively blocking users who follow more than one of them.

This approach has been reported as being remarkably effective, and can easily be applied elsewhere.
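The core of that approach can be sketched in a few lines. This is my own simplified reconstruction of the idea, not GGAutoblocker’s actual code: count how many of the key accounts each user follows, and flag anyone over a threshold.

```python
from collections import Counter

def autoblock_candidates(key_accounts, followers_of, threshold=2):
    """Flag users who follow at least `threshold` of the key accounts.

    `followers_of` maps each key account to the set of users following
    it (e.g. fetched from a social network's API). Following just one
    key account is not enough - overlap is what raises suspicion.
    """
    counts = Counter()
    for account in key_accounts:
        for follower in followers_of.get(account, set()):
            counts[follower] += 1
    return {user for user, n in counts.items() if n >= threshold}
```

In practice you would want an appeals/whitelist mechanism layered on top, since following two accounts is circumstantial evidence at best, but as a pre-emptive filter during an active campaign it trades a few false positives for a lot of avoided abuse.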

If you’re building or operating a centralised service, this might be a handy concept to build into your network, particularly when dealing with groups of harassers or organised harassment campaigns.

Additionally, I’m wondering whether it might be smart to attempt to disrupt these groups: for example, a service could throttle the communication between users who are on the list, so as to slow down their ability to organise using the platform. I imagine this would be particularly effective when applied to nexus nodes (fun fact: this is the theory behind the US metadata collection/drone strike programme, though slowing down or hell-banning people is rather less extreme).

This last approach would need to be done at the network level, and would require the network to make some opinionated decisions about who is an abuser and who is not. That’s probably not that hard to work out in practice, but the unwillingness of certain popular networks to get involved is often part of the problem.
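To make the throttling idea concrete, here is a deliberately simple sketch (all names are hypothetical): only traffic *between* flagged users is delayed, so bystanders are unaffected, and the delay scales with how connected the sender is within the flagged group, so nexus nodes are slowed the most.

```python
def throttle_delay(sender, recipient, flagged, flagged_degree, base_delay=30.0):
    """Seconds of artificial delay to apply before delivering a message.

    `flagged` is the set of users the network has decided are part of
    an organised campaign. `flagged_degree` maps a flagged user to the
    number of other flagged users they communicate with - a crude
    centrality measure, so nexus nodes get the longest delays.
    Messages to or from anyone outside the group are untouched.
    """
    if sender in flagged and recipient in flagged:
        return base_delay * (1 + flagged_degree.get(sender, 0))
    return 0.0
```

A real deployment would use a proper centrality measure over the communication graph rather than raw degree, but even this crude version illustrates the asymmetry: the platform stays fully usable for victims while organisation within the flagged cluster gets progressively more sluggish.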

Problems with a distributed/Indieweb network

The specific problems of how you handle this sort of thing on a distributed network are interesting. In effect, you could handle abuse in much the same way as you’d potentially handle spam. So, perhaps something like Vouch could help here.

While blocking based on domain (for mentions etc.), shared block lists and automatic evidence collection are still applicable, the social-graph forms of defence start becoming more tricky. Especially if, as I’m keen to see, the next generation of distributed networks go out of their way to hide or obfuscate the graph in order to protect against bulk metadata collection.

Just thinking out loud here, what are your thoughts?

So, after I fixed the two-screen problem I was having with my Ubuntu setup, I started getting an odd flickering.

This flickering didn’t affect the whole screen, rather it seemed to be something to do with window repainting, and it became even worse after I updated to 14.04.

I run a slightly non-traditional configuration, in that I run GNOME 2 fallback rather than GNOME 3 or Unity, so this probably won’t affect a lot of people, which is probably why the bug persists.

After a bit of digging, I discovered that this is actually a compiz issue. Here’s a summary of the fix:

Fixing the flicker

  • Install the Compiz settings manager: sudo apt-get install compizconfig-settings-manager
  • Launch the settings manager (run ccsm) and scroll down to “Workarounds” in the “Utility” section:

[Screenshot compiz-1: the “Workarounds” entry in CompizConfig Settings Manager]

  • Select “Force full screen redraws (buffer swap) on repaint”:

[Screenshot compiz-2: the “Force full screen redraws (buffer swap) on repaint” option]

Once this is done, your windows should repaint as normal.