If 2014 is remembered for anything, it'll be, in tech circles at least, as the year of the Internet Troll. Online abuse, particularly abuse of women and minorities, has always existed and has always been a massive problem, but last year it finally broke into the mainstream.

Suffice it to say I’m sick of seeing the people I know and respect get abuse for simply being who they are and daring to use the Internet.

This is primarily a social issue rather than a technical one, but as a technologist it's the technology that I know, and as someone who helps build platforms that help people communicate, I can't help wondering what technological approaches could give victims of abuse some extra tools – a sticking plaster – while we address the much trickier root social issues (which I think largely revolve around good people not remaining silent about this stuff).

So I'm wondering: if we were to build a tool like Twitter, or WordPress for that matter, today, what could we do, technically, to help?

Something important to stress as we begin to discuss tools we might be able to provide to victims: in no way should this be interpreted as shifting the responsibility for abuse onto them. More tools are only ever a sticking plaster for the state of the world as it currently is, and they shouldn't distract us from trying to make a better world where those tools aren't needed.

Anyway, here are some rough musings.

A better block button

When someone reports a user for abuse, obviously no further messages from the abuser should reach the target.

The abuser shouldn't be explicitly made aware that they've been blocked (although it wouldn't be hard for them to find out), and every subsequent message should be automatically redirected and logged, with as much detail as possible, into an evidence package for law enforcement.

The fact that this is done automatically is important, because it means the victim won’t have to manually process abusive messages in order to gather evidence, which itself can be an upsetting experience.
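To make the mechanics concrete, here's a minimal sketch of what that delivery path might look like. Everything in it – the in-memory stores, the field names, the send_to_inbox helper – is hypothetical; a real service would back this with a database and its own delivery pipeline.

```python
import json
from datetime import datetime, timezone

# Hypothetical in-memory stores; a real service would use a database.
blocked_pairs = set()    # (abuser_id, target_id)
evidence_log = []        # messages silently withheld from blocked targets

def send_to_inbox(recipient_id, text):
    """Stand-in for the platform's normal delivery path."""
    print(f"delivered to {recipient_id}: {text}")

def block(target_id, abuser_id):
    """Called when the target blocks or reports the abuser."""
    blocked_pairs.add((abuser_id, target_id))

def deliver(sender_id, recipient_id, text, metadata=None):
    """Deliver a message, silently diverting it to the evidence log if blocked."""
    if (sender_id, recipient_id) in blocked_pairs:
        evidence_log.append({
            "sender": sender_id,
            "recipient": recipient_id,
            "text": text,
            "metadata": metadata or {},   # IP address, client, whatever is available
            "received_at": datetime.now(timezone.utc).isoformat(),
        })
        return  # the abuser isn't told; the target never sees the message
    send_to_inbox(recipient_id, text)

def export_evidence(target_id):
    """Bundle everything logged against a target into a package for law enforcement."""
    return json.dumps([e for e in evidence_log if e["recipient"] == target_id], indent=2)
```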

Shared block lists

This is a concept mooted by a bunch of people, and is something that certain existing services (looking at you Twitter) would be smart to implement.

Basically, a user can share their block list and make it available for others to subscribe to. This would allow people to quickly pre-empt some of the orchestrated attacks we’ve started to see emerging, since it would be a very quick way of distributing lists of trolls and their sock puppets, especially if there are one or two users who are the primary focus of an attack.

The bad side of this is that someone has to maintain these lists. However, if users share these lists with each other, you can easily see a black mark propagating quite quickly.
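As a rough illustration, subscribing to someone else's published list might look something like the snippet below – the JSON format and the URL are invented for the example.

```python
import json
import urllib.request

def fetch_shared_blocklist(url):
    """Fetch a published block list; the {"blocked_accounts": [...]} format is made up."""
    with urllib.request.urlopen(url) as response:
        return set(json.load(response)["blocked_accounts"])

def apply_subscriptions(own_blocklist, subscription_urls):
    """Merge every subscribed list into the user's own block list."""
    for url in subscription_urls:
        own_blocklist |= fetch_shared_blocklist(url)
    return own_blocklist

# Re-run periodically so newly blocked sock puppets propagate to subscribers quickly.
my_blocks = apply_subscriptions({"sockpuppet123"},
                                ["https://example.com/alice/blocklist.json"])
```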

Graphing the network, degrading performance

Randi Harper created a pretty powerful tool, the GGAutoblocker, which works by mapping the social graph around a few key accounts and pre-emptively blocking users who follow more than one of them.

This approach has been reported as being remarkably effective, and can easily be applied elsewhere.
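As I understand it, the core rule reduces to something like the sketch below; get_followers is a stand-in for whatever follower-listing API the service actually exposes.

```python
from collections import Counter

def preemptive_block_candidates(key_accounts, get_followers):
    """Return accounts that follow more than one of the key accounts.

    `key_accounts` is the small set of accounts at the centre of a campaign;
    `get_followers(account)` is a stand-in for the service's follower API.
    """
    counts = Counter()
    for account in key_accounts:
        counts.update(set(get_followers(account)))
    return {user for user, n in counts.items() if n > 1}
```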

If you're building or operating a centralised service, this might be a handy concept to build into your network, particularly when dealing with specific groups of harassers or an organised harassment campaign.

Additionally, I’m wondering whether it might be smart to attempt to disrupt these groups… for example, a service could throttle the communication between users who are on the list, so as to slow down their ability to organise using the platform. I imagine this would be particularly effective when applied to nexus nodes (fun fact, this is the theory behind the US metadata collection/drone murder program, but slowing down/hell banning people is probably less extreme).

This last would need to be done at the network level, and would require the network to make some opinionated decisions about who is an abuser and who is not. Probably not that hard to work out in actuality, but the unwillingness of certain popular networks to get involved is often part of the problem.
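Purely as a thought experiment, the throttling idea might look like this at the delivery layer – the flagged set would come from the graph-mapping step above, and the delay is arbitrary.

```python
import time

# Accounts the network has flagged as part of an organised campaign,
# e.g. the output of the graph-mapping step above.
flagged_accounts = {"troll_a", "troll_b"}

def deliver_throttled(sender, recipient, message, send):
    """Slow down messages exchanged between flagged accounts.

    `send` stands in for the platform's real delivery function. Sleeping is
    the crudest possible illustration; in practice you'd queue the message
    with a delay rather than block a worker.
    """
    if sender in flagged_accounts and recipient in flagged_accounts:
        time.sleep(30)
    send(sender, recipient, message)
```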

Problems with a distributed/Indieweb network

The specific problems of how you handle this sort of thing on a distributed network are interesting. In effect, you could handle abuse in much the same way as you'd potentially handle spam, so perhaps something like Vouch could help here.

While blocking based on domain (for mentions etc.), shared block lists and automatic evidence collection are still applicable, the social-graph forms of defence become trickier. Especially if, as I'm keen to see, the next generation of distributed networks goes out of its way to hide or obfuscate the graph in order to protect against bulk metadata collection.
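Domain-level blocking of incoming mentions, at least, is straightforward to sketch; the block list here could itself be fed from a shared list.

```python
from urllib.parse import urlparse

# Could be populated from a shared block list subscription.
blocked_domains = {"trollfarm.example"}

def accept_mention(source_url):
    """Decide whether to accept an incoming mention based on its source domain."""
    domain = (urlparse(source_url).hostname or "").lower()
    return not any(domain == d or domain.endswith("." + d) for d in blocked_domains)
```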

Just thinking out loud here, what are your thoughts?

OAuth is a technology that allows a user to connect a client to a service without having to hand their password over to that client.

The usual way this works is that a user clicks on a button and is taken to a page asking whether they wish to allow the connection. Under the bonnet, a handshake goes on between the client and the server, resulting in an exchange of tokens.

If you've ever used the "Facebook Connect" or "Sign in with Twitter" buttons, you are likely familiar with this.

Known has a comprehensive API, and while it is possible to authenticate yourself to it using signed HTTP headers, I thought it’d be handy to be able to authenticate with OAuth as well (it was an excuse for me to write the code powering the server side of an OAuth exchange, a good way to understand it!).

The plugin I wrote lets a user manage “applications” – collections of keys – which can be used by an OAuth2 client to power an exchange.

Example Usage

Here is an example of client authentication in its most basic form.

To get a code:

https://mysite.com/oauth2/authorise/?response_type=code&client_id=<your API Key>&redirect_uri=<path to your endpoint>

You will be directed to a login page, followed by a confirmation page if necessary, after which you will get a response code back. This response will either be a JSON-encoded blob or, if you specified a redirect_uri, the values will be forwarded to it as GET variables.
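For illustration, building that URL from a client might look like this – the client ID and redirect URI are placeholders for the values you set up when creating an "application" in the plugin.

```python
from urllib.parse import urlencode

SITE = "https://mysite.com"
CLIENT_ID = "<your API Key>"
REDIRECT_URI = "https://client.example.com/callback"   # hypothetical client endpoint

# Send the user to this URL; after login and confirmation the code comes back
# to REDIRECT_URI as a GET variable (or as a JSON blob if no redirect_uri is set).
authorise_url = SITE + "/oauth2/authorise/?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
})
print(authorise_url)
```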

Exchanging the code for a token

https://mysite.com/oauth2/access_token/?grant_type=authorization_code&client_id=<your API Key>&redirect_uri=<path to your endpoint>&code=<the code from the previous step>

You should get back a JSON-encoded blob containing an access token, an expiry time and a refresh token.
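Here's a sketch of that exchange using Python's requests library – the parameter names follow the URL above plus the code from the previous step, but check the plugin source if in doubt.

```python
import requests

def exchange_code(site, client_id, redirect_uri, code):
    """Swap the authorisation code for an access token, expiry and refresh token."""
    response = requests.get(site + "/oauth2/access_token/", params={
        "grant_type": "authorization_code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "code": code,               # the code returned by the authorise step
    })
    response.raise_for_status()
    return response.json()          # blob with access token, expiry and refresh token

tokens = exchange_code("https://mysite.com", "<your API Key>",
                       "https://client.example.com/callback", "<code>")
```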

Once you’ve performed an OAuth exchange, you will be provided with an access token. You can pass this token along with any web service API call to authenticate your request.
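Exactly how the token is attached to a call isn't spelled out above; as an assumption for illustration, the sketch below passes it as an access_token parameter – consult the plugin's documentation for the exact mechanism it expects.

```python
import requests

def call_api(site, endpoint, access_token, **params):
    """Call a Known web service endpoint, authenticating with an OAuth2 access token.

    Passing the token as an `access_token` POST parameter is an assumption made
    for this example; the plugin may expect a header or a different field name.
    """
    params["access_token"] = access_token
    response = requests.post(site + endpoint, data=params)
    response.raise_for_status()
    return response.json()

# e.g. call_api("https://mysite.com", "/some/endpoint", tokens["access_token"], body="Hello")
```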

» Visit the project on Github...

Using the Paris attacks as an excuse, governments around the world are clamping down on free speech, and the tools that make that speech possible in the digital age.

Cameron, who clearly read somewhere that it doesn’t matter what you say, so long as you sound decisive, has declared war on cryptography.

I talk a bit about this in a rant I recorded earlier:

A secure internet secures us all, and despite never having so much as got a parking ticket, I feel deeply uncomfortable in the UK – which is officially the most spied-upon country in the “free” world. Where every car journey is tracked, where people are recorded (both audio and video) in virtually every public space, where every text message, email, phone conversation and website visit is recorded and analysed.

Where, if Cameron has his way, it will soon be a crime to use tools to resist this ever watchful eye.

Not knowing whether you're being watched, and not knowing what conclusions some faceless spook or bureaucrat will draw from the activity of your day-to-day life, is stressful and socially damaging. People will always say “if you’ve nothing to hide, you’ve nothing to fear”, but really it’s all about context.

Granted, there are crazies out there, but the gunmen in the Paris attacks were already known, and they communicated openly with each other. Why weren’t they picked up? Well, the French authorities have already stated that it is simply not possible to investigate every possible lead – so throwing the net wider and making the haystack bigger, while it sounds good in an election campaign, can only make it less likely that you’ll spot the next attack.

Destroying freedom in order to protect it is not winning, Mr Cameron. We lived for decades under the threat of Christian terrorists, and the threat of US/USSR nuclear annihilation, without shredding the constitution.

Putting the whole country under surveillance in a modern reboot of East Germany is not going to protect us. Destroying the UK’s IT sector is not going to protect us either.

Christian Payne and Cory Doctorow say this much, much better than I did.

Perhaps trying to get to the reasons why so many poor people are angry and turning to religious fanaticism and violence might be a better idea?

But of course you won’t. You need to appear Tough. You need to Lead. To support your backers.

The Cheltenham eye of Sauron is being turned inwards, not to protect UK citizens from terrorists, but to protect the interests of your super rich friends from the dispossessed and increasingly angry poor, as you strip away their freedoms, education, healthcare, houses and livelihoods.

My blood is boiling again, so I think it’s time to sign off and go drink some herbal tea.

I’ll leave you with a video by Russell Brand. No matter what your personal views are on this guy, his video on the Charlie Hebdo massacre hits the nail absolutely on the head.

Peace.