Ok, so here’s an experimental, proof-of-concept plugin for Idno that provides OpenPGP encryption for form posts between web clients and the server (just in time for Reset the Net 😉 ).

It makes use of OpenPGP.js, a pure JavaScript implementation of the OpenPGP spec. On form submission, the plugin encrypts all form variables client-side, in the browser, before transmitting them to the server.

What this is for

Primarily, this is aimed at (partially) addressing a situation where you have an Idno site sitting behind a load balancer/reverse proxy like Squid, Nginx or Pound.

In this configuration it is common for the connection to be HTTPS only between the client and the load balancer node, at which point HTTPS is stripped and the connection to the back-end web server is conducted over plain HTTP. As we know from the NSA smiley, attacking the point where HTTPS is stripped at the load balancer was one of the ways the NSA and GCHQ were able to burgle customer data from Google’s cloud.
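
For illustration, the front end in this kind of setup typically looks something like the following (a simplified Nginx sketch; the hostname, certificate paths and back-end address are placeholders):

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # TLS terminates here; the hop to the back end is plain HTTP
        proxy_pass http://10.0.0.10:80;
    }
}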

Using this plugin, the contents of the form are encrypted with the back-end server’s public key, meaning that the payload remains encrypted as it transits through your data centre until it reaches its final destination, where, if you design your system accordingly, it could be stored in encrypted form and decrypted only when necessary.
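
Conceptually, what ends up on the wire is the same sort of thing you would get by doing the encryption by hand with gpg (the form fields and the "Web Server" recipient name below are just placeholders for your server’s key):

echo "body=Hello+world&access=PUBLIC" | gpg --armor --encrypt --recipient "Web Server"

The output is an ASCII-armoured PGP MESSAGE block, which is all that anything sniffing the HTTP leg between the load balancer and the back end gets to see; only the holder of the matching secret key can recover the plaintext with gpg --decrypt.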

What this is NOT for

This is not intended to be a replacement for HTTPS.

Encrypting the form on the client does raise the bar slightly, making it much harder for a passive attacker to simply read your username and password as they travel over the wire. However, it does not protect against a more sophisticated attacker capable of launching a “man in the middle” or “man on the side” attack.

Using HTTPS is still important, because without it an attacker could insert their own public key into the mix, or modify the JavaScript that is sent.

Usage and limitations

The plugin currently piggybacks off GnuPG to do the decryption on the server end, so it requires a couple of configuration steps.

  • Make sure you’ve got GnuPG installed. If the binary isn’t at /usr/bin/gpg, you can set opengpg_gnupg in your config.ini
  • Generate a keypair for your web server user:
    su www-data
    gpg --gen-key
  • Make sure that the .gnupg directory is not publicly accessible (using a .htaccess rule or similar), since it’ll contain a secret key!
  • Get a copy of the public key (gpg --export -a "User Name") and save it as openpgpPublicKey in your config.ini (see the sketch below)
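
As a rough sketch of those last two steps (the key name, file name and exact config.ini formatting here are illustrative rather than taken from the plugin’s documentation):

gpg --export -a "Web Server" > server-pubkey.asc

; config.ini
opengpg_gnupg = "/usr/bin/gpg"
openpgpPublicKey = "-----BEGIN PGP PUBLIC KEY BLOCK----- ..."

How you get the multi-line armoured key block into openpgpPublicKey depends on how your configuration is loaded, so check the plugin’s README if in doubt.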

Once you’ve done this, activate your plugin in your Idno settings, and you should be ready to rock and roll!

It was enjoyable playing with OpenPGP.js, and I can already think of some other cool uses for it (the most obvious might be to enhance my OpenPGP Elgg plugin).

Have fun!

» Visit the project on Github...

GCHQoogle: so much for "Don't be evil"

So, unless you’ve been living under a rock for the past few months, you will be aware of the disclosure, by one Edward Snowden, of a massive multinational criminal conspiracy to subvert the security of internet communications and place us all under surveillance.

Even if you believe the security services won’t misuse this capability, what GCHQ can figure out, other criminal organisations can too. So, like many others, I’ve decided this is a good time to tighten up the security of some of the sites I’m responsible for.

Enabling HTTP Strict Transport Security

So, let’s start off by making sure that once you end up on the secure version of a site, you always get sent there. For most sites I already had a redirect in place, but that alone doesn’t help against a number of threats. Thankfully, modern browsers support a header which, when received, causes the browser to rewrite all subsequent connections to that site to the secure endpoint before they are sent.

A quick and easy win, so in my Apache conf I placed:

Header add Strict-Transport-Security "max-age=15768000; includeSubDomains"
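
Once that’s in place, it’s worth checking the header is actually being sent (substitute your own domain for example.com):

curl -sI https://example.com/ | grep -i strict-transport-security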

Auditing my SSL configuration, enabling forward secrecy

The next step was to examine the actual SSL/TLS configuration used by the various servers.

During the initialisation of a secure connection, a handshake takes place which establishes the protocol to be used (SSL or TLS), its version, and which algorithms are available. The choices made here determine exactly how secure the connection will be, and, obviously, older and insecure protocols and weaker algorithms present security risks.

I have been, I admit, somewhat remiss in the past and have largely used Apache’s defaults. That was OK for the most part but, as SSL Labs’ handy audit tool revealed, it left a number of weak algorithms available and failed to take advantage of some newer security techniques.

I quickly disabled the weaker algorithms and SSL protocols, and also took the opportunity to specify that the server prefers algorithms which support Forward Secrecy.
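
For reference, the relevant Apache directives look roughly like this. The cipher list here is a trimmed, illustrative example rather than the exact string I settled on, so build your own from a maintained reference such as the guide linked below:

SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA:!aNULL:!eNULL:!MD5:!RC4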

Forward secrecy is a property of newer key-exchange algorithms (supported, sadly, only by newer browsers) whereby each session is negotiated using ephemeral keys. This means that even if an attacker later obtains the server’s long-term private key, they cannot use it to decrypt previously recorded sessions, and compromising one connection does not compromise any others.
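
A quick way to see whether a server is actually negotiating a forward-secret suite is openssl s_client (example.com is a placeholder); look for ECDHE or DHE in the reported cipher:

openssl s_client -connect example.com:443 < /dev/null 2> /dev/null | grep -i "cipher"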

Here is a handy guide for implementing this on your own server.

The downside of these changes, of course, is that older browsers (IE6, I’m looking at you) are left out in the cold, but those browsers (IE6, I’m still looking at you) ship such old implementations, with such weak algorithms, that they would likely be in trouble anyway.

Security is a moving target, so it’s important to keep up to date!

Update: If you’re running Debian (as I am), you will have some issues with ECDHE suites until stable updates to Apache 2.4. Until then I’ve tacked plain AES suites onto the end of the cipher list, so IE still gets something reasonable while more modern browsers get forward secrecy.

Update 27/6/14: Debian recently backported a few of Apache 2.4’s cipher suites into the 2.2 branch in Debian stable. This means that Perfect Forward Secrecy is now supported for Internet Explorer!

So, unless you have been living under a rock for the last few days, you will have heard about Heartbleed, the recently discovered OpenSSL security bug (go over to XKCD for the best explanation I’ve seen of how it works).

It was bad, really bad. As Bruce Schneier put it:

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

…And it was very bad, but it could have been much, much worse.

As it was, within a day or two of the bug being disclosed, the vast majority of affected servers had been patched and service users notified. We don’t know how often, if at all, the bug was exploited, and we likely never will, but we can easily tell whether it has been fixed.
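
On a Debian-style system, for example, a quick check looks something like this (Heartbleed is CVE-2014-0160; the fix was backported, so the package changelog is more telling than the upstream version number alone, and the package name may differ on your system):

openssl version
zgrep CVE-2014-0160 /usr/share/doc/libssl1.0.0/changelog.Debian.gz

Remember that services which had the old library loaded also need restarting after the upgrade.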

Bugs happen, but the only reason we know about Heartbleed at all is because OpenSSL is Open Source software. Open source allows you, and other third parties, to independently review and audit the code, something not possible with software from proprietary vendors.

Does software from Apple, Google or Microsoft contain similar ticking timebombs? Who knows, and we have no way of finding out. This is one of the many reasons why you should never trust closed source or proprietary security products. Ever.

Bluntly, anything where you can’t see the code can’t be audited, so can’t be trusted not to do something malicious, whether by accident or design. Trusting a product from a manufacturer purely on brand is a genetic fallacy.

What next

OpenSSL is an incredible project, and the community does a bang-up job of keeping our communications safe from prying eyes. The fact that this bug, a simple missing bounds check leading to a buffer over-read (the kind of mistake every C programmer has made countless times in their career), went unnoticed for almost two years is a little troubling, but those of us outside the project can’t really comment usefully on that.

Bugs happen, and sometimes they’re serious, but the responsible course of action was followed and internet security is stronger for it. Patch and move on.

Talking specifically about OpenSSL, and open source in general, my thinking is that Heartbleed highlights a resource problem common to many projects that fall under the umbrella of “infrastructure”. Such projects are largely built by volunteers (albeit with large contributions from engineers paid by their employers to work on them), so sometimes there isn’t enough bandwidth to do everything; and, being “infrastructure”, they don’t attract as much attention as the latest whizz-bang project on Hacker News.

My hope is that this bug will see more stakeholders taking an active interest, and so a growth in the number of contributors and auditors. If every company that relies on OpenSSL paid one of their engineers to spend one day a month trying to break it, or auditing the latest code, how many more person-hours would that give the project?

Another issue Heartbleed illustrates painfully well is the danger of homogeneity. When the vast majority of the world uses the same piece of software, it presents a single point of failure, and one error can have a massive impact.

OpenSSL has become the de facto standard SSL/TLS implementation through merit, and while there is a very good case for not rolling your own crypto, I wonder whether there is also a case for encouraging and maturing alternative open source implementations of the protocol? There are pros and cons on either side, but single points of failure should always be avoided…

Image “How the Heartbleed bug works” by XKCD