So, let us talk plainly. You absolutely, definitely, positively should be using TLS / HTTPS encryption on the sites that you run.

HTTPS provides encryption, meaning that anyone watching the connection (and yes, people do care, and are absolutely watching) will have a much harder time extracting the contents of that connection. This is important, because it stops your credit card number being read as it makes its way to Amazon’s servers, or your password being read when you log into your email.

When I advise my clients on infrastructure these days, I recommend that every page on a website, regardless of its contents, should be served over HTTPS. The primary reason is a feature of an encrypted connection that I don’t think gets underlined enough.

Tamper-resistant web

When you serve content over HTTPS, it is significantly harder to modify. Or, to put it another way, when you serve pages unencrypted, you have absolutely no guarantee that the page your server sends is going to be the page that your visitor receives!

If an attacker controls a link in the chain of computers between you and the site you’re visiting, it is trivial for them to modify requests and responses as they pass between your browser and the server. A slightly more sophisticated attacker can perform these attacks without controlling a link in the chain at all, in a so-called “Man on the side” attack. This is more technically complex, but still relatively straightforward with sufficient budget, and it has been widely automated by state actors and criminal gangs.

The capabilities these sorts of attacks give someone are limited only by budget and imagination. On a sliding scale of evil, the least evil use we’ve seen in the wild is probably the advertising injection attack used by certain ISPs and airplane/hotel Wi-Fi providers, but the same techniques could easily extend to attacks designed to actively compromise your security.

Consider this example of an attack exploiting a common configuration:

  • A web application is installed on a server, and the site is available over both HTTP and HTTPS. That is to say, if you visited both http://foo.com and https://foo.com, you’d get the same page.
  • Login details are sent using a POST form, but because the developers aren’t complete idiots, they send these over HTTPS.

Seems reasonable, and I used to do this myself without seeing anything wrong with it.

However, consider what an attacker could do in this situation if the page serving the form is unencrypted. It would, for example, be a relatively trivial matter, once the infrastructure is in place, to simply rewrite “https://” to “http://” in the served page, meaning your login details would be sent unencrypted. Even if the server were configured to only accept login details on a secure connection (another fairly common practice), this attack would still work, since the POST will still go ahead. A really sophisticated attacker could intercept the returned error page and replace it with something innocuous, meaning your visitor would be none the wiser.
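The simplest defence is to never serve the login page (or anything else) over plain HTTP in the first place. As a minimal sketch, assuming an nginx front end and the foo.com domain from the example above, the port 80 listener does nothing but redirect to the encrypted site:

    server {
        listen 80;
        server_name foo.com;

        # Never serve content over plain HTTP; send every request to the HTTPS site
        return 301 https://foo.com$request_uri;
    }

With that in place there is no unencrypted copy of the form left for an attacker to rewrite.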

It gets worse of course, since as we have learnt from the Snowden disclosures, security agencies around the world will often illegally conscript unencrypted web pages to perform automated attacks on anyone they view as a target (which, as we’ve also learnt from the Snowden disclosures, includes just about everybody, including system administrators, software developers and even people who have visited CNN.com).

Let’s Encrypt!

Encrypting your website is fairly straightforward, certainly when compared to the amount of work it took to deploy your web app in the first place. Plus, with the new Let’s Encrypt project launching soon, it’s about to get a whole lot easier.
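To give a flavour of the workflow, here is a rough sketch using certbot, the Let’s Encrypt client (the domain and webroot path are placeholders; your web server and paths will differ):

    # Obtain a certificate by proving control of the domain via files
    # served from the site's webroot (domain and path are placeholders)
    certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com

    # By default the certificate and key end up under /etc/letsencrypt/live/example.com/

You then point your web server’s certificate and key directives at those files and reload.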

You’ll need to make sure you test your configuration regularly, since recommendations change from time to time, and shared hosts and default server configurations often leave a lot to be desired.
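A quick way to check what a server is actually negotiating is OpenSSL’s built-in client (example.com is a placeholder):

    # Connect to the HTTPS port and print the negotiated protocol,
    # cipher and certificate chain
    openssl s_client -connect example.com:443 -servername example.com < /dev/null

For a fuller report, the various online TLS testing tools will grade your cipher suites and flag known weaknesses.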

However, I assert that it is worth the extra effort.

By enabling HTTPS on the entire site, you’ll make it much, much harder for an attacker to compromise your visitors’ privacy and security. (I say harder, not impossible; there are still attacks that can be performed, especially if the attacker controls a root certificate trusted by the computer you’re using… so be careful doing internet banking on a corporate network, or in China.)

You also add to the herd immunity of your fellow internet users, normalising encrypted traffic and limiting the attack surface for mass surveillance and mass packet injection.

Finally, you’ll get an SEO advantage, since Google now gives a ranking boost to secure pages.

So, why wait?

4 thoughts on “You absolutely should be using HTTPS on your site”

  1. I think what gets kind of lost is what a hassle it is for the average John Doe website owner to get started with HTTPS. Really good certificates are expensive, and as long as you can’t just add them to your DNS with one click, they won’t go mainstream outside the huge web platforms. And as long as most of the certification authorities, or the certificates themselves, might be compromised… what’s the point?

  2. I have high hopes for Let’s Encrypt, which aims to solve a lot of the deployment and management issues. And true, compromised root certs are a problem, but that depends on your attack profile – security isn’t all or nothing. Even if your adversary is GCHQ or the NSA, bulk use of HTTPS means we move from a world where they can passively listen to everyone and inject packets where they please, to a world where they’re forced to use a compromised root cert to actively MITM you, which is much more costly in terms of resources, and riskier in terms of detection.

  3. We all know how important it is to secure web servers with encryption. As I’ve mentioned before, port 80 HTTP should be considered deprecated at this point!
    Just as important (potentially more so), but often overlooked, is to ensure that your email server is also secure.
    STARTTLS is a technology that lets you start an encrypted session during a standard SMTP connection. In the same way that HTTPS secures the web, STARTTLS will secure email in transit from your mail client to the server, and from server to server. This makes it much harder to passively read the traffic, and having more encrypted traffic on the internet is only ever a good thing.
    This only protects email in transit, of course, so it is not a replacement for end-to-end encryption methods like PGP, but it does complement them… and since most email is still sent insecurely, it adds extra security without requiring your users to do any extra work.
    It’s easy to set up (for Exim at least), and it transparently runs on port 25, so there’s no reason not to!
    Generate your keys
    As with web, you’ll need a server key and certificate file.
    For my public mail and MX relay servers, I decided to use certificates from a valid certificate authority. Some clients and relaying servers will throw a certificate error for self-signed certificates, while others will not. Better safe than sorry, and since I already had a valid certificate for the server in question on my site, I simply recycled it.
    If this is an internal server, you can use a certificate signed by your own certificate authority, trusted by the machines in your organisation.
    The default exim configuration expects to find certificates in /etc/exim4/exim.key and /etc/exim4/exim.crt.
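    If you do go the self-signed route, a minimal sketch (assuming those default Debian paths and a placeholder hostname) looks something like this:

    # Generate a self-signed key and certificate in the locations Exim expects
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=mail.example.com" \
        -keyout /etc/exim4/exim.key -out /etc/exim4/exim.crt

    # Exim needs to be able to read the private key (Debian runs it as Debian-exim)
    chown root:Debian-exim /etc/exim4/exim.key
    chmod 640 /etc/exim4/exim.key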
    Enable TLS
    Basic STARTTLS support is enabled by editing exim4.conf.template and setting MAIN_TLS_ENABLE = yes in the tlsoptions section. Restart Exim and you should have STARTTLS support enabled.
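    Concretely, on a Debian-style setup that means adding the macro to the template and then regenerating the runtime configuration, roughly:

    # In /etc/exim4/exim4.conf.template, tlsoptions section:
    #   MAIN_TLS_ENABLE = yes

    # Then rebuild the config and restart Exim:
    update-exim4.conf
    service exim4 restart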
    As with a web server, you can configure ciphers and so on at this stage. On my server at least, the defaults seemed reasonably strong, but as we learn which ciphers have been compromised by GCHQ and the NSA, we might need to tweak these.
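    If you do want to override the defaults, Exim’s tls_require_ciphers option takes an OpenSSL cipher list (or a GnuTLS priority string, depending on how your Exim was built). The list below is purely illustrative, not a current hardening recommendation:

    # Main configuration section; exact placement depends on your config layout
    tls_require_ciphers = EECDH+AESGCM:EDH+AESGCM:!aNULL:!MD5:!RC4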
    Test your configuration
    Next, you should test your configuration.
    To do this, the simplest way is to use a program called swaks, which you should find in your distro’s package repository.



    swaks -a -tls -q HELO -s mail.example.com -au test -ap '<>'

    Should produce a result something like…



    === Trying mail.example.com:25…
    === Connected to mail.example.com.
    .
    .
    .
    -> STARTTLS
    <- 220 TLS go ahead
    === TLS started w/ cipher ECDHE-RSA-AES256-GCM-SHA384
    === TLS peer subject DN="/OU=Domain Control Validated/OU=PositiveSSL/CN=mail.example.com"
    .
    .
    .
    ~> QUIT
    <~ 221 mail.example.com closing connection
    === Connection closed with remote host.


    If you get an error when starting TLS, examine your Exim log for the cause.
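    On Debian the main log usually lives at /var/log/exim4/mainlog, so something like this, while re-running the swaks test, will show any TLS errors:

    # Watch the Exim main log for TLS-related errors
    tail -f /var/log/exim4/mainlog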


