Just a quick note to say that my Known Chrome plugin will now return an installable .crx file.

You’ll need the OpenSSL PHP extension installed; if you have it, an installable .crx file will be returned instead of a .zip. If you don’t have OpenSSL installed, the plugin falls back to the old-style .zip, which some people had problems with.
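
The check itself boils down to whether ext/openssl is available at runtime. Here’s a minimal sketch of that sort of logic (the function name is hypothetical, not the plugin’s actual code):

<?php
// Hypothetical sketch: choose the packaging format based on whether
// the OpenSSL extension is available to sign a .crx package.
function choose_package_format() {
    if (extension_loaded('openssl')) {
        return 'crx'; // we can generate a key and sign the extension package
    }
    return 'zip';     // fall back to the old-style, unsigned archive
}

echo 'This install would produce a .' . choose_package_format() . " file\n";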

» Visit the project on Github...

I have a number of WordPress sites which use Dan Coulter’s Flickr API powered gallery plugin to render images from an attached Flickr account.

This plugin appears to no longer be maintained by the author, and I have previously written about having to make a couple of code changes in order to get it to work again.

Anyway, a little while ago, I noticed that my Flickr galleries had stopped working again, so here’s a fix.

SSL Redux

Firstly, the Flickr API now REQUIRES that you connect to it via SSL. However, the Flickr gallery code uses the non-SSL endpoints.

So, in phpFlickr.php we need to update the endpoint URLs:

var $rest_endpoint = 'https://api.flickr.com/services/rest/';
var $upload_endpoint = 'https://api.flickr.com/services/upload/';
var $replace_endpoint = 'https://api.flickr.com/services/replace/'; 

If you use the database cache, at this point you’ll need to reset it, so that it can be rebuilt using the correct URLs.

To do this, open up mysql (or phpMyAdmin) and select your WordPress database. Next, delete all the rows from the cache table, e.g.

mysql> use wordpress;
Database changed
mysql> delete from wp_phpflickr_cache;
Query OK, 904 rows affected (0.04 sec)

Broken Flickr shortcode

Next, it seems that there was a collision with the Flickr shortcode: something else was already defining it, but expecting different parameters (likely Jetpack, but I’ve not really investigated).

So, I modified flickr-gallery.php to define the shortcodes in the plugin’s init function, after un-registering the existing definitions, and altered the priority so that it was defined last.
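
As a rough illustration, the change amounts to something like the following, using the standard WordPress shortcode API (the function and shortcode names here are illustrative assumptions, not the plugin’s exact code):

// Sketch only: re-register the shortcode during init, late enough to win any collision.
add_action('init', 'fg_register_shortcodes', 999); // high priority, so this runs after other plugins

function fg_register_shortcodes() {
    remove_shortcode('flickr-gallery');                   // un-register whatever is already defined (e.g. by Jetpack)
    add_shortcode('flickr-gallery', 'fg_render_gallery'); // then register the plugin's own handler
}

function fg_render_gallery($atts) {
    // In the real plugin this would be the existing gallery-rendering callback.
    return '';
}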

Get the updated plugin on Github…

» Visit the project on Github...

So, let us talk plainly. You absolutely, definitely, positively should be using TLS / HTTPS encryption on the sites that you run.

HTTPS provides encryption, meaning that anyone watching the connection (and yes, people do care, and absolutely are watching) will have a harder time extracting information about the content of that connection. This is important, because it stops your credit card number being read as it makes its way to Amazon’s servers, or your password being read when you log into your email.

When I advise my clients on infrastructure, these days I recommend that all pages on a website, regardless of their contents, should be served over HTTPS. The primary reason is a feature of an encrypted connection which I don’t think gets underlined enough.

Tamper resistant web

When you serve content over HTTPS, it is significantly harder to modify. Or, to put it another way, when you serve pages unencrypted, you have absolutely no guarantee that the page your server sends is going to be the page that your visitor receives!

If an attacker controls a link in the chain of computers between you and the site you’re visiting, it is trivial for them to modify the requests and responses passing between you and the server. A slightly more sophisticated attacker can perform these attacks without controlling a link in the chain at all, in a so-called “man on the side” attack – more technically complex, but still relatively trivial with sufficient budget, and one that has been widely automated by state actors and criminal gangs.

The capabilities these sorts of attacks give someone are limited only by budget and imagination. On a sliding scale of evil, the least evil use we’ve seen in the wild is probably the advertising injection performed by certain ISPs and in-flight/hotel wifi providers, but the same capability could easily extend to attacks designed to actively compromise your security.

Consider this example of an attack exploiting a common configuration:

  • A web application is installed on a server, and the site is available by visiting both HTTP and HTTPS endpoints. That is to say, if you visited both http://foo.com and https://foo.com, you’d get the same page being served.
  • Login details are sent using a POST form, and because the developers aren’t complete idiots, these are submitted over HTTPS.

Seems reasonable, and I used to do this myself without seeing anything wrong with it.

However, consider what an attacker could do in this situation if the page serving the form is unencrypted. Once the infrastructure is in place, it would be a relatively trivial matter to rewrite the form’s “https://” action to “http://”, meaning your login details would be sent unencrypted. Even if the server were configured to only accept login details over a secure connection (another fairly common practice), this attack would still work, since the POST would still go ahead. A really sophisticated attacker could then intercept the returning error page and replace it with something innocuous, leaving your visitor none the wiser.
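
The usual defence against this sort of downgrade is to serve the whole site over HTTPS and send an HTTP Strict Transport Security (HSTS) header, so that a browser which has visited you once will refuse to talk plain HTTP to you afterwards. Here’s a minimal sketch of the idea in PHP – in practice you’d more likely set this in the web server configuration:

// Sketch: force HTTPS, then ask browsers to insist on it for future visits.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    // Redirect any plain-HTTP request to the HTTPS equivalent.
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
// HSTS: browsers that have seen this header will refuse plain HTTP for the next year.
header('Strict-Transport-Security: max-age=31536000; includeSubDomains');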

It gets worse of course, since as we have learnt from the Snowden disclosures, security agencies around the world will often illegally conscript unencrypted web pages to perform automated attacks on anyone they view as a target (which, as we’ve also learnt from the Snowden disclosures, includes just about everybody, including system administrators, software developers and even people who have visited CNN.com).

Let’s Encrypt!

Encrypting your website is fairly straightforward, certainly when compared to the amount of work it took to deploy your web app in the first place. Plus, with the new Let’s Encrypt project launching soon, it’s about to get a whole lot easier.

You’ll need to make sure you test your TLS configuration regularly, since the recommended settings change from time to time, and shared hosts and default server configurations often leave a lot to be desired.

However, I assert that it is worth the extra effort.

By enabling HTTPS on the entire site, you’ll make it much, much harder for an attacker to compromise your visitors’ privacy and security (I say harder, not impossible: there are still attacks that can be performed, especially if the attacker controls a root certificate trusted by the computer you’re using… so be careful doing internet banking on a corporate network, or in China).

You also add to the herd immunity of your fellow internet users, normalising encrypted traffic and limiting the attack surface for mass surveillance and mass packet injection.

Finally, you’ll get an SEO advantage, since Google is now giving a ranking boost to secure pages.

So, why wait?