Last Wednesday saw the long-awaited return of the ever-fantastic Oxford Geek Night.

Thanks to all involved, and all who came, and especially the keynote and microslot speakers for making it an awesome night!

The first speaker was Leila Johnston, who gave an interesting talk called “Making things fast”; I suggest you go and have a look at the video.

In a nutshell: the longer a project goes on, the less likely it is ever to be completed. We should all stop worrying so much about getting things perfect before letting a project see the light of day. There was also the observation that enthusiasm for a project is a finite resource, and one that is spent very quickly.

So anyway, last weekend I bashed together an idea I’ve been pondering since my flight back from Vegas, and, prompted by Leila’s talk, I’m pushing the first buggy version out into the world at large before I get distracted by something new and shiny.

So, without further ceremony, I’d like to introduce “I’m going to miss…”.

What?

“I’m going to miss…” is something I cooked up on a transatlantic flight, partly as something I’d like to have exist, and partly as an excuse to build OAuth support into a platform I’m hacking together.

The basic concept for the first version is, essentially, single-serving Friends Reunited (although I obviously can’t call it that). It is an attempt to capture those interesting conversations and chance encounters we have between A and B. I’ve got some quite cool ideas for the next iterations as well, which may or may not happen.

It won’t change the world, but it was a quick build (I like them) so have a play! I’ll be interested to see if this is something that’s going to fly…

Right. What’s next?

Unless you have been living under a rock for the last few days, you will be aware that the whistle-blowing website Wikileaks has recently published a massive collection of US government memos dating back to the 1960s.

Even the issuing of a D-Notice has failed to prevent the reporting of some of the contents of these memos here in the UK (welcome to the reality of the 21st century, guys), and I suspect the impact will be felt for years to come.

The leak was met with almost universal applause from the public, and almost universal condemnation from governments around the world. This startling disconnect, and the reasons why it marks a change in expectations that government has yet to fully grasp, is probably best explained in this article. News agencies have for the most part (FOX notwithstanding) been treading a fine line: drooling over the scoop while at the same time giving a disparaging sniff of disapproval.

Suffice it to say, governments around the world have got used to the idea that surveillance goes only one way and that the public at large will happily accept that “Government Knows Best”.

Wikileaks is drawing a lot of attention. Once dismissed as a bunch of troublemaking nerds, it is now increasingly a thorn in the side of major governments – who are being forced through the full body scanner themselves, their unmentionables exposed for their citizens to pick over and pass judgement on.

Peter King, incoming chairman of the House Homeland Security Committee, recently described Wikileaks as a “terrorist organisation”, in terms reminiscent of how Joseph McCarthy once described the ACLU.

There is now a real danger that Wikileaks and its founders will be put on the various terrorist blacklists (or worse). This would essentially pull the rug out from under the organisation, since it would mean severe penalties for any person or organisation that aided Wikileaks in any way – including activities such as processing payments or hosting its website.

The reason why Wikileaks will fail? Simple: it’s a single point of failure, and an increasingly prominent target.

The real tragedy is that the more successful it becomes and the more embarrassment it causes to those who seek power without accountability, the faster it will hasten its own demise. I predict that in a few months or years Wikileaks will be taken down in a blaze of ill-thought-out legislation that will cause untold damage to the rest of us.

The hole left behind is a vital one to fill, but it has to be filled by something distributed and open rather than one site run by one (albeit dedicated) set of individuals.

Wikileaks 2.0

In order to survive, the successor to Wikileaks must – I think – meet at least the following requirements (although this is off the top of my head, so it’s by no means a complete list):

  • Be distributed. The platform will be a collection of interconnected nodes rather than a single site (bonus points if a node is only aware of its “neighbours” rather than the entire network).
  • Be open. The specification of what a node should do and how it communicates should be an open and peer reviewed document. This will mean that multiple interoperable implementations can be built.
  • Be self-repairing. New nodes can be added and will announce themselves. While every document in the system need not exist on every node, the system will ensure that there are never fewer than X copies in total.
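To make the self-repair requirement concrete, here is a toy sketch (pure simulation, not a real protocol – the names and the flat list of nodes are mine) of the replication check such a system would run: every document must exist on at least X nodes, and any shortfall gets copied out to nodes that don’t yet hold it.

```python
import random

# Toy model: each "node" is just the set of document ids it stores.
MIN_COPIES = 3  # the "X" from the requirements above


def repair(nodes):
    """Ensure every known document exists on at least MIN_COPIES nodes."""
    all_docs = set().union(*nodes)
    for doc in all_docs:
        holders = [n for n in nodes if doc in n]
        missing = MIN_COPIES - len(holders)
        if missing > 0:
            # Pick nodes that don't yet hold the document and replicate to them.
            candidates = [n for n in nodes if doc not in n]
            for n in random.sample(candidates, min(missing, len(candidates))):
                n.add(doc)


nodes = [{"memo-1"}, {"memo-1", "memo-2"}, set(), set(), set()]
repair(nodes)
assert all(sum(doc in n for n in nodes) >= MIN_COPIES
           for doc in {"memo-1", "memo-2"})
```

A real network would do this gossip-style between neighbours rather than with a global view, but the invariant being maintained is the same.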

What we’re talking about here is really a somewhat customised form of CDN, and the technology to do all of this already exists.

The Wikileaks of the future then would be one of many websites which sit with their toes in the same pool of data.

Discuss.

Amazon, most widely known for its online shop, has another project called Amazon Web Services (AWS), which offers website authors a collection of very powerful hosting tools as well as hybrid cloud services.

This is not a sideline for Amazon – a recent conversation I had with an Amazon engineer I met at a BarCamp suggested that in terms of revenue AWS exceeds the store – and it dramatically lowers the cost of entry for web authors. Using AWS it is possible to compete with the big boys without having to mortgage your house for the data center, and it solves many storage and scalability problems. What’s more, it’s pay-as-you-go.

To be clear, then: Amazon is now a hosting company, and it’s rather curious that people (Amazon included) are not making a bigger deal of this.

Anywho, I’ve recently had the opportunity to work on a few projects which use AWS extensively, and since I ran into a few gotchas I thought I’d jot some notes down here, more as an aide-mémoire than anything else.

Getting an AWS account

Before you begin, you’re going to need an account. So, head over here and create one.

Creating a server

You are going to want to create a server to run your code on. The service you need here is Amazon Elastic Compute Cloud (EC2), so sign up for EC2 from the AWS Management Console.

You are going to have to select an EC2 image to start from, and there are a number of them available. An image (an AMI, or Amazon Machine Image) is a snapshot of a server: its disks and any software you want to run. Once configured, you can start one or more instances of it – these are your running servers.

Once you’ve configured your server, you can create your own image, which you can share or use to boot other instances. This whole process is made infinitely easier by using an EBS-backed image. An EBS volume is a virtual disk which can be attached to your server like a normal disk, in sizes from 1GB to 1TB. EBS-backed images store their data on these volumes, so configuration changes are preserved; additionally, new server images can be cloned with the click of a button.

You can add extra disks to your server, but they must be in the same availability zone (datacenter) as your EC2 instance, and currently can’t be shared between EC2 instances.
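The same-availability-zone rule is an easy one to trip over, so it’s worth making the constraint explicit somewhere in your tooling. A guard along these lines (purely illustrative – the function and zone names are mine, not part of any Amazon API) turns a confusing attach failure into an obvious error:

```python
def check_attachable(volume_zone: str, instance_zone: str) -> None:
    # An EBS volume can only be attached to an instance in the same
    # availability zone (i.e. the same datacenter).
    if volume_zone != instance_zone:
        raise ValueError(
            f"volume in {volume_zone} cannot attach to "
            f"instance in {instance_zone}"
        )


check_attachable("us-east-1a", "us-east-1a")  # same zone: fine
try:
    check_attachable("us-east-1a", "us-east-1b")
except ValueError as e:
    print("refused:", e)
```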

So:

  • Save your sanity and use an EBS-backed image – if you’re looking for a Debian or Ubuntu based one, I recommend the ones created by Alestic. You can search for community-managed images from within the management console; be sure to select EBS-backed! For reference, the AMI I often use is identified as ami-209e7349.
  • Grab an Elastic IP and assign it to your instance, then point your domain at this.
  • Your goal is to build an AMI consisting of a ready to go install of your web project, so install the necessary software.
  • Note, however, that you want to build your AMI so that you can run multiple instances of it in order to handle scaling. Since EBS disks can’t be shared, you may have to change your architecture slightly:
    • Don’t store user data – user icons etc. – on the server; these can’t be shared across instances. Store them instead on Amazon S3 and reference them directly by URL where possible. (Temporary files like caches are probably OK.)
    • If you use a database, use MySQL, but only install the client libraries on your image – you won’t be using a local server (see below)!
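To illustrate the “store user assets on S3 and reference them by URL” point: because public S3 objects live at a predictable address, every instance can embed the same link without touching local disk. A tiny helper (bucket and key names here are invented) makes this concrete:

```python
def s3_public_url(bucket: str, key: str) -> str:
    # Public-read objects in S3 are addressable at a predictable URL,
    # so a page served from any EC2 instance can reference the same asset.
    return f"https://{bucket}.s3.amazonaws.com/{key}"


print(s3_public_url("my-app-assets", "icons/user-42.png"))
```

Upload the file once (via the S3 API or console), mark it public-read, and drop the resulting URL straight into your HTML.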

Once you’ve configured your server, create an AMI out of it; this can then be booted multiple times.

Using a database

The database service you will most likely want to use is called Amazon Relational Database Service (RDS), which can function as a drop-in replacement for MySQL.

Sign up for it using the management console, create a database (making a note of the user names and passwords you assign), and allow access from your running server instance(s) via the security settings.

The database is running on its own machine, so make a note of the host name in the properties – you will need to pass this in your connect string.
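As an illustration (the endpoint, user, and database names below are invented – substitute the values from the RDS console), the RDS host name simply takes the place of localhost in whatever connect parameters your MySQL client library expects:

```python
# Hypothetical values -- replace with the endpoint shown in the RDS console.
DB_CONFIG = {
    "host": "mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # RDS endpoint, not localhost
    "user": "appuser",
    "password": "s3cret",
    "database": "myapp",
}


def connect_string(cfg: dict) -> str:
    # Build a MySQL-style DSN from the config; most client libraries
    # would take these same values as keyword arguments instead.
    return "mysql://{user}:{password}@{host}/{database}".format(**cfg)


print(connect_string(DB_CONFIG))
```

The only AWS-specific part is the host: everything else is exactly what you would pass to a local MySQL server.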

Conclusion

AWS is a powerful tool, if you use it right. There is a lot more complexity to cover of course, but this should serve as a start.

I’ll cover the more advanced scalability stuff later, when I actually get to use it in anger (and find the time to write about it!).