
I like feeds and APIs.

Feeds and APIs provide ways for others to access a service and to recombine its data in new and unexpected ways, ways that have consistently proven beneficial to both parties (which makes Google’s increasing antipathy towards them an interesting, not to mention short-sighted, trend).

Anyway, one morning I was attempting to find my girlfriend a route to work that bypassed the numerous crashes on the arterial routes that morning, and I found myself pondering thus:

… wouldn’t it be cool if roads and junctions had permanent URLs, and better yet if you could get a data feed on them?

This would let you do many cool things: for example, you could enter your route to work and get the status of the traffic en route – or at the very least attach a particular traffic blackspot (in our case the 13 bends of death on the A4074) to ifttt and get SMS alerts if there was a problem.
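To make the idea concrete, here is a minimal sketch of what polling such a feed might look like. The URL scheme and the JSON shape are entirely hypothetical – no such feed exists, which is rather the point:

```python
import requests

# Hypothetical, illustrative URL scheme: nothing like this exists today.
# Imagine every road segment had a permanent, addressable status feed.
FEED_URL = "https://traffic.example.gov.uk/roads/A4074/status.json"

def check_road():
    """Fetch the (imaginary) status feed for the A4074 and print incidents."""
    response = requests.get(FEED_URL, timeout=10)
    response.raise_for_status()
    status = response.json()  # assumed shape: {"road": "A4074", "incidents": [...]}
    for incident in status.get("incidents", []):
        print(f"{incident['location']}: {incident['description']}")

if __name__ == "__main__":
    check_road()
```

With permanent URLs like that, wiring a blackspot up to an SMS alert becomes a trivial exercise rather than a scraping project.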

Giving roads and junctions addressable URLs would be an obvious extension to the Google Maps API, but given that Google won’t even let you embed a map in a page if it contains a traffic data overlay, it seems unlikely they’ll provide such access to their data. Other sources, such as Yahoo’s traffic API, have long since been shut down.

So, what alternative traffic data sources could we use?

One possibility would be to parse a Twitter search for the road in question. We both currently use ifttt hookups to get alerts for certain key roads, so the basic concept is sound.

This isn’t perfect: there is no understanding of the context of a message, so a tweet saying “No traffic problems on the A4074” and one saying “Terrible crash on the A4074” would both trigger the alert, even though only the latter indicates a problem.
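For illustration, here is a minimal sketch of the sort of naive keyword matching involved, run over canned example tweets rather than a live Twitter search (the search plumbing is omitted). The negation list is my own addition to show why context matters; it would still misfire on plenty of real tweets:

```python
# A naive keyword filter of the kind an ifttt-style alert uses, run over
# example tweets rather than a live Twitter search (API plumbing omitted).
PROBLEM_WORDS = {"crash", "accident", "closed", "queues", "delays"}
NEGATIONS = {"no", "clear", "cleared", "reopened"}

def looks_like_a_problem(tweet: str) -> bool:
    words = set(tweet.lower().replace(",", " ").split())
    # Crude context check: a problem word present, with no negation word.
    return bool(words & PROBLEM_WORDS) and not (words & NEGATIONS)

tweets = [
    "No traffic problems on the A4074",
    "Terrible crash on the A4074, avoid if you can",
]
for tweet in tweets:
    print(looks_like_a_problem(tweet), "-", tweet)
```

Even so, a tweet about “gridlock” would slip straight through, since the filter only knows the keywords it is given – real context understanding needs more than keyword sets.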

The other problem, of course, is that it relies on people tweeting. In practice, though, this pulls in quite a diverse range of secondary sources – in my case it picks up anything that feeds into the local radio station, including reports from their traffic spotter plane.

As individuals without access to data from traffic sensors, or any ability to collect data directly (unlike, say, Google, who can use position reports from Android phones), we are pretty much limited to collecting data from secondary sources as far as I can see.

What other sources could we use?

Otto von Bismarck once said: “Laws are like sausages. It’s better not to see them being made.”

To my mind, few things could have illustrated this more clearly than yesterday’s vote on the Digital Economy Bill, where – as the vote was finally called – the room quickly filled with MPs who had completely missed the debate of the previous two days.

Faster than you could say “stitch-up” or “democratic deficit”, the vote was overwhelmingly passed thanks to a reported three-line whip and a back-room deal with the Conservatives. Only the Liberal Democrats and one awesome Labour backbencher did the right thing.

It should be noted as well that the Labour backbencher in question was actively tweeting during the proceedings.

So that’s pretty much that. The bill as passed will regulate away much of the UK technology industry and provide a quick and cost-effective mechanism to curtail free speech and governmental scrutiny, leaving only big business and a gagged population.

Someone much more cynical than me might suggest that this was the idea. After all, it is in the interests of both big business and government that you are unquestioning, ignorant consumers – simple economic units that work, buy stuff and pay taxes.

So, with this and other laws worthy of East Germany making the UK feel less like a country and more like a cage, I and many others are left looking about for a free country to live in.

While I do that, I will just point out that Labour and the Conservatives are the same people – so please remember this when it comes to the ballot box.

Today saw the release of Data.gov.uk, the government data website spearheaded by Tim Berners-Lee, which aims to collate government data and make it available for people to build on.

Although it is clearly aimed at developers, it is my hope that innovative and genuinely useful tools will quickly start popping up as entrepreneurs get to grips with this new wealth of information.

The launch has triggered a fair amount of buzz, and a flurry of blog posts elsewhere which do a much better job at explaining the ins and outs of the site than I have time to.

Personally, I think this is a good step in the right direction. It is also good to see that they have opted to go ugly early – publishing the raw data so we can begin hacking straight away – rather than wait until their cathedral-like semantic web interface is perfect.
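For a flavour of what hacking on the raw data straight away looks like, here is a minimal sketch that pulls down one of the published CSVs and tallies it. The dataset URL is hypothetical; substitute any raw CSV the site actually lists:

```python
import csv
import io
import urllib.request
from collections import Counter

# Hypothetical dataset URL: data.gov.uk publishes many raw CSVs, but this
# particular path is illustrative only.
DATASET = "https://data.gov.uk/datasets/example/spending.csv"

def summarise(url: str) -> None:
    """Download a raw CSV and tally rows by the first column - the sort of
    quick-and-dirty hacking the raw data release makes possible."""
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("utf-8")
    rows = csv.reader(io.StringIO(text))
    header = next(rows)
    counts = Counter(row[0] for row in rows if row)
    print(f"Rows grouped by {header[0]!r}:")
    for key, count in counts.most_common(10):
        print(f"  {key}: {count}")

if __name__ == "__main__":
    summarise(DATASET)
```

Nothing sophisticated, but that is rather the point: with the raw files out in the open, a useful answer is a few lines of script away.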

True, while the data is in this state it is not so useful to the wider world – yet. Projects such as Scores on the doors have proven that turning raw data into something usable can be a worthwhile and profitable undertaking, so I’ve no doubt that this will change.

The biggest disappointment is the choice to release much of the data under Crown copyright. While this was almost certainly a compromise to get anything to happen at all, it would have been nice if the government had taken the bolder step of releasing it unencumbered, letting the economy profit from it.

I would also like to see more local authorities opening up their data, moving away from the idea that everything has to be centralised.

Still, the new site follows a general positive trend of data glasnost which has already seen the promise to open up the postcode database, and in that spirit I welcome it.

Image “New, Improved *Semantic* Web!” by Duncan Hull