One of the many itches I have been scratching, as part of taking my social media contributions out of silos, is how to keep track of what my friends are up to. So, we’re talking about bringing the familiar social networking concepts of friends, subscriptions and update notifications to a distributed social network like Idno, an Elgg or Elgg multisite node, or an Indieweb site.

Existing systems, like PuSH, seem a little too complicated for my liking. I wanted something I could get up and running in about an hour and test using curl.

I expect other people in the Indieweb community are thinking about this too, but I couldn’t find anything with the 30 seconds of googling I had time for, and since I needed it I thought I’d throw my hat in the ring…

Outline specification

  • Two sites/profiles: Alice and Bob.
  • Alice wants update notifications from Bob.
  • Alice’s site looks at Bob’s profile for a subscribe endpoint.
  • If found, Alice’s site sends a POST containing Alice’s profile URL to the endpoint (sketched in code after this list):

    subscriber=http://alice.com/profile/alice&subscribe=http://bob.idno/profile/bob

    Note: Bob’s profile URL is included for multi-user installations, allowing the system to know which user we’re subscribing to.

  • OPTIONAL: Alice and Bob mine each other’s profiles for MF2 data; they could also exchange keys at this point for secure messaging, or to authenticate the syndication of private posts.
  • When Bob creates or updates a post, he discovers Alice’s endpoint and sends a POST containing the permalink, e.g.:

    subscription=http://bob.idno/foo+bar/

  • Alice checks that the permalink is from a recognised domain (optional, but recommended).
  • Alice visits the permalink, parses the MF2, extracts the author, and checks that the author URL is in her subscription list (see the verification sketch below).
  • Alice then uses the MF2 content to produce a feed, or pop up a notification, whatever.
  • If Bob deletes a post, he sends a DELETE containing the permalink to Alice’s endpoint, e.g.:

    subscription=http://bob.idno/foo+bar/

  • If Alice wants to unsubscribe/unfriend, she sends a DELETE mirroring the initial subscription request to Bob’s endpoint, and then (optional, but recommended) ignores any future posts from that user.
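
To make this concrete, here’s a rough sketch of both sides of the exchange in Python. It’s illustrative only: the endpoint URLs and the function names are my assumptions based on the outline above, not part of any spec, and in practice each site would discover the other’s endpoint from its profile page.

    import requests

    # Hypothetical endpoint URLs -- in reality these would be
    # discovered from each user's profile page.
    BOB_ENDPOINT = "http://bob.idno/subscribe"
    ALICE_ENDPOINT = "http://alice.com/subscribe"

    ALICE = "http://alice.com/profile/alice"
    BOB = "http://bob.idno/profile/bob"

    def subscribe():
        """Alice asks Bob's site for update notifications."""
        return requests.post(BOB_ENDPOINT, data={
            "subscriber": ALICE,  # who is subscribing
            "subscribe": BOB,     # which user, on a multi-user site
        })

    def notify(permalink):
        """Bob pings a subscriber's endpoint on create or update."""
        return requests.post(ALICE_ENDPOINT, data={"subscription": permalink})

    def retract(permalink):
        """Bob tells subscribers a post has been deleted."""
        return requests.delete(ALICE_ENDPOINT, data={"subscription": permalink})

    def unsubscribe():
        """Alice mirrors the original subscription request as a DELETE."""
        return requests.delete(BOB_ENDPOINT, data={
            "subscriber": ALICE,
            "subscribe": BOB,
        })

Since every step is a plain form-encoded POST or DELETE, each one can be exercised from curl during development.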
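
And here’s a sketch of the verification step on Alice’s side when a ping arrives: fetch the permalink, parse its microformats (I’m using the mf2py library here), and check the post’s author against the subscription list. Storing subscriptions as a simple set is, again, just an assumption for illustration.

    import mf2py

    # Profile URLs Alice has subscribed to (hypothetical storage).
    subscriptions = {"http://bob.idno/profile/bob"}

    def verify_notification(permalink):
        """Fetch and parse the permalink; return the h-entry if its
        author is in the subscription list, None otherwise."""
        parsed = mf2py.parse(url=permalink)
        for item in parsed.get("items", []):
            if "h-entry" not in item.get("type", []):
                continue
            for author in item["properties"].get("author", []):
                # The author may be a plain URL or an embedded h-card.
                if isinstance(author, dict):
                    urls = author.get("properties", {}).get("url", [])
                else:
                    urls = [author]
                if any(u in subscriptions for u in urls):
                    return item  # trusted: use its content for the feed
        return None  # unknown author -- ignore the notification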

Crucially, it doesn’t require firing Atom blobs around or maintaining extra feeds of data.

Handling popularity

An obvious problem with this proposed spec is how to handle a user with many subscribers. Here we may want to extend the spec or use a different technology, but the vast majority of people will likely only have a couple of hundred people in their network, tops.

I’ll probably be building this functionality out as a plugin for a couple of client sites in a couple of days, and I’ll post implementations when I have them, but let me have your comments below!

The Domain Name System – which much of the internet is built on – is a system of servers which turn friendly names humans understand (foo.com) into IP addresses which computers understand (203.0.113.44).

It is hierarchical and to a large extent centralised. You will be the master of *.foo.com, but you have to buy foo.com off the .com registrar.
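
As an illustration, the user-visible job of that whole hierarchy boils down to a single lookup, which you can do from Python’s standard library (example.com stands in for any name):

    import socket

    # Ask the resolver to turn a human-friendly name into the
    # machine-friendly address the network actually routes on.
    print(socket.gethostbyname("example.com"))  # prints an IPv4 address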

These top-level domain registrars, if not owned by national governments, are at least strongly influenced and increasingly regulated by them.

This of course makes these registrars a tempting target for oppressive governments like China, the UK and the USA, and for insane laws like SOPA and the Digital Economy Act, which seek to control information and shut down sites which say things the government doesn’t like.

Replacing this system with a less centralised model is therefore a high priority for anyone wanting to ensure the protection of the free internet.

Turning text into numbers isn’t the real problem

This may not be an entirely new observation, but turning a bit of text into a set of numbers is, from a user’s perspective, not what they’re after. They want to view Facebook, or a photo album on Flickr.

So finding relevant information is what we’re really trying to solve, and the entire DNS system is really just an artefact of search not being good enough when the system was designed.

Consider…

  • Virtually all modern browsers have auto-completing, search-as-you-type query bars.
  • Browsers like Chrome only have a search bar.
  • My mum types domain names, or partial domain names, or something like the domain name (depending on recollection) into Google.

For most cases, using the web has become synonymous with search.

Baked-in search

So, what if search was baked in? Could this be done, and what would the web look like if it was?

What you’re really asking when you visit Facebook, or Amazon or any other site is “find me this thing called xxxx on the web”.

Similarly when a browser tries to load an image, what it’s really saying is “load me this resource called yyyy which is hosted on web server xxxx on the web”, which is really a specialisation of the previous query.
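
A URL already encodes that two-part question, which is easy to see by pulling one apart (the URL here is made up):

    from urllib.parse import urlsplit

    parts = urlsplit("http://bob.idno/photos/cat.jpg")
    print(parts.netloc)  # "bob.idno" -- find me this web server...
    print(parts.path)    # "/photos/cat.jpg" -- ...and this resource on it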

You’d need searches to be done in some sort of peer-to-peer way, and distributed using an open protocol, since you wouldn’t want to have to search the entire web every time you looked for something. Neither would you want to maintain a local copy of the Entire World.

It’d probably eat a lot of bandwidth, and until computers and networks get fast enough, you’d probably still have to rely on having large search entities (Google etc.) do most of the donkey work, so this may not be something we can really do right now.

But consider: most of us now have computers in our pockets with more processing power than existed on the entire planet a few decades ago; at the beginning of the last century, the speed of a communication network was limited by how fast a manual operator could open and close a circuit relay.

What will future networks (and personally I don’t think we’re that far off) be capable of? Discuss.