The Domain Name System – which much of the internet is built on – is a system of servers that turns friendly names humans understand (foo.com) into IP addresses computers understand (192.0.2.44).
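As a rough illustration – foo.com is just the placeholder name from above – this is essentially the question a resolver asks on your behalf:

```python
import socket

# Ask the operating system's resolver which addresses a name maps to.
# "foo.com" is the placeholder from the text, not a real target.
for *_, sockaddr in socket.getaddrinfo("foo.com", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])  # an IPv4 or IPv6 address, e.g. 192.0.2.44
```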

It is hierarchical and to a large extent centralised. You will be the master of *.foo.com, but you have to buy foo.com from the .com registry.

These top-level domain registries, if not owned outright by national governments, are at least strongly influenced and increasingly regulated by them.

This of course makes these registries a tempting target for oppressive governments like those of China, the UK, and the USA, and for insane laws like SOPA and the Digital Economy Act, which seek to control information and shut down sites that say things a government doesn’t like.

Replacing this system with a less centralised model is therefore a high priority for anyone wanting to ensure the protection of the free internet.

Turning text into numbers isn’t the real problem

This may not be an entirely new observation, but turning a bit of text into a set of numbers is, from a user’s perspective, not what they’re after. They want to view Facebook, or a photo album on Flickr.

So finding relevant information is the problem we’re really trying to solve, and the entire DNS system is really just an artefact of search not being good enough when the system was designed.

Consider…

  • Virtually all modern browsers have auto-complete, search-as-you-type query bars.
  • Browsers like Chrome only have a search bar.
  • My mum types domain names, or partial domain names, or something like the domain name (depending on recollection), into Google.

In most cases, using the web has become synonymous with search.

Baked-in search

So, what if search was baked in? Could this be done, and what would the web look like if it was?

What you’re really asking when you visit Facebook, Amazon, or any other site is “find me this thing called xxxx on the web”.

Similarly, when a browser tries to load an image, what it’s really saying is “load me this resource called yyyy, which is hosted on web server xxxx”, which is a specialisation of the previous query.
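To make that concrete, here’s a minimal sketch – the URL is a placeholder – of how one request decomposes into those two questions, a name lookup followed by a resource fetch:

```python
import socket
from urllib.parse import urlsplit

url = "http://foo.com/photos/cat.jpg"  # placeholder resource
parts = urlsplit(url)

# Question 1: "find me the machine called xxxx" -- the DNS half.
address = socket.gethostbyname(parts.hostname)

# Question 2: "load me the resource called yyyy from that machine" -- the HTTP half.
request = f"GET {parts.path} HTTP/1.1\r\nHost: {parts.hostname}\r\n\r\n"

print(address)
print(request)
```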

You’d need searches to be done in some sort of peer-to-peer fashion, distributed using an open protocol, since you wouldn’t want to search the entire web every time you looked for something. Neither would you want to maintain a local copy of the Entire World.
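One way such a peer-to-peer lookup could avoid searching everything is consistent hashing: any node can compute, from the query alone, which peer is responsible for answering it. A minimal sketch, where the fixed list of peer names is entirely hypothetical:

```python
import hashlib
from bisect import bisect_right

# Hypothetical peers placed on a hash ring; a real network would discover these.
PEERS = sorted((int(hashlib.sha1(name.encode()).hexdigest(), 16), name)
               for name in ["peer-a", "peer-b", "peer-c"])

def responsible_peer(query: str) -> str:
    """Route a query to the first peer clockwise of its hash on the ring."""
    key = int(hashlib.sha1(query.encode()).hexdigest(), 16)
    index = bisect_right(PEERS, (key, "")) % len(PEERS)
    return PEERS[index][1]

# Every node computes the same answer without consulting a central index.
print(responsible_peer("facebook"))
```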

It’d probably eat a lot of bandwidth, and until computers and networks get fast enough, you’d probably still have to rely on large search entities (Google etc.) to do most of the donkey work – so this may not be something we can really do right now.

But consider: most of us now have computers in our pockets with more processing power than existed on the entire planet a few decades ago; at the beginning of the last century, the speed of a communication network was limited by how fast a manual operator could open and close a circuit relay.

What will future networks (and personally I don’t think we’re that far off) be capable of? Discuss.

7 thoughts on “DNS is a symptom of broken search #sopa”

  1. I agree that search results are far more important than domain names in getting to a site (although when I argued exactly that at SXSW last year, I was shouted down by friends at Google and Mozilla).

Here’s the thing: the only way to do this even remotely efficiently would be to trust some kind of cloud brain. In the same way that you have root DNS servers, we’d now be putting trust in Google (or whoever). Even if we didn’t have a centralized component, and we somehow made p2p work, there isn’t just one search algorithm.

    So we’re now dependent on somebody’s algorithm to find our way to any given website. Which brings us to Google’s handy example of why that’s a bad idea.

    Me? I’m wondering if we shouldn’t hand control back to the academic sector.

2. Somebody’s algorithm, aye, but at least you won’t be limited to just the one – not in the same way as with DNS.

Additionally, just because it’s inefficient now doesn’t mean it always will be… Memory, CPU, and bandwidth are all getting cheaper, faster, and larger year on year.

3. I don’t understand this argument. If we don’t use DNS – and by inference IP addresses – but use URLs instead, that would mean replacing every router on the planet, and putting a new protocol stack on every computer.
    Have I missed the point?
    I find the “CPU/memory/bandwidth is cheap” argument to always be a fallacy. In the real world I find it encourages lazy programming, and then everyone is surprised when it underperforms due to bloatware.
    In any case, CPU/memory improvements have always far outweighed bandwidth improvements.

4. “In the real world I find it encourages lazy programming, and then everyone is surprised when it underperforms due to bloatware.”

    Quoted, as they say, for truth. Inefficient is always inefficient; the only thing that changes is how much you lose to that inefficiency. Increases in CPU power could be counteracted by increases in users / data, just for example.

5. I still don’t understand. I have problems with the title. In my opinion DNS has nothing to do with content searching, and it is not broken.
    I feel the article confuses the Web with the Internet. If you want to be able to find content, it has to be built on the existing infrastructure – the internet – which uses DNS to resolve names to IP addresses.
    Although DNS is sort of centralised, it is also distributed: if you don’t like ICANN’s DNS, then you can use your own or somebody else’s.
    http://en.wikipedia.org/wiki/Alternative_DNS_root
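    For illustration – assuming the third-party dnspython package, and a made-up address standing in for an alternative-root server – switching resolvers takes a few lines:

    ```python
    import dns.resolver  # third-party: pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["198.51.100.53"]  # hypothetical alternative-root resolver
    answer = resolver.resolve("example.com", "A")
    print([record.to_text() for record in answer])
    ```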

6. I don’t intend to conflate the web with the internet, but if I have, I am sorry.

I’ll try and explain my thinking a bit better with the aid of bullet points:

    * IP addresses and netmasks are how packets are routed.
    * Humans aren’t good with numbers, so DNS exists.
    * Additionally, resources move physical location, and people don’t want to change every reference to them.
    * DNS is very efficient at what it does, and yes, other DNS registries do exist.

    However:

    * The registries that 99% of the world uses form a centrally owned hierarchy.
    * This is at odds with the distributed nature of the internet.
    * SOPA is deliberately going after DNS as a way of shutting off access for those 99%, because it’s the least distributed part of the system.
    * Shutting down access via IP is a substantially more involved process – you’d need to patch virtually every router on the planet.
    * Replacing DNS with something more distributed and self-repairing is, I think, important.
    * The question people really ask when looking on the web specifically, or the internet in general, is “find me X”, where X could be a website, a cat picture, or a server for a given protocol on a given machine.
    * I imagine some sort of announce/propagate architecture for resources: when something new became available, it would be propagated to its neighbours, and so on (see the sketch after this list) – but other options exist.
    * Better (read: perfect) search could be a good solution, but of course not the only one.
    * All this is a “What if” conversation, and the title was deliberately provocative.
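    Purely as a sketch of that announce/propagate idea – every name here is invented – each node records what it has seen and floods anything new to its neighbours, which stops the flood looping forever:

    ```python
    class Node:
        def __init__(self, name):
            self.name = name
            self.neighbours = []  # peer Node objects
            self.known = {}       # resource name -> location

        def announce(self, resource, location):
            """Record a resource and pass it on, once, to every neighbour."""
            if resource in self.known:
                return  # already seen; the flood stops here
            self.known[resource] = location
            for peer in self.neighbours:
                peer.announce(resource, location)

    # A tiny ring of three nodes: a -> b -> c -> a.
    a, b, c = Node("a"), Node("b"), Node("c")
    a.neighbours, b.neighbours, c.neighbours = [b], [c], [a]

    a.announce("cat-picture", "host-x")
    print(c.known)  # {'cat-picture': 'host-x'} -- the announcement reached every node
    ```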

7. I see what you are getting at now. But I think you have outlined two issues: DNS, and more effective searching.
    Consider me provoked 🙂
