I do a lot of web development these days, on a number of projects, each of which often requires its own domain, so I thought I’d share a quick tip that I’ve found helpful.

In a nutshell, I use wildcard domains and the Apache vhost_alias module to automatically create a domain per project.

Setting up bind

The first step is to set up wildcard DNS for your machine, in this case *.dev.mymachine. Assuming you’ve got bind set up, this is just a matter of configuring a local zone for your network (or adding this to your existing local zone).

It’s late and I’m tired, so Google “wildcard dns bind” and that’ll point you in the right direction.
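
That said, a minimal sketch may help. Assuming BIND 9, the zone declaration and a wildcard A record might look something like this (all names and addresses here are illustrative):

```
# /etc/bind/named.conf.local (illustrative): declare the local zone
zone "dev.mymachine" {
    type master;
    file "/etc/bind/db.dev.mymachine";
};
```

```
; /etc/bind/db.dev.mymachine (illustrative)
$TTL 86400
@       IN      SOA     ns1.dev.mymachine. admin.dev.mymachine. (
                        1          ; serial
                        3600       ; refresh
                        900        ; retry
                        604800     ; expire
                        86400 )    ; negative cache TTL
@       IN      NS      ns1.dev.mymachine.
ns1     IN      A       192.168.1.10
*       IN      A       192.168.1.10   ; the wildcard record
```

The `*` record is what makes fizzbuzz.dev.mymachine, foo.dev.mymachine and so on all resolve to the same machine without any further DNS changes.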

Setting up the vhost

Next, you need to set up an Apache vhost for your wildcard domain but, crucially, instead of specifying DocumentRoot in the normal way, define a VirtualDocumentRoot.

First, enable the module:

a2enmod vhost_alias

Then, set up and enable a definition which uses the variables supplied by vhost_alias; these take parts of the requested hostname and use them to build the path to the appropriate document root.

<VirtualHost *>

    ServerAdmin webmaster@myhost.com
    ServerName myhost.com
    ServerAlias *.myhost.com

    # Indexes + Directory Root.
    DirectoryIndex index.html index.php
    VirtualDocumentRoot /home/%2/mycode/%1/

    <Directory "/home/*/mycode/">
        AllowOverride All
        Options +Indexes +Includes +FollowSymLinks +ExecCGI
    </Directory>

</VirtualHost>

In the above configuration, %1 is the first dot-separated part of the requested hostname and %2 is the second, so a request for fizzbuzz.marcus.myhost.com is served from /home/marcus/mycode/fizzbuzz/: the project directory plus the user’s username.
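
Not Apache itself, but as an illustrative sketch, the %1/%2 split can be mimicked in the shell (the hostname and paths are the examples from above):

```shell
# Illustrative only: mimic how mod_vhost_alias splits the hostname.
# %1 is the first dot-separated part, %2 the second.
host="fizzbuzz.marcus.myhost.com"
project=$(echo "$host" | cut -d. -f1)   # %1 -> "fizzbuzz"
user=$(echo "$host" | cut -d. -f2)      # %2 -> "marcus"
echo "/home/$user/mycode/$project/"
# prints: /home/marcus/mycode/fizzbuzz/
```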

What this means is that you don’t need to create a new virtual host for each one of your projects, which may save you a little time.

DNS is the system which converts a human-readable address, like www.google.com, into the IP address that the computer actually uses to route your connection through the internet.

This works very well; however, it is a cleartext protocol. So, even if all other traffic from your computer is encrypted (for example, by routing your outbound traffic through a VPN – more on that later), you may still be “leaking” your browsing activity to others on your network.

Since I intend to do my best to stamp out cleartext wherever it may be, this is a problem for me.

Encrypting DNS

Unfortunately, DNS is still very much a legacy technology as far as modern security practices are concerned, and does not natively support encryption. Fortunately, OpenDNS, a distributed DNS alternative, provide DNSCrypt, an open source tool which encrypts DNS traffic between your computer and their servers.

DNSCrypt will help protect your browsing from being snooped on; however, you should be aware that it’s not foolproof. While people on the same WiFi hotspot (or your ISP) will no longer see the cleartext of the DNS resolution flash by, once the name is resolved into an IP address they will still see the outbound connection. So, while they won’t see www.google.com in their capture logs, they will still see that you made a connection to the resolved IP address, which an attacker can resolve back into www.google.com if they have the motivation. To protect against this, you must deploy this technology alongside a VPN of some sort, which will encrypt the whole communication, at least until the VPN outputs onto the internet proper.

All that said, I’ve got it turned on on my home network (since there’s no sense in making an attacker’s life easy), and I’ve got it running on my laptop to give me extra protection against snooping while surfing on public wifi, and in the case of my laptop, I also surf over a VPN.

Setting it up

By far and away the easiest thing to do is to use dnscrypt-proxy, which serves as a drop-in replacement for the DNS server normally provided by your ISP. Run it on your local machine, point your network settings at it, and you’re done.
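
On a typical Linux machine, for example, “point your network settings at it” can be as little as directing the resolver at the loopback address where the proxy listens (assuming the proxy’s default settings):

```
# /etc/resolv.conf: send all DNS queries to the local dnscrypt-proxy
nameserver 127.0.0.1
```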

In my home network, I had an additional complication, in that I run my own DNS server, which provides human-readable names for the various computers and devices around the home (my computers, the NAS, the printers and so on). I wanted to preserve these, and then configure the DNS server to relay anything that wasn’t local (or cached) via the encrypted link. To accomplish this, I needed to run dnscrypt-proxy on the network DNS machine alongside BIND (the traditional DNS server software), but listening on a different port.

dnscrypt-proxy --local-address=127.0.0.1:5553 --daemonize

Over on GitHub, my fork of the project contains a Debian /etc/init.d startup script which starts the proxy in this configuration. You may find this useful.

Then, all I needed to do was configure BIND to use the DNS proxy as a forwarder, and I was done.

In /etc/bind/named.conf.options:

forwarders {
    127.0.0.1 port 5553;
};

You can use pretty much any port that you like, but don’t be tempted to use something obvious like 5353, since this will cause problems with any Avahi/Bonjour services you may have running.

You may also want to put a blank forwarders section in the zone declaration for your local domain (which is strictly speaking “correct”, but many examples don’t), e.g.:

zone "example.local" {
    type master;
    notify no;
    file "/etc/bind/db.example.local";
    forwarders { };
};

Some gotchas

First, OpenDNS by default provide “helpful” content filtering, typo correction and a search page for bad domains. This last means that any bad domain will resolve to their web servers instead of failing, which can interact badly with your resolv.conf search domain. This can cause problems in certain situations if, for example, you have subdomains or wildcards in the zone file for your local domain, and will make them accessible only by their fully qualified domain names.

A workaround for this is to create a free account on OpenDNS, register your network, and then disable their web content filtering and typo correction, although my feeling is that I may have made a mistake in the configuration.

Second, OpenDNS’ servers do not support DNSSEC, despite promises to the contrary. I’m not sure why; probably because it would break the DNS hijacking which makes the unrecognised-domain redirection described above possible. Since their business is security, arguably OpenDNS should be doing DNSSEC validation on your behalf, so how much of an issue this is remains an open question.

Still, it’s worth noting, since you will at least see a lot of error (broken trust chain) resolving ... messages in your system log and in all probability your connection will stop working when forwarding upstream.

Happy encrypting!

Update: CloudNS, an Australian-based name service, now offers DNSCrypt together with a no-logging policy. There are also a number of OpenNIC servers which are starting to support DNS encryption, so it’s worth keeping an eye on the Tier 2 server page.

The Domain Name System – which much of the internet is built on – is a system of servers which turn friendly names humans understand (foo.com) into IP addresses which computers understand (e.g. 203.0.113.44).

It is hierarchical and to a large extent centralised. You will be the master of *.foo.com, but you have to buy foo.com off the .com registrar.

These top level domain registrars, if not owned by national governments, are at least strongly influenced and increasingly regulated by them.

This of course makes these registrars a tempting target for oppressive governments, like those of China, the UK and the USA, and for insane laws like SOPA and the Digital Economy Act which seek to control information and shut down sites which say things the government doesn’t like.

Replacing this system with a less centralised model is therefore a high priority for anyone wanting to ensure the protection of the free internet.

Turning text into numbers isn’t the real problem

It may not be an entirely new observation, but the problem of turning a bit of text into a set of numbers is, from a user’s perspective, not what they’re after. They want to view Facebook, or a photo album on Flickr.

So finding relevant information is what we’re really trying to solve, and the entire DNS system is really just an artefact of search not being good enough when the system was designed.


  • Virtually all modern browsers have auto-complete, search-as-you-type query bars.
  • Browsers like Chrome only have a search bar.
  • My mum types domain names, or partial domain names, or something like the domain name (depending on recollection) into Google.

For most cases, using the web has become synonymous with search.

Baked in search

So, what if search was baked in? Could this be done, and what would the web look like if it was?

What you’re really asking when you visit Facebook, or Amazon or any other site is “find me this thing called xxxx on the web”.

Similarly when a browser tries to load an image, what it’s really saying is “load me this resource called yyyy which is hosted on web server xxxx on the web”, which is really a specialisation of the previous query.

You’d need to have searches done in some sort of peer to peer way, and distributed using an open protocol, since you’d not want to have to search the entire web every time you looked for something. Neither would you want to maintain a local copy of the Entire World.

It’d probably eat a lot of bandwidth, and until computers and networks get fast enough, you’d probably still have to rely on having large search entities (google etc) do most of the donkey work, so this may not be something we can really do right now.

But consider, most of us now have computers in our pockets with more processing power than existed on the entire planet a few decades ago; at the beginning of the last century the speed of a communication network was limited by how fast a manual operator could open and close a circuit relay.

What will future networks (and personally I don’t think we’re that far off) be capable of? Discuss.