Here’s the scenario: you’re a developer and you’ve been asked to do some work on an existing Elgg site, or you’ve built an Elgg site with some complex plugin interdependencies that you need to copy on to a live site.

In both cases, this primarily involves copying the source code and Elgg database from one site to another. Here’s how…

Source code and database

  1. Install the source code for your project; scp it from the other site, git clone --recursive, whatever…
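
    For example (the host name and paths here are placeholders rather than anything from the original setup):

    # Copy an existing checkout straight from the old server...
    scp -r user@old-server:/var/www/elgg /var/www/elgg
    # ...or clone the project, pulling in any plugin submodules:
    git clone --recursive https://example.com/your-project.git /var/www/elgg
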
  2. On the site you’re copying from, take a dump of the database. You can look in your engine/settings.php for the database username and password:

    mysqldump -u your-db-user -p elgg_database > database-dump.sql

  3. Copy this file onto your new host.
  4. Create a new database and install the Elgg database into it; in the mysql client, do the following:

    create database new_elgg_database;
    grant all on new_elgg_database.* to `db_username`@`localhost` identified by 'db-password';
    use new_elgg_database;
    source /path/to/database-dump.sql

  5. You should now have a local copy of the Elgg database installed, but in order for it to work you need to change a few paths. Firstly, alter your dataroot and site location details in your prefix_datalists table:

    update elggdatalists set value="/path/to/elgg/" where name="path";
    update elggdatalists set value="/path/to/dataroot/" where name="dataroot";

    Don’t forget the trailing slash on the paths!

  6. Next, you need to update the site URL in the site object stored in the prefix_sites_entity table. For the vast majority of people (who only have one site object) this will be straightforward; for others, you might have to use a slightly different query to get all sites working as expected (see the example below).

    update elggsites_entity set url="http://localhost/path/to/site/";

    Again, don’t forget the trailing slash on the URL!
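
    If you do have more than one site entity, something along these lines should work (a sketch; the guid value is an assumption, so list your own rows first):

    mysql -u your-db-user -p new_elgg_database -e 'select guid, url from elggsites_entity;'
    mysql -u your-db-user -p new_elgg_database -e 'update elggsites_entity set url="http://localhost/path/to/site/" where guid=1;'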

  7. Finally, alter your copy of engine/settings.php to reflect your new database details.

When I view my site, all the CSS is broken!

This is almost certainly a mod_rewrite problem.

  • Firstly, check that mod_rewrite is installed and enabled, and that .htaccess overrides (AllowOverride) are permitted for your site (a common problem if installing into ~/public_html); see the commands after this list.
  • Next, make sure that your RewriteBase is configured. If you’re installing into a subdirectory on a domain (e.g. http://localhost/~marcus/elgg/) you’ll need to set the RewriteBase in your .htaccess file accordingly; in the case of my example, RewriteBase /~marcus/elgg/
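
On a Debian/Ubuntu-style Apache setup (an assumption about your distribution), checking and enabling the module looks something like this:

# Enable mod_rewrite and reload Apache so the .htaccess rewrite rules are honoured:
sudo a2enmod rewrite
sudo service apache2 reload

# Then set the base path for the rewrite rules in Elgg's .htaccess, e.g.:
#   RewriteBase /~marcus/elgg/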

Files

The above should get you up and running with a usable site for testing; however, if you want to fully migrate a site, you’ll also need to copy the data directory across.

  1. Using rsync or similar, copy the complete data directory from your old site to the new one (see the example commands after this list).
  2. Ensure that the directory, its subdirectories and files are readable and writeable by your web server’s user.
  3. Flush the caches. This is important since Elgg caches the locations of template files and other data in the data directory, which obviously will cause issues if you copy a cache file from another location! If the admin panel has become unavailable at this point, deleting the system_cache directory from dataroot by hand will often restore it.
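
Something like the following covers all three steps (a sketch; the host, paths and the www-data user are assumptions for a typical Debian-style setup):

# Copy the data directory from the old server (note the trailing slashes):
rsync -avz user@old-server:/path/to/old/dataroot/ /path/to/dataroot/

# Make sure the web server user owns, and can read and write, everything:
sudo chown -R www-data:www-data /path/to/dataroot

# Flush the caches by removing the cached view/template data; Elgg will rebuild it:
sudo rm -rf /path/to/dataroot/system_cache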

Happy hacking!

Ok, so the other week I wrote a little bit about migrating my git server from gitosis (which is no longer maintained) over to gitolite.

Alongside this, I also run a gitlist server. Gitlist is a web front end for git, similar to gitweb, which provides a slick and modern-looking “GitHub”-style interface. It is also remarkably easy to set up and configure.

One gotcha I found was that, while unmodified gitosis repositories displayed correctly, as soon as you pushed a change, gitlist presented an error:

Oops! fatal: Failed to resolve HEAD as a valid ref.

After some investigation, it seems the problem stems from a permission issue. By default, gitolite creates new files and repositories with a slightly more restricted set of permissions.
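
You can see the difference for yourself by comparing the permissions on an old and a newly created repository (the path below is the usual gitolite home directory and is an assumption):

# Repositories created with a restrictive umask come out as drwx------,
# so membership of the git group doesn't help the web server at all:
ls -ld /home/git/repositories/*.git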

Fixing the problem

Once the problem was identified, the solution was thankfully fairly trivial:

  1. First, stop gitolite from making the situation any worse. Go to the gitolite home directory, edit .gitolite.rc, and set

    $REPO_UMASK = 0027;

    This will cause new files and directories to be created with group read access as well as user access (and no access for anyone else).

  2. Next, give your web server process access to the gitolite group. Assuming, as in my case, that the gitolite user is git, modify /etc/group and add your web server user to the git group, e.g. git:x:128:www-data
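
    Rather than editing /etc/group by hand, you can achieve the same thing with usermod (the user names here are assumptions; adjust them for your system):

    # Add the web server user to the git group:
    sudo usermod -a -G git www-data
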
  3. Restart Apache for your changes to take effect.

Fixing broken repositories

New repositories and files should now be created with the more permissive permissions, which gitlist/gitweb will be able to read. However, you may need to fix the permissions on some existing repositories:

find repository-name.git -type d | xargs -I{} chmod 750 {}
find repository-name.git -type f | xargs -I{} chmod 640 {}

Hope this helps!

DNS is the system which converts a human readable address, like www.google.com, into the IP address that the computer actually uses to route your connection through the internet, e.g. 173.194.34.179.
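
You can watch this resolution happen with an ordinary lookup tool, for example (the output will vary):

# Ask the configured resolver for the address behind a name:
dig +short www.google.com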

This works very well; however, it is a cleartext protocol. So, even if all other traffic from your computer is encrypted (for example, by routing your outbound traffic through a VPN – more on that later), you may still be “leaking” your browsing activity to others on your network.
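
It’s easy to see this leakage for yourself by capturing DNS traffic on your own machine (eth0 is an assumption; substitute whatever your network interface is called):

# Watch cleartext DNS queries and responses go past:
sudo tcpdump -n -i eth0 port 53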

Since I intend to do my best to stamp out cleartext wherever it may be, this is a problem for me.

Encrypting DNS

Unfortunately, DNS is still very much a legacy technology as far as modern security practices are concerned, and does not natively support encryption. Fortunately, OpenDNS, a distributed DNS alternative, have provided DNSCrypt, which is open source and will encrypt DNS connections between your computer and their servers.

DNSCrypt will help protect your browsing from being snooped on; however, you should be aware it’s not foolproof. While people on the same Wi-Fi hotspot or at your ISP will not be able to see the clear text of the DNS resolution flash by, once it’s resolved into an IP address they will still see the outbound connection. So, while they won’t see www.google.com in their capture logs, they will still see that you made a connection to 173.194.34.179, which an attacker can resolve back into www.google.com if they have the motivation. To protect against this, you must deploy this technology alongside a VPN of some sort, which will encrypt the whole communication, at least until the VPN outputs onto the internet proper.

All that said, I’ve got it turned on on my home network (since there’s no sense in making an attacker’s life easy), and I’ve got it running on my laptop to give me extra protection against snooping while surfing on public Wi-Fi; in the case of my laptop, I also surf over a VPN.

Setting it up

By far the easiest thing to do is to use dnscrypt-proxy, which serves as a drop-in replacement for the DNS server normally provided by your ISP. Run it on your local machine, configure your network settings to talk to 127.0.0.1, and you’re done.
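
As a minimal sketch (the flags match the dnscrypt-proxy version used later in this post; newer releases are configured differently):

# Run the proxy on the standard DNS port and push it into the background:
sudo dnscrypt-proxy --local-address=127.0.0.1:53 --daemonize

# Then point the machine at itself for name resolution:
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf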

In my home network, I had an additional complication, in that I run my own DNS server, which provides human-readable names for the various computers and devices around the home (my computers, the NAS, the printers and so on). I wanted to preserve these, and then configure the DNS server to relay anything that wasn’t local (or cached) via the encrypted link. To accomplish this, I needed to run dnscrypt-proxy on the network DNS machine alongside BIND (the traditional DNS server software), but listening on a different port:

dnscrypt-proxy --local-address=127.0.0.1:5553 --daemonize

Over on GitHub, my fork of the project contains a Debian /etc/init.d startup script which starts the proxy in this configuration; you may find it useful.

Then, all I’d need to do is configure BIND to use the DNS proxy as a forwarder, and I should be done.

In /etc/bind/named.conf.options:

forwarders {
    127.0.0.1 port 5553;
};

You can use pretty much any port that you like, but don’t be tempted to use something obvious like 5353, since that port is used by multicast DNS and this will cause problems with any Avahi/Bonjour services you may have running.

You may also want to put a blank forwarders section in the zone file for your local domain (which is, strictly speaking, the “correct” thing to do, although many examples omit it), e.g.:

zone "example.local" {
    type master;
    notify no;
    file "/etc/bind/db.example.local";
    forwarders { };
};

Some gotchas

First, OpenDNS by default provide “helpful” content filtering, typo correction and a search page for bad domains. The last of these means that any unresolvable domain will resolve to their web servers at 67.215.65.132, which can break your resolv.conf search domain. This can cause problems in certain situations if, for example, you have subdomains or wildcards in the zone file for your local domain, and will make them accessible only by their fully qualified domain names.

A workaround for this is to create a free account on OpenDNS, register your network, and then disable their web content filtering and typo correction, although my feeling is that I may have made a mistake in the configuration.

Second, OpenDNS’ servers do not support DNSSEC, despite promises to the contrary. I’m not sure why; probably because it would break the DNS hijacking which makes the unrecognised-domain redirection above possible. Since their business is security, OpenDNS should arguably be doing DNSSEC validation on your behalf; how much of an issue this is remains an open question.

Still, it’s worth noting, since if your own resolver attempts DNSSEC validation you will at least see a lot of error (broken trust chain) resolving ... messages in your system log, and in all probability your connection will stop working when forwarding upstream.
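
If you want to check what your setup is doing, one quick test is to query a deliberately broken DNSSEC domain through it; a validating resolver will refuse to answer (SERVFAIL), while a non-validating one will happily return an address (the 127.0.0.1 target assumes you’re querying the local BIND/dnscrypt-proxy described above):

# dnssec-failed.org is a standard test domain with an intentionally broken signature:
dig dnssec-failed.org @127.0.0.1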

Happy encrypting!

Update: CloudNS, an Australian-based name service, now offers DNSCrypt together with no logging. There are also a number of OpenNIC servers which are starting to support DNS encryption, so it’s worth keeping an eye on the Tier 2 server page.