Like many people who run their own servers, I use cross-site backups for disaster recovery.

While I use “proper” backup software as well (which maintains a historic delta of changes, among other things), I’m a big fan of redundancy, so like many other server administrators, I also perform regular site-to-site copies using rsync over SSH.

Rsync, for those who aren’t familiar, is a file copy tool which, after the first copy, sends only the changes during subsequent runs. This makes it a very efficient tool, especially when used over an internet connection. SSH (Secure Shell) is a protocol that provides an encrypted connection between two computers.

Anyway, to enable rsync from Server A to Server B, it is common to perform the login via key. This means that on Server A you’d generate an SSH keypair for your backup user, then copy the generated public key into the ~/.ssh/authorized_keys file for your backup user on Server B. There’s a good guide to doing that here.

Because rsync is going to be executed automatically from a cron script, the key must be created without a passphrase.
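For completeness, the key setup might look something like this (a sketch; the user and host names are placeholders, and `ssh-copy-id` simply automates appending the public key to `~/.ssh/authorized_keys` on the far side):

    # On Server A, as the user that runs the backups:
    # generate a keypair with an empty passphrase
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # install the public key for the backup user on Server B
    ssh-copy-id backup-user@server-b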

Backups are running, so what’s the big deal?

Ok, so the key for your backup user isn’t passphrase protected, but that doesn’t mean that anybody can log in as that user; they’d still need a copy of the private key.

If someone is able to break into Server A and steal your backup user’s key, they will be able to log in to Server B as that user. If that user is unprivileged, this isn’t absolutely terrible (although they could, for example, read your /etc/passwd file, or anything else you’ve not locked down), and you can revoke the key at any time. However, since defence in depth and compartmentalisation are two concepts you should always keep in mind when building a secure system, it’d be nice if it were possible to do something about it.

Thankfully there is.

Jail time

The solution is to configure Server B’s SSH server to place the backup user’s sessions into what’s called a “jail” using a Linux facility called chroot. When the backup user logs in, they will not have access to any part of the system other than what’s in their jail cell.

It’s a little fiddly to set up, but most of the hard work is done by SSH.

  • First, install the chroot tooling: `apt-get install dchroot debootstrap` (the chroot binary itself ships with coreutils; these packages are only needed if you later populate the jail with debootstrap)
  • Configure your SSH server
    • Open up `/etc/ssh/sshd_config`
    • Make sure there’s a line that says
      Subsystem sftp internal-sftp
    • At the end of the file, tell SSH to create a chroot jail for your backup user:
      Match User backup-user
              ChrootDirectory /home/backup-user
              AllowTcpForwarding no
      

      Note: because of the way sshd’s ChrootDirectory works, you’ll need to make sure the chroot directory is owned by root and not writable by any other user or group, even if it’s actually the home directory of your backup user. The commands after this list show one way to set that up.

    • Save, and restart your SSH server.
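As noted above, the ownership fix and the restart come down to a couple of commands (assuming the /home/backup-user path used in the config, and Debian’s stock init scripts):

    chown root:root /home/backup-user
    chmod 755 /home/backup-user     # must not be writable by group or others
    /etc/init.d/ssh restart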

This gets you part of the way: you should now be able to SFTP into Server B as your backup user, while a normal SSH session will be refused. When connected, you will be restricted to the location set in ChrootDirectory.
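You can sanity-check this from Server A (the hostname is a placeholder):

    sftp backup-user@server-b    # should succeed and drop you at the jail root
    ssh backup-user@server-b     # should fail, since the jail contains no shell yet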

Unfortunately, rsync needs more than this: in order to copy files it needs access to a shell (I’m assuming bash), as well as the rsync binary itself, together with whatever libraries they require.

Therefore, it becomes necessary to create a partial chroot image in the backup user’s chroot directory. You could do this the traditional way (e.g. by using something like debootstrap), which will create a mirror of your base operating system’s files in the chroot jail. However, that generally takes a few hundred megabytes at least, and if all you want is to copy some files, you shouldn’t grant access to more than you need.

Instead, I opted to create a skeleton chroot jail by hand.

The goal here is to mirror the relevant parts of your server’s filesystem inside the chroot jail: if a file exists at /foo/bar, you copy it to /home/backup-user/foo/bar, and make sure it’s owned by root.

  • Copy bash from `/bin/bash` to the directory `/home/backup-user/bin/`
  • Copy rsync (on my system this was in `/usr/bin`)
  • Next, you need to copy the shared libraries these binaries are linked against. You can use the tool `ldd` to interrogate an executable and get a list of files to copy (a script that automates this appears after this list), e.g.:
    root@server-b:/home/backup-user# ldd /bin/bash 
        linux-vdso.so.1 =>  (0x00007fff52bff000)
        libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f412810a000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f4127f06000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4127b79000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f4128340000)
    

    Copy each file that resolves to a full path into the corresponding location inside the jail, e.g. `/lib/x86_64-linux-gnu/libtinfo.so.5` should go into `/home/backup-user/lib/x86_64-linux-gnu/`

  • Do the same for `/usr/bin/rsync`
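If you’d rather not chase library paths by hand, something like the following script (a sketch; it assumes the jail path used above and relies on parsing `ldd`’s human-readable output, which is fine for simple cases) will copy a binary and everything it links against into the jail:

    #!/bin/bash
    # Copy a binary, plus the libraries ldd reports, into the chroot jail,
    # preserving each file's path relative to the jail root.
    JAIL=/home/backup-user

    copy_with_libs() {
        local bin="$1"
        # install -D creates any missing parent directories
        install -D "$bin" "$JAIL$bin"
        # pick out the absolute library paths from ldd's output
        for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
            install -D "$lib" "$JAIL$lib"
        done
    }

    copy_with_libs /bin/bash
    copy_with_libs /usr/bin/rsync

Run it as root, so the copies end up root-owned like the rest of the jail.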

All being well, your rsync backups from Server A should now function as before; however, the backup user will not be able to touch anything outside the chroot jail.
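For reference, the cron-driven invocation on Server A might look something like this (a sketch; the source path and hostname are placeholders):

    rsync -az -e ssh /var/www/ backup-user@server-b:/data/

One subtlety: the destination path is resolved inside the jail, so /data above actually lands in /home/backup-user/data on Server B. Since the jail root must be owned by root, create a subdirectory inside the jail that the backup user owns and can write to, and point rsync at that.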

Hope that’s useful to you!

So, the other week, I migrated all my sites over to a new server. This was accomplished with minimum fuss, using lots of rsync magic and juggling of DNS TTLs.

The part of the migration I imagined would be the most complicated, moving several tens of MySQL databases from the old server to the new one, turned out to be pretty straightforward, and essentially came down to one command. I thought others might find it handy to know how…

First, I brought down Apache on both servers, so I could be sure that nobody was going to try to write to the databases. This may not really be required, since mysqldump can handle dumping live databases consistently, for MyISAM at least, but it never hurts to be paranoid.
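For the record, that’s just a matter of (assuming Debian’s stock init scripts):

    /etc/init.d/apache2 stop    # on both servers; 'start' again once the dump is across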

Next, I needed to move all the databases, together with their access permissions and users (i.e. including the mysql system schema), to the other server. One way to do that is to copy the whole /var/lib/mysql directory over. However, I had a lot of cruft in there (old binlogs etc.), plus a number of articles suggested that a straight binary copy has issues, especially in mixed storage engine environments. So, I opted for the mysqldump method.

Traditionally, this takes a lot of SCPing. Here’s how to do it with one command, using the magic of Unix pipes:

mysqldump -u root -pPASSWORD --all-databases | ssh USER@NEW.HOST.COM 'cat - | mysql -u root -pPASSWORD'

Boom. This ran surprisingly quickly for me, and you can of course just as easily use this method to transfer a single database.
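For a single database the shape is the same (a sketch; mydb is a placeholder, and the database must already exist on the target, since a plain single-database dump doesn’t include a CREATE DATABASE statement):

    mysqldump -u root -pPASSWORD mydb | ssh USER@NEW.HOST.COM 'mysql -u root -pPASSWORD mydb'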

Three gotchas:

  • If the link dies, you need to start again, so don’t do this over a flaky connection, and I suggest you run the command inside a screen session if the first server isn’t localhost.
  • You need to restart the MySQL server on the target machine (or run FLUSH PRIVILEGES) for the new users and privileges to take effect.
  • On Debian, you may see an error along the lines of:

    Got error: 1045: Access denied for user 'debian-sys-maint'@'localhost' (using password: YES) when trying to connect

    Fix this by executing the command:

    GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'THEPASSWORD' WITH GRANT OPTION;

    Where THEPASSWORD is the password found in /etc/mysql/debian.cnf
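    If you need to fish that password out quickly, a one-liner along these lines does the job:

        grep password /etc/mysql/debian.cnf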

One final note: this will only work if both servers are running the same major version of MySQL. I was moving between two Debian 6.0 installs, so YMMV.

Happy Easter!