What a difference a year makes… and I ain’t dead (yet). Long time, no post. Suffice it to say it has been a very busy year, with a lot going on, both professionally and personally, both good and bad. But, I won’t go into that here.

AI is the new hotness, as has been remarked. A lot of my day job now involves AI in some form or another, and indeed, the field of Structural Biology leads the way on a lot of the technological adoption. Notable examples include AlphaFold for AI-based structure prediction, but also some of the work I’m more directly involved with, which aims to build a model for improved fragment-based drug discovery. More on that stuff in due course.

Compared to that, this little thing I put together while bored during a meeting today may not be earth-shattering, but as a proof of concept I think it does start to address a problem we on the ARIA team have faced for a little while.

Namely, our catalogue is very well placed to serve people who already know what they’re after, but much less so for people who want to, say, find something out but have no idea how to go about it.

Previously, we had used human experts, and even played with a live chat feature. None of this really scaled very well, but now that we’ve got fancy-pants AI, I figured I’d give it a try!

My thinking on this is fairly simple. The first step is to make sure all the services in ARIA have good descriptions, right down to what each machine can and can’t do. The better the descriptions, the better the results from the model.

The second is to feed this context into the prompt and instruct the AI (via the newfangled technique of prompt engineering) to act as an expert, setting the parameters accordingly. For example, instructing it to only recommend services found in the context, and to return, as a result, a list of suitable application URLs (which in reality will allow the user to apply for the service directly).

I also instructed the AI to consider how it could connect services together into a pipeline, as well as to consider alternative recommendations.
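To make this concrete, here’s a rough sketch of the sort of call involved, assuming a generic OpenAI-style chat completions API (the model name, context snippet, and question below are all placeholders, not the real ARIA catalogue or prompt):

    # A minimal sketch of the approach, assuming an OpenAI-style chat API.
    # Everything below (model, context, question) is illustrative only.
    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {"role": "system", "content": "You are an expert advisor for a catalogue of structural biology services. Only recommend services listed in the context below, and for each recommendation return its application URL. Consider how services could be connected into a pipeline, and suggest alternatives. Context: <service descriptions go here>"},
          {"role": "user", "content": "I want to see how my protein binds a small molecule. Where do I start?"}
        ]
      }'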

It’s only a proof of concept right now, but I’m already getting some potentially interesting results. I’ve got more tedious meetings coming up, as well as a fair amount of sitting in airports, so my plan is to wire this up with a chat interface and let real scientists play. No doubt we’ll have to tweak the prompt a number of times as we go, but as a first play with the technology I think this was a morning well spent!

» Visit the project on GitLab…

I have a Hetzner server (the server that hosts this blog, as it happens) which runs Debian. While waiting to give a presentation at the day job over Zoom, I decided this was a good moment to upgrade the system from Debian Bullseye (now EOL) to Bookworm.
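For the record, the “usual way” here is roughly the standard Debian dance, something like the below (check your own sources.list entries before blindly running the sed):

    # Roughly the standard in-place upgrade -- adjust for your own setup.
    sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
    apt update
    apt upgrade --without-new-pkgs
    apt full-upgrade
    reboot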

This is something I’ve done before, so I didn’t anticipate any problems.

I was wrong.

Anyway, after performing the upgrade in the usual way, I rebooted and was greeted with… well… nothing.

The server failed to restart.

I logged in to Hetzner Robot and booted the rescue image, but I couldn’t find anything of note in the server logs. Indeed, the logs appeared to be completely untouched. To me, this pointed to a problem with the bootloader.

Thankfully, Hetzner supports a vKVM, so I booted into that. Again, nothing.

Hmm…

On playing around, however, I did notice that if you boot the KVM (which automatically starts the rescue image) and then trigger a soft reset from within the KVM itself, the KVM remains attached during the boot process, which allowed me to see… a grub error.

Joy. But at least I had identified the fault.

The error in question was complaining about not being able to find normal.mod, which is fairly critical. I poked around the recovery console, mounting the various filesystems in the RAID, but couldn’t find the file. So I attempted to load the kernel manually using insmod… only to be treated to another error complaining about linux.mod not being found.

So, grub was completely b0rked.

This, however, gave me the answer…

The fix

The problem is that, for whatever reason, grub (the bootloader) has got messed up, so we need to rebuild it. Since we can’t boot the system, we need to do this from the rescue system. Hetzner does have an installimage tool, but I felt a little wary about running that, since my understanding was that it would wipe everything… a bit of a nuclear option.

Thankfully, there was a lower-impact solution we could try first; the steps are below, collected into a single runnable block afterwards.

  1. Confirm your software RAID is working by taking a look at /proc/mdstat, and list the structure using lsblk. Your RAID should already be assembled, but if it isn’t, you can run mdadm --assemble --scan
  2. Next, find your mount points. For me:
    • md0 = swap (ignore)
    • md1 = /boot
    • md2 = /
    • md3 = /home (your home directory, so leave this alone)
  3. Now you’re ready to chroot into the broken system and rebuild the bootloader.
    • Mount your root partition (md2) to /mnt: mount /dev/md2 /mnt
    • Mount your boot partition (md1) inside it: mount /dev/md1 /mnt/boot
    • Bind-mount the various system filesystems:
      • mount --bind /dev /mnt/dev
      • mount --bind /proc /mnt/proc
      • mount --bind /sys /mnt/sys
    • Finally, create your chroot: chroot /mnt
  4. Now, rebuild and reinstall grub
    • grub-install /dev/sda (I took a guess that this was where my bootloader lives, which is usually the case)
    • update-grub
  5. Exit, unmount, and reboot
    • umount /mnt/dev
    • umount /mnt/proc
    • umount /mnt/sys
    • umount /mnt/boot  
    • umount /mnt
    • reboot
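For convenience, here’s the whole sequence as one block. It assumes my layout above (md1 as /boot, md2 as /, bootloader on /dev/sda); check lsblk and adjust device names for yours.

    # Run from the Hetzner rescue system. Device names match my layout;
    # yours may differ, so check lsblk first.
    mdadm --assemble --scan            # only if the arrays aren't already up
    mount /dev/md2 /mnt                # root
    mount /dev/md1 /mnt/boot           # boot
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt /bin/bash -c "grub-install /dev/sda && update-grub"
    umount /mnt/dev /mnt/proc /mnt/sys /mnt/boot /mnt
    reboot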

All being well, your server should be back up and running. For me, however, this wasn’t quite the end of the story.

After rebooting, my server was still inaccessible. I repeated the vKVM trick, fully expecting to see a grub error; however, the server was booting normally.

Using the root password, I logged in to the console and, sure enough, my server was running; however, there was no network connectivity.

A bit of poking around showed that, for some reason, the network interface name had changed, and /etc/network/interfaces was hard-coded to use the old one.

I used ip link show to find the correct network interface name, modified interfaces accordingly, and restarted.
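For illustration, the fix amounted to something like this (the interface names here are made up; use whatever ip link show actually reports on your machine):

    # Find the new interface name -- suppose it turns out to be enp0s31f6
    # (illustrative only; yours will differ).
    ip link show

    # Swap the old name for the new one in the interfaces file...
    sed -i 's/eth0/enp0s31f6/g' /etc/network/interfaces

    # ...and bring networking back up (or just reboot).
    systemctl restart networking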

Boom, server back up… and now I can tell you about it here!

Hope this is of use to someone.

In Linux, it is possible to expand an existing filesystem over multiple disk drives. I recently had to do this for a few VPSes as part of my day job, since they were running out of disk space and causing some instability in one of our core services.

Here’s how to do it, mainly for my own benefit, for when I inevitably have to do it again and have forgotten how… (the individual steps first, then everything collected into a single block at the end).

  1. First, after you have connected your new drive (either physically or virtually), you need to create a new logical partition on that drive using cfdisk. You can use fdisk -l to find out the drive’s name (e.g. /dev/sdc)
  2. Create a new physical volume for this partition using pvcreate, e.g. pvcreate /dev/sdc5
  3. Extend the existing volume group to include this, e.g. vgextend VOLUMEGROUP /dev/sdc5, where VOLUMEGROUP is the name of the volume group. You can find out what the volume group is called by using the command vgdisplay.
  4. Extend the logical volume to include this extra space, e.g. lvextend -l +100%FREE /dev/VOLUMEGROUP/root
  5. Resize the filesystem. The command for this varies depending on which filesystem is in use: for ext4 it would be resize2fs, and in my case, for XFS, it’s xfs_growfs. E.g. xfs_growfs /dev/VOLUMEGROUP/root
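Collected together, using the same example names as above (substitute your own from fdisk -l and vgdisplay):

    # Assumes the new partition is /dev/sdc5 and the volume group is
    # VOLUMEGROUP, as in the steps above -- substitute your own names.
    pvcreate /dev/sdc5
    vgextend VOLUMEGROUP /dev/sdc5
    lvextend -l +100%FREE /dev/VOLUMEGROUP/root
    xfs_growfs /dev/VOLUMEGROUP/root   # resize2fs for ext4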

After which, you should be able to type df -h and see increased disk space available to you.