So, the other week I told you about the improvements to my access logging tool, which now keeps a per-user record of account activity.

This tool also makes a call to a GeoIP lookup hook, but until now that call went unanswered. So, I wrote a quick tool that implements this GeoIP lookup hook using PHP’s built-in geoip functions.

Once installed and configured (and the appropriate GeoIP database set up), this plugin will respond to any geoip/lookup hook requests by looking up the supplied ['ip' => '....'] and returning the country.
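
For a flavour of how this works, a handler for that hook boils down to something like the following (a minimal sketch, assuming Elgg’s plugin hook API and the PECL geoip extension, rather than the plugin’s actual source):

<?php
// Sketch: answer geoip/lookup hook requests with a country name.
// Assumes the PECL geoip extension and its database are installed.
function geoip_lookup_handler($hook, $type, $return, $params) {
    if (empty($params['ip'])) {
        return $return; // nothing to look up
    }
    // geoip_country_name_by_name() accepts an IP address or a hostname
    $country = geoip_country_name_by_name($params['ip']);
    return ($country !== false) ? $country : $return;
}
elgg_register_plugin_hook_handler('geoip', 'lookup', 'geoip_lookup_handler');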

If installed alongside LoginSyslog, you should start seeing the country listed alongside the IP address!

» Visit the project on GitHub...

At home, which is also my office, I have a network with a number of devices connected to it. Some of these devices – wifi base stations, NAS storage, a couple of Raspberry Pis, media centers – are headless (no monitor or keyboard attached) or, in the case of the media center, spend their time running a graphical front end that makes it hard to see any system log messages that may appear.

It would be handy if you could send all the relevant log entries to a central server and monitor all these devices from one place. Thankfully, on *nix at least, this is a pretty straightforward thing to do.

The Server

First, you must configure the system log on the server to accept log messages from your network. Syslog functionality can be provided by one of a number of syslog servers; on Debian 6 this server is called rsyslog.

To enable syslog messages to be received, you must modify /etc/rsyslog.conf and add/uncomment the following:

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514


# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

Then, restart syslog:

/etc/init.d/rsyslog restart

Although this is likely to be less of an issue for a local server, you should ensure that your firewall permits connections from your local network to the syslog server (TCP and UDP ports 514).
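
With plain iptables, for example, rules along these lines would do it (assuming a 192.168.0.0/24 LAN; adjust to match your network):

iptables -A INPUT -p udp -s 192.168.0.0/24 --dport 514 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 514 -j ACCEPT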

The Clients

Your client devices must then be configured to send their logs to this central server. The concept is straightforward enough, but the exact procedure varies slightly from server to server, and from device to device. If your client uses a different syslog server, I suggest you do a little googling.

The principle is pretty much the same regardless: you must specify the location of the log server and the level of logs to send (info is sufficient for most purposes). In the syslog configuration file, add the following at the bottom (the single @ means send over UDP; rsyslog also accepts @@ for TCP):

*.info @192.168.0.1

On Debian/Ubuntu/Raspbian clients, this setting is in the /etc/rsyslog.d/50-default.conf file.
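
Restart rsyslog on the client, and you can check that messages are getting through using the standard logger utility:

logger -p user.info "test message from $(hostname)"

The message should appear shortly afterwards in the server’s logs (on Debian, in /var/log/syslog).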

Some embedded devices, like my Buffalo AirStation, have an admin setting to configure this for you. Other devices, like my Netgear ReadyNAS 2, require a slightly more involved process (in this specific case, you must install the community SSH plugin and then edit the syslog configuration manually).

Monitoring with logwatch

Logwatch is a handy tool that will analyse logs on your server and generate administrator reports listing the various things that have happened.

Out of the box, on Debian at least, logwatch is configured to assume that only log entries for the local machine will appear in log files, which can cause the reports to get confused. Logwatch does support multiple host logging, but it needs to be enabled.

The documented approach I found, which was to create a configuration file in /etc/logwatch/conf, didn’t work for me. On Debian, this directory didn’t exist, and the nightly cron job seemed to ignore settings in both logwatch.conf and override.conf.

I eventually configured logwatch to handle multiple hosts, and to send out one email per host, by modifying the nightly system cron job. In /etc/cron.daily/00logwatch, modify the execute line to add the --hostformat option:

#execute
/usr/sbin/logwatch --output mail --hostformat splitmail

After this change, you should receive one email per host logged by the central syslog server.

Fail2Ban is a simple, but powerful, open source intrusion detection and prevention system which can run on most POSIX-compliant operating systems. It works by monitoring various system logs for signs of intrusion attempts (failed logins, etc.) and, on finding them, executes a preconfigured action.

Typically, this action is to block further access attempts from the remote host, using local firewall rules.

Out of the box, Fail2Ban comes configured to monitor SSH for signs of intrusion. However, since it works by monitoring log files, Fail2Ban can be configured to watch many other services. I figured it would be pretty cool if you could use it to protect Elgg sites as well.

Elgg already has a per-user account lockout on login; however, it is not without its limitations. It is pretty basic, and while it protects against access to specific accounts, it does not protect against dictionary attacks across multiple or non-existent accounts. Using Fail2Ban, you can easily protect against multiple access attempts from the same IP address, and then cut them off at the network level, frustrating the attack.

Installing Fail2Ban

The first step to getting this all working is to install Fail2Ban.

This is covered in detail elsewhere, but on Debian/Ubuntu it was a simple matter of pulling it from the apt repo:

sudo apt-get install fail2ban

Out of the box, Fail2Ban will block attackers using iptables, but if you use Shorewall, as I do, you’ll need to modify the actions to use that instead.
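
Fail2Ban ships with a Shorewall action, so this should just be a matter of overriding the default ban action in jail.local, along these lines:

[DEFAULT]
banaction = shorewall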

Getting Elgg to log access

It is an omission (quite possibly on my part), but the default Elgg login action does not explicitly log login attempts and login errors. While it is quite probable that you could hack together some regexp to parse the Apache error logs, these are often quite noisy, highly changeable, often stored in odd locations and, more often than not, turned off in production environments.

I thought I’d make things a little easier on myself, so I wrote a tiny Elgg plugin which overrides the default login action and writes explicit messages to the system auth.log on both success and failure.
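
The logging itself is nothing exotic; it boils down to PHP’s syslog functions, along these lines (a sketch rather than the plugin’s exact code; $success and $username stand in for values from the login action):

<?php
// Sketch: log authentication results in an SSH-like format.
// LOG_AUTH routes the messages to auth.log on most Linux systems,
// and LOG_PID adds the [12345]-style process id.
openlog('elgg(' . $_SERVER['HTTP_HOST'] . ')', LOG_PID, LOG_AUTH);
if ($success) {
    syslog(LOG_NOTICE, "Accepted password for $username from {$_SERVER['REMOTE_ADDR']}");
} else {
    syslog(LOG_NOTICE, "Authentication failure for $username from {$_SERVER['REMOTE_ADDR']}");
}
closelog();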

Once installed, you should see log messages begin to appear in your server’s auth log (usually /var/log/auth.log), along the lines of this:

Mar 22 18:24:43 web elgg(web.example.com)[16483]: Authentication failure for fakeuser from 111.222.333.444
Mar 22 18:25:05 web elgg(web.example.com)[16483]: Accepted password for admin from 111.222.333.444

Again, to keep things simple, and to avoid getting a regular expression headache, I kept the authentication messages similar to those used by the SSH filter.
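
This also means the filter’s failregex can stay simple; the heart of elgg.conf is along these lines (a sketch; the elgg.conf in the repository is the authoritative version):

[Definition]
# <HOST> is Fail2Ban's placeholder for the offending IP address
failregex = elgg\(\S+\)\[\d+\]: Authentication failure for \S+ from <HOST>
ignoreregex =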

Monitoring the log with Fail2Ban

Finally, you need to configure Fail2Ban to look out for the Elgg messages in the auth.log.

  • Copy the elgg.conf into your Fail2Ban filter directory; on Debian this is /etc/fail2ban/filter.d/
  • Create a jail.local in /etc/fail2ban/ if you have not already done so, and then create a rule along the lines of the following:

    [elgg]
    enabled = true
    filter = elgg
    logpath = /var/log/auth.log
    port = all

Restart Fail2Ban, and you should be up and running! To test, try a few failed logins (from a different machine, if at all possible).
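
You can confirm the jail is live, and see any bans it has made, with fail2ban-client:

sudo fail2ban-client status elgg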

A future enhancement of this that you could consider, especially if running in a production environment, is to modify the block action to redirect queries from the offender’s IP to a placeholder page explaining why they have been banned. This could probably be done quite easily using a REDIRECT rule, although I’ve not tried it yet.
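
Untested, but I’d imagine the rule would look something like this (replacing <banned-ip> with the offender’s address; the placeholder page listening on port 8080 is hypothetical):

iptables -t nat -A PREROUTING -s <banned-ip> -p tcp --dport 80 -j REDIRECT --to-ports 8080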

Anyway, code, as always, is on GitHub. Have a play!

» Visit the project on GitHub…