Amazon, most widely known for its online shop, has another project called Amazon Web Services (AWS), which offers website authors a collection of very powerful hosting tools as well as hybrid cloud services.

This is not a sideline for Amazon – a recent conversation I had with an Amazon guy I met at a Barcamp revealed that in terms of revenue AWS exceeds the store – and it dramatically lowers the cost of entry for web authors. Using AWS it is possible to compete with the big boys without mortgaging your house for the data center, it solves many storage and scalability problems, and, what’s more, it’s pay as you go.

To be clear then, Amazon is now a hosting company, and it’s rather curious that people (Amazon included) are not making a bigger deal of this.

Anywho, I’ve recently had the opportunity to work on a few projects which use AWS extensively, and since I ran into a few gotchas I thought I’d jot some notes down here, more as an aide-mémoire than anything else.

Getting an AWS account

Before you begin, you’re going to need an account, so head over to the AWS site and create one.

Creating a server

You are going to want to create a server to run your code on. The service you need here is Amazon Elastic Compute Cloud (EC2), so sign up for EC2 from the AWS Management Console.

You are going to have to select an EC2 Image to start from, and there are a number of them available. An Image (an AMI, or Amazon Machine Image) is a snapshot of your server: its disks and any software you want to run. Once configured, you can start one or more Instances of it, which are your running servers.

Once you’ve configured your server, you can create your own image, which you can share or use to boot other instances. This whole process is made infinitely easier by using an EBS-backed image. EBS volumes are virtual disks that can be attached to your server like normal disks, in sizes from 1GB to 1TB. EBS-backed images store their data on these volumes, so configuration changes are preserved; additionally, new server images can be cloned with the click of a button.

You can add extra disks to your server, but they must be in the same availability zone (datacenter) as your EC2 instance, and currently they can’t be shared between EC2 instances.

So:

  • Save your sanity and use an EBS backed image – if you’re looking for a Debian or Ubuntu based one, I recommend the ones created by Alestic. You can search for community managed images from within the management console; be sure to select EBS backed! For reference, the AMI I often use is identified as ami-209e7349.
  • Grab an Elastic IP and assign it to your instance, and point your domain at it.
  • Your goal is to build an AMI consisting of a ready to go install of your web project, so install the necessary software.
  • Note, however, that you want to build your AMI so that you can run multiple instances of it in order to handle scalability. Since EBS disks can’t be shared, you may have to change your architecture slightly:
    • Don’t store user data on the server – user icons etc. – as these can’t be shared across instances. Store them instead on Amazon S3 and reference them directly by URL where possible (see the sketch after this list). Note, temporary files like caches are probably ok.
    • If you use a database, use MySQL, but only install the client libraries on your image – you won’t be using a local server (see below)!
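
As a rough sketch of the S3 idea above (the bucket name and file layout here are entirely made up), a user icon stored in S3 is referenced with a plain virtual-hosted URL rather than a path on the instance’s local disk, so any number of instances can serve the same thing:

<?php
// Hypothetical bucket and key layout - adjust to suit your own application.
// Because the icon lives in S3, nothing is written to the instance's own disk
// and every EC2 instance serves the same URL.
$bucket = 'myapp-user-icons';
$user_guid = 42;

$icon_url = "http://{$bucket}.s3.amazonaws.com/icons/{$user_guid}.jpg";

echo '<img src="' . htmlspecialchars($icon_url) . '" alt="User icon" />';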

Once you’ve configured your server, create an AMI from it; this can then be booted multiple times.

Using a database

The database service you will most likely want to use is Amazon Relational Database Service (RDS), which can function as a drop-in replacement for MySQL.

Sign up for it using the management tool, create a database (making note of the passwords and user names you assign) and allow access from your running server instance(s) via the security settings.

The database is running on its own machine, so make a note of the host name in the properties – you will need to pass this in your connect string.
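
With the old ext/mysql functions used later in this post, connecting to RDS looks just like connecting to any other remote MySQL server; only the host name changes. A minimal sketch, with a placeholder endpoint and credentials:

<?php
// Placeholder RDS endpoint, database and credentials - copy the real values
// from the instance's properties in the AWS Management Console.
$host = 'mydb.abc123xyz.us-east-1.rds.amazonaws.com';
$user = 'dbuser';
$pass = 'dbpassword';

$link = mysql_connect($host, $user, $pass);
if (!$link) {
    die('Could not connect to RDS: ' . mysql_error());
}
mysql_select_db('mydatabase', $link);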

Conclusion

AWS is a powerful tool, if you use it right. There is a lot more complexity to cover of course, but this should serve as a start.

I’ll cover the more advanced scalability stuff later when I actually get to use it in anger (and get the time to write about it!)

ProFTPD is a configurable FTP server available on most *nix platforms.

I recently had the need to get this working and authenticating off a PHP maintained MySQL backend, and this post is primarily to aid my own memory should I ever have to do it again.

Installing ProFTPD

In order to use MySQL as a back end you need to install some packages. If you’re using a Debian based distro like Ubuntu, this is easy:

apt-get install mysql-server proftpd proftpd-mod-mysql

The database schema

Next, you need to install the database schema to store your users and passwords.

CREATE TABLE IF NOT EXISTS users (
  userid varchar(30) NOT NULL default '',
  passwd varchar(128) NOT NULL default '',
  uid int(11) default NULL,
  gid int(11) default NULL,
  homedir varchar(255) default NULL,
  shell varchar(255) default NULL,
  UNIQUE KEY uid (uid),
  UNIQUE KEY userid (userid)
) ENGINE=MyISAM;

CREATE TABLE IF NOT EXISTS groups (
  groupname varchar(30) NOT NULL default '',
  gid int(11) NOT NULL default '0',
  members varchar(255) default NULL
) ENGINE=MyISAM;

One important thing to note here – one that caused me a fair amount of hair pulling when I tried to use encrypted passwords – is that the password field shown in many howtos on the internet is much too short. This causes the hashed password to be quietly truncated by MySQL when saved.

This results in a somewhat misleading “No such user found” error appearing in the logs when using encrypted passwords.

To end all argument I’ve allowed passwords up to 128 chars, but this field could probably be a good deal shorter.
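
If you want to convince yourself, a quick check along these lines (using the same old ext/mysql functions as the example at the end of this post, with placeholder credentials) shows why anything shorter than 41 characters is risky when using MySQL’s PASSWORD():

<?php
// Placeholder connection details - point this at your own MySQL server.
$link = mysql_connect('localhost', 'root', 'secret');

// On MySQL 4.1+ PASSWORD() returns a 41 character hash ('*' plus 40 hex digits),
// so a passwd column shorter than that silently truncates it.
$result = mysql_query("SELECT PASSWORD('example'), LENGTH(PASSWORD('example'))", $link);
list($hash, $length) = mysql_fetch_row($result);

echo "$hash ($length characters)\n";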

The user table looks much like /etc/passwd and is largely self-explanatory. The uid & gid fields correspond to a system user in most cases, but since we’re using virtual users they can largely be ignored. Homedir points to a location which will serve as the user’s default directory. Shell is largely unused and can be set to /bin/false or similar.
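
Since we’re dealing with virtual users, a single shared system account and group is normally enough. As a sketch (the uid/gid of 2001, the group name and the connection details below are just illustrative), every FTP user can share one group row and the same numeric ids:

<?php
// Placeholder connection details and values - one shared gid for all virtual users.
$link = mysql_connect('localhost', 'proftpd', 'secret');
mysql_select_db('ftpdb', $link);

// A single group that all virtual FTP users belong to.
mysql_query("INSERT INTO groups (groupname, gid, members)
             VALUES ('ftpusers', 2001, '')", $link);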

Configuring ProFTPD

Next, you need to make some changes to the ProFTPD configuration files stored in /etc/proftpd. While doing this it is handy to run proftpd in debug mode from the console:

proftpd -nd6

proftpd.conf

  1. Make sure the AuthOrder line looks like:

    AuthOrder mod_sql.c

  2. Ensure that the following line is uncommented:

    Include /etc/proftpd/sql.conf

  3. For belts and braces I’ve included the following at the end, although I’m not entirely sure it’s strictly required:

    <IfModule mod_auth_pam.c>
    AuthPAM off
    </IfModule>

  4. Our users don’t need a valid shell, so:

    RequireValidShell off

modules.conf

  1. Make sure the following lines are uncommented:

    LoadModule mod_sql.c
    LoadModule mod_sql_mysql.c

sql.conf

  1. Set your SQL backend and ensure that authentication is turned on:

    SQLBackend mysql
    SQLEngine on
    SQLAuthenticate on

  2. Tell proftpd how passwords are stored. You have a number of options here, but since I was using MySQL’s PASSWORD function, I’ll defer to the backend.

    SQLAuthTypes backend

  3. Tell proftpd how to connect to your database by providing the required connection details, and ensure that the user has full access to these tables (a sketch of granting this follows the list).

    SQLConnectInfo database@host user password

  4. Define your table structure in the format tablename fields….

    SQLUserInfo users userid passwd uid gid homedir shell
    SQLGroupInfo groups groupname gid members
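
As mentioned in step 3, the MySQL user named in SQLConnectInfo needs rights over the two tables above. A minimal sketch of granting those (the ftpdb database, proftpd user and passwords are placeholders; run it once as a MySQL admin user):

<?php
// Placeholder admin credentials - run once against your MySQL server.
$link = mysql_connect('localhost', 'root', 'admin-password');

// Grant the proftpd user access to the users and groups tables (on MySQL 5.x
// the account is created as part of the GRANT when IDENTIFIED BY is given).
mysql_query("GRANT SELECT, INSERT, UPDATE, DELETE ON ftpdb.*
             TO 'proftpd'@'localhost' IDENTIFIED BY 'secret'", $link);
mysql_query("FLUSH PRIVILEGES", $link);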

Adding users

I manage users from within a PHP web application that I’m developing, but in a nutshell adding FTP users from this point is a simple insert statement looking something like:

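// Note: the values interpolated here ($userid, $homedir, etc.) should be
// escaped (e.g. with mysql_real_escape_string) before use.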
mysql_query("REPLACE INTO users
(userid, passwd, uid, gid, homedir, shell)
VALUES
('$userid', PASSWORD('$password'), $uid, $gid, '$homedir', '$shell')");

Have fun!

Latest: Elgg Multisite is still active and has moved on to Github. Go join in!

I have just Open Sourced an “itch scratching” project I’ve been hacking on for a little while. So, without much further ado, I’d like to introduce you to Marcus Povey’s Multisite Elgg!

It is currently in Beta and the code could do with a bit of a tidy, but this is Open Source so roll up your sleeves and get involved.

What is it?
Multisite Elgg allows you to run multiple separate Elgg sites off the same install of the codebase, saving disk space and making administration a whole bunch easier.

It is currently based around the latest Elgg 1.7 release, and once installed, adding new Elgg sites is a matter of clicking a button and entering a few details.

What can I do with it?
You can do everything that you can do with Elgg, but with the ability to create new networks on demand. This will for example let you:

  • Set up your own version of Ning! What with Ning phasing out free accounts, it is my hope that Multisite Elgg will let a thousand more Nings bloom!
  • In your organisation or institution, easily set up Elgg sites for each department.
  • If you’re one of the Elgg hosting companies out there, you may want to look at multisite in order to simplify your workflow.
  • … etc…

Installation
Once you have downloaded the installation package you will need to do a few things in order to get up and running. Multisite Elgg assumes that you have some knowledge of how to set up and run a server – there is no wizard just yet!

  1. Unzip the package on your web server.
  2. Point your master domain at the contents of the install location on your web server. This is your master control domain; go here to configure your sites. Because of this you might want to consider putting it behind some further access restrictions.
  3. Point any sub-domains to the contents of the docroot folder, e.g. /var/multisite/docroot. This directory forms the base of all your Elgg installs. To make things even more automated you may want to consider making this an Apache wildcard domain, if your DNS provider supports it.
  4. Chmod 777 docroot/data: This is the default location for multisite domains.
  5. Install schema/multisite_mysql.sql: Create a new database on your Mysql server and install the Multisite schema – this is your master control database.
  6. Rename settings.example.php in docroot/elgg/engine/ to settings.php and configure:

    $CONFIG->multisite->dbuser = 'your username';
    $CONFIG->multisite->dbpass = 'password';
    $CONFIG->multisite->dbhost = 'host';

    Make sure this user has sufficient privileges to create and grant access to databases and tables on your server. This will allow the admin tool to create the databases for your hosted sites automatically.

  7. Visit your master domain and configure your admin user.
  8. Begin configuring your sites!

Creating sites
Once you have created an admin user, adding sites is easy. Currently you can only create one type of site, but in the future Multisite Elgg will let you create sites which have quotas and other access restrictions.

You have a box to enter database details, or you can leave them blank to use the Multisite Elgg user defined above (which you may not want to do for security reasons).

You can also select which of the installed plugins you want to allow; this lets different sites have different plugins available while still installing them on the same codebase.

Contributing
So, that was a brief introduction to Multisite Elgg. I hope that at least some of you out there find it useful!

As I said before, it’s Open Source, so if you want to get involved here are the important details:

If you want to contribute patches, feel free to use the bug tracker or discussion forum!

Enjoy!