Email is hard.

Sending email from a web application is a tricky prospect: sending directly from your own server is a good way to get your messages marked as spam and your server blacklisted.

Therefore, it’s common these days to send your email through a third party service. These services can also offer value-added functionality such as delivery reports and open and click tracking. All good stuff.

I recently had to wire this up for a client of mine, who was having problems sending emails from their application. So, the most expedient thing to do was hook them up with Mailgun.

The common (and indeed recommended) method of interacting with Mailgun, and other such delivery services, is through a web API. This is especially true in cloud environments, where you may have numerous servers that spool up and down based on demand.

However, my client was running their app on a traditional rack server, and they didn’t want to go the infrastructure-as-a-service route just yet. So, again for expediency, I figured the simplest thing to do was to set up the machine to send all email through Mailgun’s servers.

This is called a smarthost, and it’s pretty easy to set up (although it does require some configuration).

Set up your Mailgun domain

The first step is to set up your Mailgun domain and configure your DNS settings.

I’ll leave this as an exercise for the reader; it’s covered in some detail in Mailgun’s documentation.

I will, however, mention that you should check your existing DNS records and make sure you don’t pick a Mailgun domain that clashes. I made this mistake, and got a bunch of “554 The domain is unverified and requires DNS configuration.” errors in my logs, despite Mailgun reporting that everything was OK.

Creating a fresh Mailgun domain and re-entering the DNS configuration resolved this.

Install and configure Postfix

apt-get install postfix

I opted for Postfix here because its configuration is slightly simpler than Exim’s.

On Debian, the installer will ask you to choose what kind of configuration you want. Select “smarthost” and enter Mailgun’s SMTP server (smtp.mailgun.org) as the relay.

You’re now going to want to configure the upstream username and password, so that Postfix authenticates using your Mailgun account credentials.

Edit your /etc/postfix/ file and add the following:
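The exact values depend on your setup, but a typical smarthost configuration in Postfix’s main.cf looks something like this (the relay host and port are Mailgun’s standard SMTP submission endpoint; adjust for your own environment):

```
# Relay all outgoing mail through Mailgun's SMTP submission endpoint
relayhost = [smtp.mailgun.org]:587

# Authenticate to the relay using the credentials in /etc/postfix/password
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/password
smtp_sasl_security_options = noanonymous

# Require an encrypted connection to the relay
smtp_tls_security_level = encrypt
```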

Now edit /etc/postfix/password and enter your Mailgun SMTP username and password in the following format:
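That is, the relay server (which must match your relayhost value) followed by the credentials. The login and password below are placeholders (Mailgun SMTP logins are typically of the form postmaster@yourdomain):

```
[smtp.mailgun.org]:587 postmaster@example.com:your-mailgun-smtp-password
```

Since this file contains credentials in the clear, it’s worth making sure only root can read it.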

Once you’ve done that, build a hashed database file:

postmap /etc/postfix/password

Then reload your configuration:

postfix check; postfix reload

Now, any emails sent from your server (and by your web application) will automatically be relayed through Mailgun’s servers. Enjoy!

To see it in action (and to debug any problems), tail the mail log:

tail -f /var/log/mail.log

I’ve previously written about how Known has built-in support for OAuth2 and OpenID Connect (both as a client and as a server). Well, over the past few weeks I’ve been doing some work to make this even more useful.

So, I thought I’d quickly jot down some notes as to what you can do with this functionality, and why you might find it cool.

Turn your Known site into an identity provider

The first thing you can do is use the OAuth2 server built in to Known to turn your site into an identity provider.

This means you will be able to create “apps”, allowing users on your site to use third party applications that make authenticated API calls.

It also means you can easily create a “login with” button, allowing your users to log into other sites using their Known profile on your site.

Connect your Known site to an identity provider

The next thing you can do is connect your site to another third party IDP using OAuth2, and allow those users to log in to your site.

This third party IDP could be your organisation’s single sign on service, a third party one, or another Known site.

If the IDP you’re connecting to supports OpenID Connect, you can also enable the Federation feature.

What this does is let users with a valid OpenID token retrieved from the IDP make authenticated API calls on any Known site that shares that IDP, regardless of whether the user has used that site before.

Primarily, this functionality is designed for a modern microservice architecture – so, for example, you might have a React front end that needs to talk to one or more data sources over GraphQL, including getting blog data from a Known site. All of these services live in different containers, in distributed locations, with different local databases.



Something I’ve been pondering recently is whether this functionality might be able to let you do something pretty neat.

Consider: a Known site can be both a client and a server, and can both issue and receive public-key-signed, verifiable OpenID Connect tokens for its users.

Each token knows where it comes from and can state who issued it.

This raises the possibility of establishing reciprocal links between sites – each Known site (or other site – it’s an open protocol, after all) could be both a client and a server of the others.

With a bit of UX massaging, this could potentially let the users of each of these sites flow between every site in the network, getting all the functionality of local users.

I’m sure I’m not the first to think this way, but it’s something to play with, and it will certainly work a lot more seamlessly than the previously mooted PGP-signed login (although I still think that’s pretty cool).

OpenID Connect (OIDC) is a simple extension to the OAuth2 protocol that lets a server include more information about the authenticated user (canonical ID, username, email, etc.).

At the very simplest level, this lets you quickly populate a new user account without making additional requests. However, since these ID tokens are signed, they let you do a whole lot more.

For example, you can pass these tokens around when making API requests in a modern microservice environment – each microservice will be able to securely and independently authenticate the user making the request.
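As a rough illustration of why this works: an ID token is just three encoded segments (header, claims, signature) joined by dots, so any service holding one can read the issuer and subject locally, without a round trip to the identity provider. Here’s a sketch using a mock, unsigned token (real ID tokens use base64url encoding and carry a real signature; plain base64 is used here to keep the sketch simple, and the issuer URL is made up):

```shell
#!/bin/sh
# Build a mock token payload. The claims and issuer are hypothetical examples.
CLAIMS='{"iss":"https://known.example.com","sub":"alice"}'
ENC=$(printf '%s' "$CLAIMS" | base64 | tr -d '\n')
TOKEN="header.$ENC.signature"

# Any microservice receiving the token can extract the middle segment and
# read the claims locally (in practice it would also verify the signature
# against the issuer's public key):
printf '%s' "$TOKEN" | cut -d. -f2 | base64 -d
```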

Known has had OAuth support (client and server) for a while now, but recently I’ve extended both to support OIDC.

The client will validate and use OIDC tokens when authenticating against a server, and the Known OAuth2 server will now generate OIDC tokens for users authenticating against a Known OAuth2 application.

Requesting OIDC from the client

OIDC tokens are not provided automatically, so you need to request them. Do this by adding openid to your list of scopes. I also suggest adding email and profile to your scopes, so you get some genuinely useful information about the authenticating user.
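For example, if you were building the authorisation request by hand, the scope parameter would look something like this (the endpoint path, client ID, and redirect URI here are placeholders; use the values from your own Known application):

```
https://your-known-site.example/oauth2/authorise
    ?response_type=code
    &client_id=YOUR_CLIENT_ID
    &redirect_uri=https://your-app.example/callback
    &scope=openid%20email%20profile
```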

You’ll also need to provide a URL for where to get the issuing server’s public key. This isn’t terribly slick, but I intend to improve it going forward with some nice auto-discovery.

» Visit the project on Github...

Issuing OIDC from the server

All new applications will have the necessary information to start issuing OIDC tokens.

A new key pair will automatically be generated, and you’ll be able to get public key information from:

» Visit the project on Github...