Those of you who have done any amount of web application programming should all be familiar with the concept of a Session, but for everyone else, a Session is a way of preserving state between each browser page load in the inherently stateless web environment.

This is required for, among other things, implementing the concept of a logged in user!

In PHP, this is typically implemented by generating a session ID to identify a user’s browser and saving it in a client side cookie. This cookie is then presented to the server alongside every page request, and the server uses this ID to pull the current session information from the server side session store and populate the $_SESSION variable.
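
To illustrate that default behaviour (this is stock PHP, not code from this project):

// Start (or resume) the session. On the first request PHP generates a
// session ID and sends it to the browser as a cookie (PHPSESSID by default).
session_start();

// Anything written to $_SESSION is persisted by the session handler.
$_SESSION['user_id'] = 42;

// On a later request, session_start() reads the cookie back, loads the
// stored data from the session store, and repopulates $_SESSION.
echo $_SESSION['user_id']; // 42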

This session data must, obviously, be stored somewhere, and typically this is on the server, either in a regular file (default in PHP) or in a database. This is pretty secure, and works well in traditional single web server environments. However, a server side session store presents problems when operating in modern load balanced environments, like Amazon Web Services, where requests for a web page are handled by a pool of servers.

The reason is simple… in a load balanced environment, a request for the first page may be handled by one server, but the second page request may be handled by an entirely different server. Obviously, if the sessions are stored locally on one server, then that information will not be available to other servers in the pool.

There are a number of different solutions to this problem. One typical solution is to store sessions in a location which is available to all machines in the pool, either in a shared file store or in a shared database. Another technique commonly used is to make the load balancer “session aware”, and configure it to direct all requests from a given client to the same physical machine for the duration of the session, but this may limit how well the load balancer can perform.

What about storing sessions on the client?

Perhaps a better way to handle this problem would be to store session data on the client browser itself. That way the browser could send the session data along with every request, and it wouldn’t matter which server in the pool handled it.

Obviously, this presents some issues, the first of which is security. If session data is stored on the client, then it could conceivably be edited by the user. If the session contains a user ID, and this isn’t protected, then this could let a user pretend to be another. Any data stored on the client browser must therefore be protected, typically by the use of strong encryption.

So, to begin with we need to replace the default PHP session handler with one of our own, which will handle the encryption and cookie management. The full code can be found on GitHub, via the link below, but the main bits of the code to pay attention to are saving the session:

// Encrypt session
$encrypted_data = base64_encode(mcrypt_encrypt(MCRYPT_BLOWFISH, self::$key, $session_data, MCRYPT_MODE_CBC, self::$iv));

// Save in cookie using cookie defaults
setcookie($session_id, $encrypted_data);

Followed by reading the session back in:

return mcrypt_decrypt(MCRYPT_BLOWFISH, self::$key, base64_decode($_COOKIE[$session_id]), MCRYPT_MODE_CBC, self::$iv);
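
For context, a handler like this is wired in with PHP’s session_set_save_handler() before session_start() is called. The class and method names below are purely illustrative – the real ones are in the GitHub project:

// Illustrative registration only – see the project on GitHub for the real class.
session_set_save_handler(
    array('EncryptedCookieSession', 'open'),
    array('EncryptedCookieSession', 'close'),
    array('EncryptedCookieSession', 'read'),
    array('EncryptedCookieSession', 'write'),
    array('EncryptedCookieSession', 'destroy'),
    array('EncryptedCookieSession', 'gc')
);

session_start();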

If things are working correctly, you should see something similar to this:

[Screenshot: Cookie Sessions]

Limitations

There are some limitations to this technique that you should be aware of before trying to use it.

Firstly, the limit on the total size of all cookies varies from browser to browser, but is around 4K. Remember, this is the total size of all cookies, not just the encrypted session, so your application should only store the bare minimum in the session.

Secondly, and this relates somewhat to the previous point, since the session is sent in the HTTP header on every request, you could eat up bandwidth if you start storing lots in the session. Another reason to store the absolute minimum!

Thirdly, session saving is done using a cookie, so you must be finished with the session, and have called session_write_close() explicitly, before you send any non-header output to the browser.
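
In practice, the ordering looks something like this:

session_start();

// Use the session while it is still safe to send headers...
$_SESSION['user_id'] = 42;

// ...flush the session (and therefore the cookie) before any output...
session_write_close();

// ...and only now start sending the page body.
echo '<html>...</html>';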

Have fun!

» Visit the project on GitHub…

9 thoughts on “Encrypted client side PHP sessions”

  1. So I have questions 😉

    What happens if you blast through your 4K limit?

    Storing it all in the cookie means that every request to the web server has an extra 4K payload. Perhaps there’s some way to prevent this happening for the pages that don’t really need it, even for people who don’t use separate domains / servers for their static assets?

    LocalStorage? I know it’s not supported in older browsers, but it *is* supported very widely – so perhaps you could have a database shim for those older browsers instead?

    Is it easy to kill the session from the server, so that the cookie becomes invalid? Would you change the key, or ..?

  2. Regardless of the 4K limit, you’re pushing more data back to the server per request. This will most likely require more than one packet to handle the request, and as most connections have extremely low upload rates it’s also going to take longer. And then you’re adding the encryption layer. So it will most likely result in a slower page load.

  3. Replying to you both, yes, you’re pushing more data with each page request. But like all things, it’s a compromise. Whether it’s a good compromise really depends on where your bottlenecks are and what kind of environment you’re in.

    In an AWS EC2 style environment, where you have 1-N web server instances behind a load balancer, having session information passed with the request has some significant advantages. If you’re only ever going to be talking to one server, file or memcache storage should be all you need. I still think storing sessions in the database backend is a Bad Thing for a number of reasons, not least of which is that it means you have to ALWAYS initialise a database connection for every page load, rather than only when requesting data, and you are ALWAYS writing to a table and invalidating everyone’s keys.

    Regarding the 4K limit, that is, in practice, a lot of space if you don’t get greedy and make some good decisions – so, user IDs rather than the full user object, for example (a user ID is sufficient to tell whether a user is logged in or out, and for anything more you’ll have to pull more than just the user object anyway, so storing the object in the session presents very few speed advantages). The specific problem I have in my domain is with storing error/success messages for display after the user has done something (created a new object in the system, for example), and for that I’m thinking it’s ok to write to a database (since those messages are not going to be needed on every page load), and that the session only contains a flag saying that there are messages pending (see the sketch at the end of this reply).

    Regarding encryption, symmetric encryption is actually pretty damn fast in practice. Typically the slow portion of modern encryption, if you’re thinking HTTPS etc, is the PK session key exchange handshake. Once that’s established, the symmetric key tunnel is actually really fast.

    I’ve not actually profiled *this* code, but I doubt the encryption is where the bottleneck is (more likely it’ll be the old Apache output buffer problem that’ll cause speed issues).

    @Ben: Re, local storage… yes, in the future that’d be great. I’m not saying this is the best or only solution, but cookies are at least more widely supported and known. Turns out Rails uses this technique also, so it’s not just me going off on one 🙂

    And invalidating sessions is just a matter of changing the encryption key. You could even get clever and mix a user specific IV into that and invalidate specific users, but that’s not covered by this simplistic example.
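
    To make that concrete, here’s roughly what I mean by a minimal session (the helper name is hypothetical):

    // Keep the session tiny: an ID and a flag, never whole objects.
    $_SESSION['user_id'] = $user_id;          // enough to know who is logged in
    $_SESSION['messages_pending'] = true;     // the messages themselves live in the database

    // Only hit the database when the flag says there is something to fetch.
    if (!empty($_SESSION['messages_pending'])) {
        // $messages = fetch_pending_messages($user_id); // hypothetical helper
        unset($_SESSION['messages_pending']);
    }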

  4. I toyed with this idea on paper: “cookieless stateless sessions,” but the methodology would be a little different.

    Infrastructure:
    The idea would be that each server would have a PGP keyring containing the other servers’ public keys, or potentially a centralized key server setup where servers could look up the other servers’ public keys. Each web head would generate a new keypair at some interval and revoke its old one. This also gives you the flexibility to revoke a compromised keypair fairly easily. In place of running a centralized key server and, therefore, creating a single point of failure, a distributed key server could be run amongst the web servers themselves, but I am not currently familiar with any project that provides a decentralized key server.

    Data Size:
    As long as the amount of data stored is small, then added bandwidth should not become an issue.

    Session Expiration: To enforce session expiration, a timestamp or nonce can be added to the encrypted payload (see the sketch at the end of this comment).

    Encryption: The payload is encrypted with the public keys of all the web servers. The beauty here is that if server 1 initially encrypts and signs the payload, then servers 2-N can easily decrypt it, since it is encrypted with their public keys, and the signature provides message integrity and verification that a trusted web server generated the message.
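
    For example, expiry could be enforced with a timestamp carried inside the encrypted payload, along these lines (a plain PHP sketch, independent of the PGP machinery above):

    // When writing: bundle an expiry time in with the session data before encrypting.
    $payload = json_encode(array(
        'expires' => time() + 3600,   // one hour lifetime
        'data'    => $session_data,
    ));

    // When reading: after decrypting, reject anything past its expiry.
    $decoded = json_decode($decrypted_payload, true);
    if ($decoded === null || $decoded['expires'] < time()) {
        $decoded = null; // expired (or tampered with) – treat as no session
    }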

  5. 1. Encryption is not authentication. You should encrypt, then HMAC the ciphertext (and make damn sure you verify the HMAC in constant time, like the hash_equals() function coming in PHP 5.6); there’s a sketch of this below.

    2. This code does not offer forward secrecy. The same key is used every time.
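
    A rough sketch of what that looks like on top of the cookie value, assuming a separate $hmac_key:

    // Writing: MAC the ciphertext with a separate key and store both in the cookie.
    $mac = hash_hmac('sha256', $encrypted_data, $hmac_key);
    setcookie($session_id, $mac . '|' . $encrypted_data);

    // Reading: verify the MAC in constant time *before* attempting to decrypt.
    list($mac, $ciphertext) = explode('|', $_COOKIE[$session_id], 2);
    if (!hash_equals(hash_hmac('sha256', $ciphertext, $hmac_key), $mac)) {
        // Authentication failure: discard the cookie, do not decrypt.
    }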

  6. Absolutely correct!

    To be clear to everyone, this is a proof of concept, not production code. There is much you need to do before this can be used in production.

  7. I was trying to solve the opposite problem – “how to handle session state across multiple servers in multiple data centers”… and it turned out to be a huge PITA, actually.

    Inside a single data center it’s easy: just use a common memcache, Redis or RDBMS to store session data, or rsync session files to some common location. But when we have multiple data centers, then what? We can’t really reliably sync session state across the globe into geographically dispersed data centers – it just doesn’t scale…

    Then I was like, “wait a minute – how do Google, Twitter and the other big players store session state data?”
    So I began investigating their session handling:
    https://www.google.com/policies/technologies/types/,
    http://seclists.org/fulldisclosure/2010/Apr/430,
    http://programmers.stackexchange.com/questions/146692/why-do-popular-websites-store-very-complicated-session-related-data-in-cookies

    And it became clear: they store session data in cookies! Because there is really no other way to handle session state at scale!

  8. Sessions are, in many bits of web software, used to store the details of the currently logged in user, and to provide the concept of a user being logged in and logged out. Known is no exception in this.
    These sessions need to be stored, in some way, on the server. Usually this is either stored in files, or in a database.
    Known currently has the ability to store sessions in the currently enabled database engine (MySQL, Mongo, etc.), or in files. However, for various reasons (e.g. you want to use a different database engine to store sessions, or you want to implement a fancy session store), it would be useful to decouple the sessions from the database.
    This pull request adds the ability to specify, in your config.ini, a handler class implementing the Idno\Common\SessionStorageInterface interface.
    This new class can then implement a session handler which uses a different storage engine, other than just files and the current database.

