Yesterday, I wrote a post outlining a draft specification for a possible way to handle login on a distributed social network, together with a reference implementation for Known.

I got some really positive feedback, including someone pointing out a potential replay vulnerability with the protocol as it stands.

I admit I had overlooked replay as an attack vector (oops!), but since peer review is exactly why open standards are more secure than proprietary ones, I thought I’d kick off the discussion now!

The replay problem

Alice wants to see something that Bob has written, so she logs in according to the protocol. However, Eve is listening to the exchange and records the login. Later, she sends the same data back to Bob. Bob sees the signature, sees that it is valid, and logs Eve in as Alice.

Worse, Eve could send the same packet of data to Clare’s and David’s sites as well, all without needing access to Alice’s key.

Eve does need to be able to intercept Alice’s login session, which is largely impractical if HTTPS has been deployed, but since HTTPS can’t always be counted on, I’d like to improve the protocol.

Countermeasures

Largely, countermeasures to a replay attack take the form of creating the signature over something non-repeatable and algorithmically verifiable that Alice can generate and Bob can check.

This may be some sort of algorithmically generated hash, a timestamp, or even just a random number; alternatively, the server could simply record whether it has seen a specific signature before.

My specific implementation has an additional wrinkle in that it has to function over a distributed network, in which the nodes don’t necessarily talk to one another (so we can’t simply check whether a signature or random number has been seen before: Bob might have seen it, but Clare and David won’t have).

I also want to avoid adding too much complexity, so I’d like to avoid, if I can, any sort of multi-stage handshaking; for example, hitting an endpoint on the server to obtain a random session ID, then signing that and sending it back. Basically, I’d still like to be able to talk to a server using Unix command line tools (gpg and curl) if I can!

Proposed revision

Currently, when Alice logs in to Bob’s site, Alice signs her profile URL using her key and sends it to Bob. Bob then uses this profile URL to verify that Alice is someone with access to Bob’s site/post, and then uses the signature to verify that it is indeed Alice who’s attempting to log in.
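For context, here is a minimal sketch of what that looks like on Bob’s side, assuming PHP’s PECL gnupg extension and some invented helper functions; the actual Known plugin may well differ:

    // Sketch only: Alice has clearsigned her profile URL and POSTed it as $signed.
    $gpg  = new gnupg();
    $info = $gpg->verify($signed, false, $plaintext);   // $plaintext receives the signed text

    if ($info !== false) {
        $profile_url = trim($plaintext);
        // Hypothetical helpers: find the user who claims this profile URL, and check
        // that the signing key fingerprint matches the key stored against them.
        $user = find_user_by_profile_url($profile_url);
        if ($user && $info[0]['fingerprint'] === $user->key_fingerprint) {
            log_in($user);
        }
    }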

What I propose is that, in addition to forming the signature over Alice’s profile URL, she also forms it over the URL of the page she is trying to see and the current time in GMT.

Including the requested URL in the signature allows Bob to verify that the request is for access on his site. If Eve sent this packet to Clare or David, it could easily be discarded as invalid.

Adding the timestamp allows Bob to check that this isn’t an old packet being replayed back. Since any implementation should have a small tolerance (perhaps a few minutes either side) to allow for clock drift, using a timestamp still leaves a small window of attack in which Eve could replay the login. To counter this, Bob’s implementation should remember, for a short while, the timestamps received for Alice, and if the same one is seen twice, invalidate all of Alice’s sessions (a rough sketch of this check follows the questions below).

  • Why invalidate all of Alice’s sessions when we see the same timestamp twice? Can’t we just assume that the second packet is Eve?

    Sadly not – sophisticated attackers are able to attack from a position physically close to you, so Eve’s login may be received first. In the situation where two identical login requests are received, it is probably safer to treat both as invalid.

    Perhaps a sophisticated implementation could delay Alice’s first login for a few seconds (after verifying) to see if any duplicates are received, and only proceed if there are none. This would limit the need to permanently store timestamps against a user’s account, but may be more complex from an implementation point of view.

  • Why use a timestamp rather than a random number?

    I was going back and forth on this… a random number (nonce) would remove the vulnerability window, but it would require Bob’s site to store every number we’ve seen thus far, so I finally opted not to take this approach.
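To make the proposal concrete, here is a rough sketch of the checks Bob might perform once the signature itself has verified. The helper names are invented for illustration and aren’t part of any existing implementation:

    // Sketch: $profile_url, $request_url and $timestamp have been extracted from
    // the verified payload. $site_url is Bob's own base URL.
    $tolerance = 300;   // a few minutes either side, to allow for clock drift

    if (parse_url($request_url, PHP_URL_HOST) !== parse_url($site_url, PHP_URL_HOST)) {
        die('Signed request is not for this site');   // a packet aimed at Clare or David
    }
    if (abs(time() - $timestamp) > $tolerance) {
        die('Stale login request');                   // an old packet being replayed
    }

    $user = find_user_by_profile_url($profile_url);   // hypothetical lookup
    if (timestamp_seen_before($user, $timestamp)) {   // hypothetical short-lived store
        destroy_all_sessions($user);                  // treat both packets as suspect
        die('Duplicate login request');
    }
    remember_timestamp($user, $timestamp);            // only needs keeping for the window
    log_in($user);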

I’d be interested in your thoughts, so please, leave a comment!

6 thoughts on “OpenPGP Login spec: Countering replay attacks”

  1. Marcus Povey is proposing to use PGP/GPG to log into personal websites such as Known.
    Where have I heard this before? 😉 Oh, yes, LID, circa 2005, before OpenID etc.
    Here is what a digitally signed LID request looks like, broken into separate lines for better readability:

    http://example.com
        ?lid=http%3A%2F%2Fmylid.net%2Fjernst
        &lid-credtype=gpg%20--clearsign
        &lid-nonce=2014-05-30T16%3A54%3A57.016Z
        &lid-credential=SHA1%0AVersion%3A+GnuPG+v1.4.11+%28GNU%2FLinux%29%0A%0AiEYEARECAAYFAlOIt%2BEACgkQsIOiz0BhWYZ9MACcCelf5T6XyywOZ5jVq3eyMw9m%0A8C4AoJ6Vz47PKR2%2FEvNqDkv7OWFyHdSU%0A%3DpVzh%0A
    

    where:
    lid: the URL identifying the entity requesting access, e.g. my blog
    lid-credtype: for extensibility, specifies the kind of credential provided
    lid-nonce: a timestamp, to avoid replay attacks (Hi, Marcus!)
    lid-credential: the credential, a digital signature over the request and the nonce, from the gpg output without some of the boilerplate
    Some more info about LID is on the InfoGrid Wiki.
    Do I think this is a good idea? Oh, Yes! Much better than much other stuff that has been bandied about for identity on the internet in the past 9+ years.

  2. Have you thought about collaborating with Johannes Ernst and his circa 2005 LID spec? (Full-disclosure, my blog was the second implementation of a LID consumer). He’s already been through much of the pain you’re going through, and it’d be a chance for me to dust off old code…

  3. Just a quick update to say that I’ve now updated my reference implementation of OpenPGP signin to address the local and cross-domain replay attacks discussed in the post I made on Friday.
    The spec has been modified to require the sign-in signature to include a timestamp and the requested URL. Adding these allows a site to verify that the request is directed at the correct site, and that it is a fresh request.
    Forming the signature
    The signature is formed over a message payload containing:
    • The current date and time in ISO8601 format, as produced by date('c', time())
    • Your profile URL
    • The URL of the resource you’re requesting, which must be on the site you’re requesting it from
    separated by a new line and/or whitespace.
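    By way of illustration, here is a rough sketch of building and signing that payload, assuming the PECL gnupg extension (the plugin itself may do this differently, e.g. by shelling out to gpg); the URLs and key fingerprint are placeholders:

    // Sketch: the message Alice clearsigns and sends to the site.
    $payload = implode("\n", array(
        date('c', time()),                        // current date and time, ISO8601
        'https://alice.example.com/',             // placeholder: Alice's profile URL
        'https://known.example.com/some-post',    // placeholder: the resource requested
    ));

    $gpg = new gnupg();
    $gpg->setsignmode(gnupg::SIG_MODE_CLEAR);     // clearsign so the payload stays readable
    $gpg->addsignkey('ALICE_KEY_FINGERPRINT');    // placeholder key fingerprint
    $signature = $gpg->sign($payload);            // this is what gets POSTed for login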
    Verifying the signature
    Before verifying the signature, the plugin extracts the URLs and the timestamp:

    // Pull out the two URLs: the signer's profile URL comes first, then the requested URL.
    if (preg_match_all("/(https?:\/\/[^\s]+)/", $signature, $matches, PREG_SET_ORDER)) {
        $user_id     = $matches[0][0];
        $request_url = $matches[1][0];
    }

    // Pull out the ISO8601 timestamp and convert it to a Unix timestamp.
    if (preg_match('/([0-9]{4}-?[0-9]{2}-?[0-9]{2}T[0-9]{2}:?[0-9]{2}:?[0-9]{2}[+-Z]?([0-9]{2,4}:?([0-9]{2})?)?)/', $signature, $matches)) {
        $timestamp = strtotime($matches[0]);
    }


    The plugin then checks that $request_url is local and that the timestamp is within 5 seconds of now.
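    Something along these lines, with $site_host standing in for however the plugin knows its own hostname:

    // Sketch: reject requests signed for another site or outside the freshness window.
    if (parse_url($request_url, PHP_URL_HOST) !== $site_host) {
        return false;   // signed for Clare's or David's site, not this one
    }
    if (abs(time() - $timestamp) > 5) {
        return false;   // older (or newer) than the 5 second tolerance
    }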
    If this is all you do, there still exists a short window of time where a replay would work for the same host. If an attacker were able to capture and replay the packet within 5 seconds to the same site, then at the moment they’d still be logged in. Unlikely, but it’s a mistake to underestimate an attacker’s abilities.
    So, my implementation also converts the timestamp/request URL/user triple into a nonce internally, and stores this against a user object for a minute or so. This lets us identify and throw away any packets that we have encountered previously.
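    Roughly like this, with the storage helpers standing in for whatever the plugin actually uses:

    // Sketch: derive a nonce from the timestamp/request URL/user triple and keep it
    // against the user for a minute or so. Helper names are invented.
    $nonce = sha1($timestamp . '|' . $request_url . '|' . $user_id);
    if (nonce_seen($user, $nonce)) {          // hypothetical per-user, short-lived store
        destroy_all_sessions($user);          // a duplicate packet means a replay
        return false;
    }
    remember_nonce($user, $nonce, 60);        // expire after roughly a minute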
    Known issues and further discussion
    Ok, so at the moment there still exists a replay vulnerability: if your attacker is attacking you from an automated node physically near you, they can take advantage of a race condition and get their replay attack in before the legitimate packet makes the full round trip to the server.
    Currently though, the only people we know to have the capability to do this are GCHQ and the NSA, and really the best counter to those guys would be to conduct the whole exchange over HTTPS (which is a good idea anyway).
    That said, an implementation of this protocol should, when encountering a duplicate packet, destroy all sessions belonging to that user.
