Wednesday, August 21, 2013

Why does the web still run on RC4?

Today marks the beginning of CHES in Santa Barbara, and Adam Langley (@agl__) of Google got things off to a flyer with a cracking invited talk explaining the prevalent usage of the RC4 cipher in TLS for HTTPS sessions, and how the browser and wider Internet community has had to cope with increasingly broken protocols.

RC4?

RC4 is a stream cipher designed in 1987 by Ron Rivest; it was a trade secret of RSA until it was leaked in '94, and is in widespread usage in a variety of commercial cryptosystems.  Without going into too much detail here, essentially RC4 takes a key (128 bits in its TLS incarnation) and uses it to generate pseudo-random keystream bytes that you then XOR with your plaintext.  It's also fast, simple enough to implement, and being a stream cipher doesn't require you to fiddle around with things like modes of operation, IVs or padding.
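If you want to see just how simple, here's a bare-bones sketch in Python (my own illustrative code, not something you'd want to deploy):

    def rc4_keystream(key: bytes, n: int) -> bytes:
        # Key-scheduling algorithm (KSA): build a key-dependent permutation of 0..255.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA): squeeze out n keystream bytes.
        out = bytearray()
        i = j = 0
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    def rc4_encrypt(key: bytes, plaintext: bytes) -> bytes:
        # Encryption and decryption are the same operation: XOR with the keystream.
        ks = rc4_keystream(key, len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, ks))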

The problem with RC4 is that the first few keystream bytes you generate aren't really up to scratch with their pseudo-randomness.  If you encrypt the same plaintext many times under different keys, you'd want a nicely uniform batch of ciphertexts to come out, but what you get isn't quite that: plot the probability of each ciphertext byte value occurring and, instead of a lovely flat line, what you actually see is a lot less smooth.
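You can see the most famous of these biases for yourself with a few lines of Python---here I'm leaning on the ARC4 module from PyCryptodome (or the older PyCrypto; an assumption that one of them is installed) and using the fact that encrypting zeros hands you the raw keystream:

    import os
    from collections import Counter
    from Crypto.Cipher import ARC4  # PyCryptodome/PyCrypto, assumed installed

    trials = 200_000
    counts = Counter()
    for _ in range(trials):
        key = os.urandom(16)                             # fresh 128-bit key every time
        keystream = ARC4.new(key).encrypt(b"\x00" * 4)   # zero plaintext => ciphertext is the keystream
        counts[keystream[1]] += 1                        # tally the second keystream byte

    # A uniform byte would take each value with probability 1/256; the second RC4
    # keystream byte is 0 roughly twice as often (the Mantin-Shamir bias).
    print("Pr[Z_2 = 0] =", counts[0] / trials, "vs uniform", 1 / 256)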

What can you do with the biases?

RC4's biases have been exploited more than once: perhaps most prominently, the WEP protocol was broken by Fluhrer, Mantin and Shamir, who leveraged biases in the first few keystream bytes to recover the long-term key using a fairly small number of encrypted messages.  One way of mitigating this is to get your implementation to drop the first however many bytes of the keystream---how many is sufficient isn't immediately apparent.  From what I can see, bounds range from the first 768 bytes to the first 3072, or even higher.  Either way, not exactly ideal.
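For completeness, the drop-n trick looks something like this (sticking with the ARC4 module from above; the 768 is just one of the figures floating around, not a recommendation):

    import os
    from Crypto.Cipher import ARC4  # PyCryptodome/PyCrypto, assumed installed

    def rc4_drop_encrypt(key: bytes, plaintext: bytes, drop: int = 768) -> bytes:
        cipher = ARC4.new(key)
        cipher.encrypt(b"\x00" * drop)    # burn the first `drop` keystream bytes
        return cipher.encrypt(plaintext)  # then encrypt as normal

    ct = rc4_drop_encrypt(os.urandom(16), b"attack at dawn")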

If TLS is the current whipping boy of the security community, RC4 is the latest in a fairly long line of sticks (said sticks being BEAST, CRIME, TIME, Lucky 13) it's been hit with.  Just last week Nadhem AlFardan, Dan Bernstein, Kenny Paterson, Bertram Poettering and Jacob Schuldt presented an attack on TLS (and WPA-TKIP) using RC4 at USENIX Security.  In brief, and probably discarding a lot of important detail, their attacks exploit single and double byte biases in the first 256 bytes of the RC4 keystream.  To observe these biases you need to see many encryptions of the same plaintext, and unfortunately in the context of TLS our browsers will obligingly resend things like cookies at the beginning of every new TLS session.  The number of sessions required to begin detecting these biases is given by the authors as somewhere between 2^24 and 2^30, so whilst RC4 in TLS isn't exactly doomed, it's pretty clear that it's rapidly nearing the edge of the cliff.
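Stripped of all the careful statistics in the paper, the single-byte recovery boils down to a likelihood vote: collect the ciphertext byte at some position from millions of sessions, then ask which plaintext byte best explains the counts given the (separately measured) keystream distribution.  Something along these lines---my own names and simplifications, not the authors' code:

    import math
    from collections import Counter

    def recover_byte(ciphertext_bytes, keystream_dist):
        """Guess the plaintext byte at one position from many sessions' ciphertexts.

        ciphertext_bytes : the ciphertext byte at this position, one per session
        keystream_dist   : keystream_dist[k] = Pr[Z_i = k], estimated offline
                           (and smoothed so no entry is zero)
        """
        counts = Counter(ciphertext_bytes)
        best, best_score = None, float("-inf")
        for mu in range(256):  # candidate plaintext byte
            # If the plaintext byte is mu, seeing ciphertext c means the keystream
            # byte was c ^ mu, so score mu by the log-likelihood of the observations.
            score = sum(n * math.log(keystream_dist[c ^ mu]) for c, n in counts.items())
            if score > best_score:
                best, best_score = mu, score
        return best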

That's not exactly the end of the story either---for side-channel folk like me, RC4 is a particularly nice symmetric algorithm to come across when trying to attack web applications by observing their TLS traffic.  Recent results in this area almost always exploit the fact that plaintext lengths leak through into packet sizes, for example allowing for message recovery on auto-complete text boxes such as the Google Suggest search query field.  Whereas a block cipher destroys a bit of the leaked information by padding each packet out to a multiple of the block size, RC4 (like any stream cipher) preserves the plaintext length exactly for the attacker.  If you're trying to recover a search query based on the lengths of the suggestions returned to you, being able to count the exact number of characters is... helpful.
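A toy illustration of the difference (ignoring MACs, headers and the rest of the TLS record overhead): with a stream cipher the ciphertext tracks the plaintext length byte-for-byte, whereas CBC rounds it up to the block size.

    def stream_ct_len(pt_len: int) -> int:
        return pt_len                            # stream cipher: length preserved exactly

    def cbc_ct_len(pt_len: int, block: int = 16) -> int:
        return (pt_len // block + 1) * block     # PKCS#7-style padding always adds at least one byte

    for query in (b"che", b"chel", b"chelsea"):
        print(len(query), stream_ct_len(len(query)), cbc_ct_len(len(query)))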

A recent history of TLS

The obvious question is: given all these attacks, why are we still using RC4?  The first thing to mention is that we're actually using it in a big way: Adam's statistics put the RC4 ciphersuites (e.g. RC4-SHA1, ECDHE-RC4-SHA) at a combined 50% share of Internet traffic.  To understand why this is the case, we need to go on a bit of a history lesson.  There are a bazillion different ciphersuite options in the various flavours of SSL/TLS, ranging from the 'woah there' (RC2, DES) to the very modern (AES-GCM).  RC4 is one of them, and disregarding other factors, given its relative speed and ease of deployment it's understandable to see it being used.

The recent spate of TLS vulnerabilities has created a bit of a ping-pong game between ciphersuites: in 2011 Duong and Rizzo unleashed the BEAST, which exploited vulnerabilities in CBC mode.  BEAST is already mitigated in TLS 1.1 and 1.2, but as the usual refrain goes, it's rare to encounter a browser/server combo that can support a 1.1 or 1.2 connection.  In the absence of an immediate solution, the best option for a worried website owner was to simply drop usage of CBC mode and switch to the best alternative---RC4.

Along comes 2013 and we hit a new snag---AlFardan et al.'s attack on TLS exploiting RC4 biases.  This really does put us between a rock and a hard place: the two most commonly supported TLS 1.0 ciphersuite options are either RC4 or CBC mode based.  That means only those webserver owners savvy enough to keep their SSL libraries sufficiently up to date to switch to something like AES-GCM can say they're in the clear; everyone else has to decide whether they're more scared of the BEAST or the why-doesn't-this-have-a-snappy-name RC4 attack.
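If you are lucky enough to have a modern stack at both ends, steering away from both evils is a one-liner; here's what it looks like from Python's ssl module (the cipher string and protocol support depend on the underlying OpenSSL build, so treat this as a sketch):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)      # needs a TLS 1.2-capable OpenSSL
    ctx.set_ciphers("ECDHE+AESGCM:!RC4:!aNULL")     # prefer AES-GCM suites, refuse RC4 outright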

So what now?

This is where Adam introduced his 'three steps for getting your crypto stuff deployed in practice':
  1. Take your solution, and make sure you have stable high-speed, public domain implementations of it in as many products as possible. 
  2. Wait some, the longer the better.
  3. Go break the old stuff you want to replace.
 Adam suggests that right now we're going through this with TLS: RC4 and CBC are now broken, and the question is really what we're going to replace them with.  AES-GCM seems like a natural candidate: it's an AEAD block cipher mode that has pretty much got steps 1 & 2 covered, and comes with some handy hardware support from Intel.
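From the application programmer's side the switch is pretty painless---with the Python cryptography package (an assumption on my part, and nothing to do with NSS itself), AES-GCM encryption is a few lines:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumed installed

    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)            # 96-bit nonce; must never repeat under the same key
    aesgcm = AESGCM(key)
    ct = aesgcm.encrypt(nonce, b"hello, AEAD world", b"associated data")
    assert aesgcm.decrypt(nonce, ct, b"associated data") == b"hello, AEAD world"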

At this point Adam took us through his patch to NSS containing constant-time code for doing MAC and padding checks in TLS (to mitigate the Lucky 13 attack).  I won't go into detail here, but suffice it to say that constant-time programming looks Hard with a capital H; as with a lot of crypto code, you often have to fight against the processor's or compiler's natural instincts to achieve your security goal.  Just looking at it for a few minutes gave me a headache.
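The patch itself is C deep inside NSS, but the flavour of the problem survives even in a toy Python sketch: the naive comparison below bails out at the first mismatch, and it's exactly that kind of data-dependent early exit (among many subtler things) that a Lucky 13-style attacker feeds on.

    import hmac

    def naive_equal(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:        # early exit: running time depends on where the mismatch is
                return False
        return True

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y     # accumulate differences; the loop always runs to the end
        return diff == 0

    # In real code you'd use the library primitive rather than rolling your own,
    # and even then Python gives no hard timing guarantees---hence the C patch.
    assert hmac.compare_digest(b"tag", b"tag")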

Adam then took issue with GCM: constant-time software implementations of GHASH (required in some circumstances, typically when you don't have the AES-NI / PCLMULQDQ extensions available) are very difficult to get right, and he suggested the possibly controversial alternative of using the ChaCha or Salsa20 ciphers with some form of MAC.
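For the curious, a ChaCha20-plus-Poly1305 combination through the Python cryptography package (again an assumption of mine, and not the exact construction Adam was proposing) looks like this:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305  # assumed installed

    key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)
    chacha = ChaCha20Poly1305(key)
    ct = chacha.encrypt(nonce, b"no table lookups or carry-less multiplies required", b"")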

All in all it was an interesting view on how the infrastructure built up around probably the most commonly used cryptosystem has to cope with backwards compatibility and solutions engineering in the face of advances in cryptanalysis.  If you're into analogies: the current state of play, in which some people are starting (legitimate) fires, some are rushing around putting them out, and a large number of people are blissfully unaware that they're either being burned or in danger of being burned, isn't ideal.  Whilst there is certainly scope for improvement, there is some solid progress in this area: for example we have solutions like certificate pinning, HSTS, and enforced minimum public key sizes---no more 512-bit RSA keys!
