Thursday, January 16, 2014

Mid-game Attacks & the nuances of drop-in replacement

On the third day of Real World Crypto 2014, Moti Yung presented a talk entitled "Let's get Real about the real world". The talk began with a look at some key developments in cryptography from the past twenty years and some predictions on the direction of future security research, before settling on its main focus: "mid-game attacks", in which an attacker learns the complete intermediate state of a computation. It concluded with the teasing reveal of a SpongeWrap-like Authenticated Encryption method that I'm sure we'll be hearing more about later in the year.

The first section of the talk provided an interesting selection of events cited by the speaker as important landmarks in the recent history of cryptography for how they modified the attack model, starting with the development of side-channel techniques by Paul Kocher et al. in the late '90s, which demonstrated that implementation details were security-critical. Citing recent trends, including Adi Shamir's comments at RSA 2013, related articles (such as Nigel Smart's essay on this blog) and especially the Snowden revelations, the speaker noted that we are entering a new paradigm, in which we must assume that the attacker is already within our machine.

Having justified why such an attack model is of practical interest, the talk moved on to "Mid-game Attacks". In this game the attacker is allowed to pause the algorithm in the middle of a calculation and read the entire internal state. The attacker's victory condition clearly depends on the situation being modelled, but, for example, an attacker who 'pauses' an Authenticated Encryption algorithm like AES-GCM mid-calculation would learn the key or key schedule from the state, leading to a trivial key-recovery attack.
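
To make the game concrete, here is a minimal toy sketch in Python (entirely my own illustration, not from the talk: the 'cipher' is a deliberately silly keystream generator and the step/pause interface is invented). The point is simply that a keyed computation run step by step carries its key in the intermediate state, so an adversary who can pause it anywhere wins immediately.

```python
import secrets

def toy_encrypt_steps(key, blocks):
    """A toy CTR-style encryption, exposed as a list of per-block steps.

    The 'internal state' deliberately mirrors what a real implementation
    holds in memory: the key material plus the running counter and output.
    """
    state = {"key": key, "counter": 0, "ciphertext": []}

    def step(state):
        # Toy keystream: NOT a real cipher, just enough to keep the state realistic.
        ks = bytes((k + state["counter"]) & 0xFF for k in state["key"])
        block = blocks[state["counter"]]
        state["ciphertext"].append(bytes(b ^ s for b, s in zip(block, ks)))
        state["counter"] += 1
        return state

    return state, [step] * len(blocks)

def midgame_adversary(state):
    # The mid-game adversary reads the *entire* paused state...
    return state["key"]          # ...and the key is simply sitting there.

key = secrets.token_bytes(16)
state, steps = toy_encrypt_steps(key, [secrets.token_bytes(16) for _ in range(4)])
state = steps[0](state)          # adversary pauses after the very first step
assert midgame_adversary(state) == key
```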

This provides a very powerful adversary, so the obvious question is whether any current schemes are secure against it. Well, as described, clearly not: the attacker can simply stop the calculation immediately and read off the inputs. So we should not expect to secure an entire scheme under this notion. However, if we allow a small amount of pre- and post-processing to be done in a safe environment, the security game models a very real threat. There are many scenarios in which this can occur, such as a computer with a limited amount of trusted hardware or an HSM. Another, very relevant to modern developments, is cloud computation, where a user wishes to push the majority of a computation out to an untrusted cloud without sacrificing security. Overall, the formalisation of such a game appears to provide a neat alternative to the more general option of recasting a primitive in the 2-player MPC setting.

Sticking with symmetric crypto, are there any schemes secure in this new setting? The answer is yes: examples include HMAC-SHA1 and HMAC-SHA256. To see this, recall that the HMAC of message m under key k is (simplifying away the padding constants) HMAC(k,m) := h(k, h(k,m)), where h(.) is the appropriate hash function. First, we begin the calculation of h(k,m) on the local (secure) machine, stopping as soon as k has been absorbed into the internal state of the hash function. Passing this state and the message m to the cloud (insecure) machine, the majority of the calculation of y = h(k,m) can then be pushed out, simply by resuming the calculation. Finally, we calculate HMAC(k,m) = h(k,y) locally. Since h is a hash function, and thus pre-image resistant, no matter when the attacker reads the state of the cloud machine he will be unable to recover the key.
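
As a rough sketch of this split (using Python's hashlib; note it uses the full ipad/opad form of HMAC rather than the simplified h(k, h(k,m)) above, and hashlib's .copy() merely stands in for shipping the partially absorbed hash state to another machine, since hashlib has no portable state-export API):

```python
import hashlib, hmac

BLOCK_SIZE = 64  # SHA-256 processes 64-byte blocks

def hmac_sha256_outsourced(key, message, cloud):
    """HMAC-SHA256 with the bulk of the hashing pushed to an untrusted party.

    Only the two key-dependent blocks are hashed locally; the (possibly huge)
    message is absorbed by `cloud`, starting from a state that already has the
    key absorbed. Reading that state does not reveal the key, since recovering
    it would mean inverting the compression function.
    """
    assert len(key) <= BLOCK_SIZE
    key = key.ljust(BLOCK_SIZE, b"\x00")
    ipad = bytes(k ^ 0x36 for k in key)
    opad = bytes(k ^ 0x5C for k in key)

    inner = hashlib.sha256(ipad)              # local: absorb the inner key block
    y = cloud(inner.copy(), message)          # cloud: resume and hash the message
    return hashlib.sha256(opad + y).digest()  # local: one short outer hash

def honest_cloud(state, message):
    state.update(message)
    return state.digest()

# The result matches a locally computed HMAC-SHA256.
key, msg = b"a secret key", b"a very long message " * 100000
assert hmac_sha256_outsourced(key, msg, honest_cloud) == \
       hmac.new(key, msg, hashlib.sha256).digest()
```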

A reader may notice that I did not list HMAC-SHA3 as an example. To be blunt, this is because it isn't one. The 'proof' given above glosses over a very important detail: when the attacker reads the state, he is also provided with the internal state of the hash function. As such, it is not sufficient for the hash function to be one-way; its internals must be too.
This isn't a problem for hash functions built with the Merkle–Damgård construction (like both of those given above), since the compression function they iterate is itself believed to be one-way. However, SHA-3 is different. As a sponge construction, its security depends critically on the assumption that part of the internal state (known as the 'capacity') remains hidden. If this is leaked then, because the underlying permutation is invertible, the construction can be run backwards (at least until an unknown input is reached). The same observation can be made of similar constructions such as the SpongeWrap authenticated encryption mode.
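
To see the reversibility concretely, here is a toy sponge used as a naive keyed MAC (the permutation is a tiny made-up ARX mix, nothing like Keccak, and the whole construction is just an illustration). A single mid-game snapshot of the full state lets the attacker roll the permutation back through the known message blocks, recover the key-dependent state, and then forge a tag for any message:

```python
MASK = 0xFFFFFFFF
rotl = lambda x, n: ((x << n) | (x >> (32 - n))) & MASK
rotr = lambda x, n: ((x >> n) | (x << (32 - n))) & MASK

def f(s):
    """Toy 4-word permutation (invertible ARX mixing, NOT Keccak)."""
    a, b, c, d = s
    for _ in range(8):
        a = (a + b) & MASK; d = rotl(d ^ a, 16)
        c = (c + d) & MASK; b = rotl(b ^ c, 12)
    return [a, b, c, d]

def f_inv(s):
    """Exact inverse of f: undo each operation in reverse order."""
    a, b, c, d = s
    for _ in range(8):
        b = rotr(b, 12) ^ c; c = (c - d) & MASK
        d = rotr(d, 16) ^ a; a = (a - b) & MASK
    return [a, b, c, d]

def absorb(state, word):
    """Rate = 1 word (state[0]); capacity = 3 'hidden' words."""
    return f([state[0] ^ word] + state[1:])

def toy_mac(key_word, msg_words):
    state = absorb([0, 0, 0, 0], key_word)
    for w in msg_words:
        state = absorb(state, w)
    return state[0]              # squeeze one rate word as the tag

# Honest party MACs a public message under a secret key...
key, msg = 0xC0FFEE11, [0x11111111, 0x22222222, 0x33333333]
state = absorb([0, 0, 0, 0], key)
for w in msg:
    state = absorb(state, w)

# ...but a mid-game adversary snapshots the FULL state (rate AND capacity)
# and, because f is invertible, walks it back through the known message:
snapshot = state
for w in reversed(msg):
    snapshot = f_inv(snapshot)
    snapshot[0] ^= w

# snapshot now equals the key-dependent state, which is as good as the key:
forged_msg = [0xDEADBEEF, 0x0BADF00D]
forged_state = snapshot
for w in forged_msg:
    forged_state = absorb(forged_state, w)
assert forged_state[0] == toy_mac(key, forged_msg)   # valid forgery
```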

A solution for this apparent weakness of SpongeWrap was presented: XOR the previous capacity into the new one, i.e. (s_r, s_c) ← f(s_r, s_c) ⊕ (0…0 ∥ s_c). The effect of this is to make each iteration one-way. The reason this is not done in the 'standard' Keccak constructions is simply that it leads to an increase in implementation size without strengthening the construction towards its stated objectives.
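
Continuing the toy sponge from above (again purely illustrative, not real Keccak code), the fix just folds the old capacity back in after the permutation, so undoing a step would require already knowing the very capacity value the attacker is trying to recover:

```python
def absorb_feedforward(state, word):
    """Absorption step with the capacity fed forward:
    (s_r, s_c) <- f(s_r ^ word, s_c) XOR (0...0 || s_c)."""
    old_capacity = state[1:]
    new_state = f([state[0] ^ word] + state[1:])
    # XOR the previous capacity into the new one: f alone is invertible,
    # but this extra feed-forward makes the combined step one-way.
    return [new_state[0]] + [n ^ o for n, o in zip(new_state[1:], old_capacity)]
```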

So, what are the key things to take away from this talk? The attack model is interesting, and the new Authenticated Encryption method presented may turn out to be very significant over time. However, in terms of a workshop on Real World Crypto, perhaps the most important lesson is how careful we should be with primitives. When changing design paradigms, we must be wary of applications relying on 'accidental features': properties that were standard across all previous designs but fall outside the definition of the primitive. If a new primitive is presented as a drop-in replacement for another, it must be just that; if not, it should be made clear to developers that some properties they may have expected will not be provided.
