Leakage, Leakage, Leakage ...
Once more we looked into recent papers attempting to deal with leakage in cryptographic schemes; this time we focused on public key cryptography: in our last study group we examined signature and encryption schemes in the bounded leakage and continuous leakage models.
After last week's session on definitions and models, and 'fundamentals' such as defining LR-OWFs via SPRs (handy to know, as signature schemes can be derived from OWFs), we started off with a paper by Boyle et al. (EC 2011) that gives "fully leakage resilient signatures" in the bounded leakage model and then goes on to show how to adapt the construction to the continuous leakage model. Let's first look more closely at what the title implies/promises.
Fully LR in this context alludes to the idea that an adversary can query a leakage oracle not only on the secret key, but on the entire secret state, including any randomness used during signature creation. Further, the scheme can tolerate 'a lot of leakage'. This is in contrast to some previous work (referenced in the paper) which only allowed 'not too much' leakage about the secret key. Clearly this is a relevant improvement, as it is well known that for a wide range of practical signature schemes it is leakage about the ephemeral keys that brings those systems down: articles such as Nguyen and Shparlinski, "The Insecurity of the Elliptic Curve Digital Signature Algorithm with Partially Known Nonces" (DESI 30(2), 2003) (relevant, though not mentioned in the paper) show that knowing a handful (quite literally!) of bits of a couple of ephemeral keys allows one to easily recover the secret signing key ...
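To see why ephemeral key leakage is so devastating, here is a minimal sketch of the extreme case, with tiny made-up DSA-style parameters (this is not ECDSA and nothing like the lattice techniques of Nguyen and Shparlinski, which work from partial bits): once a single nonce is known in full, the long-term key falls out of the signature equation by simple algebra.

```python
# Toy DSA-style sketch (made-up tiny parameters; obviously not secure)
# showing the extreme case: knowing ONE ephemeral key k in full
# reveals the long-term signing key.
p, q, g = 607, 101, 64        # g generates the order-q subgroup of Z_p^*
x = 57                        # long-term secret signing key
h = 30                        # toy 'hash' of the message
k = 77                        # ephemeral key ('nonce')

# Sign: (r, s) with r = (g^k mod p) mod q and s = k^{-1} * (h + x*r) mod q
r = pow(g, k, p) % q
s = (pow(k, -1, q) * (h + x * r)) % q

# An attacker who learns k recovers x from the public signature alone:
x_recovered = ((s * k - h) * pow(r, -1, q)) % q
assert x_recovered == x       # x = 57 falls right out
```

With only partial bits of several nonces the algebra is no longer this trivial, which is exactly where the lattice-based attacks cited above come in.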
Well, what else does this new construction give us: no random oracles used in the proof, no OCLI, and the authors argue that there are even some instantiations which one could argue are theoretically efficient. A side remark mid-way through the paper, though, reveals some previously unmentioned restrictions on the leakage: when it comes down to the actual proof, no leakage is assumed for the randomness used in key generation or key refreshing (in the continuous leakage case). Especially the latter assumption is rather undesirable for practice ...
Conclusion: a construction that does all we want, in a model which is almost good enough for practice. (Quite some praise, as I look at it through the lens of a practitioner.)
The second paper we looked at was by Kiltz and Pietrzak (AC 2010), entitled 'Leakage Resilient ElGamal Encryption'. The authors took a refreshing approach: rather than cooking up a scheme from scratch, they had a (brief?) look at what practitioners already do to secure ElGamal-type schemes in practice and attempted to prove some properties of the well-known defenses in use. So what do practitioners do? Assume a key pair (secret key, public key) = (x, X=g^x). Then encryption proceeds something like this: the message (represented as a group element) is blinded with an ephemeral key y which is transmitted via DH, hence the ciphertext is the pair (c1, c2) = (g^y, m*X^y). Decryption recovers m by computing c2/c1^x. In practice an attacker will most likely attempt to reveal x by targeting the exponentiation c1^x, as c1 can be observed/chosen and the exponentiation routine will most likely use x in very small chunks. To prevent such attacks practitioners use a technique called blinding (typically applied to both exponent and message).
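The textbook scheme just described can be sketched in a few lines (toy parameters, chosen only so the arithmetic is easy to follow; any real implementation would of course use a proper group):

```python
# A toy sketch of ElGamal encryption as described above
# (tiny illustrative parameters -- obviously not secure).
import secrets

p, q, g = 607, 101, 64            # g generates the order-q subgroup of Z_p^*

x = secrets.randbelow(q - 1) + 1  # secret key x
X = pow(g, x, p)                  # public key X = g^x

def encrypt(m, X):
    """(c1, c2) = (g^y, m * X^y) for a fresh ephemeral key y."""
    y = secrets.randbelow(q - 1) + 1
    return pow(g, y, p), (m * pow(X, y, p)) % p

def decrypt(c1, c2, x):
    """m = c2 / c1^x; the exponentiation c1^x is what an attacker targets."""
    return (c2 * pow(pow(c1, x, p), -1, p)) % p

c1, c2 = encrypt(123, X)
assert decrypt(c1, c2, x) == 123  # round-trips
```

Note that `decrypt` uses x directly in a single exponentiation with an attacker-visible base c1, which is precisely the operation the blinding countermeasures below are meant to protect.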
The reason why I point this fact out explicitly is that in this article the authors explain that 'Even though side-channel leakage may contain lots of data, only a small fraction can actually be exploited in each measurement.' Wrong. I'll try to clarify what actually happens: each point (= sample data point; a collection of data points gives a 'trace', which is the actual observation/measurement) contains information about the internal state of the device at that particular point in time. If only a small part of the secret has 'contributed' to the leakage at this point, then only this small part of the secret leaks at this point (in time). As one trace consists of many points (remember, we observe 'lots of data'), there is 'lots of leakage' contained within one trace. Implementations of exponentiations in particular will produce distinct 'patterns' (i.e. have different leakage characteristics corresponding to 0 and 1 bits in the secret) that will almost certainly allow one to extract the entire secret from one single observation. This is a key point which is often overlooked by the theory community (who assume that not the entire secret leaks given one query to the leakage oracle). It implies that without countermeasures (including blinding) along the lines developed mainly within the CHES community, you don't even need to start considering LR schemes developed by theorists: the basic assumption that not the entire secret leaks in one observation actually requires one to do something; it is not 'naturally' guaranteed.
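To illustrate why a single trace of an unprotected exponentiation can leak the whole secret, here is a toy simulation. The 'leakage' is entirely hypothetical: we simply record the operation sequence of square-and-multiply, which is roughly what simple power analysis reads off a real trace when squarings and multiplications have distinguishable patterns.

```python
# Toy simulation: naive square-and-multiply performs a different operation
# sequence for 0-bits (square only) and 1-bits (square, then multiply).
# If each operation is distinguishable in the trace, ONE trace leaks every bit.
def exponentiate_with_trace(base, exp, mod):
    """Left-to-right square-and-multiply; records one symbol per operation."""
    result, trace = 1, []
    for bit in bin(exp)[2:]:
        result = (result * result) % mod
        trace.append('S')                 # square: happens for every bit
        if bit == '1':
            result = (result * base) % mod
            trace.append('M')             # multiply: happens only for 1-bits
    return result, trace

def recover_exponent(trace):
    """Read the secret back from the operation pattern of one trace."""
    bits, i = [], 0
    while i < len(trace):
        if i + 1 < len(trace) and trace[i + 1] == 'M':
            bits.append('1'); i += 2      # 'S','M' pair -> bit 1
        else:
            bits.append('0'); i += 1      # lone 'S'    -> bit 0
    return int(''.join(bits), 2)

secret = 0b1011001
_, trace = exponentiate_with_trace(3, secret, 1009)
assert recover_exponent(trace) == secret  # full secret from one 'trace'
```

Real attacks of course have to deal with noise and with cleverer exponentiation routines, but the point stands: without countermeasures, one observation can contain the entire secret.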
Back to the paper and its statements about the security of blinding schemes. Many schemes are known to practitioners, and on a high level they work like this: rather than working with your secret exponent x directly, share it (depending on your underlying algebraic structure) either multiplicatively or additively (the choice between the two is typically influenced by performance considerations). Additive sharing in the case of ElGamal encryption is not a good idea: if rather than using x directly we were to share it as x = x' + x'' with x' = r_i and x'' = x - r_i, it would be easy to calculate the last 'few' bits of x from the last t bits of x' and x'', as those bits would be given by x' mod 2^t + x'' mod 2^t (minus a fudge factor if there was an overflow modulo the group order). In practice people use what was suggested already by Kocher in "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems": one chooses a random r_i, computes its inverse rinv_i and shares x such that x = x'*x'' with x' = x*r_i and x'' = rinv_i. This can be done reasonably efficiently, and one can even update r_i and rinv_i easily (e.g. by just squaring them) such that they are not re-used in subsequent encryption rounds. (Would the proof in the paper by Kiltz and Pietrzak extend to this way of updating/refreshing shares?)
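As a sketch, multiplicative exponent blinding with the squaring-based refresh could look like this (toy parameters again; note that the squaring update is the variant I am asking about, not necessarily the refresh that Kiltz and Pietrzak analyse):

```python
# Kocher-style multiplicative exponent blinding (toy parameters, not secure).
# The secret x is only ever used via the shares x' = x*r_i and x'' = r_i^{-1},
# and the shares are refreshed by squaring after each use.
import secrets

p, q, g = 607, 101, 64            # g generates the order-q subgroup of Z_p^*
x = 57                            # secret exponent

r = secrets.randbelow(q - 2) + 2  # random blinding value r_i, 2 <= r < q
r_inv = pow(r, -1, q)             # rinv_i = r_i^{-1} mod q

def blinded_exp(c1):
    """Compute c1^x without using x directly: c1^x = (c1^{x'})^{x''}."""
    global r, r_inv
    x1 = (x * r) % q              # share x'  = x * r_i
    x2 = r_inv                    # share x'' = r_i^{-1}, so x'*x'' = x (mod q)
    out = pow(pow(c1, x1, p), x2, p)
    r, r_inv = (r * r) % q, (r_inv * r_inv) % q   # refresh shares by squaring
    return out

c1 = pow(g, 33, p)                        # some subgroup element
assert blinded_exp(c1) == pow(c1, x, p)   # correct despite blinding
assert blinded_exp(c1) == pow(c1, x, p)   # and again, with refreshed shares
```

Only x' and x'' ever enter an exponentiation, and the pair changes from one decryption to the next, which is exactly the property the blinding is meant to provide.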
The cool thing about the paper is that the authors are able to prove (for bilinear groups, using the generic group model) that the strategy outlined above (minus the efficient update of shares mentioned at the end) is actually CCA1 secure.