Leakage, leakage, leakage.
Consider a leaky device that is supposed to handle a secret key for encryption: what can we say about its security?
If the device leaks the whole secret key, obviously there's no security in the encryption, so we somehow need to restrict what leakage an adversary can observe if we want to give any positive guarantees.
At this point the views of theoreticians and practitioners start to diverge. The gist of today's talk was that theoreticians build models based on the state of the art at the time the model is created, but take some time to catch up when practitioners find new results.
What leakage?
There are several models floating around in cryptographic theory.
The main distinction is between bounding the type of leakage an adversary can observe (i.e. what functions of the secret key) and the amount of leakage.
If we look at bounding the amount of leakage, there are three main models.
First, the "relative leakage" model, which bounds the amount of leakage (in bits) relative to the size of the secret. Secondly, the "bounded retrieval" model, in which the total amount of leakage the adversary can ever observe is bounded by an absolute quantity.
Finally, the "continuous leakage" model in which the factor of time comes into play: the device in question runs in a sequence of time periods and the leakage that the adversary can observe in each period is bounded, but there's no overall bound. This means the device must refresh its keys as it transitions from one period to the next.
Criticism of the Models
All these models share one practical drawback: they don't really reflect what goes on in practice. On the one hand they are too generous to the adversary, whose observations are usually limited only by a constraint such as "polynomial-size leakage". Practitioners can only dream of observing that much leakage, and if they could, none of today's devices would stand up to such a powerful adversary.
On the other hand, the models miss the main challenge in today's practice: designing more efficient ways to observe and evaluate leakage. In practice, the success of an attack depends on the quality of the statistical model used to analyse the leakage, the "noise" in the leakage, the number and quality of observations one can make, and so on; current research looks for ways to improve these factors or to find trade-offs between them.
Theoretical leakage models, by contrast, are all-or-nothing: you get the leakage in one go or you don't.
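To see what that research looks like in miniature, here is a toy correlation attack on simulated Hamming-weight leakage of a single key byte; whether it succeeds hinges exactly on the noise level and the number of traces (everything here, from the leakage model to the numbers, is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
SECRET = 0x3C  # the key byte the attacker wants (toy value)

def hw(x: int) -> int:
    """Hamming weight, a standard first-order leakage model."""
    return bin(x).count("1")

def observe(n_traces: int, noise: float):
    """Simulate noisy leakage observations of SECRET XOR input."""
    inputs = rng.integers(0, 256, n_traces)
    leakage = np.array([hw(SECRET ^ int(p)) for p in inputs])
    return inputs, leakage + rng.normal(0.0, noise, n_traces)

def recover_key(inputs, leakage) -> int:
    """Pick the key guess whose predicted leakage correlates best
    with the observed traces."""
    scores = [np.corrcoef([hw(k ^ int(p)) for p in inputs], leakage)[0, 1]
              for k in range(256)]
    return int(np.argmax(scores))

# The practical trade-off: more noise demands more traces.
for n, noise in [(50, 1.0), (5000, 1.0), (5000, 8.0)]:
    inputs, leakage = observe(n, noise)
    print(n, noise, recover_key(inputs, leakage) == SECRET)
```

Nothing here is all-or-nothing: the attacker's advantage grows smoothly with the trace count and shrinks with the noise, which is precisely the dimension the theoretical models ignore.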
OCLI - really?
Many years ago, when power analysis first appeared on the scene, it was noticed that CMOS logic requires very little power to "hold" a state; what causes power spikes is the switching of states, whether from 0 to 1 or vice versa.
And so it was proposed to base theoretical models on the assumption that one could only observe leakage from data that is being computed on, as opposed to just sitting around in memory - Only Computation Leaks Information (OCLI).
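The observation behind OCLI is usually formalised as the Hamming-distance power model: the "spike" caused by a register update is proportional to the number of bits that flip. A toy version (illustrative; real measurements are noisy analogue traces):

```python
def switching_leakage(old_state: int, new_state: int) -> int:
    """Hamming-distance model: power is drawn when bits flip,
    whether from 0 to 1 or vice versa."""
    return bin(old_state ^ new_state).count("1")

# Holding a value costs (almost) nothing in this model...
assert switching_leakage(0b1011, 0b1011) == 0
# ...while updating a register leaks in proportion to the bit flips.
assert switching_leakage(0b1011, 0b0110) == 3
```

In this model only data being written, i.e. computed on, ever shows up in the trace, which is exactly the OCLI assumption.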
Unfortunately today's practice does not support this assumption.
The cold-boot attack (pour liquefied gas into the laptop to freeze the RAM, tear it out and copy its contents before it thaws) is an attack on data just sitting in memory.
There are many other types of side channel besides power consumption - even a computer virus could be considered a form of leakage to the adversary.
Even defining what counts as "data being computed upon" isn't that easy: at the hardware level, with all the buffers, caches, pipelines and other optimisations in today's processors, even a chunk of x86 assembly code won't give you a simple answer as to when which gate switches.
At the software level, things get even worse. The abstraction offered by an operating system may shift memory pages around in a manner that the applications running on it are mostly oblivious to, not to speak of several processes running in parallel.
Finally, with today's nanoscale logic, even the principle that power consumption is primarily related to changes in state does not necessarily hold anymore.
Some theoretical examples
The theory of leakage resilience is not all useless, though. We looked at two examples.
If f is a one-way function, what happens if a few bits of an input are leaked?
It's fairly easy to see that if there's a preimage-extractor that works when given a few (at most logarithmically many) bits of leakage on the input, then one can simply guess or brute-force these bits to construct a preimage-extractor that doesn't require any leakage.
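In slightly more detail, the standard guessing argument goes like this (a sketch; exact probabilities depend on the formal definitions used):

```latex
\begin{align*}
&\text{Suppose } \mathcal{A}\big(f(x), L(x)\big) \text{ outputs a preimage with probability } \varepsilon,
 \text{ where } L(x) \in \{0,1\}^{\ell},\; \ell = O(\log n).\\
&\text{Let } \mathcal{A}'(y) \text{ sample } g \leftarrow \{0,1\}^{\ell} \text{ uniformly and output } \mathcal{A}(y, g). \text{ Then}\\
&\qquad \Pr[\mathcal{A}' \text{ inverts } f] \;\ge\; \Pr[g = L(x)]\cdot\varepsilon \;=\; 2^{-\ell}\,\varepsilon \;=\; \varepsilon/\mathrm{poly}(n).
\end{align*}
```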
More interestingly, if f is second-preimage resistant, then f is also leakage-resilient one-way as long as the leakage is small compared to the amount by which f compresses its input.
Suppose you're given a preimage and want to find a second one that maps to the same image as the first, and you have a preimage-extractor that needs some leakage to work.
You can feed it the image of your challenge and answer its leakage queries using the preimage that you already know.
If you do all this formally and get the maths right, the result is that with decent probability the extractor gives you a preimage different from the one you already know.
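Roughly, the counting behind that "decent probability" runs as follows, using the standard average min-entropy bound (a heuristic sketch; the formal version needs care with the exact distributions):

```latex
\begin{align*}
&\text{For compressing } f : \{0,1\}^n \to \{0,1\}^m \text{ and } \ell \text{ bits of leakage,}\\
&\qquad \widetilde{H}_\infty\big(x \mid f(x), L(x)\big) \;\ge\; n - m - \ell,\\
&\text{so } \Pr\big[x' = x \,\big|\, f(x') = f(x)\big] \;\lesssim\; 2^{-(n-m-\ell)}.
\end{align*}
```

As long as the leakage ℓ stays well below the compression n - m, the extractor's output is almost always a genuine second preimage.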
A real example
Finally, we'll look at an example of a deployed protocol in set-top boxes that was designed to be leakage resilient.
There are many "common sense" components to reduce the exposure to known attacks: a root key is used only to derive message-keys using a hash function tree construction.
This prevents an attacker from getting too many traces on the root key itself.
A public, fresh random message id is chosen for each message to encrypt. This prevents an attacker from repeatedly encrypting with the same parameters.
Different constants are attached to each hash input in the construction to prevent correlations between different hash calls.
The message is encrypted in blocks, or rather in chunks of blocks that just about fit into a smartcard's memory, and the key is refreshed after each chunk.
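A toy sketch of how these components might fit together (purely illustrative: the hash function, constants, chunk size and the XOR "encryption" below are stand-ins I chose, not the deployed protocol's actual primitives):

```python
import hashlib
import os

# Hypothetical domain-separation constants: a distinct constant per
# hash call prevents correlations between different calls.
C_MSG, C_CHUNK, C_PAD = b"\x01", b"\x02", b"\x03"

CHUNK_SIZE = 32  # toy stand-in for "just about fits in smartcard memory"
                 # (matches the SHA-256 output length used as a pad below)

def h(constant: bytes, *parts: bytes) -> bytes:
    """One call in the hash tree, tagged with its own constant."""
    return hashlib.sha256(constant + b"".join(parts)).digest()

def encrypt(root_key: bytes, message: bytes):
    # Fresh public message id: the root key never operates on the
    # same parameters twice, limiting useful traces on it.
    msg_id = os.urandom(16)
    # The root key is only ever used to derive a message key.
    key = h(C_MSG, root_key, msg_id)
    chunks = []
    for i in range(0, len(message), CHUNK_SIZE):
        block = message[i:i + CHUNK_SIZE]
        pad = h(C_PAD, key)  # toy pad; not the real scheme's cipher
        chunks.append(bytes(a ^ b for a, b in zip(block, pad)))
        key = h(C_CHUNK, key)  # refresh the key after every chunk
    return msg_id, chunks
```

The shape is what matters: no single key is ever used for more than a bounded amount of computation before being replaced by a hash of itself, so an attacker never accumulates many traces under any one key.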
There's no proof of this construction in today's models, and it seems unlikely that there will be one any time soon, but we're not aware of any published break of this particular protocol.
Conclusion
There's a big gap between theory and practice in side-channel attacks, and practitioners (like today's speaker) sometimes wonder if the theory bears much relation to their work at all.
We still don't have a satisfactory theoretical model that adequately reflects what goes on in practical attacks.