This week the final ECRYPT II event is taking place on Tenerife. With it comes an end to nine successful years of European integration of cryptologic research by the Networks of Excellence ECRYPT I and ECRYPT II. ECRYPT has been very active and it has justly gained a great reputation as an organizer of summer schools, workshops, and research retreats. The opportunities it provided will be sorely missed, especially by PhD students and junior researchers, for whom ECRYPT was a way to meet and build a personal network of excellence. The sad news is that there is no ECRYPT III on the horizon yet, at least not as an EU-funded Network of Excellence. Luckily, the experience of ECRYPT shows that schools and workshops can be organized with a relatively limited budget. Given the great value for money, there is a real willingness among those behind ECRYPT to continue offering the experience from which so many past and current PhD students benefitted.
One talk from the morning session I particularly enjoyed was by Joan Daemen, who is part of the Keccak team that won the SHA-3 competition. Joan made a well-reasoned call to consider permutations as the basis of symmetric cryptography (whereas traditionally block ciphers or hash functions seem to take this place). The sponge design, as used by SHA-3, is a good example of permutation-based cryptography, and Joan explained how other symmetric goals (such as authenticated encryption) can also be achieved by sponge-like constructions. An interesting observation is the distinction between keyed and unkeyed primitives. For the former, an attacker has less information on the state used by a sponge, which arguably means that the sponge can absorb more input per iteration and needs fewer rounds per iteration. It seems that many questions remain open, e.g. what security-efficiency tradeoffs are possible with permutation-based cryptography, and what can be said about the required number of rounds, or is the only option to rely on the opinion of expert designers based on preliminary cryptanalysis?
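To make the sponge idea a bit more concrete, here is a minimal toy sketch in Python. The permutation, the padding rule and the parameters below are stand-ins I chose purely so the example runs; a real design like Keccak uses a carefully analysed permutation and a much larger state. The point is only the structure: a state of r + c bytes is split into a rate part and a capacity part, message blocks are XORed into the rate and the permutation is applied (absorbing), and output is then read from the rate one block per permutation call (squeezing).

```python
RATE, CAPACITY = 8, 8            # toy parameters, in bytes
WIDTH = RATE + CAPACITY          # state width b = r + c

def toy_permutation(state: bytes) -> bytes:
    # Stand-in mixing step (NOT Keccak-f): rotate the state and XOR neighbouring bytes.
    rotated = state[1:] + state[:1]
    return bytes((a ^ b ^ 0x5C) & 0xFF for a, b in zip(state, rotated))

def sponge_hash(message: bytes, out_len: int = 16) -> bytes:
    # Simple 0x01 ... 0x80 padding so the input fills whole RATE-byte blocks.
    padded = message + b"\x01" + b"\x00" * ((-len(message) - 2) % RATE) + b"\x80"
    state = bytes(WIDTH)
    # Absorbing phase: XOR each block into the outer (rate) part of the state.
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE]
        outer = bytes(s ^ m for s, m in zip(state[:RATE], block))
        state = toy_permutation(outer + state[RATE:])
    # Squeezing phase: output RATE bytes per permutation call.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash(b"permutation-based cryptography").hex())
```

The keyed/unkeyed distinction Joan mentioned fits the same skeleton: in a keyed mode the attacker does not know the full state, which is the argument for absorbing more per call and using fewer rounds.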
The afternoon finished with a panel discussion on provable security. The panel consisted of François-Xavier Standaert (FX), Dan Bernstein (DJB), and Giuseppe Persiano (Pino), with Michel Abdalla as moderator. Given the diverse backgrounds of the panel, a lively debate was expected, and the personal positioning statements (what they found good and useful about provable security, and what less so) clearly drew the lines.
FX considers constructive proofs useful as they can guide practice, and tight bounds can help in this respect as well. Negative results, on the other hand, are less useful. He also thinks that theory for the sake of theory is fine, as long as there is no false advertising about practical implications that in reality are not there.
DJB focused on what provable security is trying to do, without wishing to concentrate on proof errors, looseness, limited models, etc., as he considers these temporary problems that can largely be resolved. Nonetheless, his view of provable security is rather dim. As an example, he referred to the hash function 4^x·9^y mod p by Chaum et al. This construction is provably secure in the sense that a collision yields a discrete logarithm. However, finding discrete logarithms in the multiplicative group of integers modulo a prime has become significantly easier since the hash function was first proposed (and if p is a 256-bit prime, finding collisions in the hash function will be easy despite the proof of security).
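To make the reduction DJB alluded to concrete, here is a small Python sketch. The prime is a toy value I picked purely for illustration (nothing like a realistic parameter size): any collision in H(x, y) = 4^x·9^y mod p immediately yields the discrete logarithm of 9 to the base 4.

```python
# Toy illustration of the collision-to-discrete-log reduction for
# H(x, y) = 4^x * 9^y mod p.  The prime is artificially small so that
# a collision can be found by brute force.
import random

p = 1019                # toy safe prime, p = 2q + 1
q = (p - 1) // 2        # order of the subgroup generated by 4 and 9

def H(x: int, y: int) -> int:
    return pow(4, x, p) * pow(9, y, p) % p

# Birthday search for a collision (only feasible because p is tiny).
seen = {}
while True:
    x, y = random.randrange(q), random.randrange(q)
    h = H(x, y)
    if h in seen and seen[h] != (x, y):
        (x1, y1), (x2, y2) = seen[h], (x, y)
        break
    seen[h] = (x, y)

# From 4^x1 * 9^y1 = 4^x2 * 9^y2 (mod p) it follows that
# 4^(x1 - x2) = 9^(y2 - y1) (mod p).  Here y1 != y2 (a collision with
# equal y's would force x1 = x2), so
# log_4(9) = (x1 - x2) * (y2 - y1)^(-1) mod q.
k = (x1 - x2) * pow(y2 - y1, -1, q) % q
assert pow(4, k, p) == 9
print(f"collision ({x1},{y1}) / ({x2},{y2}) reveals log_4(9) = {k}")
```

The proof is genuine, yet, as DJB pointed out, it is only as strong as the discrete logarithm problem it reduces to.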
DJB's interpretation is that a proof of security (or reduction really, but let's not dwell on terminology here) requires structure in a scheme, and it is this structure that will often also aid an attacker in breaking the scheme. Rather than looking at reductions, DJB suggests that users should select cryptographic systems based on cryptanalysis, as it is only sustained resistance against cryptanalysis that builds trust in a cryptosystem. DJB's goal is security, not having a proof, and he believes the two are negatively correlated due to the structure required for a proof.
Pino strongly disagreed with DJB, using science as a lens to argue his point. People try to explain a natural phenomenon by building a model of the phenomenon and proving theorems in that model, under the belief that these theorems will have an impact on reality. In cryptology, this knowledge is subsequently used to create real things with certain desired (security) properties. Good science requires that the model describes reality. This means that if a proven scheme is broken, the model must be wrong and needs to be adapted; similarly, if the model claims something to be impossible, it should not be possible in real life. Pino posited that a model should be falsifiable and should not be bizarre, i.e. lead to results that remain bizarre even in retrospect.
The comparison with science, and physics in particular, led to some heated discussion. One important property of a physics model is its predictive nature: a good model allows prediction of the outcomes of new experiments, lending physics an air of verifiability (notwithstanding the possibility that later experiments falsify current theory). It is not clear how well cryptologic models satisfy this predictive property. Moreover, DJB remarked that in physics there is a common goal of understanding physics, whereas in cryptology choices have to be made (e.g. which candidate will become SHA-3), and the question is what role, if any, security proofs play in these choices.
Michel steered the discussion towards models that take side channels and leakage into account. FX gave a brief presentation of his view, with an emphasis on empirical verifiability and practical relevance. He noted that there is considerable tension between leakage-resilience models on the one hand and side-channel reality on the other. For instance, securely refreshing the state at every iteration is unlikely to be either practically possible or needed (refreshing at initialization is another matter).
His main point is that the models typically used in leakage-resilience papers seldom capture reality and in some cases make assumptions that are demonstrably false (for instance, a common assumption is that only computation leaks, but at the gate level this is not what is observed in practice). Following Pino's earlier exposition, this prompted the question of whether a flaw in the cryptographic model should lead to abolishing the model.
Pino agreed that there are models he is not comfortable with. In his opinion this includes the current leakage models, but also universal composability (UC), which is renowned for its many impossibility results (of things that seem possible in practice), and the random oracle model (where the impossibility results exploit artefacts of the model). DJB asserted that the random oracle and generic group models are useful in the sense that a reduction in these models excludes certain generic attacks. Pino concluded that he does not understand why people keep using problematic models and offered the insight "Bad people do bad things."
All in all, a wonderful and lively panel discussion, with great audience participation.