The first talk of the day was given by Sriram Srinivasan, of the University of Surrey, on the topic of key sizes (in the context of electronic voting). Typically, when cryptographers give a cryptographic proof they discuss security in terms of a security parameter. But what actually is the security parameter? A key size of 128 bits alone is meaningless: in the asymmetric setting, a 128-bit RSA modulus would provide no security at all. Looking at the primitives, to achieve the equivalent of 128-bit AES security with RSA we need keys that are 3248 bits long; for schemes based on elliptic curves or discrete logarithms, the key lengths are different again. There are various resources available to help choose an appropriate key size, the ECRYPT II report on algorithms and key sizes being one such document.
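To make these equivalences concrete, here is a minimal Python sketch of a key-size lookup. The 128-bit row's RSA figure is the 3248 bits quoted above; the other figures are my recollection of the ECRYPT II tables, so treat them as indicative and check the report itself before relying on them.

```python
# Indicative equivalent key sizes per symmetric security level, loosely
# following the ECRYPT II recommendations. Only the 128-bit RSA figure
# is taken from the text above; the rest are from memory.
EQUIVALENT_KEY_SIZES = {
    # security level: (RSA modulus bits, EC key bits)
    80:  (1248, 160),
    112: (2432, 224),
    128: (3248, 256),
    256: (15424, 512),
}

def rsa_modulus_bits(security_level: int) -> int:
    """Return an indicative RSA modulus size for a given symmetric level."""
    return EQUIVALENT_KEY_SIZES[security_level][0]

if __name__ == "__main__":
    for level, (rsa, ec) in sorted(EQUIVALENT_KEY_SIZES.items()):
        print(f"{level:>3}-bit symmetric ~ RSA-{rsa} ~ EC-{ec}")
```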
If we now extend our thinking to a protocol built from multiple primitives, we have several questions to answer before we can make choices about security. For how long do we need the protocol to remain secure? If the protocol is a voting scheme, some people may want it to provide security for at least their lifetime; others may want it to remain secure forever. Furthermore, security proofs often introduce a security loss, and this needs to be taken into account: sometimes the loss means one needs to increase the key sizes of the underlying primitives to retain equivalent security, while at other times the loss is not practically relevant. So what does all this mean, and how on earth do we choose key sizes? That, unfortunately, is a very open question indeed, which I will not try to answer here, and it was the source of much discussion at the Cryptoforma Workshop.
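To see why a security loss matters, here is the usual back-of-the-envelope calculation in Python (the query counts below are purely hypothetical): a reduction that loses a multiplicative factor L costs roughly log2(L) bits of security, which may or may not push you into a larger key size.

```python
import math

def effective_security_bits(assumption_bits: float, loss_factor: float) -> float:
    """Bits of security the proof actually guarantees for the protocol,
    given a reduction losing a multiplicative factor `loss_factor`."""
    return assumption_bits - math.log2(loss_factor)

# Hypothetical numbers: the underlying primitive offers 128 bits, but the
# reduction loses a factor of q_h * q_s (say, hash and signing queries).
q_h, q_s = 2 ** 60, 2 ** 30
print(effective_security_bits(128, q_h * q_s))  # 38.0 -- a big practical gap
```

Whether a gap like this forces larger keys depends on whether an adversary can realistically make that many queries, which is exactly the kind of judgement call the talk highlighted.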
A second talk today, by Graham Steel on "Attacking and fixing PKCS#11 security tokens", discussed various ways of attacking cryptographic tokens in order to discover the cryptographic keys stored on the device. To do this they have built "Tookan", an automated tool. Tookan reverse engineers a device by querying it through its API, building up a description of how the device functions and which commands it supports. This description is then fed to a model checker, which searches for possible vulnerabilities in the token. For example, suppose there are two keys on the device, k1 and k2. You ask the device to wrap (encrypt) k1 under k2, receiving back a ciphertext; you then ask the device to decrypt that same ciphertext under k2 and receive k1 as the plaintext, even though k1 was never supposed to leave the device in the clear. These techniques were applied to real devices: of the 17 commercially available tokens tested, 9 were vulnerable to attacks and the remaining 8 had severely restricted functionality. Interesting, and slightly worrying!
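The wrap/decrypt interaction is simple enough to sketch in code. The ToyToken class below is a hypothetical stand-in for a flawed device (a real token would expose this behaviour through the PKCS#11 calls C_WrapKey and C_Decrypt); it shows why letting one key both wrap other keys and decrypt arbitrary ciphertexts leaks any key wrapped under it.

```python
import secrets

class ToyToken:
    """Hypothetical flawed token: keys live inside and only handles
    ("k1", "k2") leave -- except via the wrap/decrypt interaction below."""

    def __init__(self):
        self._keys = {"k1": secrets.token_bytes(16),   # sensitive key
                      "k2": secrets.token_bytes(16)}   # wrapping key

    def _xor(self, key: bytes, data: bytes) -> bytes:
        # Stand-in cipher; the attack does not depend on the cipher used.
        return bytes(a ^ b for a, b in zip(key, data))

    def wrap(self, target: str, wrapping: str) -> bytes:
        """Export `target` encrypted under `wrapping` (cf. C_WrapKey)."""
        return self._xor(self._keys[wrapping], self._keys[target])

    def decrypt(self, key: str, ciphertext: bytes) -> bytes:
        """Decrypt arbitrary data under `key` (cf. C_Decrypt)."""
        return self._xor(self._keys[key], ciphertext)

# The attack: wrap k1 under k2, then ask the token to decrypt the blob
# with k2. The "plaintext" it returns is k1 itself, in the clear.
token = ToyToken()
blob = token.wrap("k1", "k2")
stolen = token.decrypt("k2", blob)
assert stolen == token._keys["k1"]  # sensitive key recovered outside the token
```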