A blog for the cryptography group of the University of Bristol. To enable discussion on cryptography and other matters related to our research.
Friday, August 23, 2013
Crypto 2013: Wednesday Afternoon
The first of those talks presented not a new MPC protocol but a way to construct an MPC protocol from one with fewer players. To do so, they employ the player emulation technique of Hirt and Maurer, which is best explained by the following example. Take an MPC protocol (like BGW) with three players v1,v2,v3 and passive security against one corrupted player. Now replace player v1 by an instance of the same protocol run among players w1,w2,w3. With respect to corruptions, there are now two possible scenarios that maintain security: either v1 or one of v2,v3 can be corrupted in the first protocol instance. The former case corresponds to any number of w1,w2,w3 being corrupted, even all of them. In the latter case, v1 must not be corrupted, which implies that at most one of w1,w2,w3 can be corrupted. Generalizing this example, one can model a protocol instance as a MAJ3 logic gate that takes three inputs and outputs their majority.
It remains to find a formula consisting only of such gates that outputs true if at most t inputs are false. Such a formula directly implies an MPC protocol for n parties with threshold t. It is proven that, in the information-theoretic setting, the threshold must be less than n/2. Another constraint is that the complexity of the constructed MPC protocol grows exponentially with the depth of the formula, so one wants a log-depth formula. The authors present two ways of constructing such a formula for any n: a randomized one for the maximal t < n/2 that is correct with high probability, and a deterministic one that only achieves an approximate t. The authors also present a similar approximate solution for the case of active security, where at most n/3 players can be corrupted. Here the base case is a four-party protocol tolerating at most one corruption.
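To make the formula view concrete, here is a small Python sketch (my own illustration, not from the paper) that evaluates the naive two-level MAJ3 formula from the example above on nine inputs and finds, by brute force, the largest threshold t it tolerates. It reports t = 3 rather than the optimal t = 4 for nine players, which is exactly why the paper needs more careful formula constructions.

```python
from itertools import product

def maj3(a, b, c):
    # A MAJ3 gate: outputs the majority of its three input bits.
    return int(a + b + c >= 2)

def formula(x):
    # Two-level composition: MAJ3 of three MAJ3 gates on 9 inputs.
    groups = [maj3(*x[3 * i: 3 * i + 3]) for i in range(3)]
    return maj3(*groups)

def tolerates(t, n=9):
    # The formula tolerates t corruptions if it outputs 1 on every input
    # with at most t zeros and 0 on every input with at most t ones.
    inputs = list(product([0, 1], repeat=n))
    return all(formula(x) == 1 for x in inputs if x.count(0) <= t) and \
           all(formula(x) == 0 for x in inputs if x.count(1) <= t)

print(max(t for t in range(5) if tolerates(t)))   # prints 3, not the optimal 4
```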
The other talk explored the space between purely active and purely passive corruption in multi-party computation. Classical work like BGW assumes that all corrupted parties are corrupted in the same way, and a recent work by Ishai et al. proposed a protocol that is secure against passive corruption of t < n or active corruption of t < n/2 parties. However, the latter does not allow a mixed case, say a few active corruptions combined with enough passive corruptions that the total number of corrupted parties reaches n/2 or more.
The new work now allows k < n/2 active corruptions and n-k-1 passive corruptions for an unknown k. To achieve this, the authors propose a new variant of verifiable secret-sharing, gradual VSS. VSS is essentially secret-sharing where reconstruction is guaranteed to be correct. Traditional VSS provides secrecy against t corruptions and robustness against n-t-1 corruptions for a fixed t. Gradual VSS additionally splits the secret into k summands using additive secret sharing; summand t is then shared using traditional VSS with threshold t, for every t in 1,...,k. For reconstruction, the summands are reconstructed one after the other, starting with the highest threshold and ending with threshold 1. If any of those reconstructions fails, the protocol is aborted, and if this happens before the threshold-1 reconstruction, the secret remains protected by the additive summands that have not been reconstructed yet. This guarantees secrecy and robustness for any combination of passive and active corruption in the permitted range.
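The key fact the gradual reconstruction relies on is that additive sharing hides the secret as long as even one summand stays unopened. A toy Python illustration of just that layer (my own; the real scheme wraps each summand in a VSS with its own threshold and opens them in a fixed order):

```python
# Toy illustration (mine, not the paper's protocol) of the one fact the
# gradual reconstruction relies on: as long as at least one additive summand
# stays unopened, the opened summands reveal nothing about the secret.
import secrets

P = 2**127 - 1          # any public prime modulus

def additive_share(secret, k):
    """Split `secret` into k summands that add up to it mod P."""
    summands = [secrets.randbelow(P) for _ in range(k - 1)]
    summands.append((secret - sum(summands)) % P)
    return summands

secret = 42
summands = additive_share(secret, k=5)      # in the real scheme each summand
                                            # is additionally shared with a
                                            # VSS of its own threshold
assert sum(summands) % P == secret
# Opening any 4 of the 5 summands leaves the secret information-theoretically
# hidden: the missing summand is uniform, so the sum is uniform too.
```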
CHES invited talk: The future of SHA-3
Thanks to the widely-acknowledged success of the AES contest, open competitions have begun to play an increasingly important role in arriving at new, rigorously tested primitives. When -- at and after Crypto 2004 -- several new and powerful attack techniques against Merkle-Damgaard based constructions were introduced (Joux, Biham/Chen, Wang), confidence in existing hashes was substantially shaken. At the same time, NIST was pushing to move from 80-bit to 112-bit security, which required switching from SHA-1 to SHA-2. Even though SHA-2 was not (and still is not) known to be insecure, the ongoing flurry of cryptanalytic progress produced a sense of skepticism about the value of a costly transition to a new construction liable (it then seemed) to be broken at any minute.
What was needed was a new hash standard -- one sufficiently different to SHA-2 to offer useful performance/implementation trade-offs as well as to (hopefully) remain immune to any future attack against its predecessor.
Following vibrant discussion at hash workshops in Gaithersburg 2005 and UCSB 2006, the competition was announced in 2007 with an anticipated 5-year schedule. Response from the research community was admirable: 64 initial submissions (compared with 15 for AES). A first quick cut (excluding those not fitting the guidelines) brought it down to a list of 51, published in December 2008. The high number and huge diversity of designs made the selection rounds very difficult. But in another year or so many had been broken or seriously dented and a list of 14 was announced. It took another year and a half to reduce the list to 5, and then, in the words of Joyce…
"What clashes here of wills gen wonts, oystrygods gaggin fishy-gods! Brékkek Kékkek Kékkek Kékkek!" (Finnegans Wake, Ch 1) [1]...another two years of rigorous cryptanalytic scrutiny and performance evaluation and Keccak was announced as the winner in October 2012.
Decisions were made based on security, performance, and how well the construction complemented SHA-2. None of the final candidates was knocked out by cryptanalysis, though some have the benefit of security proofs for their domain extenders, and Keccak and BLAKE have the best security margins. All 5 have acceptable performance, though Keccak performs particularly well in dedicated hardware. But a big point in Keccak's favour seems to have been its fundamental difference from SHA-2, rendering shared vulnerabilities unlikely and offering scope for flexible deployment.
In particular, Keccak is a sponge function, taking an input stream of any given length and producing an output stream of any desired length. Variable length output is needed by many protocols (OAEP, most KDFs, the fix for Bleichenbacher's DSA attack). Whilst it is desirable to have this as a part of a hash definition, Kelsey pointed out that it may be tricky to use it correctly (i.e., without introducing vulnerabilities) as different output lengths on the same input produce related hashes, for example:
SHAKE256(X, 224) = ABCDEFG
SHAKE256(X, 256) = ABCDEFGH
(where SHAKE = SHA + Keccak)
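The prefix behaviour is easy to reproduce with the SHAKE functions that eventually ended up in Python's hashlib (lengths below are in bytes rather than bits):

```python
import hashlib

x = b"X"
out_224 = hashlib.shake_256(x).hexdigest(28)   # 224-bit output
out_256 = hashlib.shake_256(x).hexdigest(32)   # 256-bit output
assert out_256.startswith(out_224)             # the shorter output is a prefix
```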
The security of sponges relates to a parameter C called the capacity. Unlike Merkle-Damgaard constructions which aim at n/2-bit collision resistance and n-bit preimage resistance (where n is the number of output bits), collisions and preimages for a sponge function are equally hard to find. A good construction aims at a resistance of C/2 bits. The nice thing about this is that the security level is determined by function internals, not output size. But, of course, the larger C is, the slower the hashing.
Kelsey then spoke about the planned changes to Keccak in preparation for standardisation. The original proposal had four versions, one for each output length (224, 256, 384, 512), each with a different capacity of twice the output length. They guaranteed n-bit preimage resistance by making the capacity huge, but suffered a big performance hit to achieve this. In NIST's view, this trade-off produces higher claimed security than what is actually needed and therefore is not worth it. They therefore propose that SHA3 will have only 2 capacities:
1) SHA3-224, SHA3-256, SHAKE256: 128 bits of security against everything (C = 256)
2) SHA3-384, SHA3-512, SHAKE512: 256 bits of security against everything (C = 512)
Other anticipated changes to the original proposal include a different padding scheme.
The next step for NIST is to get the FIPS out. They plan to have a draft for public comment around the end of October 2013. Kelsey warned that the FIPS process can be slow and is mostly outside of NIST's control (the final document goes to the Secretary of Commerce for approval), so NIST are unable to make guarantees about dates.
Eventually, they would also like to standardise other nice features of Keccak: a duplex mode for authenticated encryption, a dedicated PRF which can be used in place of HMAC, support for tree hashing (incorporating Keccak team's Sakura padding scheme where possible), and possibly (in the further future) random number generation using the duplex mode. They are also interested in analysis of Keccak with smaller permutation sizes, as this could be nice for constrained devices.
Throughout the talk, and in concluding, Kelsey was keen to emphasise how highly NIST value the input of the academic community and how open they are to further comment and critique -- with regard to both the planned modifications to the main functionality and to the avenues for further standardisation and applications of sponge functions and the Keccak duplex mode. In the question time concerns were raised by one delegate that the fairly dramatic capacity reductions proposed amount to standardising an untested cipher (thus somewhat defeating the point of the competition); another delegate put forward that such parameter changes do not invalidate the analysed security of the underlying algorithm. Clearly there is much yet to be discussed -- and NIST appear extremely keen to engage fully with this discussion. To this end there will be a NIST hash workshop co-located with next year's Crypto, at which they hope to hear research community insights on all things SHA-2 and SHA-3: Keccak with smaller permutations, cryptanalysis and differential/linear trail bounds, tree hashing, generic hash-based authenticated encryption, clever applications for sponges or duplex mode and so on. Plenty to work towards in the meantime!
[1] I concede that Joyce may not in fact have been referring to the SHA-3 competition. But it seems as good a guess as any...
Thursday, August 22, 2013
Crypto 2013: Thursday morning
The general theme of the first two sessions seemed to be authentication. The first talk was given by Maciej Obremski on Non-Malleable Codes from Two-Source extractors. The application that he mentioned was the storage of data on some device, where an adversary might tamper with it. He talked about ways to encode this data (using non-malleable codes) such that if an adversary tampers with the encoded data, either the result will still decode to the original data, or it decodes to some data that is independent of the original data (i.e., a constant function). Finally, Maciej talked a little bit about the notion of leakage-resilience of the resulting scheme. It is still secure even if the adversary has access to some (bounded) leakage of the encoding function and is able to use this information while tampering with the encoded data.
The second talk was on Optimal Coding for Streaming Authentication and Interactive Communication, given by Ran Gelles. He talked about a combination of error-correction and authentication of data streams, in the setting where the adversary can introduce some (globally bounded) noise on the channel. The usual approach is to use a Message Authentication Code and Error Correcting Codes on the whole of the data, but that is not feasible in a setting where the data is a stream and comes in bit-by-bit. The next possibility would be to divide the data into chunks and apply the previous method to each of the chunks. However, in this case the adversary might be able to attack certain blocks completely while still maintaining low global noise. Furthermore, the authentication guarantee is only per block. The solution that Ran described made use of two interesting tools: blueberry codes and tree codes. The blueberry codes are codes where each bit of the stream is mapped to some random symbol based on the authentication key. This means that, when the adversary changes a symbol, it might no longer correspond to one of the possible symbols for that position. Thus, it is revealed that the symbol was altered and it can be treated as an erasure. The tree codes are binary trees where each node has a label such that two different paths of equal length have labels with large Hamming distance. These trees have a so-called self-recovery property, which allows for the recovery of most of the message, but unfortunately this can take exponential time. To get around this, the authors propose splitting the tree into several trees of logarithmic size. Ran also mentioned several security extensions, where the user can decide to abort (which happens with only polynomially small probability), so that the probability that the user decodes incorrectly becomes negligible. Finally, he showed that blueberry codes are useful in other settings as well, such as the interactive communication setting where both users of the system want to transmit data to each other.
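A toy Python sketch of the blueberry-code idea as I understood it (my own simplification, not the authors' construction): each position has two key-dependent valid symbols out of a larger alphabet, so a blind modification is usually detected and treated as an erasure.

```python
import hmac, hashlib

Q = 256          # alphabet size; a blind change survives with probability ~2/Q

def symbol(key, i, bit):
    # Key-dependent symbol for bit `bit` at stream position i.
    mac = hmac.new(key, f"{i}:{bit}".encode(), hashlib.sha256).digest()
    return mac[0]                      # one byte = one alphabet symbol

def encode(key, bits):
    return [symbol(key, i, b) for i, b in enumerate(bits)]

def decode(key, received):
    out = []
    for i, s in enumerate(received):
        if s == symbol(key, i, 0):   out.append(0)
        elif s == symbol(key, i, 1): out.append(1)
        else:                        out.append(None)   # detected: erasure
    return out

key = b"shared authentication key"
sent = encode(key, [1, 0, 1, 1])
sent[2] ^= 0x5A                        # adversarial tampering
print(decode(key, sent))               # e.g. [1, 0, None, 1]
```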
The last talk of the session did not contain any authentication, but was instead on the limits of certain proof techniques in the area of secret sharing. Carles Padro talked about Secret Sharing, Rank Inequalities and Information Inequalities. He described the conjecture in secret sharing that there exist certain bad families of access structures where the size of the shares in a secret sharing scheme grows exponentially in the number of users. For linear secret sharing schemes, the lower bound is known to be super-polynomial, but no such bound exists for general secret sharing schemes. All current progress on this result comes from proofs using information inequalities. Previously it was shown that for a limited number of variables (up to 5) such lower bounds could not be bigger than linear in the number of users. In this work the authors show that for any bounded number of variables r, the lower bound cannot be bigger than a polynomial of degree r-2 in the number of users.
In the second session, Thomas Peters spoke about Linearly Homomorphic Structure-Preserving Signatures and their applications. I will not go into detail here, but Thomas showed that the combination of these two notions--Linearly Homomorphic Signatures, which are useful for computing on authenticated data, and Structure-Preserving Signatures, which allow composition with algebraic tools such as Groth-Sahai proofs--results in applications beyond just the sum of its parts. Specifically, Thomas mentioned verifiable computation mechanisms for encrypted data as well as non-malleable structure-preserving commitments.
The last talk before the coffee break was given by Daniel Masny, who talked about Man-in-the-Middle Secure Authentication Schemes from LPN and Weak PRFs. These kinds of authentication schemes are designed for constrained devices, where having a regular PRF may be too costly. Instead, they use the weaker notion of a weak PRF and attempt to design protocols that still remain secure against adversaries. Daniel described two solutions, the first of which is based on the LPN problem. He showed that, in comparison to a previous solution using MACs, their solution results in smaller key sizes at the cost of one extra round of communication. The other solution consists of adding one extra secret element to the key of a weak PRF-based scheme, which results in security against Man-in-the-Middle attacks.
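For context, here is a minimal Python sketch of the LPN (learning parity with noise) problem these schemes build on, with illustrative parameters of my own choosing: the adversary sees many noisy inner products of random vectors with a secret and has to recover the secret.

```python
import secrets, random

n, tau = 512, 0.125                           # dimension and Bernoulli noise rate
s = [secrets.randbits(1) for _ in range(n)]   # the secret

def lpn_sample():
    a = [secrets.randbits(1) for _ in range(n)]
    e = 1 if random.random() < tau else 0     # noise bit
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
    return a, b                               # recovering s from many such
                                              # samples is believed to be hard
```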
After the authentic coffee break we entered the world of Quantum Security. Although there is no fully functional quantum computer that has all the desired properties, people have been extending cryptographic models to the quantum world. For example, Frédéric Dupuis talked about Achieving the limits of the noisy-storage model using entanglement sampling. By looking at the information that a party holds about a certain quantum system before and after a transformation, they were able to provide lower bounds for specific transformations. These lower bounds then allowed the authors to prove a weak string erasure protocol secure in the bounded quantum storage model.
Douglas Stebila talked about Quantum one-time programs, which are programs that can only be run once and then 'self-destruct', preventing them from being run again. Because classical memory can be copied, classical solutions require special assumptions such as secure hardware tokens. Since quantum states cannot be copied due to the no cloning theorem, it appears that such an application would be easier in a quantum world. However, the authors show that this is not the case, and that it is only possible for certain trivial quantum programs.
Most of the cryptographic community considers quantum adversaries interacting with classical users. An adversary might use his quantum computer to break the classical RSA of the users, for example. In contrast, Mark Zhandry talked about quantum adversaries interacting with quantum users in his talk on Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World. In their model of a quantum signature scheme, the adversary may decide to request a signature on a superposition of all messages and receive a superposition of signatures for all messages. Measuring then gives the adversary a valid signature. Now, the challenge for the adversary is to output more signatures than he queried (and possibly measured). Mark also described a new technique called intermediate measurement, where you measure some small subset of the qubits (thus collapsing the superposition) and continue the algorithm. This is a useful tool in showing the security of schemes in this quantum model.
The last talk of the morning was about Everlasting Multi-Party Computation, given by Dominique Unruh. Everlasting secure protocols remain secure against adversaries that have unlimited computation power as soon as the protocol ends. As long as the protocol is secure against computationally bounded adversaries now, i.e., while running, it remains secure forever. In the face of several impossibility results (even when using trusted set-up information), Dominique showed a solution for everlasting MPC in the quantum model using signature cards.
CHES 2013: Wednesday afternoon
Wednesday, August 21, 2013
Why does the web still run on RC4?
- Take your solution, and make sure you have stable high-speed, public domain implementations of it in as many products as possible.
- Wait some time; the longer the better.
- Go break the old stuff you want to replace.
Crypto 2013: Monday Afternoon
After a brief session on the foundations of hardness (which I will not blog about due to the hardness of foundations), the Monday afternoon was dedicated to Cryptanalysis. There were seven talks on the topic (thankfully spread over two sessions with a break in between).
The first talk was, surprisingly, against cryptanalysis. Marc Stevens introduced a new paradigm which he had coined "counter-cryptanalysis". For this work he received the conference's Best Young-Researcher Paper Award and Marc did not disappoint with his presentation.
Previously Marc (and coauthors) had won the Best Paper Award at Crypto'09 for creating a rogue CA certificate. For that work, Marc and his coauthors had created two messages that collided under MD5. One message was relatively harmless and was submitted to a higher-level certificate authority for signing; the other message was rather less harmless and contained the credentials of a new, rogue certificate authority. Due to the collision, the signature on the harmless message is equally valid for the harmful message. At the time, this attack clearly demonstrated a serious problem with the hash-then-sign infrastructure based on now-broken hash functions.
As Marc explained, migration from MD5 (and SHA-1) to SHA-2 or beyond is in practice not as easy as it might sound from a theoretical point of view. Any individual signer could migrate, but as long as a few do not migrate, verifiers will still have to accept the old, insecure schemes. An attacker would then seek out the signers using outdated crypto and the problem persists. A verifier could stop accepting weak signatures, but again there is a practical problem as there are simply too many valid signatures around to invalidate them.
The solution proposed by Marc is to keep accepting old, valid signatures, yet to refuse both to produce and to accept signatures on forged messages. While filtering out forged messages sounds like a great suggestion, the obvious question is how to determine whether a message is honest or forged. This is where Marc's expertise and the new paradigm of counter-cryptanalysis come into play. As Marc defined it, the goal is "to detect cryptanalytic attacks at the cryptographic level; the design itself should not change."
To understand why this is at all possible, some background on modern collision attacks is required. It turns out that dedicated attacks are highly specialized and tend to introduce certain unavoidable anomalies. For MD5 there are a small number of differential steps used by all known attacks, regardless of the full differential trail or path. Moreover, it is hard to see how to avoid these differential steps without blowing up the attack complexity considerably. If a message pair was crafted to lead to a collision, then given just one of these messages, guessing the right differential at the right place allows efficient recovery of the companion message and the full differential path. On the other hand, for a normal, honest message no collision will be found this way and the message will pass the test. Thus a hash function can be augmented by this additional test, which returns a single bit to indicate whether the message led to a collision or not. This extra check can be used online by both the signer and the verifier, to avoid and limit the damage of forgeries.
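Roughly, and heavily simplified, the detection step looks like the following sketch (my own rendering, not Marc's actual code; the real check runs over the genuine MD5/SHA-1 compression functions and the published list of differentials):

```python
import hashlib

def compress(ihv, block):
    # Stand-in for the real MD5 compression function (16-byte IHV, 64-byte block).
    return hashlib.md5(ihv + block).digest()

def xor_bytes(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

# In the real detector this list holds the small number of message-block and
# chaining-value differences used by all known MD5 collision attacks.
KNOWN_DIFFERENTIALS = []   # [(delta_ihv, delta_block), ...]

def block_is_suspicious(ihv, block):
    out = compress(ihv, block)
    for d_ihv, d_block in KNOWN_DIFFERENTIALS:
        # Reconstruct the hypothetical companion block and see whether the
        # compression outputs collide; an honest message essentially never does.
        sibling = compress(xor_bytes(ihv, d_ihv), xor_bytes(block, d_block))
        if sibling == out:
            return True
    return False
```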
The final part of the talk concerned the offline uses of the new technique. Recently a new and highly sophisticated virus, Flame, was discovered. It specifically targeted the Middle East and had gone undetected for over 5 years. What makes it interesting from a cryptographic perspective is the way it operated. It turned out that somewhere in the Microsoft Windows certificate chain there was a hash-then-sign service authenticating the Windows Update functionality. Flame (or its developers) had created an illegitimate sub-CA certificate using a collision attack on MD5. Although only one of the involved messages was known, using counter-cryptanalysis Marc could recover the colliding message (which must have been signed directly at some point) as well as the differential path. Surprisingly, the path was novel, indicating that some dedicated cryptanalytic effort had gone into creating the Flame virus.
The second talk of the session, and the last I will be blogging about, concerned attacks on locking systems. The presentation was given by Daehyun Strobel, who represented a fairly large team of collaborators. Traditional, mechanical locks can be picked quite easily and as a result electronic locks and access control are becoming increasingly popular. One of the more popular models is the SimonsVoss 3060 G2, which consists of a lock and a transponder to open the lock. The system supports a large number of digital locking cylinders and transponders, so the right people will be able to access the right rooms. The system is based on proprietary software.
The first step performed by the Bochum team was to open up some of the hardware. This revealed several components, and further investigation showed that the PIC16F886 microchip carried out all the cryptographically relevant operations. SimonsVoss had protected this chip by disabling straightforward code extraction. However, by adapting a known fault attack involving decapsulation of the chip and subsequent bathing in UV-C light, Daehyun and his colleagues managed to disable the standard protection mechanism. This allowed reconstruction of the authentication protocol, which turned out to be a multi-round challenge-based protocol using two proprietary functions.
Both functions are based on a modified version of DES combined with a proprietary obscurity function. Closer analysis revealed several weaknesses that, when combined, led to an attack on a live lock taking a measly 0.65 seconds with a standard PC. SimonsVoss has since released a patch to update the challenge protocol. Neither Daehyun nor his coauthors had yet investigated this new challenge protocol or, indeed, the update mechanism itself.
Crypto 2013: Tuesday Afternoon
As with many deployed key-exchange protocols, the handshake and record layers overlap: the key confirmation or "finished" messages are sent over the established channel to perform the final steps of authentication. As a result the handshake protocol cannot be proved secure under standard notions of security for key exchange like that of Bellare and Rogaway. Previous work has been performed on analysing the TLS handshake protocol; one of the first analyses was performed by Bristolians: Morrissey, Smart and Warinschi. In this work they considered a truncated version of TLS, which they were able to prove secure. The unfortunate problem with this approach is that it no longer accurately captures exactly how TLS operates in practice. More recently Jager et al. defined the notion of Authenticated and Confidential Channel Establishment (ACCE) and proved the security of one particular mode of the handshake protocol, namely TLS-DHE with mutual authentication. This configuration is not that commonly used in practice, which leaves the security of many other, more widely used versions of the handshake protocol unproven.
The goal of the handshake protocol is to establish keys which will ensure data confidentiality and integrity, as well as providing entity authentication. The key exchange is either based on Diffie-Hellman or uses RSA to send an encrypted secret (from client to server) from which the application key is established. The authentication may either be server-only (more common) or mutual (where the client must additionally provide a certificate).
In Krawczyk et al.'s paper they look to prove security for all possible modes of the handshake protocol. In the talk Hoeteck provided a detailed description of the case of TLS-RSA. In summary the protocol in this case is as follows. The client will send a randomly chosen pre-master secret encrypted using PKCS#1 v1.5, from which the server can derive the application key. This is then followed by the sending of the "finished" messages. By using PKCS#1 v1.5 in this manner the handshake protocol would in fact be vulnerable to Bleichenbacher's attack, allowing an adversary to determine the pre-master secret and hence also the application keys used. To resist this attack TLS applies an ad-hoc fix where decryption failures are suppressed (the attack relies on a ciphertext validity oracle). Since the underlying encryption scheme cannot be proved CCA secure, the authors instead use the notion of constrained CCA security introduced by Hofheinz and Kiltz. From this they can then prove the security of the entire handshake and TLS protocol for both the server-only and mutual authentication cases. Their final security results are stated relative to a version of the ACCE notion. In the paper they go on to prove (in a similar way) the security of TLS-DH, TLS-DHE and also TLS-CCA for CCA-secure encryption schemes such as RSA-OAEP.
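To fix ideas, here is a very rough Python sketch of the TLS-RSA flow being analysed (my own simplification: real TLS uses its own PRF, certificates and PKCS#1 v1.5 encryption of the pre-master secret under the server's RSA key; that transport step is only indicated by a comment here):

```python
import hmac, hashlib, secrets

def kdf(pms, label, nonces):
    # Stand-in for the TLS PRF: derive key material from the pre-master secret.
    return hmac.new(pms, label + nonces, hashlib.sha256).digest()

client_nonce, server_nonce = secrets.token_bytes(32), secrets.token_bytes(32)
pms = secrets.token_bytes(48)                  # chosen by the client
# --> in TLS-RSA the client sends Enc_PKCS1v1.5(server_rsa_pk, pms)
nonces = client_nonce + server_nonce
app_key = kdf(pms, b"key expansion", nonces)   # both sides derive this

transcript = b"hash of all handshake messages so far"
client_finished = hmac.new(app_key, b"client finished" + transcript,
                           hashlib.sha256).digest()
server_finished = hmac.new(app_key, b"server finished" + transcript,
                           hashlib.sha256).digest()
# The "finished" MACs are sent over the freshly keyed record layer, which is
# exactly the overlap that rules out a standard Bellare-Rogaway style proof.
```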
In their work the authors have provided a complete analysis of the core of TLS, capturing a wide class of attacks within their model. It should be noted though that they have omitted some aspects of TLS from their analysis, such as ciphersuite negotiation, renegotiation and compression (the CRIME attack). Hoeteck finished by asking the question "Is TLS fundamentally flawed?". His answer to this was no, despite TLS violating some basic principles of how cryptography should be used and not moving to a CCA-secure encryption scheme (as a reaction to Bleichenbacher's attack). The design of TLS-RSA is fragile but is shown to be provably secure, albeit somewhat accidentally.
Crypto 2013: Tuesday morning
The first talk dealt with the concept of fairness. A protocol is fair if an adversary who learns the output of the function cannot prevent the other parties from receiving the same output, by aborting the protocol, for instance. This is naturally a highly desirable property for secure computation, but even when it comes to the most basic protocols, our understanding of fairness is lacking. For example, it is impossible to construct a fair protocol for the coin flipping functionality, where two parties jointly generate a random bit. The talk gave some new results here, such as showing that fair coin flipping remains impossible even given a protocol that fairly computes random-OT, using a nice application of spectral graph theory.
Later in the morning, topics moved to leakage resilience and then symmetric encryption, which was (perhaps surprisingly) my favourite session of the day, dealing with topics related to cloud security and format-preserving encryption. The last few years have seen a massive surge of interest in techniques for computing on encrypted data and their application to cloud security. Whilst FHE as a concept is very nice and general, it is currently quite impractical, particularly for the oft-touted application of encrypted database search, since it requires computation at least linear in the database size. An alternative for this application is symmetric searchable encryption, as Hugo Krawczyk explained.
Their scheme allows efficient, arbitrary boolean queries on keywords, with both the queries and database remaining semantically secure against the cloud. Most impressively, it has been tested on 10TB databases with 100 million records and over 100 billion indexed record-keyword pairs. The key improvement over previous work is efficient support for conjunctions (multiple keywords), which run in time proportional to the number of records matching the least frequent search term, whereas previous schemes allowing this were linear in the entire database size. However, information is still leaked on the access pattern through repeated results and queries, which seems a difficult problem to efficiently overcome.
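A toy sketch of why conjunctions can be made to cost only as much as the rarest term (my own drastic simplification, not the paper's actual protocol): the server walks the posting list of the least frequent keyword and checks the remaining keywords via per-(record, keyword) tags.

```python
import hmac, hashlib

def tag(key, record_id, keyword):
    return hmac.new(key, f"{record_id}|{keyword}".encode(), hashlib.sha256).digest()

def build_index(key, records):                 # records: {record_id: [keywords]}
    postings, tags = {}, set()
    for rid, kws in records.items():
        for kw in kws:
            postings.setdefault(kw, []).append(rid)
            tags.add(tag(key, rid, kw))
    return postings, tags                      # stored (suitably encrypted) at the server

def conjunctive_search(key, postings, tags, keywords):
    rare = min(keywords, key=lambda kw: len(postings.get(kw, [])))
    return [rid for rid in postings.get(rare, [])           # cost ~ |rarest list|
            if all(tag(key, rid, kw) in tags for kw in keywords)]

records = {1: ["crypto", "2013"], 2: ["crypto"], 3: ["2013", "ches"]}
postings, tags = build_index(b"k", records)
print(conjunctive_search(b"k", postings, tags, ["crypto", "2013"]))   # [1]
```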
Another highlight of the symmetric crypto session was Scott Yilek's talk on fully secure small domain encryption. This is applicable to format-preserving encryption, in which a ciphertext 'looks like' a plaintext. So for instance, an encrypted credit card number still looks like a credit card number, which is very important for companies upgrading legacy systems to support encryption. The difficulty here is constructing a block cipher for arbitrary small domains, e.g. {0,1}^20 or {0, ..., 9}^16.
Scott's talk was very well presented (I highly recommend checking out the video if you missed it), with diagrams of playing cards for a simple recursive shuffling algorithm used to build a new primitive called a pseudo-random separator (PRS). To build a fully secure PRP there is the 'icicle' construction, which chains together several PRSs, illustrated by a diagram resembling icicles hanging from a sloped ceiling. The construction improves by a log(n) factor on previous fully secure schemes, but other schemes that are not fully secure are still more efficient.
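Purely as an illustration of the 'cipher as a shuffle' view (not the paper's PRS or icicle construction): on a tiny domain one can afford to materialise the whole keyed permutation, for instance by sorting the domain under a keyed PRF.

```python
import hmac, hashlib

def small_domain_perm(key, domain):
    # Rank every domain element under a keyed PRF and use the induced
    # shuffle as the permutation table; only feasible because the domain is small.
    rank = lambda x: hmac.new(key, repr(x).encode(), hashlib.sha256).digest()
    shuffled = sorted(domain, key=rank)
    return dict(zip(domain, shuffled))

domain = [f"{i:016d}" for i in range(10**4)]   # toy stand-in for 16-digit numbers
enc = small_domain_perm(b"secret key", domain)
print(enc["0000000000001234"])                 # ciphertext still looks like a card number
```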
Monday, August 19, 2013
Crypto 2013: Session One
Tuesday, August 13, 2013
DIAC 2013: Directions in Authenticated Ciphers
After lunch Kenny Paterson gave the invited talk on AE modes used in TLS. In his talk he spoke about a number of attacks on TLS, concentrating on two of his own recent attacks. Firstly, the Lucky 13 attack, which was published at IEEE S&P earlier this year. This attack exploits the use of CBC mode, a 13-byte header, the MAC-encode-encrypt construction and the timing difference between the MAC checks performed in the two situations of valid and invalid padding (due to an extra compression function calculation). Secondly, he briefly described new work on RC4 in which they determined all the statistical biases within the keystream and how to utilise these to perform an attack on TLS. This work will be presented later this week at USENIX Security.
During the rest of the afternoon there were a further five talks. The final presentation was given by Begul Bilgin. This talk introduced a new AE scheme called FIDES (named after the goddess of trust), which was designed for use in constrained environments. Due to its online, single-pass structure it has certain efficiency advantages and uses a much smaller number of gates compared to other lightweight schemes such as Hummingbird and Grain. The construction uses a sponge-like structure and comes in versions with either an 80-bit or a 96-bit key. The day was concluded with a meal at a nearby Greek restaurant.
On Tuesday there were five talks. The third was given by Shay Gueron, who spoke about the software performance of AES-GCM. The purpose of this talk was to give a baseline against which entrants to the CAESAR competition can be compared. The current best implementations of GCM use Intel's AES-NI instruction set and PCLMULQDQ (used to speed up the binary field computations needed for GHASH). At the moment speeds are limited by the GHASH calculation, which accounts for 70% of the computation time on Sandy Bridge and Ivy Bridge processors and 40% on Haswell processors. In summary, Gueron felt that schemes entered into the CAESAR competition should aim for a performance of below 1 cycle per byte.
The final presentation of the workshop was a special "bonus" talk given by Phil Rogaway. Here he argued that the generic compositions of Encrypt&MAC, MAC-then-Encrypt and Encrypt-then-MAC studied by Bellare and Namprempre (BN) should not be the full story of how we now view and study AE. When we think of AE we may consider nonce- and IV-based schemes; the work of BN only considers probabilistic schemes. In addition, it is now common knowledge that incorporating associated data into our designs and analyses is important. This then leads to looking at various other ways of combining primitives to achieve AE. Take nonce-based encryption and a MAC: here the three compositions E&M, EtM and MtE can all be used to obtain a secure nonce-based AE, but with a small caveat -- only if encryption is the inverse of decryption (this would require a change to the current definition of nonce-based encryption). Next take an IV-based encryption scheme and combine it with a (single- or multi-input) MAC to obtain a nonce-based AE scheme. Rogaway stated that in this situation there are a total of six secure constructions, with a seventh method for which they are (as yet) unsure whether security is obtained. Examples of current modes fitting these compositions are EAX and SIV. Finally he discussed whether providing a direct construction from a permutation to a nonce-based AE is actually any better than a construction going via other primitives such as a block cipher.
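As a reminder of what such a composition looks like, here is a hedged Python sketch of Encrypt-then-MAC for a nonce-based scheme (my own toy: a PRF-based stream cipher for encryption and HMAC over nonce, associated data and ciphertext; it illustrates the composition pattern rather than any particular standard).

```python
import hmac, hashlib

def keystream(key, nonce, length):
    # PRF-based stream cipher keystream (counter-mode style).
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def ae_encrypt(enc_key, mac_key, nonce, ad, msg):
    ct = bytes(m ^ k for m, k in zip(msg, keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ad + ct, hashlib.sha256).digest()
    return ct, tag

def ae_decrypt(enc_key, mac_key, nonce, ad, ct, tag):
    expected = hmac.new(mac_key, nonce + ad + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                                   # reject before decrypting
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

ct, tag = ae_encrypt(b"ek", b"mk", b"nonce-01", b"header", b"attack at dawn")
print(ae_decrypt(b"ek", b"mk", b"nonce-01", b"header", ct, tag))
```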