## Thursday, January 28, 2016

### A Modular Framework for Building Variable-Input-Length Tweakable Ciphers

For this week's study group I presented a paper by Shrimpton and Terashima from AsiaCrypt 2013 [1]: A Modular Framework for Building Variable-Input-Length Tweakable Ciphers. The authors take a modular approach to building an Authenticated Encryption (AE) scheme, starting with some relatively simple primitives and extending their functionality until full AE is supported. The paper can be broken up into four parts:
1. Introduce our primitives: Tweakable Block Ciphers (both Beyond Birthday fixed-input-length and Variable-input-length)
2. Combine these with their new Protected IV (PIV) to form an Arbitrary Input Length Tweakable Block cipher (AIL TBC)
3. Provide two explicit examples of such constructions
4. Build a secure AE scheme out of a VIL TBC
From a practical point of view, Part 3 is arguably the most interesting, because it provides explicit examples of secure constructions and (when combined with 4) yields an AE scheme that may be secure beyond the birthday bound (which is the point at which most symmetric security proofs break down).
This blog post will mainly focus on Part 2, and readers are encouraged to read the full paper for more details.

### Background

Very briefly, let us sketch a few key notions:
• A TBC (Tweakable Block Cipher) acts like a family of block ciphers, one for each tweak. If it is secure, any change in the tweak should make the TBC behave like a completely independent cipher; a secure TBC is called an STPRP (Strong Tweakable Pseudo-Random Permutation).
• A primitive is VIL (Variable Input Length) secure if it remains secure when queried on messages of different lengths.
• A primitive is AIL (Arbitrary Input Length) secure if it remains secure when queried with messages of any (single) length.
• Authenticated Encryption was discussed in a blog post last year[2] and (roughly) corresponds to secure communication between two people sharing a key.
• The Birthday Bound on n bits is roughly $q^2/2^n$ after $q$ queries, and is the point at which many symmetric security results break down. A scheme that remains secure for $q > 2^{n/2}$ is known as Beyond Birthday Bound (BBB) secure.
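To make the bound concrete, here is a small stdlib-only Python sketch (an illustration, not from the paper) that computes the exact collision probability for $q$ uniform $n$-bit values, showing that collisions become likely once $q$ approaches $2^{n/2}$:

```python
import math

def collision_prob(q: int, n: int) -> float:
    """Exact probability that q uniform n-bit values contain a repeat:
    1 - prod_{i<q} (1 - i/2^n), computed in log space for stability."""
    log_none = sum(math.log1p(-i / 2.0**n) for i in range(q))
    return 1.0 - math.exp(log_none)

# For n = 32 the birthday bound 2^(n/2) is 2^16 = 65536 queries:
for q in (2**12, 2**14, 2**16, 2**18):
    print(f"q = 2^{q.bit_length() - 1:2d}: Pr[collision] = {collision_prob(q, 32):.4f}")
```

The probability is negligible well below $2^{n/2}$ queries and essentially 1 well above it, which is why proofs with a $q^2/2^n$ term give no guarantees past that point.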

### The Protected IV (PIV) construction

The key aim of the paper was to build a secure VIL TBC (i.e. an STPRP) from a fixed-width TBC with variable-length tweak (F) and a VIL TBC (V). To do so, the authors describe the PIV (Protected-IV) scheme. A diagram of the construction is given to the right, and we thank the authors for permission to reproduce their graphic. PIV can be seen as an extension of the SIV scheme [3], except that by keeping the IV secret (re-encrypting, or "protecting", it) and letting it carry some information about the plaintext, the authors have managed to remove the ciphertext expansion required for SIV security.
The most interesting thing about the scheme is that V does not have to be secure as a VIL TBC: V only has to be secure if the tweak is never repeated (similar to the idea of a nonce-based authenticated encryption scheme). This makes V much easier to construct, with (for example) a slight variant of counter mode sufficing.
The idea behind the proof is relatively intuitive, built around the fact that (because F is secure) the IV is random and does not repeat (up to a birthday-bound term in |IV|). So V is always called with a unique tweak, securely encrypting the X_r (or decrypting the Y_r) content, and its output is suitably random, making the whole scheme a secure STPRP. Thus the security of the scheme reduces to the security of F, the security of V, and a birthday attack on the IV.
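To make the data flow concrete, here is a toy Python sketch of the PIV composition. The primitives are placeholders of my own choosing, not the paper's instantiations: F is a throwaway HMAC-based Feistel standing in for a fixed-width TBC with variable-length tweak, and V is a tweak-keyed XOR stream standing in for the counter-mode-style component. Only the three-layer wiring follows the construction described above:

```python
import hmac
import hashlib

BLOCK = 16  # F works on 16-byte blocks; the IV is one F-block wide

def _prf(key: bytes, *parts: bytes) -> bytes:
    """Length-prefixed HMAC-SHA256 over several byte strings."""
    h = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def F_enc(key: bytes, tweak: bytes, block: bytes) -> bytes:
    """Toy fixed-width TBC with variable-length tweak: a 4-round
    Feistel over 8-byte halves with an HMAC round function."""
    L, R = block[:8], block[8:]
    for r in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _prf(key, bytes([r]), tweak, R)))
    return L + R

def F_dec(key: bytes, tweak: bytes, block: bytes) -> bytes:
    L, R = block[:8], block[8:]
    for r in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in zip(R, _prf(key, bytes([r]), tweak, L))), L
    return L + R

def V(key: bytes, tweak: bytes, msg: bytes) -> bytes:
    """Toy VIL component: XOR with a keystream derived from (key, tweak).
    Only sensible when the tweak never repeats -- exactly the relaxed
    requirement PIV places on V.  XOR makes it its own inverse."""
    out, ctr = b"", 0
    while len(out) < len(msg):
        out += _prf(key, tweak, ctr.to_bytes(4, "big"))
        ctr += 1
    return bytes(a ^ b for a, b in zip(msg, out))

def piv_enc(k1: bytes, k2: bytes, tweak: bytes, x: bytes) -> bytes:
    xl, xr = x[:BLOCK], x[BLOCK:]
    iv = F_enc(k1, tweak + xr, xl)   # IV depends on the whole (tweak, plaintext)
    yr = V(k2, iv, xr)               # V sees a fresh, secret tweak: the IV
    yl = F_enc(k1, tweak + yr, iv)   # re-encrypt ("protect") the IV
    return yl + yr

def piv_dec(k1: bytes, k2: bytes, tweak: bytes, y: bytes) -> bytes:
    yl, yr = y[:BLOCK], y[BLOCK:]
    iv = F_dec(k1, tweak + yr, yl)
    xr = V(k2, iv, yr)
    xl = F_dec(k1, tweak + xr, iv)
    return xl + xr

k1, k2 = b"\x01" * 16, b"\x02" * 16
msg = b"a message that is longer than one block"
ct = piv_enc(k1, k2, b"tweak", msg)
assert piv_dec(k1, k2, b"tweak", ct) == msg
assert piv_enc(k1, k2, b"other tweak", msg) != ct  # tweak change alters everything
```

Note how decryption unwinds the layers in reverse, recovering the IV from the left ciphertext block before V is ever invoked; the IV itself never appears in the ciphertext.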

### Instantiations and Building Authenticated Encryption

To close, the paper provides some instantiations and explains how to extend the Encode-then-Encipher [4] concept and proof to achieve strong Authenticated Encryption from an STPRP. We didn't have time to discuss these elements in detail, but observed that to achieve Beyond Birthday security the IV has to be twice as wide as the birthday bound we seek to beat.
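The Encode-then-Encipher idea itself is simple to sketch: append fixed redundancy to the message, encipher the whole thing with a wide-block tweakable cipher, and reject on decryption if the redundancy is damaged. In the sketch below the wide-block cipher is a toy four-round HMAC Feistel of my own (an assumption of this sketch, not one of the paper's instantiations); only the encode/check pattern is the point:

```python
import hmac
import hashlib

TAG = 16  # bytes of zero redundancy appended before enciphering

def _prf(key: bytes, *parts: bytes) -> bytes:
    h = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def _stream(key: bytes, n: int, *parts: bytes) -> bytes:
    """Expand the PRF to n bytes with a counter."""
    out, ctr = b"", 0
    while len(out) < n:
        out += _prf(key, ctr.to_bytes(4, "big"), *parts)
        ctr += 1
    return out[:n]

def wide_enc(key: bytes, tweak: bytes, msg: bytes) -> bytes:
    """Toy wide-block tweakable cipher: 4-round unbalanced Feistel."""
    h = len(msg) // 2
    L, R = msg[:h], msg[h:]
    for r in range(4):
        L, R = R, bytes(a ^ b for a, b in
                        zip(L, _stream(key, len(L), bytes([r]), tweak, R)))
    return L + R

def wide_dec(key: bytes, tweak: bytes, msg: bytes) -> bytes:
    h = len(msg) // 2
    L, R = msg[:h], msg[h:]
    for r in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in
                     zip(R, _stream(key, len(R), bytes([r]), tweak, L))), L
    return L + R

def ete_encrypt(key: bytes, tweak: bytes, msg: bytes) -> bytes:
    return wide_enc(key, tweak, msg + bytes(TAG))      # encode, then encipher

def ete_decrypt(key: bytes, tweak: bytes, ct: bytes):
    pt = wide_dec(key, tweak, ct)
    if pt[-TAG:] != bytes(TAG):
        return None   # redundancy damaged: reject as a forgery
    return pt[:-TAG]

key = b"\x07" * 32
ct = ete_encrypt(key, b"T", b"attack at dawn")
assert ete_decrypt(key, b"T", ct) == b"attack at dawn"
bad = bytearray(ct); bad[0] ^= 1
assert ete_decrypt(key, b"T", bytes(bad)) is None      # tampering is caught
assert ete_decrypt(key, b"U", ct) is None              # so is a wrong tweak
```

Because an STPRP scrambles the whole wide block, flipping any ciphertext byte (or changing the tweak) randomises the plaintext, so the zero redundancy survives a forgery only with negligible probability.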

### References

1. Shrimpton & Terashima, A Modular Framework for Building Variable-Input-Length Tweakable Ciphers
2. 52 Things #27: What is AEAD?, from this blog
3. Rogaway & Shrimpton, Deterministic Authenticated-Encryption: A Provable-Security Treatment of the Key-Wrap Problem
4. Bellare & Rogaway, Encode-then-encipher encryption: How to exploit nonces or redundancy in plaintexts for efficient cryptography

## Monday, January 18, 2016

### Sixth Bar-Ilan University Winter School on Cryptography

This is the first of a two-part blog post, collaboratively written for the ECRYPT-EU blog by Eduardo (University of Bristol), Marie-Sarah, Matthias and Ralph; Part 2 can be found there.

Earlier this month, from 4-7 January, a few ECRYPT-NET fellows and about a hundred others attended the Bar-Ilan University winter school on cryptography. It took place in Ramat Gan, a suburb of Tel Aviv, at the Kfar Maccabiah hotel and conference centre (named after the Maccabiah, or Jewish Olympics, that take place there every four years). The school was intense, but well organized. It was split into two parts, verifiable computation and special encryption, and so will be our coverage of it.

### Part 1: Verifiable Computation

Michael Walfish, Yael Tauman Kalai, and Eran Tromer guided us through methods for verifying the outsourced computation of functions. We particularly appreciated the crystal-clear overview of all sessions, given by Michael Walfish, that emphasized how the content of the talks fit together.

Let's set the scene for verifiable computation: a client (the verifier) wants to outsource the computation of a function f to a server (the prover) who has more computing resources. But how does the verifier know that the value returned by the prover is actually the result of applying the function f to the purported inputs? A malicious or lazy server could modify the computation to gain an advantage, for instance to reduce the cost of operating its system.

Verifiable computation of f(x) comprises the following two phases:
1. Program representation (or arithmetization): The verifier (or a party he trusts) expresses the function f as a set of arithmetic constraints over a field, in terms of the input x, the output y, and intermediate variables z. Each of x, y, and z may be a vector, e.g., x=(x1,x2,x3). Typically, the constraints are required to be degree-2 polynomials that evaluate to 0 exactly when the assignment is consistent with f.
2. Solving and proving: The server must prove to the client that the solution it returns, y, is correct. The landscape of proof protocols shows a trade-off between efficiency, expressiveness and additional properties like zero-knowledge or non-interactivity. The speaker himself recently wrote a survey which is a very nice introduction to the state of the art and these trade-offs.
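As a toy illustration of the arithmetization step, here is how a small function (f(x) = x^3 + x + 5, a hypothetical example rather than one from the talks) flattens into degree-2 constraints over a prime field, with the prover supplying the intermediate variables:

```python
P = 2**61 - 1  # a prime; the constraints live in the field GF(P)

def constraints(x, y, z1, z2):
    """f(x) = x^3 + x + 5 flattened into degree-<=2 constraints.
    A claimed (x, y, z1, z2) is valid iff every polynomial is 0 mod P."""
    return [
        (z1 - x * x) % P,        # z1 = x^2
        (z2 - z1 * x) % P,       # z2 = x^3   (degree 2 in the variables)
        (y - (z2 + x + 5)) % P,  # y  = x^3 + x + 5
    ]

def prove(x):
    """The prover's side of step 2: solve for the intermediates."""
    z1 = (x * x) % P
    z2 = (z1 * x) % P
    y = (z2 + x + 5) % P
    return y, z1, z2

y, z1, z2 = prove(3)
assert y == 35 and all(c == 0 for c in constraints(3, y, z1, z2))
# A wrong claimed output violates at least one constraint:
assert any(c != 0 for c in constraints(3, 36, z1, z2))
```

Real systems then compress the check "all constraints are 0" into a short probabilistic proof rather than re-evaluating every constraint, which is where the proof protocols of step 2 come in.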

Yael Kalai's talks took a more theoretical approach, guiding us through the evolution of Probabilistically Checkable Proofs (PCPs). She emphasized the importance of "good" security assumptions, where "good" requires, in her view, at least that the underlying assumptions can be efficiently falsified. These theoretical worries are well founded, as most of today's verifiable computation protocols rely on SNARKs (Succinct Non-interactive ARguments of Knowledge), which cannot be proved secure via black-box reductions from (efficiently) falsifiable assumptions.

Yael's talks also provided very interesting and intuitive examples. To give one, suppose that Peggy and Victor are playing chess. After a number of moves, Peggy (the prover) wants to prove to Victor (the verifier) that she has a checkmate. If Victor fails to see it, it is easier for Peggy to convince him by continuing the game (an interactive proof) until he does, rather than to explain all the possible combinations of moves without moving any piece (a non-interactive proof). This intuition about the power of interaction extends to the rest of the proof systems. Finally, she even showed us how the subject fits into the quantum framework, introducing us to the notion of non-signalling adversaries.

Eran Tromer picked up where Michael Walfish left off and focused on the details of SNARKs and how they are actually constructed. Among other things, he has written libsnark, a C++ library that is widely used in verifiable computation systems relying on SNARKs. He also showed us a potential application for them, called Zerocash. Zerocash is a protocol that provides a privacy-preserving version of Bitcoin. In contrast to Bitcoin, where all transactions are public in the block chain, Zerocash does not reveal information about a payment's origin, destination or amount; the correctness of the transaction is instead guaranteed via a zero-knowledge proof.

### Part 1.5: Excursion to Caesarea and Binyamina Winery

*Sun starting to go down in Caesarea.*

Tuesday afternoon, we made an excursion to the remains of Caesarea, a Roman city on the Mediterranean coast that was built by Herod over 2000 years ago. To say that it had a tumultuous history would be an understatement. Our tour included a walk through a "graveyard" of columns and capitals, the amphitheatre (whose first row is still intact), and the hippodrome.

Next, we took an informative tour of the Binyamina winery. We learned that grapes are crushed with a flexible rubber material to simulate the skin of feet. For red wine, the grapes are fermented (skin, seeds, and all) before being crushed. For white wine, seeds and skin are removed (by sedimentation) after pressing, then fermented. The tannin (bitter-tasting substances) in wine comes from the seeds, skin, and maybe the material of the barrel in which it is matured. Whether wine is aged or matured in an American oak (sweeter) or French oak (more tannins, adds more complex flavours) barrel affects the final product. Tannins prevent oxidation, so red wine (with more tannins) can be matured longer. Stopping fermentation early makes wine more sweet.

After learning how wine is made, we concluded the day by learning how it tastes (using our five senses!) and enjoying a generous dinner.

The second and last part of the post can be found in the ECRYPT-EU blog.

## Friday, January 8, 2016

This week saw Real World Crypto 2016 in Stanford, California. The highlight was the first awarding of the Levchin prize for work in the field of practical cryptography. The prize is donated by Max Levchin, a founder of PayPal, and two such prizes of $10,000 will be awarded annually. The first recipients of the award are:
• Phil Rogaway, for his long-standing work on developing practical cryptographic algorithms, the development of practice-oriented provable security, format-preserving encryption, and numerous other algorithms which are used every day to secure our online world.
• The miTLS team, for their work on producing a formal analysis of the TLS protocol specification, and in the process finding a number of real-world attacks on this protocol, such as the triple-handshake attack.

The real purpose of the award, though, is to highlight to the wider community that one can have a deep and lasting impact on society by working in an area as mathematically opaque as cryptography. Awards such as this, and events such as Real World Crypto, are designed to raise the profile of applied work in this space and to encourage people to apply their skills to solving the pressing security problems affecting our online world.

In the rest of the conference there was an amazing program of interesting talks (although I would say so, since I was on the panel for selecting them). The highlight of day one for me was the talk by Adrienne Porter Felt on usability issues related to TLS failures in Google Chrome. By collecting numerous bug reports from Chrome users, the team at Google found that most errors are not due to poor server configurations (indeed, most errors occur when users connect to sites such as Google or Facebook), but are due to poor client configurations. For example, a significant proportion of errors are caused by device times being incorrect. So, lesson: make sure you set your clocks correctly.
One highlight of the second day was Hovav Shacham's talk on the recently discovered backdoor in Juniper's ScreenOS. The initial backdoor was rather uninteresting, in that if a certain key combination was presented, a user would be given enhanced privileges. However, on discovery of this backdoor, Hovav and his colleagues found a more interesting potential backdoor based on the Dual-EC PRNG that could compromise the VPN traffic that Juniper devices are used to protect. The interesting part is that previous cryptographic scrutiny of Dual-EC had focused on products which explicitly listed Dual-EC usage as part of their FIPS certification. The Juniper product had not explicitly listed that it used Dual-EC, so the discovery of a Dual-EC-based potential backdoor could imply that many more products, by many more vendors, could be using the Dual-EC PRNG.

The talks generating the most interest on the third day were the ones explaining the new Intel SGX technology. This technology allows applications to run in an "encrypted enclave" on an Intel chip: data is held encrypted in memory and is only decrypted as it enters the chip to be processed; when it returns to memory it is automatically encrypted. At its heart this idea goes back to the original paper on homomorphic encryption by Rivest et al. from the 1970s. However, the new Intel technology has a number of additional features which make it suitable for a modern environment. The first talk, by Rebekah Leslie-Hurd, introduced the overall technology and some of the attestation and communication issues that arise in authenticating the enclaves and allowing enclaves to talk to each other. The second talk, by Shay Gueron, discussed the details of how the memory is encrypted in a way which respects the cache architecture of modern microprocessors.
## Saturday, December 5, 2015

### Secure Computation from Millionaire

In the last session of AsiaCrypt 2015, Muthuramakrishnan Venkitasubramaniam presented his paper "Secure Computation from Millionaire", joint work with abhi shelat.

Given a function f, the standard way to perform secure computation for f consists of two steps: first, f is transformed into either an arithmetic/Boolean circuit or a RAM program; second, a generic secure computation protocol (for example Yao/GMW-based, or information-theoretic) is applied to perform the actual evaluation. In his talk, Muthuramakrishnan described a different approach to designing secure computation protocols.

The method he described first appeared at Eurocrypt 2004, in the work of Aggarwal, Mishra and Pinkas, where it was used for computing the median: Alice holds a private dataset S_A and Bob a private dataset S_B. Each party computes the median of its own dataset, say m_A for Alice and m_B for Bob, and they jointly compare them. If, for example, m_A < m_B, then Alice deletes all the values in S_A smaller than m_A and Bob deletes all the values in his dataset bigger than m_B. Thereafter, they recursively repeat this step on the smaller datasets. By replacing each comparison with a small secure protocol, it is possible to prove security of the overall protocol in the semi-honest model. This technique was later generalized by Brickell and Shmatikov and applied to solve the shortest path problem.

The natural question which now arises is: what else can we compute using the comparison (millionaire) function? The authors generalized the techniques from both of the aforementioned papers to a large class of problems (matroid optimization, convex hull, job scheduling, set cover) that can be seen as instantiations of the greedy-algorithms paradigm. The main idea is to reduce these problems to iterative comparison operations, gradually revealing the answer, as in the median computation.
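The median step can be sketched in a few lines of Python. Two caveats: the comparison is performed in the clear here as a placeholder for the millionaire sub-protocol, and (assumptions of this sketch) the datasets have equal power-of-two sizes and pairwise-distinct values:

```python
import random

def joint_median(A, B):
    """Lower median of the union of two private sorted lists.
    Assumes equal power-of-two sizes and pairwise-distinct values.
    Each round exchanges only one comparison bit; in the real protocol
    that comparison is a millionaire (secure comparison) sub-protocol."""
    A, B = sorted(A), sorted(B)
    assert len(A) == len(B)
    while len(A) > 1:
        k = len(A)
        m_A, m_B = A[k // 2 - 1], B[k // 2 - 1]  # each party's own median
        if m_A < m_B:                  # <-- the only jointly computed bit
            A, B = A[k // 2:], B[:k // 2]  # Alice drops her lower half,
        else:                              # Bob drops his upper half
            A, B = A[:k // 2], B[k // 2:]  # ... or vice versa
    return min(A[0], B[0])

assert joint_median([1, 4, 6, 9], [2, 3, 7, 8]) == 4  # median of 1..9 merged
```

Each round halves both datasets while preserving the joint median, so only log(k) comparison bits are revealed; those bits are exactly the "gradually revealed" partial answer.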
Unfortunately, this implies that simulation-based security can only be guaranteed (under certain conditions) in the semi-honest and covert settings, because a malicious adversary could adaptively abort in the middle of the computation. Muthuramakrishnan concluded his talk with interesting open problems, e.g. finding examples that admit malicious security, and generalizing the framework to other primitives or paradigms.

## Friday, December 4, 2015

### Workshop On Lattice Cryptography

It is the day after AsiaCrypt 2015 and there are two workshops being held in Auckland. The one most relevant to my research is the one on Lattice-Based Cryptography, which consists of four talks: one by Jung Hee Cheon on "Multilinear Maps and Their Cryptanalysis", one by Amit Sahai on "Obfuscation", one by Fre Vercauteren on "Weak Instances of RLWE", and one by Martin Albrecht on "Small Secret LWE".

Cheon first described a very naive version of multilinear maps and then showed how this can be attacked by creating non-trivial encodings of zero and then taking greatest common divisors. He then generalised this naive scheme to the CLT scheme (which is a bit like the DGHV FHE scheme). The naive attack does not apply to CLT, as the dimension is increased, so taking naive greatest common divisors no longer works. Cheon then showed how to extend the naive attack to the CLT case by turning the gcd extraction into an eigenvalue extraction problem. This is done by building quadratic forms which represent encodings of zero. The result is that for the CLT scheme one can break the equivalent of the DLP problem.

Cheon then went on to present the GGH scheme, which is a bit like the NTRU FHE scheme, except that instead of encrypting via c=[(m+r*p)/z] for an integer p, one encodes via c=[(m+r*g)/z] for a polynomial g which generates the ideal lattice <g>. Modifying the prior attack in this situation allows us to recover a basis of this ideal.
But finding a short vector in this lattice can be hard. However, by utilizing encodings of zero, one can actually solve the equivalent of the CDH problem. Both attacks rely heavily on the presence of encodings of zero, so they do not apply to situations in which one does not publish such encodings, i.e. applications such as indistinguishability obfuscation (iO).

Amit Sahai then gave an introduction to iO. He motivated it via an analogy of an attacker who captures your brain and is able to read and tamper with every neuron, yet we still do not want the attacker to know what we are thinking about. This is the problem which obfuscation tries to solve in the computing realm. (Martin pointed out that this would be a great way to produce malware!) Amit then put Multi-Party Computation within this analogy: we can think of MPC as protecting our brain against the tampering adversary by dividing the brain up into portions. As long as one portion is kept out of the adversary's control, we can use MPC to protect our thoughts. Obfuscation tries to do the same, without there needing to be an honest part of the brain.

Any program which is suitable for obfuscation must be unlearnable from query access to the program, since otherwise the adversary could learn the program from its input/output behaviour. However, black-box obfuscation has been shown to be impossible, essentially because there are contrived programs which are unlearnable but for which one cannot produce an obfuscation, since any obfuscated version of such a program admits an explicit attack against it. This is why iO was presented as a concept, since it at least seems possible to achieve. The idea is that if you have two equivalent programs and we obfuscate one of them, then the adversary cannot tell which one we obfuscated. One way of thinking of this is as a pseudo-canonicalizer. The question is what useful things one could do with an obfuscator satisfying the iO definition.
Amit gave the application of building demo versions of software, without needing to re-engineer the software.

Fre Vercauteren then discussed a more in-depth analysis of a paper from CRYPTO this year on weak instances of Ring-LWE. The CRYPTO paper gave instances where decision Ring-LWE was easy but search appeared to be hard. However, Fre's talk showed that the search problem was in fact easy from the start, and thus the CRYPTO paper was less surprising than it at first seemed to be. As with all things Ring-LWE, the question arises of how to choose the error distributions. Fre spent the first part of his talk discussing the geometry of number fields, and in particular the Minkowski embedding. The Ring-LWE problem generates errors according to a discrete Gaussian distribution in the Minkowski embedding, whereas Poly-LWE generates the errors according to a discrete Gaussian in the polynomial embedding. Eisenträger et al. discussed cases for which Poly-LWE was easy; these were then extended by Elias et al. to special cases of decision Ring-LWE. They did this by mapping the special Ring-LWE instance to a special Poly-LWE instance, pulling back the problem from Ring-LWE to Poly-LWE via the matrix which defines the Minkowski embedding. The Poly-LWE attack requires that q is larger than f(1), and hence q will "kind of show up" in the coefficients of the defining polynomial f. So the fields being attacked are very special indeed.

## Thursday, December 3, 2015

### Garbling with constant gate size

In the final MPC session of AsiaCrypt, Carmen Kempka from NTT presented a garbling scheme for formulas with constant size of garbled gates. The talk described an interesting new technique for garbling Boolean formulas (i.e. circuits with one output) based on trapdoor permutations and "backwards" garbling that, remarkably, achieves a constant size of just 4 bits per garbled gate.
Garbling schemes are a popular technique for constructing secure two-party computation (2-PC) schemes, and most practical approaches are based on the classic technique of Yao, where for each gate of the circuit the resulting garbled circuit contains several ciphertexts that must be transmitted in the 2-PC protocol, each of size $O(k)$ bits. At Eurocrypt this year, Zahur, Rosulek and Evans proposed the half-gate technique for garbling with just two ciphertexts per gate, and also showed that any "linear" garbling scheme cannot possibly do better than this. So, to reduce the gate size any further, new non-linear techniques are needed; in Carmen's talk, some possible such techniques were given.
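For context, the classic Yao approach mentioned above can be sketched as a toy single AND gate. This stripped-down version (my own illustration, not Carmen's scheme or the half-gate construction) omits point-and-permute and instead uses zero-padding so the evaluator can recognise the correct row:

```python
import hmac
import hashlib
import os
import random

KEYLEN = 16
PAD = b"\x00" * 8   # redundancy so the evaluator recognises the right row

def H(ka: bytes, kb: bytes) -> bytes:
    """Row-encryption pad derived from the two input-wire keys."""
    return hmac.new(ka, kb, hashlib.sha256).digest()[:KEYLEN + len(PAD)]

def garble_and_gate():
    """Classic Yao garbling of one AND gate: a random key per wire value,
    and four ciphertexts, one per input combination."""
    wire = lambda: (os.urandom(KEYLEN), os.urandom(KEYLEN))  # (key-for-0, key-for-1)
    a, b, c = wire(), wire(), wire()
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            key_out = c[va & vb]  # AND truth table selects the output key
            rows.append(bytes(x ^ y for x, y in zip(H(a[va], b[vb]), key_out + PAD)))
    random.shuffle(rows)  # hide which row is which (real schemes use point-and-permute)
    return a, b, c, rows

def evaluate(ka: bytes, kb: bytes, rows) -> bytes:
    """With one key per input wire, exactly one row decrypts to something
    ending in PAD (w.h.p.); its prefix is the output-wire key."""
    for row in rows:
        pt = bytes(x ^ y for x, y in zip(H(ka, kb), row))
        if pt.endswith(PAD):
            return pt[:KEYLEN]
    raise ValueError("no row decrypted")

a, b, c, rows = garble_and_gate()
assert evaluate(a[1], b[1], rows) == c[1]   # keys for (1,1) yield the key for 1
assert evaluate(a[0], b[1], rows) == c[0]   # keys for (0,1) yield the key for 0
```

Each of the four rows is one "ciphertext" of $O(k)$ bits; the half-gate result cuts this to two per gate, and Kempka's backwards-garbling technique is what gets formulas down to 4 bits per gate.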

The main idea of the scheme is to use randomly generated, independent ciphertexts for each gate, so that the circuit evaluator can reconstruct these from just a single seed. The main difficulty is to ensure that the evaluator cannot then do the same as the circuit constructor, and create additional garbled circuits other than the one specified. To overcome this problem, they use trapdoor permutations (as opposed to typical garbling schemes, which just use hash functions) combined with a backwards garbling technique, where the input wire keys for each gate are chosen by solving a system of equations in the output keys and the ciphertexts.

The overall size of a garbled circuit is 4 bits per garbled gate, plus a ciphertext for each input, so this is the first scheme to achieve a constant gate size without a huge expansion in the input key size (as in Kolesnikov's scheme from 2005). However, since each input wire is uniquely determined by the output wires, gates can only have fan-out one, so the scheme is restricted to garbling formulas only. Extension to general circuits is possible, but at the cost of including 2 ciphertexts per gate.

Overall, the idea is neat, and it seems like a very interesting open problem to overcome the limitations of the scheme for general circuits, and reduce the size of the TDP-based ciphertexts for input gates.

### Asiacrypt 2015: The Moral Character of Cryptographic Work

The distinguished IACR lecture at this year's Asiacrypt was given by Phillip Rogaway, who chose to talk more about the political implications of cryptographic work rather than the technology itself. This was certainly refreshing to see.

Phil started his talk by highlighting some historical events concerning the relationship between ethics and science: the nuclear bomb, the Nazi doctors, and the environmental movement. There is now the general idea that scientists should not harm society, but actually contribute to the social good, as an obligation arising from their professional role. This manifests itself in various ethical codes and organizations; even the IACR bylaws oblige it to serve the public welfare. According to the talk, however, these values are in decline. The military has no problem recruiting scientists, and it provides more and more funding for science. At the same time, cryptography, like any technology, is used as a tool of power, which is recognized much more by popular culture than by cryptographers themselves.

An older generation of cryptographers seems to be more aware of this, for example David Chaum, who mentioned big-data collection as early as the 1980s, as well as its possible effects on behavior. Comparing the citations of David Chaum's most cited paper, on electronic mail, with Goldwasser and Micali's, on probabilistic encryption, one can see that only the latter is picked up by the usual cryptography conferences. Phil argues that this split is more political than anything else, and that cryptographic primitives do not inherently favor individuals or powerful entities. For example, conventional encryption can be used both to protect one's information and to take control away from the user, as in trusted computing.

All these issues gained much more attention through the Snowden leaks in the summer of 2013, which revealed that mass surveillance is rife, obscured both by secrecy and by complexity. Unsurprisingly, there is significant disagreement between government agencies and surveillance studies. While the former argue that cryptography destroys the balance between security and privacy, the latter show that surveillance is simply an instrument of power that makes people conform (or gets them killed by drones). Furthermore, they also argue that security and privacy are not necessarily mutually exclusive. There is historic evidence of unsavory uses of political surveillance, from the FBI's letter to Martin Luther King Jr. trying to drive him to suicide, to the totalitarian regimes of today.

Considering all this, Phil claimed that while cryptography for security is a success, cryptography for privacy is not, and moreover that cryptography is becoming more and more self-serving ("crypto-for-crypto"). To counter this, he presented a few suggestions for interesting problems, such as xMail and big-key cryptography. The former is about sending messages via an untrusted server without allowing the server to link sender and receiver; the latter assumes that an attacker has already subverted the machine holding a key, but only has limited bandwidth to exfiltrate information.

The last part of the talk consisted of twelve suggestions for cryptographers, essentially calling for a more holistic view of our work. The suggestions cover quite a range, from thinking twice about military funding to no longer drawing cute little devils for the adversary when it is in fact a large state-sponsored agency. The most interesting suggestion, in my opinion, is that we should taste our own medicine: we should use privacy tools ourselves and improve them where necessary. However, there was also the suggestion to write fewer but more relevant papers, which runs against the current incentives in science.

Phil concluded with the quote "Just because you don't take an interest in politics doesn't mean politics won't take an interest in you."

There is an accompanying essay on his website.