Friday, May 31, 2013

Eurocrypt 2013: Day Three, Secure Computation

On the third day of Eurocrypt, there were three presentations on secure computation. This post will cover two of them.

Tore Frederiksen from Aarhus University presented an efficiency improvement for actively secure two-party computation based on Yao's garbled circuits. The core component is an XOR-homomorphic commitment scheme based on OT, which makes the approach compatible with garbled-circuit optimizations like free-XOR. This was not the case for previous work by the same group, which used Pedersen commitments. However, both works use cut-and-choose at the gate level, as opposed to the circuit level, to achieve active security.

Most of the talk was devoted to the XOR-homomorphic commitment scheme, which is based on OT and a linear error-correcting code. The latter is required to have secret-sharing properties in order to achieve hiding. To produce a commitment, the input is encoded using the code and then fed into a k-out-of-n random OT. For the binding property, the receiver selects some positions of the encoded vector and checks them against the output of the OT protocol.
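To make the XOR-homomorphism concrete, here is a minimal sketch, not the actual protocol: commitments are codewords of a random binary linear code, and the receiver watches a random subset of positions, standing in for what the random OTs would give it. XORing two commitments gives a commitment to the XOR of the messages because encoding is linear; the watched positions give (statistical) binding. Hiding, the masking of unwatched positions via OT, and all other protocol machinery are omitted; all names and parameters below are illustrative.

```python
# A minimal sketch, not the actual protocol: commitments are codewords of a
# random binary linear code and the receiver "watches" a random subset of
# positions, standing in for what the random OTs would give it.  Linearity of
# the code gives the XOR-homomorphism; the watched positions give binding.
import numpy as np

rng = np.random.default_rng(1)
k, n, t = 16, 64, 20                          # message length, code length, watched positions

G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)   # random generator matrix over GF(2)
watch = rng.choice(n, size=t, replace=False)          # receiver's secret watch-list

def commit(m):
    """Commitment = codeword m*G over GF(2); the receiver keeps the watched bits."""
    word = (m @ G) % 2
    return word, word[watch].copy()

def check_open(m, receiver_bits):
    """Receiver re-encodes the claimed message and compares the watched positions."""
    return np.array_equal(((m @ G) % 2)[watch], receiver_bits)

m1 = rng.integers(0, 2, size=k, dtype=np.uint8)
m2 = rng.integers(0, 2, size=k, dtype=np.uint8)
c1, v1 = commit(m1)
c2, v2 = commit(m2)

print(check_open(m1 ^ m2, v1 ^ v2))   # True: XOR of commitments opens to the XOR of messages
print(check_open(m1 ^ 1, v1))         # almost certainly False: cheating openings are caught
```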

By using OT extension, the usage of asymmetric cryptography can be reduced to depend only on the security parameter. Furthermore, all protocols are parallelizable, which leads to an overall constant number of rounds.

At the end of the session, Hoeteck Wee from George Washington University gave a pleasant talk about a result on secure computation with limited interaction. In the so-called one-pass model, the parties interact with a server once in a fixed order, after which the server announces the result. There is some inherent leakage: if the server colludes with the last k parties, they can evaluate the function on all possible inputs from those parties.

Previous work in this model covered symmetric functions like sum and majority, and proved that pseudo-random functions are impossible to compute. The contribution presented at the conference extends the achievable functions to sparse multivariate polynomials (e.g., variance) and read-once branching programs (e.g., string matching, finite automata, second-price auctions), which branch at most once per player.

The protocol works as follows: The server encrypts all possible outputs under the keys of all parties and the server in order according to the branching in the program, that is, the outer-most encryption is under the key of the party associated with the last branching. The parties then interact with the server in the reverse order, that is, the party associated to the last branching comes first. Every party picks the possible ciphertexts according to his input, decrypts them and sends them to the server, who forwards them to the next party. At the end, the server decrypts the result.
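Below is a toy illustration of this layered decryption flow, assuming a naive server that prepares one nested ciphertext per input assignment; the actual protocol instead exploits the width of the read-once branching program, so this only shows the ordering of layers and interactions. The toy XOR/SHA-256 cipher, the key names and the three-party AND are all made up for the sketch.

```python
# Toy illustration of the one-pass "onion" flow, NOT the actual construction:
# the server naively wraps every possible output once per input assignment.
import hashlib, os
from itertools import product

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def enc(key, data):
    nonce = os.urandom(8)
    return nonce + xor_cipher(key, nonce, data)

def dec(key, blob):
    return xor_cipher(key, blob[:8], blob[8:])

f = lambda x1, x2, x3: x1 & x2 & x3                    # function to evaluate
keys = {name: os.urandom(16) for name in ["server", "P1", "P2", "P3"]}

# Server wraps each possible output: innermost under its own key, outermost
# under the key of the party that interacts first (here P3, the last branching).
onions = {}
for xs in product([0, 1], repeat=3):
    blob = enc(keys["server"], bytes([f(*xs)]))
    for party in ["P1", "P2", "P3"]:
        blob = enc(keys[party], blob)
    onions[xs] = blob

inputs = {"P3": 1, "P2": 1, "P1": 1}
# Each party interacts once, in reverse order of the layers: it keeps only the
# onions consistent with its own input bit and strips its layer.
for idx, party in [(2, "P3"), (1, "P2"), (0, "P1")]:
    onions = {xs: dec(keys[party], blob)
              for xs, blob in onions.items() if xs[idx] == inputs[party]}

(final_blob,) = onions.values()
print(dec(keys["server"], final_blob)[0])              # server decrypts the result: f(1,1,1) = 1
```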

The complexity of the protocol is determined by the width of the branching program. If the server is honest, secrecy is guaranteed by the encryption under the public key of the server. However, if the server colludes with the last k parties, the protocol inherently leaks information as described above, but no more. To achieve active security, non-interactive zero-knowledge arguments can be used. Finally, the biggest open question is to close the gap between the functions we know how to compute in this model and the impossibility result.

Gödel Prize goes to a "Tripartite Pairing" of Cryptographers

The 2013 Gödel Prize winners have just been announced as being Antoine Joux, Dan Boneh and Matt Franklin, for their work in pairing-based cryptography. The press release can be found here.

In the early 2000s the world of cryptography was revolutionized by two papers by these authors. A bit of background is probably needed to understand the revolution. In the mid-1980s Victor Miller and Neal Koblitz independently introduced the idea of using elliptic curves as a means of building cryptographic systems; since then elliptic curves have been deployed in a number of application domains, such as on the web, in your mobile phone, or in your games console.

However, quite early on it was realised that some elliptic curves are weaker than others; in particular those for which there exists an efficiently computable bilinear pairing (sometimes called a bilinear map). In its basic form this is a map from two groups G1 and G2 into a new group GT which is linear in each of its two inputs. In practice one instantiates these with G1 and G2 being subgroups of an elliptic curve and GT being a subgroup of a finite field. Only a small subset of elliptic curves have an efficiently computable pairing, and since the early 1990s these had been avoided in normal applications of elliptic curves in cryptography.

However, in a paper at the ANTS conference in 2000, Antoine Joux showed that if we carefully choose the parameters of elliptic curves with a pairing, then we can achieve cryptographic functionality which we could not achieve with other techniques. The first application was to allow three parties to agree a secret key using only one round of interaction; so-called Tripartite Diffie-Hellman.

Then in 2001 at the CRYPTO conference Boneh and Franklin showed how the same technique could be used to create an identity-based encryption scheme. This is an encryption scheme in which the receiver's public key is simply their name. Again, this is something which had not been possible before.

Over the first decade of this century the number of papers on so-called Pairing Based Cryptography exploded. For example, according to Google Scholar, Joux's paper has been cited 886 times and the Boneh/Franklin paper has been cited 4527 times. The follow-up work has ranged from algorithms to efficiently compute the various pairings needed (including the work in Bristol on the Ate-pairing algorithm) through to work on using pairings in advanced protocols such as group signatures, credentials, functional and attribute-based encryption, and our own recent work on DAA protocols.

We have all just returned from Eurocrypt where the best paper award was given to some new ground-breaking work which showed how one can construct pairings between more than two objects efficiently; so-called multi-linear maps. We have discussed this new work a lot on this blog in recent months, so I refer the reader to the previous posts on this topic. It can be expected that the development of multi-linear maps will have the same profound effect that the development of bilinear pairings had in the last decade.

Thursday, May 30, 2013

Eurocrypt 2013: Day Three, Session Two

The second talk in the 'Public Key Encryption' session of this year's Eurocrypt was given by Dennis Hofheinz, on his paper 'Circular Chosen-Ciphertext Security with Compact Ciphertexts'. The work gives the first KDM-CCA secure scheme whose ciphertexts consist of O(1) group elements, building on the past work of Boneh et al., who gave a KDM-CPA secure scheme. For comparison, the other CIRC-CCA secure scheme in the literature has ciphertexts with O(k) group elements. In the KDM-CCA security game, the adversary submits a (length-regular) function to the challenger and receives either an encryption of the function applied to the secret keys in the system or an encryption of a zero message; naturally the scheme is secure under this notion if no (efficient) adversary can distinguish between these cases. Note that CIRC-CCA security (called clique security by Boneh et al.) is the scenario where the adversary is restricted to functions that take as input the secret keys in the system and output one of these secret keys. This is a special case of (and implied by) KDM-CCA security.

The KDM-CPA secure scheme of Boneh et al. relies on the possibility of publicly constructing encryptions of secret keys (under their corresponding public keys), something that clearly makes it KDM-CCA insecure. Consequently, something else is needed to realise a KDM-CCA scheme from this starting point. The main technical tool used to ramp up to KDM-CCA security is the 'Lossy Algebraic Filter' (LAF), a family of functions indexed by a public key and a tag. A function from this family takes as input a vector of elements X_i in Z_p. For an injective tag, the function is injective. If the tag is lossy, then the function only depends on a linear combination of the input elements: crucially, different input vectors with the same linear combination Σ_{i=1}^{n} ω_i X_i mod p are mapped to the same value. The coefficients ω_i depend only on the public key of the filter, and not on the tag. Three properties are necessary for this definition to achieve its purpose: lossy and injective tags must be computationally indistinguishable; lossy tags can be generated using a special trapdoor; and new lossy tags cannot be found efficiently without this trapdoor, even after seeing polynomially many lossy tags. The paper details how to construct LAFs based on the DLIN assumption with a suitable cryptographic pairing, where each tag corresponds to n DLIN-encrypted Waters signatures (if the signatures are valid then the tag is lossy).
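Here is a minimal sketch of the lossiness property only, not the actual pairing-based construction: in lossy mode the filter output reveals nothing beyond one linear combination of the inputs, so inputs agreeing on that combination collide. The modulus, coefficients and seed below are illustrative.

```python
# A toy illustration of lossy vs. injective behaviour of an LAF-like function.
# The coefficients w_i stand in for what the filter's public key determines;
# the "tag" here just selects the mode.  Not the DLIN/Waters construction.
import random

p = 2**61 - 1                                   # illustrative prime modulus
random.seed(0)
n = 4
w = [random.randrange(p) for _ in range(n)]     # coefficients fixed by the public key

def LAF(tag_is_lossy, X):
    if tag_is_lossy:
        # Lossy tag: the output depends only on one linear combination of the input.
        return sum(wi * xi for wi, xi in zip(w, X)) % p
    # Injective tag: the output determines the whole input (trivially so in this toy).
    return tuple(X)

X1 = [random.randrange(p) for _ in range(n)]
delta = random.randrange(p)
# X2 differs from X1 but has the same value of sum_i w_i * X_i mod p.
X2 = list(X1)
X2[0] = (X1[0] + delta) % p
X2[1] = (X1[1] - delta * w[0] * pow(w[1], -1, p)) % p

print(LAF(True, X1) == LAF(True, X2))   # True: a lossy tag cannot tell them apart
print(LAF(False, X1) == LAF(False, X2)) # False: an injective tag can
```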

The constructed CIRC-CCA secure PKE scheme adds an authentication tag (an encrypted image of the plaintext message under an LAF), and decryption queries are disallowed where the authentication tag is invalid. To prove security, all tags for key-dependent encryptions are made with respect to lossy filter tags, so little information about the secret key is released (information-theoretically speaking). But since adversarial decryption queries must correspond to injective tags, the adversary would need to guess the whole secret key to make any valid decryption query. The resulting scheme is secure under a combination of the DCR, DLIN and DDH assumptions (in appropriate groups).

Wednesday, May 29, 2013

Eurocrypt 2013: Homomorphic Cryptography session


The first afternoon session of day 2 was on homomorphic cryptography. There were three talks: the first was on fully homomorphic encryption, and the next two on the slightly lesser-known topics of homomorphic MACs and authenticated data structures. In this post I will talk about the first two talks, which captivated me the most.

Tancrède Lepoint gave the first talk, on integer-based fully homomorphic encryption, which was a merge of two papers. Much recent work on FHE has focussed on constructions based on ring-LWE, whilst Gentry's scheme and the integer-based DGHV variant are often ignored due to large parameter sizes. The talk aimed to bridge this gap and show that actually, the integer scheme can be competitive with ring-LWE. Following on from last year's Eurocrypt paper, which allowed for better 'noise control' using modulus switching,  Tancrède described how batching techniques can be used to perform parallel homomorphic operations on ciphertexts. Batching allows multiple plaintexts to be 'packed' into a single ciphertext, so that when a homomorphic operation is performed on a ciphertext, it is applied to every plaintext element in parallel. These techniques were originally developed by Smart and Vercauteren, for plaintext spaces of polynomial rings, and it turns out that the CRT can similarly be used for the scheme over the integers. The use of batching makes a huge practical difference: previously a ciphertext of size 20 million bits could only encrypt a single bit; now the same-sized ciphertext can encrypt around 300 bits.
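Here is a toy illustration of the batching idea, not the actual scheme: pack one plaintext per slot with the CRT and observe that adding or multiplying the packed integers acts on every slot in parallel. In the real scheme the slots are bits (taken modulo 2) sitting inside a noisy ciphertext; the moduli below are made up.

```python
# CRT batching in miniature: several small plaintexts packed into one integer,
# with addition and multiplication acting slot-wise (modulo each slot's modulus).
from math import prod

slots = [7, 11, 13, 17]                 # pairwise-coprime plaintext moduli, one per slot
M = prod(slots)

def pack(values):
    """CRT-combine one value per slot into a single integer modulo M."""
    x = 0
    for v, p in zip(values, slots):
        Mi = M // p
        x += v * Mi * pow(Mi, -1, p)
    return x % M

def unpack(x):
    return [x % p for p in slots]

a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
A, B = pack(a), pack(b)

print(unpack((A + B) % M))              # [6, 8, 10, 12]: slot-wise addition
print(unpack((A * B) % M))              # [5, 1, 8, 15]: slot-wise product (mod each slot modulus)
```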

In addition to parallel operations, effective use of batching also requires the ability to permute plaintext elements within a single ciphertext. This is possible with ring-LWE based schemes because of their nice algebraic structure; however, it seems more difficult over the integers. Instead, they take advantage of the bootstrapping step, which homomorphically decrypts a ciphertext, to perform a permutation. Using these techniques, they present an implementation of homomorphic AES that takes 12 minutes per block on average, compared with 5 minutes per block for the implementation from Crypto last year. This shows that integer-based FHE is still a viable option, although it is worth noting that the Crypto implementation did not use bootstrapping; it would be interesting to see how the comparison changes once bootstrapping is added there too.

The second talk of the session was given by Dario Catalano, on homomorphic MACs for arithmetic circuits. In the scenario of delegating computation to a server, it is often important to be able to verify that the computation was carried out on the correct data, which was previously authenticated by the client. Using homomorphic MACs allows the client to verify the server's output with respect to the function computed and the original, authenticated inputs. The key element is a mechanism whereby anyone can take a set of MAC'd messages and compute a MAC valid for the output of any function of the messages. Previous work in this area was only homomorphic for linear functions, or required the use of FHE, which is undesirable. Their construction allows homomorphic evaluation of polynomially sized circuits, is secure based only on the existence of PRFs, and computing the MACs is very efficient. However, the complexity of verification grows with the degree of the circuit, which seems unfortunate, especially for the application of outsourced computation. Another open problem Dario mentioned was to construct a fully homomorphic MAC, for circuits of arbitrary size.
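The following is a minimal sketch in the spirit of the construction, with many details simplified and HMAC-SHA256 standing in for the PRF: a tag is a polynomial y with y(0) equal to the message and y(α) equal to the PRF value of its label; the server evaluates the circuit on tags by polynomial arithmetic, and the tag degree (and hence verification cost) grows with the degree of the circuit. Parameters and labels are illustrative.

```python
# A sketch in the spirit of a homomorphic MAC for arithmetic circuits (simplified).
# Tag of m under label tau: degree-1 polynomial y with y(0) = m, y(alpha) = PRF(tau).
import hmac, hashlib, random

p = 2**127 - 1                                   # prime modulus (toy parameter)
alpha = random.randrange(1, p)                   # secret evaluation point
K = b"secret-prf-key"                            # hypothetical PRF key

def prf(label: bytes) -> int:
    return int.from_bytes(hmac.new(K, label, hashlib.sha256).digest(), "big") % p

def mac(m: int, label: bytes):
    return [m % p, (prf(label) - m) * pow(alpha, -1, p) % p]

def poly_add(y, z):
    n = max(len(y), len(z))
    return [((y[i] if i < len(y) else 0) + (z[i] if i < len(z) else 0)) % p for i in range(n)]

def poly_mul(y, z):
    out = [0] * (len(y) + len(z) - 1)
    for i, a in enumerate(y):
        for j, b in enumerate(z):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def poly_eval(y, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(y)) % p

def verify(tag, claimed, labels, f):
    rho = f(*[prf(l) for l in labels])           # re-run the circuit on the PRF values
    return tag[0] == claimed % p and poly_eval(tag, alpha) == rho

# Client MACs its inputs; the server computes f(m1, m2, m3) = m1*m2 + m3 on the tags.
m1, m2, m3 = 5, 7, 11
t1, t2, t3 = mac(m1, b"x1"), mac(m2, b"x2"), mac(m3, b"x3")
result_tag = poly_add(poly_mul(t1, t2), t3)      # degree grows with the circuit degree
print(verify(result_tag, m1 * m2 + m3, [b"x1", b"x2", b"x3"],
             lambda a, b, c: (a * b + c) % p))   # True
```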

Tuesday, May 28, 2013

EuroCrypt 2013: Day one, part three

The first talk on Monday afternoon at EuroCrypt'13 was given by Nicolas Veyrat-Charvillon on "Security Evaluations Beyond Computing Power - How to Analyze Side-Channel Attacks you Cannot Mount?" (with B. Gérard and F.-X. Standaert as coauthors). The paper addresses an issue faced by certification labs that have to assess the security of a device against side-channel attacks. In certification you know everything about the device that you are attacking, including the key you are attacking; but if, after having made a certain effort, the side-channel attack doesn't yield the correct key, you want to know how far away from the correct key you are.

Let's look at the example of a side-channel attack on AES: after the attack, you have, for each word i (0 <= i < 16) of the state, a probability distribution D_i that contains the probability of every possible value for this word. So you can compute, for each of the 2^128 possible keys, its probability as returned by the attack. In particular, you can compute the probability of the most likely key and the probability of the correct key. (Of course, a real attacker wouldn't know the correct key; you know the correct key only because you're a certification lab staging an attack on something you already know.) The big question that remains is: how many other keys have a probability higher than the correct key? If the certification lab can answer that question with a good enough approximation, then they know how much additional effort is needed to complete the attack by brute force.
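A small sketch of the evaluator's starting point under these assumptions: given the sixteen per-word distributions produced by the attack, the probability of any full key is just the product of its word probabilities. Computing the probability of the best key and of the correct key is easy; estimating how many keys rank above the correct one is the hard part the paper addresses.

```python
# The attack yields one distribution per key word; any full key's probability is
# the product over the 16 words.  Ranking the correct key among 2^128 is the hard part.
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((16, 256))
D /= D.sum(axis=1, keepdims=True)        # 16 per-word distributions from the attack

correct_key = rng.integers(0, 256, size=16)   # known to the certification lab

p_correct = np.prod([D[i, correct_key[i]] for i in range(16)])
p_best    = np.prod(D.max(axis=1))            # probability of the single most likely key

print(p_correct, p_best)
```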

The stupid way to answer that question is to check the probabilities of all 2^128 keys. But of course, if you were able to do that, you wouldn't need a side-channel attack in the first place; you could just brute-force AES. The authors present in their paper the first feasible method to answer this question with a reasonably good approximation. Their main idea is to arrange all the possible keys in a suitable hypercube representation in which they can make suitable time-memory trade-offs. This enables them to efficiently carve out volumes of key candidates that all have a higher or lower probability than the correct key. So they only have to keep track of the size of the carved-out volumes and, when the remaining volume is small enough, they have their answer.

Obviously, being able to tell how much additional brute-force effort is needed after a failed side-channel attack is important for certification labs to make a qualified judgement about the device's security. So the biggest remaining question for follow-up work in this direction is: how does the remaining brute-force effort relate to the additional number of traces that would need to be measured, stored and processed for a pure side-channel attack to be successful? Would the attacker be better off continuing with more measurements instead of brute-forcing at the end?

Monday, May 27, 2013

Eurocrypt 2013 : Number Theory Session

The number theory session consisted of three talks. One by Joux on the DLP, one by people from Microsoft Redmond on Genus 2 curves, and one by Bouillaguet and co-workers on breaking systems based on hidden quadratic equations.  In this post I will mainly concentrate on the first one, and make a small comment on the second. The third talk was very interesting, and basically concluded that the systems considered in this work should not be considered secure.

Joux presented his work on the medium prime case of the DLP in a finite field. He first outlined the basic Function Field Sieve method of Joux and Lercier for solving such DLPs: if the finite field is of size p^k, for p of size L_{p^k}(1/3), then one selects two polynomials f1(x) and f2(x) of degree d1 and d2 respectively such that d1·d2 ≥ k and such that
x-f1(f2(x))
is divisible, modulo p, by an irreducible factor of degree exactly k.
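As a toy illustration of this selection criterion (with a tiny prime, whereas real parameters are vastly larger), one can randomly pick monic f1, f2 with d1·d2 ≥ k and test the divisibility condition by factoring modulo p; the specific parameters below are only for illustration.

```python
# Toy search for the Joux-Lercier selection criterion over a tiny prime:
# find monic f1, f2 such that x - f1(f2(x)) has an irreducible factor of degree k mod p.
from sympy import symbols, Poly, factor_list, expand
import random

x = symbols('x')
p, d1, d2, k = 31, 3, 4, 11                      # d1*d2 = 12 >= k = 11
random.seed(1)

def random_monic(d):
    return x**d + sum(random.randrange(p) * x**i for i in range(d))

while True:
    f1, f2 = random_monic(d1), random_monic(d2)
    target = expand(x - f1.subs(x, f2))          # x - f1(f2(x))
    factors = factor_list(target, modulus=p)[1]  # factorisation over GF(p)
    if any(Poly(f, x).degree() == k for f, _ in factors):
        print("f1 =", f1, "   f2 =", f2)
        break
```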

The attacker then selects a smoothness basis consisting of the polynomials x-a and y-a, and sets y=f2(x) and x=f1(y). With this identification, the bivariate polynomial (linear in each variable)
x y + a y + b x + c
can be expressed either as a polynomial in x, or a polynomial in y. When both such polynomials split over the factor base one obtains a relation.

The basic idea of Joux's new method is to apply the technique used when the field can be expressed as a Kummer extension to arbitrary extensions, namely to fix y = x^d1. Joux then showed that when one obtains a single relation, one can amplify it to many relations using the substitution x = vX for some v; a similar trick works when we consider the other side of the relation. Thus relation finding becomes a task of taking known factorizations and matching them up. This produces a great improvement in the so-called sieving step; in fact, it eliminates the need for sieving entirely.

Joux ended by discussing his recent work on characteristic two discrete logarithms, which we have discussed elsewhere in this blog. He presented a new world record, announced in the last week, of solving the DLP in a finite field of 6168 bits.


In the next talk Craig Costello presented some interesting performance improvements to arithmetic on genus two curves, obtaining some very impressive results. He presented two implementations: one based on general arithmetic using a four-dimensional GLV-style trick, and the second based on a Montgomery ladder technique over the Kummer surface. He left with a tantalising open problem, which was to apply GLV-style methods to the Kummer surface. This looks very interesting, and a possible place to start this line of work would be to see what can be done in the simpler case of elliptic curves, where both GLV and the Kummer surface are easier to understand.

EuroCrypt 2013: Day one, part two



The third talk of Monday was given by Chris Peikert on ring-LWE. Lattice cryptography bases security mainly on the SIS and LWE problems. While these enjoy worst-case hardness, this approach typically leads to constructions where key sizes and runtimes are big, roughly quadratic in the security parameter (Ω(n^2)). It turns out that their analogues in the ring setting are more efficient (Ω(n)), so it is important to understand to what extent the ring counterpart can be employed without compromising security.

A key part in the security reduction of RLWE to lattice problems is that the underlying ring should have a particular algebraic structure. Namely, the class of rings considered are cyclotomic rings, defined as quotients of polynomial rings with integer coefficients by cyclotomic polynomials (the m-th cyclotomic polynomial is the one whose roots are exactly the primitive m-th roots of unity). So, unless one is able to provide a reduction for any number field, the question is: which cyclotomic rings and which representations of the ring elements are best suited for efficiency purposes?

Many applications require the degree to be a power of two. This work extends the setting to more general cyclotomic rings. From a practical point of view, the most obvious reason for doing so is that the security level of an application may not require rounding the ring dimension up to the next power of two. Working with rings of tighter dimension allows, e.g., smaller keys. Moreover, in the power-of-two case, efficiency improvements like SIMD operations cannot be applied.

Using any cyclotomic field would not compromise security at all, but other difficulties arise if we do not restrict to some nicer structures. For example, reduction modulo general or irregular cyclotomic polynomials is usually cumbersome because the polynomial can take any shape. The situation is different if the order is prime.

In the talk, three features were presented which to some extent also determine the ring structure: the canonical embedding, the tensorial decomposition, and the dual ring. A crucial quantity when trading security for efficiency is the expansion factor of the ring; it controls the noise growth after arithmetic operations. The size of this term can be measured in different ways: for example, one is tempted to consider the norm of the coefficient vector associated to the noise. This way, one is using the so-called coefficient embedding. Unfortunately this yields expansion factors depending on the cyclotomic polynomial, and they are usually quite loose, getting too large for highly composite m. Another important caveat is that security decreases as the expansion factor grows. On the other hand, we may use the canonical embedding. It maps a ring element (seen as a polynomial) to the vector of its evaluations at each primitive root of unity, sitting in a complex vector space. A nice property is that the expansion factor is then very small and independent of the base ring. One might suspect that the tensorial decomposition allows SIMD operations. The decomposition is done via an old theorem stating that the m-th cyclotomic ring can be expressed as a multivariate quotient depending on the factorization of m. It turns out that this characterization, together with a basis of the ring called the 'powerful basis', yields much more efficient constructions than the univariate version with the standard basis (sometimes called the power basis). Lastly, under the canonical embedding the cyclotomic ring is not self-dual. In the formalization of the RLWE problem it was shown that the dual ring is necessary, and although it is possible to get rid of it [DD12], it seems better to keep it, since the error tolerance in decryption is larger when using the dual.
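A small numerical sketch of the canonical embedding for the power-of-two case, assuming the ring Z[X]/(X^n + 1): an element is mapped to its evaluations at the primitive 2n-th roots of unity, and ring multiplication becomes component-wise multiplication of the embedded vectors, which is part of why noise growth is easier to control there than in the coefficient embedding. The dimension and coefficient ranges below are illustrative.

```python
# Canonical embedding of Z[X]/(X^n + 1): evaluate at the primitive 2n-th roots of unity.
# Ring multiplication becomes slot-wise multiplication of the embeddings.
import numpy as np

n = 8                                                       # ring dimension (m = 2n = 16)
roots = np.exp(1j * np.pi * np.arange(1, 2 * n, 2) / n)     # primitive 2n-th roots of unity

def canonical(a):
    """Evaluate the polynomial a (coefficients, low degree first) at the roots."""
    return np.array([np.polyval(a[::-1], r) for r in roots])

def ring_mul(a, b):
    """Multiply modulo X^n + 1 in coefficient representation."""
    c = np.convolve(a, b)                # plain polynomial product
    out = c[:n].copy()
    out[: len(c) - n] -= c[n:]           # reduce using X^n = -1
    return out

rng = np.random.default_rng(0)
a = rng.integers(-5, 6, n).astype(float)
b = rng.integers(-5, 6, n).astype(float)

prod = ring_mul(a, b)
print(np.allclose(canonical(prod), canonical(a) * canonical(b)))   # True: multiplication is slot-wise
print(np.max(np.abs(prod)), np.max(np.abs(canonical(prod))))       # coefficient vs canonical magnitude
```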

The take-home message of the talk was that the right mathematical objects (the canonical embedding) and representations (tensorial bases) yield provable hardness, fast algorithms and tighter analysis.

EuroCrypt 2013: Day one, part one

Today marks the start of Eurocrypt 2013 in Athens. The first session was on lattices and the first talk was on Candidate Multilinear Maps from Ideal Lattices, which received the best paper award. This paper was also discussed in an earlier blog post as part of a study group.

The second talk was on Lossy Codes and a New Variant of the Learning-With-Errors Problem, by Nico Döttling (who gave the talk) and Jörn Müller-Quade from KIT. In the Learning With Errors problem, you are given some noisy inner products (mod q) of a secret vector with random vectors and the goal is to retrieve the secret vector (search version) or distinguish these noisy inner products from random (decision version). In certain cases (depending on the modulus q and the noise distribution) it is possible to reduce the search to the decision problem. More importantly, if the noise is taken from an appropriate Gaussian distribution, it can be shown that the average-case instances are as hard as worst-case instances through a worst-case to average-case reduction.

However, it is cumbersome to sample such Gaussian noise, and in this paper the authors examine whether it is possible to get a worst-case reduction for some non-Gaussian noise distribution. They give a new worst-case connection for LWE with uniformly distributed errors, but it comes with the drawback that the number of LWE samples that will be given out must be fixed in advance. As the paper title indicates, their proof relies on lossy codes, a proof strategy that was used earlier by Peikert and Waters. Strictly speaking, they do not give a new proof of the worst-case reduction, but use the existing proof for Gaussian noise in such a way that Gaussians are not required in the LWE instantiation.

For a fixed number of LWE samples, it is possible to write the problem in matrix form, where the matrix consists of the random vectors used in the inner products. In the original LWE variant this matrix consists of randomly drawn elements mod q. In the variant in this paper, this matrix is replaced by one from a distribution of lossy matrices, which are computationally indistinguishable from random matrices. If this lossy problem is somehow computationally easier than the standard LWE problem, this allows you to distinguish between the lossy matrices and random matrices. The main part of the paper is dedicated to showing that their specific LWE construction is actually lossy for their uniform noise distribution.
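A small sketch of this matrix form with uniformly distributed errors; the dimensions, modulus and error bound below are illustrative only.

```python
# LWE in matrix form with uniform errors: recover s from (A, b) where b = A*s + e mod q.
# The paper's argument replaces A by a computationally indistinguishable "lossy" matrix.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, B = 64, 128, 4093, 4            # dimension, fixed number of samples, modulus, error bound

A = rng.integers(0, q, size=(m, n))      # public random matrix
s = rng.integers(0, q, size=n)           # secret vector
e = rng.integers(-B, B + 1, size=m)      # uniform errors in [-B, B], no Gaussians needed
b = (A @ s + e) % q

print(A.shape, b.shape)                  # the m samples, written as one matrix equation
```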

The end result is a reduction from worst-case GapSVP to the average case of their LWE variant. Compared to previous reductions, this one suffers from the loss of an additional factor m in the approximation factor for the lattice problem. It is an open problem to tighten this reduction.

Sunday, May 26, 2013

Workshop on Mathematics of Information-Theoretic Cryptography


A workshop on the Mathematics of Information-Theoretic Cryptography has been held this week in Leiden. The first morning started with a very interesting talk given by Amos Beimel on Multilinear Secret Sharing Schemes. He described recent joint work with Aner Ben-Efraim, Carles Padro, and Ilya Tyomkin. Multilinear secret sharing schemes (MSSS) can be seen as a generalization of linear secret sharing schemes, in which the secret is composed of several field elements instead of just one; they are based on multi-target monotone span programs, a generalization of monotone span programs. He showed, using the representation theory of groups, that for every prime p, ideal multilinear p-schemes (in which the secret is composed of p field elements) are more powerful than ideal k-multilinear schemes for k < p.

The talk “Low Rank Parity Check Codes (LRPC) and their applications to cryptography”, by Gilles Zemor, was particularly appealing. These codes can be seen as the equivalent of LDPC codes (Low Density Parity Check codes) for the rank metric. More precisely, an LRPC code of rank d, length n and dimension k over F_{q^m} is a code with an (n-k) x n parity check matrix H such that the F_q-subspace of F_{q^m} generated by its coefficients has dimension at most d. He described how these codes can be used to define a public-key cryptosystem with small keys and little algebraic structure, which is not based on Gabidulin codes. Interestingly, like the more recent MDPC cryptosystem, this construction can be seen as a generalization of the NTRU cryptosystem, but with the rank metric.
The last talk of the day was given by Harald Niederreiter. He showed some applications of global function fields, i.e. algebraic function fields of one variable over a finite field.

For me, the highlight of the third day was the talk given by Serge Fehr. He presented joint work with Marco Tomamichel, Jedrzej Kaniewski, and Stephanie Wehner that will be presented at Eurocrypt next week. During the talk he described a new quantum game (a monogamy-of-entanglement game) with various important applications in cryptography. For example, it is used to prove that standard BB84 QKD (Quantum Key Distribution) remains secure even when one party uses fully untrusted measurement devices.

Monday, May 20, 2013

Study Group: Non-Committing Encryption

The latest study group was presented by Arpita on the topic of non-committing encryption with material from the following papers:

Non-committing encryption denotes semantically secure public-key encryption with the additional possibility of producing fake public keys and fake ciphertexts. Those should be computationally indistinguishable from real public keys and ciphertexts, respectively, but the fake ciphertexts can efficiently be opened to any message. A stronger notion is deniable encryption, where a real ciphertext can be opened to any message, and it is even possible to compute a secret key that decrypts a ciphertext to any message. Obviously, this would be helpful for individuals in jurisdictions that require key disclosure.

The main motivation for non-committing encryption (NCE) is adaptive security, that is, security against an adversary that can corrupt parties at any time during a protocol, unlike a static adversary that has to commit itself at the beginning. NCE can be used to achieve adaptive security in several settings such as secure channels, security against selective opening attacks (SOA), and oblivious transfer (implying multiparty computation). In a selective opening attack, the adversary is given a number of ciphertexts under the same public key, of which he can select some to learn the message and the encryption randomness. A SOA-secure encryption scheme guarantees that no information about the remaining ciphertexts can be deduced.

Non-committing encryption was introduced by Canetti et al. in 1996. Damgård and Nielsen proved that NCE can be constructed from simulatable public-key encryption and that ElGamal achieves this property. A simulatable cryptosystem allows one to generate a public key without a secret key and a ciphertext without a message, and it is possible to extract plausible generation randomness from such a fake public key or ciphertext. The construction has the sender and the receiver agree on a random bit, which is then used to encrypt one bit, as follows: the receiver generates a real and a fake public key and sends them in random order. The sender then encrypts one of two public messages under one of the keys chosen at random, generates a fake ciphertext for the other key, and sends both ciphertexts. If the receiver can decrypt the ciphertext corresponding to the real key to the right public message, they have both chosen the same random bit, which can then be used to encrypt one bit. Otherwise, they repeat the protocol.
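Here is a toy run of that bit-agreement step, using plain ElGamal over a prime field as the simulatable scheme; obliviously generated ("fake") keys and ciphertexts are just random group elements, and the machinery for extracting plausible generation randomness is omitted, so this only shows the message flow, not non-committing security. All parameters are illustrative.

```python
# Toy bit agreement from oblivious key/ciphertext sampling (message flow only).
import random
random.seed(1)

P = 2**127 - 1                   # a prime; we work in the multiplicative group mod P
g, M = 3, 42                     # generator and a fixed public message

def keygen():
    x = random.randrange(1, P - 1)
    return x, pow(g, x, P)

def encrypt(pk, m):
    r = random.randrange(1, P - 1)
    return pow(g, r, P), m * pow(pk, r, P) % P

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(c1, P - 1 - sk, P) % P        # c2 / c1^sk

def fake_key():        return random.randrange(2, P)                      # obliviously sampled
def fake_ciphertext(): return (random.randrange(2, P), random.randrange(2, P))

attempts = 0
while True:
    attempts += 1
    b = random.randrange(2)                        # receiver's random bit
    sk, pk = keygen()
    keys = [pk, fake_key()] if b == 0 else [fake_key(), pk]

    c = random.randrange(2)                        # sender's random bit
    cts = [None, None]
    cts[c] = encrypt(keys[c], M)
    cts[1 - c] = fake_ciphertext()

    if decrypt(sk, cts[b]) == M:                   # real key decrypts to M  <=>  b == c
        print("shared bit:", b, "after", attempts, "attempt(s)")
        break
```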

Unlike simple public-key encryption, this protocol requires interactivity. Nielsen proved in a paper at Crypto 2002  that non-interactive non-committing encryption implies that the secret key has to be as long as the message. However, Choi et al. introduced an NCE scheme that requires only expected two rounds, that is, it is non-interactive with high probability.

Sunday, May 19, 2013

School on Information-Theoretic Cryptography


Last week the school on Mathematics of Information-Theoretic Cryptography was held at the Lorentz Center in Leiden, Netherlands. It was intended as a warm-up for the workshop that follows next week at the same center, which aims to bring together people working in the field and present the latest results on a variety of information-theoretic topics.

I quite enjoyed the school and found it very rewarding, as most of the lecturers took special care to give careful presentations, on the blackboard, with no hesitation in going into the details of what was being discussed. People in the audience who felt like taking notes were able to do so without losing track of what the lecturer was saying.

The school was opened by Jesper Buus Nielsen on Monday. He gave an intuitive presentation on Secure Multiparty Computation (MPC). He discussed aspects of passive security, the UC model, and the efficiency of MPC. Things like Shamir secret sharing, security of protocols via simulators and environments, the dispute control technique, randomness extraction with Vandermonde matrices, SIMD operations and their relation to Benes networks, and more were dissected.

Tuesday and Wednesday were dedicated to a more in-depth range of mathematical topics. Peter Beelen gave a comprehensive introduction to function fields, which constitute one of the building blocks of algebraic geometric secret sharing. Although it can be hard to understand for someone with a not-so-wide knowledge of algebra, he basically started from scratch, giving examples and intuitions on why concepts are developed in a particular way. He also set practice exercises that were later solved on the board. He discussed objects like valuation rings, places, the formal expressions known as differentials, the Riemann-Roch theorem, Ihara's constant, and recursive constructions of towers of function fields. I think it was a meticulous introduction, and surely papers dealing with AG secret sharing are much more understandable for anyone who attended this lecture. Chaoping Xing discussed aspects of algebraic curves. He was mainly concerned with three applications: algebraic geometric codes (comparison and improvements on bounds of the asymptotic behaviour, and generalization of Reed-Solomon codes), field multiplication in extension fields (an alternative to the convolution product in polynomial rings, defined via two maps and the Schur product, and its connection to function fields), and lastly, perhaps a somewhat less crypto-related application, quasi-Monte Carlo methods.

Carles Padro gave explicit definitions of secret sharing schemes seen as random variables, realizations of SS for general access structures, and the relation to matroids and polymatroids. He also discussed lower and upper bounds on the information rate. Thursday was devoted to arithmetic secret sharing, and was driven by Ronald Cramer and Ignacio Cascudo. They explained the Codex framework, which tries to bring together secret sharing schemes that are multiplicative. Many of the aspects discussed on the previous days crystallized in this framework. As an aside, Ronald also informally explained the development of research on the topic over, roughly, the last twenty years.

On the last day, Friday, Yuval Ishai talked about MPC and Zero-Knowledge Proofs. He noted that historically information-theoretic cryptography and honest-majority MPC have made use of 2PC and ZKP, whereas the other direction seems less explored. He gave a high-level description of several topics regarding this connection; the one I enjoyed the most was the description of the "MPC in the Head" technique.

All the material is available at the school website.

Tuesday, May 7, 2013

Study Group: Multilinear Maps (II) - Applications



Following on from last week's study group on the construction of multilinear maps from ideal lattices (http://eprint.iacr.org/2012/610.pdf), this week's study group, given by Essam and Gareth, was on applications of multilinear maps. One of the papers covered this week (appearing at Crypto later this year) can be found here: http://eprint.iacr.org/2013/128.pdf.

This paper, entitled "Attribute Based Encryption for Circuits from Multi-linear Maps", can be seen as something of a breakthrough for ABE, as previous results were significantly weaker. We note that there is another concurrent work, due to S. Gorbunov, V. Vaikuntanathan and H. Wee, appearing at STOC, which achieves a similar result basing security on the Learning With Errors assumption. At the time of writing this paper was unavailable.

Before going into details, let's talk about ABE. Introduced by Sahai and Waters in '05, ABE comes in two distinct flavours: key-policy and ciphertext-policy. In the former, secret keys are generated per boolean function f from an allowable class of functions F. Ciphertexts encrypting messages are associated with an attribute (assignment of boolean values) x. Decryption of a ciphertext with a secret key sk_f is possible iff f(x) = 1. Ciphertext-policy is just the same with the roles of key and ciphertext reversed.

One of the main challenges has been to increase the depth of the functions f which can be evaluated. Previously this was only possible for log(n)-depth circuits, where n is the maximum length of an attribute. The goal of this work is to realise ABE for circuits of any depth. They construct a KP-ABE and a CP-ABE for circuits of any polynomially bounded depth, though it should be noted that this work is only a proof of concept and is not necessarily intended to be efficient in any way.

They identify the main problem in previous constructions (in particular those based on bilinear maps) as "backtracking": let's say I decrypt a ciphertext with attribute x, so I possess a secret key sk_f such that f(x) = 1. Now consider an OR gate (in f). When using pairings, a ciphertext which successfully decrypts also allows us to learn information about a gate input for which the ciphertext would not have successfully decrypted, since the pairing computation evaluates to the same thing on the OR gate. Now, if the wire which the attribute would not satisfy has fan-out higher than 1, we can potentially use this information to decrypt a ciphertext for which f(x) should evaluate to 0, but the additional information is used so that it evaluates to 1.

The beauty of multi-linear maps is that we can replace the single bilinear computation with maps between groups e : G_i × G_j → G_{i+j} such that
e(g_i^a,g_j^b) = g_{i+j}^{ab}


Using what they call "move-forward-then-shift", they can avoid the backtracking issue. Essentially, the difference lies in the fact that we are now in a new group, whereas previously we applied the bilinear map once and the value the adversary would need in order to cheat lay in the exponent coming from a bilinear map. Thus backtracking is no longer possible.

Wednesday, May 1, 2013

Study Group: Multilinear Maps (I)

This week's study group (30 Apr) was given by Joop and Enrique. They discussed the paper "Candidate Multilinear Maps from Ideal Lattices" (PDF). The paper describes plausible lattice-based constructions with properties that approximate the sought-after multilinear maps in hard-discrete-logarithm groups. The security of the constructions relies on a seemingly hard assumption (a multilinear analogue of DDH) in ideal lattices.

Definition. Cryptographic Multilinear Maps [BS03]:
For κ+1 cyclic groups G_1,...,G_κ, G_T (written additively) of the same order p, a κ-multilinear map e : G_1 × ... × G_κ → G_T has the following properties:

1. For elements {g_i ∈ G_i}_{i=1,...,κ}, index i ∈ [κ] and integer α ∈ Z_p, it holds that e(g_1,...,α·g_i,...,g_κ) = α·e(g_1,...,g_κ).

2. The map e is non-degenerate in the following sense: if the elements {g_i ∈ G_i}_{i=1,...,κ} are all generators of their respective groups, then e(g_1,...,g_κ) is a generator of G_T.

A cryptographic multilinear map scheme consists of efficient procedures for instance generation, element-encoding validation, group operation and negation, and the multilinear map itself,
MMP = (InstGen, EncTest, add, neg, map). These procedures are as follows.
  • Instance Generation. A randomized algorithm InstGen that takes the security parameter λ and the multilinearity parameter κ, and outputs (G_1,...,G_T, p, e, g_1,...,g_κ). The G_i's and G_T describe the groups, p ∈ Z is their order, e : G_1 × ... × G_κ → G_T describes a κ-multilinear map as above, and g_i ∈ {0,1}^* for i = 1,...,κ encode generators in these groups. Denote params = (G_1,...,G_T, p, e).
  • Element Encoding. Given the instance params, an index i ∈ [κ], and a string x ∈ {0,1}^*, EncTest(params, i, x) decides if x encodes an element in G_i. Similarly EncTest(params, κ+1, x) efficiently recognizes descriptions of elements in G_T.
  • Group Operation. Given x, y ∈ G_i, add(params, i, x, y) computes x+y ∈ G_i and neg(params, i, x) computes -x ∈ G_i. This also implies that for any integer α and x ∈ G_i we can efficiently compute α·x ∈ G_i.
  • Multilinear Map. For {x_i ∈ G_i}_{i=1,...,κ}, map(params, x_1,...,x_κ) computes e(x_1,...,x_κ) ∈ G_T.
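As an entirely insecure toy, the interface above can be instantiated by representing each "group element" α·g_i simply by α mod p, so that discrete logarithms are in the clear; this shows only the syntax of the procedures, none of the hardness. The prime and the function names below are illustrative.

```python
# Insecure toy instantiation of the multilinear-map interface: element alpha*g_i
# is stored as alpha mod p, so the map is just multiplication of exponents mod p.
import random

def InstGen(lam, kappa):
    p = 2**61 - 1                                # toy prime order
    params = {"p": p, "kappa": kappa}
    gens = [1] * kappa                           # generator of each G_i is represented by 1
    return params, gens

def EncTest(params, i, x):
    return isinstance(x, int) and 0 <= x < params["p"]

def add(params, i, x, y): return (x + y) % params["p"]
def neg(params, i, x):    return (-x) % params["p"]

def mmap(params, *xs):                           # the kappa-multilinear map into G_T
    out = 1
    for x in xs:
        out = out * x % params["p"]
    return out

params, gens = InstGen(128, kappa=3)
a, b, c = (random.randrange(params["p"]) for _ in range(3))
# Multilinearity: e(a*g, b*g, c*g) equals (a*b*c) times e(g, g, g).
lhs = mmap(params, a, b, c)
rhs = ((a * b * c) % params["p"]) * mmap(params, *gens) % params["p"]
print(lhs == rhs)                                # True
```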

Hardness Assumptions

For the multilinear map to be cryptographically useful, at least the discrete logarithm must be hard in the respective groups, and we usually also need the multilinear-DDH to be hard.


1. Multilinear Discrete-log (MDL). The Multilinear Discrete-Log problem is hard for a scheme MMP if, for all κ > 1, all i ∈ [κ] and all probabilistic polynomial-time algorithms A, the discrete-logarithm advantage of A,

AdvDlog_{MMP,A,κ}(λ) = Pr[A(params, i, g_i, α·g_i) = α : (params, g_1,...,g_κ) ← InstGen(1^λ, 1^κ), α ← Z_p],
is negligible in λ.


2. Multilinear DDH (MDDH). For a symmetric scheme MMP (with G_1 = G_2 = ... = G_κ and a common generator g), the Multilinear Decision-Diffie-Hellman problem is hard for MMP if for every probabilistic polynomial-time algorithm A, the advantage of A in distinguishing between the following two distributions is negligible in λ: (params, g, α_0·g, α_1·g,...,α_κ·g, (∏_{i=0,...,κ} α_i)·e(g,...,g)) and (params, g, α_0·g, α_1·g,...,α_κ·g, α·e(g,...,g)), where the α_i and α are uniformly random in Z_p.


Graded Encoding Scheme: Efficient Procedures, the Dream Version
The authors first describe  a "dream version" of the efficient procedures which they do not know how to realize, then  modify them to deal with technicalities that arise from their use of lattices in the realization.

Instance Generation. The randomized InstGen(1^λ, 1^κ) takes as input the parameters λ, κ and outputs (params, p_{zt}), where params is a description of a κ-Graded Encoding System, and p_{zt} is a zero-test parameter for level κ.

Ring Sampler. The randomized samp(params) outputs a level-zero encoding a ∈ S_0^(α) for a nearly uniform element α ∈ R.

Encoding. The possibly randomized enc(params, i, a) takes a level-zero encoding a ∈ S_0^(α) for some α ∈ R and an index i ≤ κ, and outputs the level-i encoding u ∈ S_i^(α) for the same α.

Addition and Negation. Given params and two encodings relative to the same index, u_1 ∈ S_i^(α_1) and u_2 ∈ S_i^(α_2), we have add(params, i, u_1, u_2) = u_1 + u_2 ∈ S_i^(α_1+α_2), and neg(params, i, u_1) = -u_1 ∈ S_i^(-α_1).

Multiplication. For u_1 ∈ S_{i_1}^(α_1), u_2 ∈ S_{i_2}^(α_2) such that i_1 + i_2 ≤ κ, we have mul(params, i_1, u_1, i_2, u_2) = u_1 × u_2 ∈ S_{i_1+i_2}^(α_1·α_2).

Zero-test. The procedure isZero(params, u) outputs 1 if u ∈ S_κ^(0) and 0 otherwise.

Extraction. This procedure extracts a canonical and random representation of ring elements from their level-κ encodings. Namely, ext(params, p_{zt}, u) outputs s ∈ {0,1}^λ such that:
(a) For any α ∈ R and any two u_1, u_2 ∈ S_κ^(α),
ext(params, p_{zt}, u_1) = ext(params, p_{zt}, u_2),
(b) The distribution {ext(params, p_{zt}, u) : α ∈ R uniform, u ∈ S_κ^(α)} is nearly uniform over {0,1}^λ.


New Encoding Scheme
An instance of the scheme relative to the parameters encodes elements of a quotient ring QR = R/I, where I is a principal ideal I = <g> ⊂ R, generated by a short vector g. Namely, the ring elements that are encoded in the scheme are cosets of the form e + I for some vector e. The short generator g itself is kept secret. In addition, the system depends on another secret element z, which is chosen at random in R_q. For higher-level encodings, a level-i encoding of the same coset is a vector of the form c/z^i ∈ R_q with c ∈ e + I short. Specifically, for i ∈ {0, 1, ..., κ} the set of all level-i encodings is S_i = {c/z^i ∈ R_q : ||c|| < q^{1/8}}, and the set of level-i encodings of the plaintext element e + I is
S_i^(e + I) = {c/z^i ∈ R_q : c ∈ e + I, ||c|| < q^{1/8}}.

Instance Generation. The instance-generation procedure chooses at random the ideal generator g and the denominator z. The denominator z is chosen uniformly at random in R_q. The generator g ∈ R should be chosen so that both g and g^{-1} ∈ K = Q[X]/(X^n + 1) are short. Once we have g, z, we choose and publish some other elements in R_q. Specifically, we have m + 1 elements x_1,..., x_m, y that are used for encoding, and an element p_{zt} that is used as a zero-testing parameter. We also choose a random seed s for a strong randomness extractor. The instance-generation procedure outputs
params = (n, q, y, {x_i}_i, s) and p_{zt}.

Sampling level-zero encodings. To sample a level-zero encoding of a random coset, we just draw a random short element in R, d ← D_{Z^n, σ'}, where σ' = σn (for the σ that was used to sample g).

Encodings at higher levels. To allow encoding of cosets at higher levels, we publish as part of our instance generation a level-one encoding of 1 + I, namely an element y = [a/z]_q where a ∈ 1 + I is short. A simplistic method of doing that is drawing a ← D_{1+I, σ'}, then computing y from a.

Adding and multiplying encodings. It is easy to see that the encoding as above is additively homomorphic, in the sense that adding encodings yields an encoding of the sum. This follows since, if we have many short c_j's, then their sum is still short, ||Σ_j c_j|| ≪ q, and therefore the sum c = Σ_j c_j = [Σ_j c_j]_{R_q} belongs to the coset Σ_j (c_j + I). Hence, if we denote u_j = c_j/z ∈ R_q, then each u_j is an encoding of the coset c_j + I, and the sum [Σ_j u_j]_q is of the form c/z where c is still a short element in the sum of the cosets.

Zero-testing. isZero(params, p_{zt}, u) = 1 if ||[p_{zt}·u]_q||_∞ < q^{3/4}, and 0 otherwise.

Extraction. To extract a canonical and random representation of a coset from an encoding u = [c/z^κ]_q, we just multiply by the zero-testing parameter p_{zt}, collect the (log q)/4 most-significant bits of each of the n coefficients of the result, and apply a strong randomness extractor to the collected bits (using the seed from the public parameters). Namely ext(params, p_{zt}, u) = EXTRACT_s(msbs([u·p_{zt}]_q)) (msbs of the coefficient representation). This works because for any two encodings u, u' of the same coset we have ||p_{zt}·u - p_{zt}·u'|| = ||p_{zt}(u - u')|| < q^{3/4}.
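To make the mechanics of encoding, multiplication and zero-testing concrete, here is a one-dimensional integer caricature (not the actual ideal-lattice construction, and with no security whatsoever): plaintexts are cosets modulo a small g, a level-i encoding divides a small representative by z^i modulo q, and the zero-test parameter h·z^κ/g maps level-κ encodings of zero to small values. All sizes are ad hoc, chosen only so that the inequalities go through.

```python
# One-dimensional integer caricature of a graded encoding (insecure, for intuition only).
import random
random.seed(0)

kappa = 3
q = 2**127 - 1                       # large prime modulus
g = 257                              # small generator of the "ideal" g*Z
z = random.randrange(2, q)           # secret denominator
h = random.randrange(2, 2**8)        # small multiplier for the zero-test
p_zt = h * pow(z, kappa, q) * pow(g, -1, q) % q
threshold = 1 << 95                  # stand-in for q^(3/4)

def encode(alpha, level):
    """Level-i encoding of the coset alpha + g*Z, with a small numerator."""
    c = alpha + g * random.randrange(2**16)
    return c * pow(z, -level, q) % q

def mul(u1, u2):                     # levels add, encoded cosets multiply
    return u1 * u2 % q

def is_zero(u):                      # zero-test at the top level kappa
    t = u * p_zt % q
    return min(t, q - t) < threshold

a1, a2, a3 = 5, 7, 11
u = mul(mul(encode(a1, 1), encode(a2, 1)), encode(a3, 1))   # level-3 encoding of a1*a2*a3
v = encode(a1 * a2 * a3 % g, 3)                             # fresh level-3 encoding of the same coset

print(is_zero((u - v) % q))          # True: u and v encode the same coset
print(is_zero(u))                    # False: the coset a1*a2*a3 is non-zero mod g
```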

The security of the graded encoding system relies on new assumptions, and at present it seems unlikely that they can be reduced to more established assumptions, such as learning-with-errors (LWE), or even the NTRU hardness assumption.