Thursday, December 15, 2011

IMA Coding and Crypto, Day 1 Morning

This year is the 25th anniversary of the conference colloquially known as Cirencester. It is traditionally held every two years at the agricultural college in Cirencester. It's a rather remote location, which has the effect that in the evening everybody cozies up in the college bar by a big wood fire. A few years back I remember I kept having to climb a big mound of something just to get telephone reception.

Anyway, this year's Cirencester is not actually in Cirencester; instead it is in Oxford, at Lady Margaret Hall. This college used to be women only, but these days it is mixed (just like Royal Holloway, University of London). It is located a bit north of the centre, about a 10-15 minute walk, and the buildings are all rather new (certainly by Oxford college standards). So it's not quite as remote as the real Cirencester, but it is a decent emulation (including some of the agricultural whiffs that occasionally manage to make it into the lecture theatre).

The first talk of the conference was by David Naccache, who gave an invited talk consisting of three parts. The first part dealt with code obfuscation. The goal of code obfuscation is simple: you take a piece of code and modify it in such a way that it behaves exactly the same as the original, yet all other information about the original code is hidden. For instance, a programmer might want to hide the control flow of a program, or a cryptographer might want to hide a key that is a fixed input to the program. Obfuscation is a rather hard problem cryptographically, rife with negative results. The main negative result is due to Barak et al., who showed that there exist programs that cannot be obfuscated at all: any obfuscation will invariably leak a predicate about the original. In his talk, Naccache described programs with a much stronger form of unobfuscatability: any (functionally equivalent) obfuscation will necessarily leak the original code. The main tool behind this remarkable result is the concept of quines, that is, programs that print their own source code. While this may sound rather negative, Naccache also pointed out a positive: their transformation takes any program and turns it into a fully unobfuscatable one, which creates a potential mechanism for detecting forgeries of some original piece of code.
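As an aside, for readers who have never seen one: here is a minimal Python quine of my own (not an example from the talk), just to illustrate the self-printing idea the construction builds on.

    # The two lines below form a quine: printing the template s formatted
    # with its own repr() reproduces exactly those two lines of source.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it prints the two code lines verbatim, which is precisely the self-reproducing behaviour that quines provide.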

The second part was dedicated to identifying fingerprints. While there are many algorithms for this already, most boil down to comparing a fingerprint found at a crime scene against a database, checking for a match one entry at a time. Each of these comparisons is quite costly, and Naccache showed how one can use relatively lightweight statistical methods to precompute an ordering on the database that increases the probability of finding a hit sooner rather than later (thus reducing the overall search time).
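The general flavour of the idea, as I understood it, is sketched below. This is my own toy illustration (the scoring function and data layout are made up, not Naccache's actual statistics): rank the database by a cheap score and only then run the expensive matcher, in that order.

    import numpy as np

    def cheap_score(query_features, entry_features):
        # Lightweight statistic (here: cosine similarity of coarse feature
        # vectors), standing in for whatever precomputed statistics are used.
        q, e = np.asarray(query_features, float), np.asarray(entry_features, float)
        return float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-9))

    def ordered_search(query, database, expensive_match):
        # Try the most promising entries first, so the costly matcher is
        # likely to terminate early instead of scanning the whole database.
        ranked = sorted(database,
                        key=lambda entry: cheap_score(query['features'], entry['features']),
                        reverse=True)
        for entry in ranked:
            if expensive_match(query, entry):
                return entry
        return None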

Finally, Naccache discussed the problem of figuring out which computation was performed based on inputs and outputs only. In the past, work had been done on partially extracting an exponent from the projective representation of a scalar multiple; here the focus was on arithmetic circuits of small depth over a finite field. As this is still work in progress, the details were left for later.

After a very short break, the first contributed talk was delivered by our very own Peter Scholl. I think it was his first conference presentation and he did a sterling job! The topic under investigation was improved key generation for Gentry's fully homomorphic encryption scheme. When Gentry proposed his breakthrough scheme, the efficiency left much to be desired; in fact, it was completely impractical to deploy the scheme in a realistic security setting. Much progress has been made in the meantime, but this progress does not always blend well: Gentry and Halevi presented a vastly improved key generation method and Smart and Vercauteren presented several significant improvements to other parts of the scheme, yet the Gentry-Halevi key generation sadly does not apply to the parameters Smart and Vercauteren need for their key ideas. In this work, Peter (jointly with Nigel) adapted the ideas of Gentry and Halevi to the Smart-Vercauteren setting. It turned out that by introducing several neat new techniques and optimizations, key generation suddenly becomes feasible. I would especially like to draw attention to one of the conclusions, namely that you really need to implement these more complicated schemes to see where the challenges are and what can be done about the bottlenecks. This point was also brought home in later talks, for instance the excellent invited talk by Ivan Damgaard and the inspiring talk by Mike Scott.
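For flavour, here is a toy sketch of the kind of trial-and-error loop that makes naive Smart-Vercauteren style key generation expensive: draw a random polynomial and accept it only if the associated resultant is prime. The parameters below are tiny and meaningless, the acceptance condition is simplified, and this is emphatically not Peter's improved algorithm; it is only meant to show why the naive loop hurts.

    import random
    from sympy import symbols, Poly, isprime

    def toy_keygen(n=16, t=4):
        # Toy key generation with insecure parameters: sample v(x) with small
        # coefficients and keep it only if d = res(v(x), x^n + 1) is prime.
        # For realistic n and t this primality-testing loop dominates the cost,
        # which is exactly why better key generation methods matter.
        x = symbols('x')
        f = Poly(x**n + 1, x)
        while True:
            coeffs = [random.randint(-2**t, 2**t) for _ in range(n)]
            coeffs[-1] |= 1  # keep the constant term odd (a simplification)
            v = Poly(coeffs, x)
            d = abs(v.resultant(f))
            if isprime(d):
                return v, d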

After Peter, Frederik Armknecht (fresh from presenting at Asiacrypt) gave a talk on constructing homomorphic encryption schemes from coding theory. The advantages of using coding theory are obvious: the computations involved are almost guaranteed to be simple, plus linear codes come with an additive homomorphism essentially for free. The hope was that this simplicity could be leveraged to create an efficient scheme supporting both additions and multiplications. As it turns out, this is not as easy as one might hope. The schemes presented by Armknecht still had some limitations that reduce their overall usability, chief amongst these being fairly strict restrictions on the number of ciphertexts allowed (which also implies a secret-key-only setting) and a limit on the number of multiplications that can be handled. Nonetheless, one can imagine that in some scenarios these schemes deliver exactly what is needed in a relatively cheap and painless way.
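To see where the "free" additive homomorphism comes from, here is a tiny illustration over GF(2) (my own toy example with a [7,4] Hamming code, not the actual scheme from the paper): since encoding is linear, the sum of two codewords is the codeword of the sum of the messages, so anything built on top of codewords inherits additions automatically.

    import numpy as np

    # Generator matrix of a toy [7,4] Hamming code over GF(2).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=int)

    def encode(m):
        # Encode a length-4 message as a length-7 codeword (arithmetic mod 2).
        return (np.array(m) @ G) % 2

    m1 = [1, 0, 1, 1]
    m2 = [0, 1, 1, 0]

    # Linearity: encode(m1) + encode(m2) equals encode(m1 + m2) over GF(2).
    lhs = (encode(m1) + encode(m2)) % 2
    rhs = encode([(a + b) % 2 for a, b in zip(m1, m2)])
    assert (lhs == rhs).all()

Supporting multiplications as well is where the hard work (and the restrictions mentioned above) comes in.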
