Today marks the start of Eurocrypt 2013 in Athens. The first session was on lattices and the first talk was on Candidate Multilinear Maps from Ideal Lattices, which received the best paper award. This paper was also discussed in an earlier blog post as part of a study group.
The second talk was on Lossy Codes and a New Variant of the Learning-With-Errors Problem, by Nico Döttling (who gave the talk) and Jörn Müller-Quade from KIT. In the Learning With Errors (LWE) problem, you are given noisy inner products (mod q) of a secret vector with random vectors, and the goal is to retrieve the secret vector (search version) or to distinguish these noisy inner products from random (decision version). In certain cases (depending on the modulus q and the noise distribution) it is possible to reduce the search problem to the decision problem. More importantly, if the noise is taken from an appropriate Gaussian distribution, a worst-case to average-case reduction shows that average-case LWE instances are as hard as worst-case lattice problems.
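As a toy illustration (the parameters below are made up for demonstration, and the rounded continuous Gaussian merely stands in for the discrete Gaussian used in the actual reductions), generating an LWE instance looks roughly like this:

```python
import numpy as np

# Toy LWE instance generator; n, q, m, sigma are illustrative only.
n, q, m = 10, 97, 20   # secret dimension, modulus, number of samples
sigma = 1.0            # noise width (rounded Gaussian as a stand-in
                       # for the discrete Gaussian of the reductions)
rng = np.random.default_rng()

s = rng.integers(0, q, size=n)                         # secret vector
A = rng.integers(0, q, size=(m, n))                    # random vectors
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)  # Gaussian noise
b = (A @ s + e) % q                                    # noisy inner products

# Search version: given (A, b), recover s.
# Decision version: distinguish (A, b) from (A, u) with u uniform mod q.
```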
However, sampling such Gaussian noise is cumbersome, and in this paper the authors examine whether a worst-case reduction is possible for some non-Gaussian noise distribution. They give a new worst-case connection for LWE with uniformly distributed errors, with the drawback that the number of LWE samples to be given out must be fixed in advance. As the paper title indicates, their proof relies on lossy codes, a proof strategy used earlier by Peikert and Waters. Strictly speaking, they do not give a new proof of the worst-case reduction; rather, they use the existing proof for Gaussian noise in such a way that no Gaussians are required in the LWE instantiation.
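By contrast, uniform errors are trivial to sample exactly. A sketch of the flavour of their variant (again with made-up parameters; note that the number of samples m is fixed up front):

```python
import numpy as np

# Toy LWE variant with errors uniform in [-r, r]; such errors are
# trivial to sample exactly, unlike discrete Gaussians. Parameters
# are illustrative; the number of samples m is fixed in advance.
n, q, m, r = 10, 97, 20, 4
rng = np.random.default_rng()

s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-r, r + 1, size=m)  # uniform noise: one call, no bias
b = (A @ s + e) % q
```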
For a fixed number of LWE samples, the problem can be written in matrix form, where the rows of the matrix are the random vectors used in the inner products. In the original LWE variant this matrix consists of uniformly random elements mod q. In the variant in this paper, it is replaced by a matrix drawn from a distribution of lossy matrices, which are computationally indistinguishable from random matrices. If the lossy problem were noticeably easier than the standard LWE problem, that very gap would let you distinguish lossy matrices from random ones, contradicting their indistinguishability. The main part of the paper is dedicated to showing that their specific construction is indeed lossy for their uniform noise distribution.
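One way to picture a lossy matrix is as a low-rank product plus a small perturbation, in the spirit of the Peikert-Waters strategy. The following is a hypothetical sketch of that general idea, not the paper's exact construction:

```python
import numpy as np

# Hypothetical sketch of a lossy matrix: low-rank product plus small
# noise. This illustrates the general idea of lossy codes, NOT the
# paper's exact construction; all parameters are made up.
n, q, m, k = 10, 97, 20, 3   # k << n is what makes the matrix lossy
rng = np.random.default_rng()

B = rng.integers(0, q, size=(m, k))
C = rng.integers(0, q, size=(k, n))
F = rng.integers(-1, 2, size=(m, n))  # small perturbation in {-1, 0, 1}
A_lossy = (B @ C + F) % q

# Under a suitable hardness assumption, A_lossy is computationally
# indistinguishable from a uniform m x n matrix, yet b = A_lossy @ s + e
# statistically loses most of the information about s.
```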
The end result is a reduction from worst-case GapSVP to the average case of their LWE variant. Compared to previous reductions, this one loses an additional factor of m (the number of samples) in the approximation factor for the lattice problem; tightening this reduction remains an open problem.