Tuesday, May 2, 2017

Eurocrypt 2017 - Parallel Implementations of Masking Schemes and the Bounded Moment Leakage Model

Side-channel analysis made its way into Eurocrypt this year thanks to two talks, the first of which was given by François-Xavier Standaert on a new model in which to prove the security of implementations. When talking about provable security in the context of side-channel analysis, one prominent model comes to mind: the d-probing model, in which the adversary is allowed to probe d internal variables (roughly corresponding to d wires inside an actual implementation) and learn them. Another very famous model, introduced ten years later, is the noisy leakage model, in which the adversary is allowed to probe all intermediate variables (or wires) but the learnt values are perturbed by noise. To complete the picture, it was proved that security in the probing model implies security in the noisy leakage one.
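As a toy illustration of the probing model (my own sketch, not from the paper), consider a first-order Boolean masking x = x0 ⊕ x1 with x0 uniformly random: the exact distribution of either single share is independent of the secret, so a single probe learns nothing.

```python
def share_distributions(secret, n_bits=4):
    """First-order Boolean masking: secret = x0 ^ x1 with x0 uniform.
    Returns the exact distribution of each individual share,
    enumerated over all 2**n_bits values of the mask x0."""
    dist0, dist1 = {}, {}
    for x0 in range(2 ** n_bits):
        x1 = secret ^ x0                  # second share completes the sharing
        dist0[x0] = dist0.get(x0, 0) + 1
        dist1[x1] = dist1.get(x1, 0) + 1
    return dist0, dist1

# Either share is uniform whatever the secret: probing one wire reveals nothing.
```

Probing both shares, of course, recovers the secret, which is why security is parameterised by the number of probes d.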

The work of Barthe, Dupressoir, Faust, Grégoire, Standaert and Strub is motivated precisely by how these two models deal with parallel implementations of cryptosystems. On one hand, the probing model admits a very simple and elegant description and proof techniques, but it is inherently oriented towards serial implementations; on the other hand, the noisy leakage model naturally includes parallel implementations in its scope but, since it admits noise in the leakage functions, it lacks simplicity. The latter point is particularly important when circuits are analysed with automated tools and formal methods, because these can rarely deal with errors.

The contribution of the paper can then be summarised as the definition of a new model that aims to combine the advantages of both previous ones: the Bounded Moment leakage Model (BMM). The authors show how it relates to the probing model and give constructions that are secure in it. In particular, they prove that the BMM is strictly weaker than the probing model: security in the latter implies security in the former, but they give a counterexample showing that the converse does not hold. The informal definition of the model given during the talk is the following:
An implementation is secure at order o in the BMM if all mixed statistical moments of order up to o of its leakage vectors are independent of any sensitive variable manipulated.
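To see what this definition means concretely, here is an entirely toy leakage model of my own choosing (not from the paper): a 2-share Boolean masking manipulated in parallel, leaking one sample equal to the sum of the shares' Hamming weights. Enumerating all masks exactly, the first-order moment of the leakage is independent of the secret, while the second-order (central) moment is not, so this sketch is secure at order 1 in the BMM but not at order 2.

```python
def exact_moments(secret, n_bits=4):
    """Leakage of a parallel 2-share Boolean masking of `secret`:
    one sample L = HW(x0) + HW(x1), enumerated over all masks x0.
    Returns the exact mean and second central moment of L."""
    hw = lambda v: bin(v).count("1")
    samples = [hw(x0) + hw(secret ^ x0) for x0 in range(2 ** n_bits)]
    mean = sum(samples) / len(samples)
    second = sum((l - mean) ** 2 for l in samples) / len(samples)
    return mean, second
```

The mean comes out as n_bits for every secret, while the second central moment varies with the secret's Hamming weight, reflecting the classical fact that a first-order masked parallel implementation leaks in the second statistical moment.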

A parallel multiplication algorithm and a parallel refreshing algorithm are given as practical examples where the reduction between the models stated before holds, which reads as follows:
A parallel implementation is secure at order o in the BMM if its serialisation is secure at order o in the probing model.
The failure of the converse is shown in a slightly different setting, namely that of continuous leakage: the adversary does not just learn values carried by some wires by probing them, but can repeat this operation as many times as desired and move the probes adaptively. Clearly this is a much stronger adversary, since knowledge can be accumulated over multiple probing sessions. This setting is used to show that security in the continuous BMM does not imply security in the continuous probing model. The refreshing scheme mentioned above can easily be broken in the latter after a number of iterations linear in the number of shares, but not in the former, since adapting the position of the probes does not help: an adversary in the BMM can already ask for leakage on a bounded function of all the shares.

Both slides and paper are already available.

Eurocrypt 2017: On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL

This morning, Martin gave a great talk on lattice attacks and parameter choices for Learning With Errors (LWE) with small and sparse secrets. The work presents new attacks on LWE instances, yielding revised security estimates. Concretely, the exponent of the dual lattice attack is revised by a factor of 2L/(2L+1) for log q = Θ(L*log n). The paper exploits the fact that most lattice-based FHE schemes use short and sparse secrets. We will write q to denote the LWE modulus throughout.

Let's first have a look at the set-up. Remember that LWE consists of distinguishing between pairs (A, As+e) and (A, b). In the first instance, A is selected uniformly at random and e is drawn from a special (usually Gaussian) error distribution. In the second one, both A and b are uniformly random. Selecting s, as this work shows, is perhaps trickier than previously thought. Theory says that, in order to preserve security, selecting a short and sparse secret s means the dimension must be increased to n*log_2(q). Practice says just ignore that and pick a small secret anyway. More formally, HElib typically picks a secret s such that exactly h=64 entries are in {-1,1} and all the rest are 0. SEAL picks uniformly random secrets in {-1,0,1}.
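The two sampling procedures are straightforward to sketch (the helper names are my own, not from either library; parameters as described above):

```python
import random

def helib_style_secret(n, h=64):
    """Sparse secret: exactly h entries in {-1, 1}, the rest 0 (HElib-style)."""
    s = [0] * n
    for i in random.sample(range(n), h):  # h distinct nonzero positions
        s[i] = random.choice((-1, 1))
    return s

def seal_style_secret(n):
    """Uniform ternary secret in {-1, 0, 1} (SEAL-style)."""
    return [random.choice((-1, 0, 1)) for _ in range(n)]
```

Note that a SEAL-style secret has about n/3 zero entries on average, whereas the HElib-style secret has a fixed, much sparser support of h entries.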

We also recall that the dual lattice attack consists of finding a short vector w such that wA = 0 (mod q), then checking whether
<w, As+e> = <w, As> + <w, e> = <w, e> (mod q)
is short. If we are in the presence of an LWE sample, e is short, so the inner product is short. Short*short = short, as any good cryptographer can tell you.
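This identity can be checked numerically with a toy sketch (mine, not the paper's): it plants a short left-kernel vector w = (1, ..., 1) into A by construction, where a real attack would find w by lattice reduction, and verifies that <w, b> reduces to the short value <w, e>. All parameters are arbitrary choices for the example.

```python
import random

q, n, m = 3329, 16, 17  # toy modulus, dimension and sample count

def center(x):
    """Representative of x mod q in (-q/2, q/2]."""
    x %= q
    return x - q if x > q // 2 else x

# Build A (m x n) with a designed short left-kernel vector w = (1, ..., 1):
# the last row is the negated sum of the others, so wA = 0 (mod q).
A = [[random.randrange(q) for _ in range(n)] for _ in range(m - 1)]
A.append([(-sum(col)) % q for col in zip(*A)])
w = [1] * m

s = [random.choice((-1, 0, 1)) for _ in range(n)]         # small secret
e = [random.choice((-2, -1, 0, 1, 2)) for _ in range(m)]  # small error
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# <w, b> reduces to <w, e> mod q -- small for a genuine LWE sample.
lhs = center(sum(wi * bi for wi, bi in zip(w, b)))
rhs = sum(wi * ei for wi, ei in zip(w, e))
```

For uniformly random b, by contrast, <w, b> mod q is uniform, which is exactly what the distinguisher exploits.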

The improvements presented in this paper rely on three main observations. Firstly, a revised dual lattice attack is presented. This step is done by adapting BKW-style algorithms in order to increase efficiency and can be applied in general, i.e. it does not depend on either the shortness or the sparseness of the secret. It is achieved by applying BKZ to the target basis, then re-randomising the result and applying BKZ again with a different block size.

The second optimisation exploits the fact that we have small secrets. We observe that we can relax the condition on the dual vector somewhat. Indeed, if s is short, then finding v such that vA is short instead of 0 is good enough. Therefore, we look for vectors (v,w) in the lattice

L = {(y,x): yA = x (mod q)}.

Now in small-secret LWE instances, ||s|| < ||e||, and so we may allow ||w|| > ||v|| as long as
||<w,s>|| ≈ ||<v,e>||.
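The algebra behind this relaxation can be checked with another toy sketch (again mine, not the paper's): for any v, setting w = vA (mod q) gives <v, b> = <w, s> + <v, e> (mod q). Here v is just a random binary stand-in, so w is not actually short; in the real attack, lattice reduction over L produces a pair with both ||v|| and ||w|| short, balanced as above.

```python
import random

q, n, m = 3329, 16, 24  # arbitrary toy parameters

def center(x):
    """Representative of x mod q in (-q/2, q/2]."""
    x %= q
    return x - q if x > q // 2 else x

A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.choice((-1, 0, 1)) for _ in range(n)]         # small secret
e = [random.choice((-2, -1, 0, 1, 2)) for _ in range(m)]  # small error
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

v = [random.choice((0, 1)) for _ in range(m)]  # stand-in for a short dual vector
w = [center(sum(v[i] * A[i][j] for i in range(m))) for j in range(n)]  # w = vA mod q

# <v, b> = <w, s> + <v, e> (mod q): small secret lets <w, s> stay small
# even when w is allowed to be larger than v.
lhs = sum(v[i] * b[i] for i in range(m)) % q
rhs = (sum(w[j] * s[j] for j in range(n)) + sum(v[i] * e[i] for i in range(m))) % q
```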

Finally, the sparsity of the small secret is exploited. This essentially relies on the following observation: when s is very sparse, most of the columns of A become irrelevant, so we can just ignore them.
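A quick sketch (illustrative only) of why zero columns are irrelevant: restricting A to the support of s leaves As unchanged. In the real attack the support is of course unknown, so the paper guesses which columns to drop and corrects for wrong guesses; here the support is simply read off s.

```python
import random

q, n, m = 3329, 32, 8  # arbitrary toy parameters

# Sparse secret with exactly 4 nonzero entries in {-1, 1}.
s = [0] * n
for i in random.sample(range(n), 4):
    s[i] = random.choice((-1, 1))

A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]

# Keep only the columns of A where s is nonzero.
support = [j for j in range(n) if s[j] != 0]
A_small = [[row[j] for j in support] for row in A]
s_small = [s[j] for j in support]

# As is unchanged: the dropped columns were multiplied by zero anyway.
full = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(m)]
small = [sum(A_small[i][k] * s_small[k] for k in range(len(support))) % q
         for i in range(m)]
```

The pay-off is a much smaller instance: the effective dimension drops from n to the size of the support.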

The final algorithm, SILKE, combines the three steps above:
• Perform BKZ twice with different block sizes to produce many short vectors
• Scale the normal form of the dual lattice
• If sparse, ignore the presumed zero columns, correct for mistakes by checking shifted distribution

As usual, Martin wins Best Slides Award for including kittens.