One of the last sessions at yesterday’s Eurocrypt was an excellent talk by Tibor
Jager, who presented his paper On the Impossibility of Tight Cryptographic Reductions, which is joint work with Christoph Bader, Yong Li and Sven Schäge.
Recall that we often prove the security of a public key
primitive by constructing an algorithm, called a reduction, that is able to
violate some complexity assumption $A$ (like CDH or DDH) if it has access to an
adversary that breaks the primitive. So if $A$ is true, then such an adversary
cannot exist.
In the setting where there are $n$ users, it is often the
case that the probability of the reduction breaking $A$ is $\frac{1}{n}$ times
the probability that the adversary breaks the primitive. This is what we mean
when we say that the reduction is not “tight”: it is essentially $n$ times
easier for an adversary to break the primitive than for the reduction to
violate $A$. If $n$ is very large, then to be confident that the primitive is unbreakable (assuming $A$ is true) we must compensate for this security loss by choosing considerably larger parameters. So tight reductions are highly desirable in practice.
The paper presented yesterday shows that there are many
public key primitives for which no tight reductions can exist. We will illustrate
this here with the example of an encryption scheme, but the result easily
generalises to other primitives.
Consider the following multi-user security game $G$ for an
encryption scheme: the adversary is given $n$ public keys $pk_1, \dots, pk_n$,
outputs an index $i$ between 1 and $n$, receives the secret keys $sk_j$ for $j \in \{1, \dots, n\} \setminus \{ i \}$, and wins the game if it outputs a valid secret key for $pk_i$.
This looks like an odd game and it corresponds to a very
weak notion of security, but showing that no tight reduction exists for $G$
implies that no tight reduction exists for a more natural game: $n$-key IND-CPA where the adversary can adaptively corrupt up to $n-1$ secret keys and query its test oracle using a single uncorrupted key.
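To make the game concrete, here is a toy Python sketch of $G$; the helper names `keygen` and `is_valid` and the adversary interface are my own illustration, not from the paper.

```python
# Toy sketch of the multi-user key game G (interfaces are illustrative only).
# keygen() -> (pk, sk); is_valid(pk, sk) -> True iff sk is a valid secret key for pk.

def play_G(adversary, keygen, is_valid, n):
    keys = [keygen() for _ in range(n)]                # (pk_1, sk_1), ..., (pk_n, sk_n)
    pks = [pk for pk, _ in keys]

    i = adversary.choose_index(pks)                    # adversary commits to an index i
    others = {j: sk for j, (_, sk) in enumerate(keys) if j != i}
    sk_guess = adversary.recover_key(pks[i], others)   # sees every secret key except sk_i

    return is_valid(pks[i], sk_guess)                  # wins by outputting a valid key for pk_i
```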
We will suppose that a reduction from $G$ to an assumption
$A$ exists, and use the reduction itself to violate $A$ directly, without
accessing an adversary against $G$. Instead, our meta-reduction will simulate an ideal
adversary against $G$ and violate $A$ using the reduction's output from interacting with the simulated adversary. To do the simulation, we essentially replay the
reduction $n$ times, supplying a different index $i$ each time and storing the
secret keys returned by the reduction, provided that they are actually valid for the
public keys. Then it is trivial to “win” $G$: in the final run we commit to an index whose secret key we have already stored and simply output that stored key, so our meta-reduction behaves like a very powerful adversary against $G$.
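Schematically, the rewinding strategy looks something like the following Python sketch. All of the interfaces here are hypothetical, and the real meta-reduction also has to resample the stored keys (as discussed below) and comes with a careful probability analysis.

```python
import copy

# Rough sketch of the meta-reduction (hypothetical interfaces; illustrative only).

def meta_reduction(reduction, is_valid, n):
    pks = reduction.publish_keys()              # the reduction, playing challenger, outputs pk_1..pk_n

    # Rewind the reduction n times, committing to a different index each time,
    # and store every returned secret key that actually verifies against its public key.
    stored = {}
    for i in range(n):
        branch = copy.deepcopy(reduction)       # replay from the same state
        revealed = branch.reveal_all_except(i)  # {j: sk_j for j != i}
        for j, sk in revealed.items():
            if is_valid(pks[j], sk):
                stored[j] = sk

    if not stored:                              # the reduction never produced a valid key
        return None

    # Final run: commit to an index whose secret key we already hold and "win" G with it.
    i, sk = next(iter(stored.items()))
    final = copy.deepcopy(reduction)
    final.reveal_all_except(i)
    return final.receive_answer(i, sk)          # the reduction's output should now violate A
```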
One of two things can happen. Either the reduction answers honestly (i.e. with valid secret keys) for at most one index $i$: then our rewinding strategy won’t work, but an ideal adversary, which only completes the game when the keys it receives are valid, would pick that single index with probability only $1/n$, so the reduction loses a factor of $n$ and isn’t tight. Or the reduction answers honestly for more than one index: then our meta-reduction perfectly simulates an ideal adversary and we violate $A$. So either the reduction isn’t tight, or $A$ is false.
Many of the technical details are omitted here, but it’s worth pointing out one of the conditions that is necessary for this argument to go through. Simply forwarding valid secret keys provided by the reduction doesn’t quite
simulate an ideal adversary, since these secret keys could be “non-uniform” in
some way. So there must be an efficient algorithm that takes a public key $pk$ and a valid secret key $sk$ and uniformly samples from all the valid secret keys for $pk$. The easiest way to
achieve this is if there is exactly one valid secret key for each public key:
the “resampling” algorithm just returns its input.
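For instance (my example, not the paper’s), with discrete-log style keys where $pk = g^{sk}$, each public key determines its secret key uniquely, so the resampler really can just hand back what it was given:

```python
# Trivial "resampler" for schemes with a unique secret key per public key (illustrative only).
def resample(pk, sk, is_valid):
    assert is_valid(pk, sk)   # sk must already be a valid secret key for pk
    return sk                 # the unique valid key is trivially a uniform sample
```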
I find this result very neat. It represents the intersection of theoretical work and practical concerns; often the two are very different things.