Friday, May 20, 2011
Eurocrypt '11 - Tallinn - Day Two
Tuesday was a short day with only two sessions: one on signatures, on which I will focus, and one on information-theoretic cryptography. The afternoon was reserved for the social event, a walking tour of the historic centre of Tallinn.
The first talk of the day was by Sven Schäge from Bochum, who talked about tight proofs for signature schemes without random oracles. For many standard-model signature schemes, the proof of unforgeability splits into two cases, depending on whether or not the adversary recycles parts of a signature it obtained from a signing query in its forgery. In the recycling case, the simulator in the proof typically guesses which of the signing queries the forgery will relate to, which introduces a polynomial loss in the reduction. Avoiding this guess is what yields tight security proofs, which in turn allow us to use smaller parameters for practical instantiations.
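To see why this matters for parameters, here is the standard counting argument behind the loss (my own back-of-the-envelope sketch, not a formula from the talk): if the adversary makes $q$ signing queries, the simulator's guess is correct with probability $1/q$, so a forger with advantage $\epsilon$ only yields a reduction with advantage
$$\epsilon' \geq \epsilon / q.$$
For $q \approx 2^{30}$ this loses about 30 bits of security, which has to be compensated by larger parameters; a tight reduction with $\epsilon' \approx \epsilon$ avoids the blow-up.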
The rest of the session consisted of Bristol papers: the first one by Dario Catalano, Dario Fiore and Bogdan Warinschi. Dario II (Fiore) introduced the concept of adaptive pseudo-free groups. This is an abstraction of objects typically used in cryptography where certain operations are "hard". A pseudo-free group is a group with efficient group operations which behaves like a free group for computationally bounded adversaries, such as an RSA group. For cryptographic applications this concept might be too weak: for signature schemes, for example, the adversary is allowed to see solutions to non-trivial equations by making signing queries. Adaptive pseudo-free groups thus extend pseudo-free groups to adaptive adversaries.
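As a concrete example of what "behaving like a free group" buys you (my illustration, not from the talk): in the free group generated by $g$, the equation
$$x^e = g, \qquad e > 1,$$
has no solution at all, so in a pseudo-free group no efficient adversary should be able to output such a pair $(x, e)$. For $\mathbb{Z}_N^*$ this is precisely the strong RSA assumption, and Micciancio indeed proved that RSA groups are pseudo-free under strong RSA.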
The last talk of the session was my own, on a new primitive called commuting signatures. They extend the functionality of verifiably encrypted signatures, which enable encrypting signatures and/or messages while giving a publicly verifiable proof that the content is valid. Commuting signatures moreover allow a signer, given an encryption of a message, to produce a verifiably encrypted signature on the plaintext. As an application I gave the currently most efficient instantiation of delegatable anonymous credentials, which can be delegated non-interactively.
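To give an idea of the functionality, here is a rough interface sketch in Python; all names and signatures are hypothetical placeholders of mine, not the scheme's actual algorithms:

# Hypothetical interface for commuting signatures; names are
# illustrative placeholders, not taken from the paper.
from typing import Any, Protocol, Tuple

class CommutingSignatures(Protocol):
    def sign(self, sk: Any, msg: Any) -> Any:
        """Ordinary signature on a plaintext message."""

    def encrypt_msg(self, ek: Any, msg: Any) -> Any:
        """Encrypt a message under the encryption key ek."""

    def encrypt_sig(self, ek: Any, sig: Any) -> Any:
        """Verifiably encrypt an existing signature."""

    def sign_ciphertext(self, sk: Any, ct_msg: Any) -> Tuple[Any, Any]:
        """Given only an encryption of msg, output an encryption of a
        signature on msg plus a publicly verifiable validity proof."""

    def verify_enc(self, vk: Any, ct_msg: Any, ct_sig: Any, proof: Any) -> bool:
        """Check the proof without decrypting anything."""

# The "commuting" property, informally: encrypting sign(sk, m) and
# running sign_ciphertext on an encryption of m both yield verifiably
# encrypted signatures on the same message m.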
Monday, May 16, 2011
Eurocrypt '11 - Tallinn - Day One
We started off with the best paper at the conference - and a strong candidate for "best talk" too!
Efficient Authentication from Hard Learning Problems looked at how you could do authentication on an RFID tag. Forget Moore's Law - this is state-of-the-art technology and we're talking about a few thousand gates in total. Even AES is not an option, so the authors developed an authentication protocol based on the "Learning Parity with Noise" (LPN) problem, building on the HB scheme and adding security against active adversaries.
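For reference, my one-line summary of the standard LPN definition: for a secret $s \in \{0,1\}^n$ and noise rate $\tau < 1/2$, the adversary gets samples
$$(a,\ \langle a, s\rangle \oplus e), \qquad a \leftarrow \{0,1\}^n,\ e \leftarrow \mathrm{Ber}_\tau,$$
and has to recover $s$. Without the noise bits this would be simple Gaussian elimination; with them it is believed to be hard.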
The basic idea: prover and verifier share a secret vector; the verifier picks a random challenge vector and the prover replies with the inner product of the challenge and the secret, XORed with a biased random bit. A single round can fail for both honest and dishonest provers, but the gap between the two failure probabilities lets you build a secure protocol through repetition - see the sketch below.
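Here is a toy Python sketch of that basic (passively secure) HB-style idea with repetition; the parameter choices are illustrative only, and this is the textbook HB round rather than the actively secure protocol from the talk:

# Toy sketch of the basic HB-style protocol: repetition of noisy
# inner-product rounds. Parameters are illustrative, not from the paper.
import secrets

N = 128        # length of the shared secret vector
ROUNDS = 256   # number of repetitions
TAU = 0.125    # noise rate of the prover's biased bit
THRESHOLD = int(ROUNDS * 0.25)  # accept if at most this many mismatches

def rand_vec(n: int) -> list[int]:
    return [secrets.randbelow(2) for _ in range(n)]

def inner(a: list[int], s: list[int]) -> int:
    return sum(x & y for x, y in zip(a, s)) & 1

def prover_response(s: list[int], a: list[int]) -> int:
    noise = 1 if secrets.randbelow(1000) < TAU * 1000 else 0
    return inner(a, s) ^ noise

def authenticate(s: list[int]) -> bool:
    errors = 0
    for _ in range(ROUNDS):
        a = rand_vec(N)            # verifier's random challenge
        z = prover_response(s, a)  # prover's noisy answer
        errors += z ^ inner(a, s)  # verifier checks the round
    return errors <= THRESHOLD

secret = rand_vec(N)
print("honest prover accepted:", authenticate(secret))
# A prover without the secret answers each round correctly only with
# probability 1/2, so it accumulates ~ROUNDS/2 errors and is rejected.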
Next up: a variant of NTRU that comes with a proof that it really is as hard as finding short vectors in lattices, followed by a great introduction to the field of lattice-based cryptography in Phong Nguyen's invited talk.
The afternoon was devoted to side channels and (fully) homomorphic cryptography, where, among other things, we saw fast(er) implementations of pairings, AES and Gentry's FHE scheme.
One highlight on side channels: A Formal Study of Power Variability Issues and Side-Channel Attacks for Nanoscale Devices.
As processors get smaller, variability between different chips - even from the same series - starts to influence measurements such as power traces. So you can't simply measure one chip to construct a hypothesis and then test it on another as you'd like: the more exactly your hypothesis fits the first chip, the worse inter-chip variation makes it perform on the second. The authors showed that you get better results by aiming for robustness rather than precision.
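To make the overfitting effect concrete, here is a small self-invented numpy simulation (nothing from the paper): a threshold calibrated on one chip degrades on a second chip with different gain and offset, while a rule that only uses relative information from the target chip holds up:

# Self-invented toy simulation of inter-chip variability (not from the
# paper): a classifier calibrated on chip A overfits A's gain/offset
# and degrades on chip B; a chip-relative rule is more robust.
import numpy as np

rng = np.random.default_rng(0)

def traces(bits, gain, offset, sigma=0.3):
    """Toy leakage: one power sample per secret bit processed."""
    return gain * bits + offset + rng.normal(0, sigma, bits.shape)

bits_a = rng.integers(0, 2, 5000)   # profiling data on chip A
bits_b = rng.integers(0, 2, 5000)   # secret bits attacked on chip B

tr_a = traces(bits_a.astype(float), gain=1.0, offset=0.0)
tr_b = traces(bits_b.astype(float), gain=0.7, offset=0.6)  # process variation

# "Precise" attack: absolute threshold fitted to chip A.
thr_a = (tr_a[bits_a == 0].mean() + tr_a[bits_a == 1].mean()) / 2
precise_acc = np.mean((tr_b > thr_a) == bits_b)

# "Robust" attack: split chip B's own traces at their median,
# using no absolute calibration from chip A at all.
robust_acc = np.mean((tr_b > np.median(tr_b)) == bits_b)

print(f"chip-A-calibrated threshold on chip B: {precise_acc:.0%}")
print(f"chip-B-relative threshold on chip B:   {robust_acc:.0%}")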
As an aside, chip manufacturers may be able to thwart side-channel attacks better by putting less effort into reducing variability.