(The distractions of the conference kept me from finishing this post at the time, but since it has proved such a frequent topic of conversation over the past week I decided it was worth belated coverage).
Ron Rivest began his invited talk with a quote from Abraham Lincoln: "How many legs does a dog have if you call the tail a leg? Four; calling a tail a leg doesn't make it a leg". Likewise, a "secret" key is not secret by virtue of its name alone. We seek schemes which provide some sort of guarantee of secrecy or security within a particular adversarial scenario. But it is hard to model real-life threats perfectly, so in practice even provably secure schemes can at best be assumed to be difficult or costly to compromise.
FlipIt is a game-based approach to modeling security which embraces this limitation and addresses the challenge of finding the best strategy for managing (over time) a secure system in a cost-efficient way. The ongoing, continuous-time game comprises two players - an adversary and a defender - who can each, at moments of their choosing, compromise or restore security (for example, by stealing and resetting the secret key, respectively). There are distinct costs associated with each move, and likewise a payoff to each player for time spent in the preferred 'state' (the adversary benefits while the system is compromised, the defender while it is secure). However, the players do not know the true state until they make a move, at which point they get to see the entire history of moves to date. Of course, neither player wants to waste resources on needless moves, and so each faces a strategy-selection problem which can be tackled by cost-benefit analysis.
The optimal strategy will depend on the distribution of the opponent's moves and on the particular costs and payoffs involved. Non-adaptive strategies include periodic (moves at a fixed time interval), exponential (memoryless moves at a certain fixed rate), and renewal (a more general process in which the intervals between consecutive moves are drawn independently from some fixed distribution). Adaptive strategies are more complicated, and use the feedback information from the revealed history to inform future moves. Rivest performed cost-benefit analyses for some illustrative examples and discussed the conditions under which various different equilibria are achieved.
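To make this concrete, here is a small simulation sketch of the game as described above, pitting a periodic defender against a memoryless (exponential) attacker; the horizon, costs and strategy parameters are illustrative choices of mine, not numbers from the talk.

```python
import random

def simulate_flipit(horizon=10_000.0, defender_period=10.0,
                    attacker_rate=0.05, move_cost=(1.0, 1.0), seed=0):
    """Toy FlipIt run: returns (defender_benefit, attacker_benefit), i.e.
    fraction of time in control minus move cost per unit of time."""
    rng = random.Random(seed)
    # Periodic defender: a move every defender_period time units.
    defender_moves = [i * defender_period
                      for i in range(1, int(horizon / defender_period) + 1)]
    # Memoryless attacker: exponentially distributed gaps between moves.
    attacker_moves, t = [], 0.0
    while True:
        t += rng.expovariate(attacker_rate)
        if t > horizon:
            break
        attacker_moves.append(t)
    # Replay all moves in time order; the defender controls the resource at t=0.
    events = sorted([(t, 'D') for t in defender_moves] +
                    [(t, 'A') for t in attacker_moves])
    owner, last, control = 'D', 0.0, {'D': 0.0, 'A': 0.0}
    for t, player in events:
        control[owner] += t - last
        owner, last = player, t          # a move (re)captures the resource
    control[owner] += horizon - last
    d_cost, a_cost = move_cost
    return (control['D'] / horizon - d_cost * len(defender_moves) / horizon,
            control['A'] / horizon - a_cost * len(attacker_moves) / horizon)

print(simulate_flipit())
```

Playing with the period and the rate quickly gives a feel for the cost-benefit trade-off Rivest described.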
It was interesting to see an alternative perspective on the challenge of security and the way in which cross-disciplinary research can be used to build multi-faceted responses to multi-faceted problems. Whilst the elimination of vulnerabilities is clearly important, such endeavours should be complemented with efforts to mitigate the damage done by 'inevitable' compromise.
Peter Schmidt-Nielsen gave a rump session talk (which I regret having missed) in which he applied the Cramér-Rao bound to one of the open problems: "What is the optimal non-adaptive strategy against an adaptive opponent?" (Slides here (pdf)).
Saturday, August 27, 2011
Sunday, August 21, 2011
Crypto 2011---Final Impressions
The conference came to an end last Thursday and I have already made my way back to Bristol. Before going back to the normal rhythm of work tomorrow, a lazy Sunday afternoon seems like a good opportunity to give an overall summary of the conference, roughly a week after it all started (on Sunday) with Shrimpy's 40th, a board meeting, and the welcome reception.
On Monday the technical program commenced, with Ron Rivest's Distinguished Lecture on FlipIt and the presentation of the best paper as the high points. The two topics are quite distinct, but both left the impression that this is only just the beginning and that they might spawn far more research.
Tuesday was relatively short but had the rump session in the evening. The talk I found funniest was by Yuji Suga who extended the sponge methodology to various other cleaning-related household utensils (if I recollect correctly, he was also responsible for the "three degrees" talk at CHES'09's rump session). Also far funnier than it should have been was the panel discussion on leakage. The important point here was delivery, delivery, delivery! On a more serious note, Bogdanov, Khovratovich, and Rechberger announced the first theoretical single-key attack against the full AES-128: they achieve key recovery in time roughly 2^126 instead of 2^128. While shaving off two bits might not sound very impressive (and given the data complexity the usual exhaustive search arguably remains the most "practical"), it is a truly remarkable result. It will be presented at Asiacrypt (the notifications went out on Tuesday, so there was quite some gossiping about who got in and who did not).
Wednesday was again a full day and included the excellent invited talk by Roger Dingledine on Tor. Unfortunately I did not get the opportunity to speak to him in person. On Thursday the conference was drawing towards an end. This was especially clear from the last session, which really wasn't as well attended as it deserved to be. Cryptology based on coding theory or multivariate equations is still not very sexy: both suffered from too many broken early attempts, but, as with lattice-based and pairing-based cryptography, who knows, a revival may be nigh?
Overall I consider Crypto'11 a very successful conference. It was (only) my fourth Crypto, but so far it has to be my favourite. Thanks to Tom Shrimpton and the UCSB team, everything ran smoothly and the atmosphere was wonderful. What struck me in this respect was the seemingly larger number of spouses and especially offspring on campus. Phil Rogaway (and his PC) managed to come up with a broader program than is usual (certainly for Crypto, which in my opinion tends to be a bit narrower than Eurocrypt). Perhaps this was partly due to a relatively large number of accepted papers, but for a good broad program the battle for good submissions has to be won first. To accommodate presentation of all the papers, the schedule was quite tight, with shorter talks than usual. Surprisingly, I found the shorter talks far easier to follow than the regular ones. Next year the program chairs are encouraged (by the Board) to accept even more papers, for instance by running sessions in parallel or sacrificing the free Tuesday afternoon. This year I already skipped several sessions---mainly to discuss an ongoing project with Sasha Boldyreva---and I fear that accepting more papers (and having more presentations) might result in me dropping out of even more sessions and talks...
At the moment, it looks like our group will not be present at Asiacrypt in Korea, but we will be back in full force at Eurocrypt in Cambridge next year. Moreover, there are several other events in the meantime (CHES, Dagstuhl seminars and Nigel's Cambridgean adventures) plus, from October onwards, weekly blogging related to our study group.
Friday, August 19, 2011
Crypto Day 3, Session 10
With some delay due to a busy BBQ schedule, here is a quick report of yesterday's last session on symmetric schemes. It started with a talk by Jooyoung Lee on the collision security of Tandem-DM. Since I'm a coauthor on this paper, I obviously know it quite well. The bottom line is that previous proofs of the collision security of Tandem-DM were flawed and that we gave a whole new proof, relying on a nice technique. For details please see the paper (and for those interested, there is an interesting follow-up related to the preimage security of Tandem-DM accepted for Asiacrypt'11).
The second talk of the session was by Nathan Chenette on order-preserving encryption. This type of encryption was formally introduced at Eurocrypt'09 by a superset of the current authors. As the name indicates, the idea is that if for two different plaintexts it holds that x < y, then it also holds for the ciphertexts that E_K(x) < E_K(y). This notion is useful for efficient range checking, but unfortunately it immediately implies that the encryption scheme cannot be IND-CPA secure. In that earlier work, the proposed security notion compared the encryption scheme with an ideal order-preserving function. While in a way this is the best one can hope for, the object 'ideal order-preserving function' was not at all well understood. Chenette and his coauthors shed light on these functions, for instance by examining how one-way they can be. The analysis required mathematics of a nature that one does not often encounter in crypto talks or papers. An interesting topic, and it felt like there was still more to be said.
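As a toy illustration of the object under study (and emphatically not the authors' construction), an 'ideal' order-preserving function on a small domain is simply a uniformly random strictly increasing map, which can be sampled directly; the snippet also makes plain why order, and hence IND-CPA security, is necessarily given up.

```python
import random

def sample_ideal_opf(domain_size, range_size, seed=42):
    """Sample a uniformly random strictly increasing function from
    {0,...,domain_size-1} into {0,...,range_size-1} -- an 'ideal'
    order-preserving function ('keyed' here only by the RNG seed)."""
    rng = random.Random(seed)
    images = sorted(rng.sample(range(range_size), domain_size))
    return lambda x: images[x]

E = sample_ideal_opf(domain_size=16, range_size=1000)
assert all(E(x) < E(y) for x in range(16) for y in range(16) if x < y)
print([E(x) for x in range(16)])
# The relative order of plaintexts is visible from the ciphertexts alone,
# which is exactly why such a scheme cannot be IND-CPA secure.
```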
Kan Yasuda gave a new blockcipher-based MAC inspired by PMAC. His construction uses a single blockcipher call per message block to be authenticated, yet it achieves (unforgeability) security up to 2^{2n/3} queries. To me this result was rather surprising: firstly, from a technical perspective the PRP-PRF switching lemma cannot be used (as it breaks down at around 2^{n/2} queries) and secondly, it shows that there is also a considerable gap between the collision resistance and the weak collision resistance of blockcipher-based compression functions. It makes me wonder how far one can go with weak collision resistance...
The final talk of the session was by Keelveedhi, who discussed the key-dependent message security of authenticated encryption (AE). He pointed out that there is a mismatch between the theoretical treatment of AE and its actual use in practice, where typically the randomness is replaced by a nonce and the message is extended by a header that needs to be authenticated but not encrypted. Since the nonce can be either random or adversarially chosen (without repeats) and both message and header could be key dependent, several different notions arise. It turns out that security cannot be achieved if the nonces are adversarially chosen or if the header is key dependent. For the remaining scenario, random nonces and a key-independent header, there is a relatively simple KDM-secure scheme in the ROM.
Thursday, August 18, 2011
Crypto, day 4
My favourite session today was undoubtedly the first one, also because it was related to my own past work. The first talk was given jointly by Dominique Schröder and Sanjam Garg, which reflected the merged nature of the paper, and was about round-optimal blind signatures. Blind signatures allow a user to obtain a signature on a message which remains hidden from the signer; at the same time, the user cannot get signatures on more messages than he requested, so signatures remain unforgeable. We call such a scheme round optimal if the communication between the user and the signer consists of only one message in each direction.
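For readers unfamiliar with the primitive, the classic RSA-based blind signature (which lives in the random-oracle model and is not the standard-model scheme of the paper) already shows the round-optimal shape of one message in each direction; the parameters below are textbook-sized and wildly insecure, purely for illustration.

```python
import hashlib
from math import gcd

# Textbook RSA parameters (insecure toy sizes, for illustration only).
p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

msg = b"message the signer never sees"
r = 7                                    # blinding factor, coprime to N
assert gcd(r, N) == 1

# Message 1 (user -> signer): the blinded hash.
blinded = (H(msg) * pow(r, e, N)) % N
# Message 2 (signer -> user): an ordinary RSA signature on the blinded value.
blinded_sig = pow(blinded, d, N)
# The user unblinds locally and ends up with a valid FDH signature on msg.
sig = (blinded_sig * pow(r, -1, N)) % N
assert pow(sig, e, N) == H(msg)
print("signature verifies, yet the signer never saw the message")
```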
Applications of blind signatures include e-cash (which is becoming more and more practically relevant), anonymous credentials and e-voting. For example, FIFA used Votopia to determine the most valuable player (not sure whether FIFA is a good testimonial though)...
There is an abundance of literature on the topic of blind signatures and a few of the schemes are round optimal. However, each one has its restrictions: it is proven secure in the random-oracle model, under interactive assumptions, or it requires a trusted setup. The presented paper gives the first (though not practical) scheme that does not have any of these restrictions.
An interesting detail is that their scheme seemingly contradicts a result presented at EUROCRYPT last year which established the impossibility of even 3-move blind signatures. However, that statement only holds for a class of signature schemes satisfying certain technical properties, into which most schemes fall.
The second talk, given by Jens Groth, also achieved something optimal: the shortest possible structure-preserving signatures. I presented this concept at CRYPTO last year and we showed how it leads to efficient instantiations of numerous primitives when combined with non-interactive zero-knowledge proofs. Examples are group signatures and blind signatures, and consequently their applications as mentioned above.
Structure-preserving signatures are defined over bilinear groups (think elliptic curves); their messages, signatures and verification keys consist of group elements and verification is done by applying the bilinear map to these elements.
If one assumes that the signing algorithm can only perform generic group operations (which implies, for example, that it does not look at the bit representation of the elements) then, due to their rich structure, one can show that it is actually impossible to have signatures consisting of fewer than three group elements, and that there must be at least two verification equations. Having shown this, the authors construct a scheme that achieves these minimal requirements and is thus optimal.
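To give an idea of what the verification equations look like (this is just the generic shape of a pairing-product equation, not the specific equations of the optimal scheme), verification checks relations of the form

```latex
\prod_{i}\prod_{j} e(X_i, Y_j)^{a_{ij}} \;=\; 1_{G_T},
```

where the X_i and Y_j range over message, signature and verification-key elements (plus fixed group generators), the a_{ij} are public constants, and e is the bilinear map; the lower bound says that at least two such equations, and at least three signature elements, are unavoidable.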
IACR Business Meeting @ Crypto 2011
As usual this started with the standard slides about what the IACR is, what it does, etc. Greg Rose announced that the funds are buoyant, despite fluctuations in the dollar. We now have 1521 members, of which 348 are students.
This Crypto had a record number of submissions, 230, of which 42 were accepted. Phil reported on his techniques for promoting a balanced and good programme, including trying to get more CHES-related papers into the programme. Given Bristol had one such paper accepted, it is clear that his efforts succeeded; at least for us.
Announcements at the meeting were that Helena Handschuh will be General Chair for Crypto 2013. Phong Nguyen will be PC co-chair for Eurocrypt 2013, with Arjen Lenstra being the IACR Distinguished Lecturer.
In future, LNCS proceedings will be optional for attendees; this comes in addition to the already-decided option of opting out of paper delivery of the journal.
Future elections for directors will be conducted by approval voting, to enable a more expressive form of election.
The main (possibly contentious) issue was the move towards Open Access publication for our conferences. This is becoming more and more important, as funding bodies are now starting to insist that all government-sponsored research is made available via one of a number of Open Access methods. In some sense this is only correct, since we do research and it will only become impactful if people are able to read it. Whilst we are lucky at Bristol to have access to all of Springer LNCS, not all universities and companies are so lucky.
The IACR board put forward a plan to move to full Open Access for conference proceedings, i.e. the publication would be available immediately, worldwide, to everyone. The IACR is looking at a number of offers from both Springer-Verlag and other organizations (e.g. Usenix). The main proposal is to use a variant in which the cost is borne by the IACR and not the authors. A vote was taken which was overwhelmingly in support of moving to Open Access; indeed, no-one was against.
Finally, the meeting ended with an idea to move to a different mode of reviewing, for example along the lines of what the database community is currently doing, as explained at http://www.vldb.org/pvldb/. This is certainly something worth considering for the future.
Wednesday, August 17, 2011
CRYPTO '11: Invited talk - Tor and Circumvention
Roger Dingledine's invited talk on "Tor and Circumvention" was a refreshing break from the technical program, providing a fascinating insight into a real-world application which is particularly relevant to recent world events.
Tor is an anonymity network which aims to provide safe, anonymous web browsing to anyone concerned with protecting themselves from Internet surveillance. The community of users is very diverse, ranging from citizens who simply value their privacy and businesses wanting to protect trade secrets, to law enforcement agencies who wish to go undetected in their investigations, Internet users who are subject to harsh government firewalls, and journalists and activists trying to stay safe.
Of course, 'bad guys' can also use Tor to access material and engage in covert suspect/criminal activity. An interesting theme of the talk was the importance of separating the potential (mis/ab)uses of the technology from the role of Tor as service provider. This was nicely illustrated in the observation that organisations such as the Internet Watch Foundation themselves use Tor to track down 'bad guys' without being detected.
A different but related point was made to the effect that it is not for Tor to decide how governments should 'behave'; rather, its role is to facilitate actions by citizens. Much of the talk was devoted to the 'arms race' between Tor and regimes intent on restricting internet access. Dingledine presented some fascinating graphs on the responsiveness of Tor usage to government blocking and social/political events in places such as China, Iran, Tunisia, Libya and Egypt. Unfortunately these also show the drops in usage as governments respond by blocking Tor (e.g. by taking down all relay IP addresses as found in the centralised directory, or by simply blocking the Tor website so that people give up trying to use it). Tor's countermeasures include bridges - hidden relays by which blocked users can access the network. These are distributed in small lists (to make it harder for the adversary to disable the entire network) via email, social networking, or by private individuals. The challenge is to have enough bridges and to change them frequently enough to outpace the blocking rate.
Dingledine was keen to stress that Tor does not protect against every possible tracking method. At the application level, JavaScript, cookies, history and Flash all cause undesirable identity leakage and so are incompatible with (effective) Tor usage. Tor is also powerless against spyware, compromised systems, and such extreme measures as screen-directed cameras in internet cafes (as are mandatory in some countries).
The talk ended with a discussion of the extent to which Tor uptake and blockage act as early warning signals for political and social change or unrest. The observation of recent 'mysterious' peaks and crashes in (e.g.) Ghana, Venezuela and Chile invites attention.
Crypto, day 2
The second session today was on leakage and side channels and it combined theoretical and practical aspects of this subject, the former of which I will concentrate on here.
The first talk was on leakage-resilient zero knowledge. A zero-knowledge proof allows a prover to convince a verifier of the validity of a statement without revealing anything else. This is formalised by requiring that there exist a simulator which can simulate the prover without knowing the witness for the statement. The witness is some information that makes it easy to check the validity of the statement, but which the prover does not want to reveal.
The question asked here is: what if the prover leaks information about this witness, as could happen in an actual execution due to side-channel attacks? The authors formalise leakage-resilient (LR) zero-knowledge (ZK) by requiring that the leakage in the real world is bounded by the leakage in the ideal (simulated) world.
The results are LR non-interactive (NI) ZK proofs (which follow from adaptive NIZKs), and more importantly LR ZK proofs (which do not require a trusted setup), where the situation is trickier: when the adversary queries for leakage, the simulator must consistently explain its previous behaviour (as for adaptive security), but here even future messages must be consistent.
The last talk of the session was on cryptography with tamperable and leaky memory. The motivation is that leakage of secret information can be bad, but an adversary actually tampering with it could be even worse. Previous work on this topic had some restrictions, such as requiring non-tamperable memory, or limiting the functions the adversary could apply to modify secret information. This work only assumes a non-tamperable common reference string (which the authors argue could be hardwired anyway) and a true source of randomness.
The first result is a generic transformation, using fully homomorphic encryption and NIZKs, from any scheme with a uniformly random secret key that is resilient against bounded leakage into one which is in addition resilient to continual tampering. The second result consists of encryption and signature schemes which are resilient to continual leakage and tampering, based on assumptions in bilinear groups.
Tuesday, August 16, 2011
SAC
Alfred Menezes' talk (one of two invited talks at SAC this year) focused on "tightness" in security proofs. Loosely speaking, when one proves the security of a protocol one tries to show that breaking the security of the protocol, e.g. forging a signature, is as hard as solving some related number-theoretic problem which is believed to be hard, such as the Diffie-Hellman problem. In practice, however, the reduction may be loose, so the concrete security guarantee it provides for the protocol can be far weaker than the presumed hardness of the underlying problem.
More precisely, suppose A is an attacker which in time T succeeds in breaking the desired security of the scheme with probability e, and that R is an algorithm which, given oracle access to A, attacks the underlying hard problem, breaking it in time T' with probability e'. The tightness gap is then (T'/e') / (T/e); if T' is approximately equal to T (and e' is approximately e) the reduction is tight. A particularly striking example given by Menezes was the reduction obtained in the proof of security (in the random oracle model) of full-domain-hash RSA signatures. Here, if an adversary can forge an RSA signature in time T with success probability e, then one can solve the RSA problem in time roughly T with success probability e/q, where q is the number of queries made by the adversary to the random oracle. Note that the tightness gap is q here. For 80-bit security we want to rule out any forger with T/e <= 2^80; if such a forger can make, say, 2^60 queries to the random oracle then, because of the loss of tightness, the reduction only yields an RSA algorithm with T'/e' <= 2^140. So the proof gives a meaningful guarantee only if the RSA problem is hard at the 2^140 level, which requires a modulus of around 4000 bits, far larger than is ever used in practice.
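Spelling out the numbers above in symbols (nothing new, just the same calculation):

```latex
\mathrm{gap} \;=\; \frac{T'/\varepsilon'}{T/\varepsilon}
            \;=\; \frac{T/(\varepsilon/q)}{T/\varepsilon} \;=\; q,
\qquad
\frac{T}{\varepsilon} \le 2^{80},\; q = 2^{60}
\;\Longrightarrow\;
\frac{T'}{\varepsilon'} \le q \cdot \frac{T}{\varepsilon} = 2^{140},
```

so the RSA problem must already offer 140-bit security, whence the 4000-bit modulus estimate.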
Even this simple example illustrates that extra terms appearing in reductions can render the proofs vacuous for the parameter sizes actually used in practice. See http://anotherlook.ca/ for further details.
Crypto Day 1, Session 4
The final session of today dealt with symmetric constructions and the cryptanalysis thereof. Charles Bouillaguet kicked off with a talk about a special kind of attack on AES (or the AES, as some people prefer). The underlying philosophy of the attack was twofold. Firstly, it should be an attack of low data complexity, say using only four known or chosen plaintext-ciphertext pairs. Secondly, the attack should be largely automated. Concretely, this meant that a tool was used that recursively exploited algebraic properties of (reduced-round) AES to find as good an attack as possible. C++ code was generated automatically to run the attack. Quite impressive.
Next up was Maria Naya-Plasencia, who told us all how to improve rebound attacks. The main underlying problem was one of list merging. Given several lists L_1,...,L_n and some relation t, create a new list of all elements (a_1,...,a_n) in L_1 x ... x L_n for which t(a_1,...,a_n)=1 holds. In general, this will take time |L_1| x ... x |L_n|, but Maria showed that if t has a special form, considerable gains can be made by a combination of divide-and-conquer and clever filtering. The new algorithms she presented subsequently led to faster rebound attacks on some of the AES-based SHA-3 candidates.
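As a toy sketch of the filtering idea (not Naya-Plasencia's actual algorithms), suppose t relates two lists and only constrains a few bits, say t(a,b)=1 exactly when the low byte of a XOR b is zero; then bucketing one list by those bits avoids the quadratic scan.

```python
from collections import defaultdict
from itertools import product
import random

def merge_naive(L1, L2, t):
    # Time |L1| * |L2|: test the relation on every pair.
    return [(a, b) for a, b in product(L1, L2) if t(a, b)]

def merge_filtered(L1, L2):
    # Exploit that t only depends on the low byte of a ^ b:
    # bucket L2 by its low byte and only pair up matching buckets.
    buckets = defaultdict(list)
    for b in L2:
        buckets[b & 0xFF].append(b)
    return [(a, b) for a in L1 for b in buckets[a & 0xFF]]

random.seed(1)
L1 = [random.getrandbits(32) for _ in range(2000)]
L2 = [random.getrandbits(32) for _ in range(2000)]
t = lambda a, b: (a ^ b) & 0xFF == 0
assert sorted(merge_naive(L1, L2, t)) == sorted(merge_filtered(L1, L2))
```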
Gregor Leander gave a very elegant talk on attacking PRINTcipher. This is a lightweight blockcipher and its structure is somewhat peculiar in that it uses the same key in all of its rounds and it uses very small S-boxes. The rather neat observation was that for a large class of weak keys there are certain bit patterns that are invariant under the application of the round function (this is formalized as an invariant subspace). As a result, for 2^50 out of the 2^80 keys, a high-probability distinguisher exists.
The program for the day was finished by Jian Guo, who introduced a new lightweight hash function based on the sponge family. An advantage of using sponges over, for instance, Davies-Meyer is that no feedforward is required (which would cost around 6 GE per bit). One funny innovation to reduce the footprint even further was the use of 'roots' of an MDS matrix. Whereas for example AES uses an MDS matrix with small entries throughout, Guo and his coauthors propose an LFSR whose step matrix A is such that A^d is MDS, allowing them to implement the MDS multiplication with fewer gates than would normally be the case.
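The 'roots of an MDS matrix' idea can be illustrated as follows; the GF(2^4) field and the coefficients below are placeholders of my own, not the constants of the presented design, which are of course chosen so that A^d really is MDS.

```python
from functools import reduce
from operator import xor

def gf_mul(a, b):
    """Multiplication in GF(2^4) modulo x^4 + x + 1."""
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        hi = a & 0x8
        a = (a << 1) & 0xF
        if hi:
            a ^= 0x3                     # reduce: x^4 = x + 1
    return r

def mat_mul(A, B):
    n = len(A)
    return [[reduce(xor, (gf_mul(A[i][k], B[k][j]) for k in range(n)), 0)
             for j in range(n)] for i in range(n)]

def mat_pow(A, d):
    R = A
    for _ in range(d - 1):
        R = mat_mul(R, A)
    return R

# Sparse LFSR step matrix (companion matrix) with placeholder coefficients:
# cheap to implement in hardware, while A^d provides the dense diffusion layer.
c = [1, 2, 1, 4]                         # NOT the constants of any real design
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     c]
print(mat_pow(A, 4))
```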
Overall it was a fun first day!
CRYPTO 11 Session 3
Session 3 - Outsourcing and Cloud Computation - looked at models and techniques for secure delegated computation. Imagine you store your e-mails on Gmail or your data on Amazon (all the talks started with an example like this) and you want to compute some function (how many e-mails have I sent Bob in the last month?).
As efficient fully homomorphic encryption is "not quite ready yet", the first talk presented protocols for efficient set intersection - the innovation here is that the proofs of correct computation by the server are bounded by the size of the result set, so if you take the intersection of two huge sets and it turns out to be empty then the corresponding proof is a constant size and not huge like the sets. The protocol works for union, intersection and other set operations and uses bilinear groups together with a Merkle-tree like construction.
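For background on the 'Merkle-tree like' ingredient, here is the completely standard way a server can commit to a set and prove membership with a logarithmic-size path; this is a generic sketch only, whereas the paper's construction additionally uses bilinear groups to get proofs bounded by the size of the result set.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree (leaves padded to a power of two)."""
    level = [h(x) for x in leaves]
    while len(level) & (len(level) - 1):
        level.append(h(b""))
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Membership proof: one sibling per level, i.e. logarithmic size."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [b"alice", b"bob", b"carol", b"dave", b"eve"]
levels = build_tree(leaves)
root = levels[-1][0]                     # the server's short commitment
assert verify(root, b"carol", prove(levels, 2))
```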
Talks two and four both deal with computation over large datasets where the client should only require a witness sublinear in the data size. Usually one imagines the server/cloud computing some task that is verifiable in poly-time (for example factoring integers), where the client uses the cloud for computing power. But here we're dealing with the case of an efficient (perhaps even linear) operation on a huge database stored on the server - think of a simple search in an address book of the entire USA. Both talks looked at verification and presented models in which the client only requires a witness sublinear or even logarithmic in the size of the data set.
Talk three was my favourite: Computing Without Simultaneous Interaction.
Consider an online election: unlike "traditional" MPC, we can't ask all voters to be online at once, let alone cooperate to compute the result. Rather, there's a fixed server and each voter only interacts with it once and then goes away. Voter two may not appear until voter 1 is already gone. The speaker, Yehuda Lindell, called the resulting model "one-pass". Their protocol for binary symmetric functions (AND, OR, MAJORITY etc. - basically anything that takes binary inputs and depends only on the total number of 1-inputs) is secure in a sense even when the server and most voters are corrupt: they cannot compute more than what follows from the joint inputs of the honest voters (i.e. the number of honest 1-votes, but not their order or who cast them). However, each voter has to do work that may be proportional to the total number of voters in the system.
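The class of functions covered is easy to pin down: a symmetric binary function depends only on the number of 1-inputs, which is exactly the aggregate that ends up being revealed about the honest voters. A trivial illustration (of the function class, not of the protocol):

```python
def symmetric_eval(name, bits):
    """Evaluate a symmetric boolean function from the count of 1-inputs alone."""
    ones, n = sum(bits), len(bits)
    return {"AND": ones == n,
            "OR": ones > 0,
            "MAJORITY": 2 * ones > n}[name]

votes = [1, 0, 1, 1, 0, 1]
print(symmetric_eval("MAJORITY", votes))   # True: 4 of the 6 inputs are 1
```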
As an aside, after Ron Rivest talked about how often to change passwords earlier today, I've just been made aware of some professional advice on how to pick them.
Monday, August 15, 2011
CRYPTO Session 2 & Invited Talk
Session 2 this morning had only one talk and it was the winner of the best paper award - "Computer-Aided Security Proofs for the Working Cryptographer". Can we automate proof checking in cryptography? There are existing tools but they require a lot of specialist knowledge. The authors presented EasyCrypt, which builds on top of these tools but, as the title hints, should be accessible to cryptographers rather than specialists in formal verification. It is also designed specifically for cryptographers' needs (security games, indistinguishability etc.).
Ron Rivest gave the IACR distinguished lecture, taking a game-theoretic perspective. We cannot prevent the occasional key compromise or system failure in practice. We don't even always know when a key has been compromised. Ron presented a model and developed the mathematical theory for questions such as: how often should I change passwords? Regularly or at unpredictable intervals?
CRYPTO 11 Session 1 - Randomness and its Use
Welcome to our (almost) live coverage of CRYPTO 11.
Tom Shrimpton opened Crypto 11 announcing that all talks will be filmed and put on the web. We have a record number of attendees, well over 400.
Session 1 is on "Randomness and Its Use" chaired by Tal Rabin. I'll summarise two of the talks that I found particularly interesting - well worth viewing again online.
First talk - "Leftover Hash Lemma, Revisited"
You have some short imperfect randomness and want to derive an AES key. In theory, you can use a family of universal hash functions (with a public random seed). In the random oracle model, you can use the oracle; in practice, you often use SHA or similar. This is not an efficiency issue (universal hash functions are very fast) but they need a much longer seed, which often makes their use impractical. Here's the new idea: take a short seed, pass it through a PRG to lengthen it, and then invoke the construction with the long seed. (Unfortunately this construction is not "obviously" secure.) The authors use the minicrypt model - in which there is secret-key encryption but no public-key encryption - to derive security for any PRG G that does not imply public-key encryption if you pick a seed s as the secret key and use G(s) as the public key. (This is very likely to be the case for AES.) The construction does not work for all primitives but it does for "constructibility" ones (including signatures) and some indistinguishability ones; you can derive AES keys (for example from DH key exchange). The authors can also construct a weak PRF with their techniques, which opens up many more examples. However, the one-time pad is an example of something that does not work with this construction.
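A sketch of the shape of the construction, with placeholder choices of my own (a prime-modulus universal hash and a SHA-256-based PRG, neither taken from the paper): rather than shipping a long random seed for the universal hash, expand a short public seed with a PRG and feed the result to the extractor. As noted above, proving that this is actually secure is exactly the non-trivial part.

```python
import hashlib

P = (1 << 521) - 1                  # Mersenne prime used as the hash modulus

def prg(short_seed: bytes, out_len: int) -> bytes:
    """Counter-mode PRG built from SHA-256 (an illustrative stand-in only)."""
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(short_seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:out_len]

def universal_hash(long_seed: bytes, x: bytes, out_bytes=16) -> bytes:
    """h_{a,b}(x) = (a*x + b) mod P, truncated -- a classic universal family."""
    a = int.from_bytes(long_seed[:66], "big") % P
    b = int.from_bytes(long_seed[66:132], "big") % P
    y = (a * int.from_bytes(x, "big") + b) % P
    return y.to_bytes(66, "big")[-out_bytes:]

imperfect = b"high-entropy but non-uniform input, e.g. a Diffie-Hellman secret"
short_public_seed = b"16-byte pub seed"
aes_key = universal_hash(prg(short_public_seed, 132), imperfect)
print(aes_key.hex())
```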
Time-Lock Puzzles in the Random Oracle Model
A time-lock puzzle allows you (or, according to the speaker, your toaster) to send an encrypted puzzle to the future so that an average computer in 25 years' time can solve it easily but today's computers need 20 years to solve it. A problem is that today's attacker may use massively parallel computers (a botnet), so the puzzle should have some inherently serial characteristics. The paper uses the random oracle model and gives two results. A negative result: there is an efficient algorithm to find "intersection queries" - queries asked both by the puzzle setter and solver - ruling out many types of constructions. A positive result: the authors give a chaining construction (repeatedly asking the oracle at inputs depending on its previous output). (The authors do not solve the problem of your toaster burning the toast while it is working on time-lock puzzles.)
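A minimal sketch of the chaining idea from the positive result (with SHA-256 standing in for the random oracle): the solver must make its oracle calls one after the other, each on an input depending on the previous output, so a botnet's parallelism does not help. Note that in this naive version the puzzle setter works as hard as the solver; the actual construction is cleverer and lets the setter make its queries in parallel.

```python
import hashlib, os

def create_puzzle(n_steps: int):
    """Puzzle: given x0, compute x_n where x_{i+1} = H(x_i)."""
    x0 = os.urandom(32)
    x = x0
    for _ in range(n_steps):
        x = hashlib.sha256(x).digest()
    return x0, x                         # (public puzzle, setter's solution)

def solve_puzzle(x0: bytes, n_steps: int) -> bytes:
    x = x0
    for _ in range(n_steps):             # each call needs the previous output
        x = hashlib.sha256(x).digest()
    return x

x0, solution = create_puzzle(100_000)
assert solve_puzzle(x0, 100_000) == solution
```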
Crypto Day 0.
Crypto 2011 started with a nice long Board meeting yesterday, which was attended by both Nigel and me. The Board meeting was extra special, as it happened to coincide with the birthday of General Chair Tom Shrimpton. Overall, the meeting was rather fruitful. Some of the less surprising decisions taken were to extend Matt Franklin's tenure as Editor in Chief of the Journal of Cryptology and to have Crypto 2013 in August at UCSB. This is of course just the tip of the iceberg; for those interested, on Tuesday there is the General Membership Meeting where more information will be made available (and Nigel has promised to blog about it).
The evening party was a happy reunion with friends and colleagues, but the academic part of the conference will only start in half an hour or so. Before I head off to Campbell Hall, just a small anecdote. On my way to Crypto, I had a changeover in Detroit. After some queuing, I arrived with my suitcase at the customs officer. He asked where I was going and what I was doing. After I told him I was going to attend Crypto, he wondered whether perhaps I might be carrying an Enigma machine! Unfortunately not, but it did put a big smile on my face.