Dmitry Khovratovich of Microsoft Research UK gave an interesting talk in the Eurocrypt session on symmetric cryptanalysis entitled Narrow-Bicliques: Cryptanalysis of Full IDEA, which is joint work with Gaetan Leurent and Christian Rechberger. The work extends Khovratovich's previous research on attacking AES (pdf), both in strategy, by enhancing the way bicliques are applied, and in target, shifting the focus from AES to the block cipher IDEA. The initial result on AES was notable for providing the first faster-than-brute-force attacks on the full versions of AES-128, -192 and -256. The strategy defines a meet-in-the-middle attack on the block cipher, enhanced in power by the concept of bicliques.
Meet-in-the-middle (MITM) attacks using bicliques were initially evaluated in the context of hash function cryptanalysis. The basic premise is to split an invertible transformation into two parts, with selected parameters affecting only one of the two parts. The parameter space can be searched exhaustively, with matching 'middle' values for the two parts indicating that the parameters were chosen correctly (see the MITM attack on double-DES as a basic example). The difficulty in an application to block cipher rounds is identifying a sequence of rounds that are not dependent on a particular key bit; prominent modern block ciphers typically utilise the full key in the first few rounds.
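As a toy illustration of the meet-in-the-middle principle itself (not of bicliques), here is a minimal Python sketch of the classic attack on double encryption; the 8-bit "cipher", the key values and the single known plaintext/ciphertext pair are all invented purely for illustration.

# Toy meet-in-the-middle (MITM) key recovery against double encryption.
# The 8-bit "cipher" below is invented purely for illustration; real MITM
# attacks (e.g. on double-DES) follow exactly the same table-lookup pattern.

def enc(k, x):          # toy invertible round: XOR then modular add
    return ((x ^ k) + k) % 256

def dec(k, y):          # inverse of enc
    return ((y - k) % 256) ^ k

def double_enc(k1, k2, x):
    return enc(k2, enc(k1, x))

# One known plaintext/ciphertext pair under the unknown key pair (k1, k2).
secret_k1, secret_k2 = 0x3A, 0xC5
pt = 0x42
ct = double_enc(secret_k1, secret_k2, pt)

# Phase 1: tabulate the "middle" value enc(k1, pt) for every candidate k1.
middle = {}
for k1 in range(256):
    middle.setdefault(enc(k1, pt), []).append(k1)

# Phase 2: for every candidate k2, decrypt one step and look for a match.
candidates = [(k1, k2) for k2 in range(256)
              for k1 in middle.get(dec(k2, ct), [])]

# Roughly 2*2^8 cipher evaluations instead of 2^16 for brute force; a second
# plaintext/ciphertext pair would filter the surviving false positives.
assert (secret_k1, secret_k2) in candidates
print(len(candidates), "candidate key pairs survive one pair")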
The application of the biclique essentially extends the beginning or end of the meet-in-the-middle attack to help overcome this difficulty. A biclique itself is a set of internal states constructed in the first or last rounds of the block cipher, mapped to each other by specific keys. The speed-up over brute force comes from being able to split the key space into equally-sized groups, which are then tested using a meet-in-the-middle strategy. The 'narrowness' that lends the paper its name comes from an observation that allows the authors to reduce the diffusion of differential trails in the cipher, and as a result reduce the data complexity of the attack.
IDEA was designed in 1991 by Lai and Massey. It has a 128-bit key and 8.5 rounds in full. Two narrow-biclique attacks are presented that target the full 8.5 rounds: the first with data complexity 2^52 and time complexity 2^126.06, the second with data complexity 2^59 and time complexity 2^125.97. Both require negligible amounts of memory.
An interesting observation is that whilst the biclique attacks on IDEA presented here do not in any way threaten the practical use of the cipher, they do surpass the efficiency of all previously known attacks, and so indicate that similar approaches may have the potential to do significantly better than brute force. A second notable observation is that a countermeasure against such attacks is likely to require an expensive key-schedule design.
Tuesday, April 17, 2012
Eurocrypt 2012: A Tutorial on High Performance Computing applied to Cryptanalysis (invited talk)
Antoine Joux gave the first invited talk of the conference, on High Performance Computing for Cryptanalysis. The focus was on best practice, and he evidently spoke from experience, with much advice on lessons learnt from problematic as well as fruitful past endeavours.
The unique goals of HPC projects in the field (usually motivated by record breaking or benchmark setting) set it apart from typical programming tasks, and priorities are accordingly very different. Portability and re-usability, for example, are not important (and may even be counter-productive) because the code is generally intended for one-time use against a specific task. What is crucial is that the code be simple, easy to modify and not rely on libraries or featurism. One nice feature specific to cryptanalytic challenges is that correctness of the eventual output is generally easy to check, because the answer is known a priori.
Antoine identified four steps in applying HPC in a cryptanalytic context:
- Identify an algorithmic starting point.
- Find the requisite computing power.
- Program (and debug/optimise as appropriate).
- Run and manage the actual computation.
The choice of computing power is generally motivated by whatever is available, and of course there are different advantages and disadvantages inherent in the various options.
Researchers with the means to source their own dedicated machines have full control over architectural choices and do not have to compete with other users for resources. However, such an approach does not scale well, and the size of the possible undertaking will eventually hit a limit.
One alternative Antoine mentioned was to get multiple computer users to run computations in the background on their machines -- so exploiting idle cycles on desktops. The total available power would potentially be huge and much more easily scaled. However, the experimenters would have no control over choice of architecture, they would eventually be constrained by limited communication bandwidth, the possible presence of 'adversary' resources would need to be mitigated against, and the code itself would need to be extremely user-friendly. This did sound like a very challenging scenario to work in (Antoine himself did not seem keen on the idea).
The third option mentioned was to apply for power on HPC resources -- so gaining access to high-end, dedicated computing power with fast communications. Challenges remain, though: one is then constrained to the existing architecture, and one has to factor in the new difficulty of job management in a multi-user context. Lastly, if physical access to a high-performance computer is not available, then HPC "in the cloud" could be an option.
Once appropriate resources have been identified, the researcher can progress to the task of writing the code. Antoine stressed that the goal here should be simple, low-level code, avoiding libraries and prioritising on-the-fly adaptability over re-usability or portability. Optimise where possible but not at the risk of introducing nasty surprises later down the line.
The final stage, of actually running and managing the computation, is often seen as tedious and difficult. Antoine offered many helpful tips from personal experience. Here are a few of them:
- Scale up slowly to the intended size.
- Expect software failure -- where phases are found to not scale well, be prepared to re-program on-the-fly.
- Rare bugs can be hard to detect, so check intermediate data carefully.
- Expect hardware failure -- there will always be a power-down risk, so make sure your code is restartable. Faults can damage computations, so, again, check intermediate data.
- Remember to factor in fluctuating availability of shared resources (avoid tight schedules).
EuroCrypt'12, Day 2: Collisions and Statistics
The EuroCrypt conference has a rather theoretical reputation and only very few papers on Side Channel (SC) Attacks have been accepted to the conference over the last decade. This year there is exactly one paper covering Side Channel Attacks, namely "Statistical Tools Flavor Side-Channel Collision Attacks" by Amir Moradi, compared e.g. to four papers that refer to Fully Homomorphic Encryption in their titles.
Simple SC Collision Attacks have been known since 2003 but have never really been in the focus of attention of the Side Channel community. They aren't the most efficient Side Channel Attacks as long as a reasonably good power leakage model for the device under attack (DUA) is known. However, Simple SC Collision Attacks do not require a power leakage model at all, so they are useful if the attacker has no power leakage model of the DUA. Without a power model one could also use Mutual Information Analysis (MIA), but the quality of MIA results depends very much on the attacker's choice of distinguishers, which complicates MIA a lot. SC Collision Attacks hold the promise of being a more efficient solution to this problem.
Simple SC Collision Attacks focus only on one component of a cipher, namely a key XOR followed by an sbox [Notation: sbox(m XOR k)], as found in the AES and PRESENT ciphers. Thus, like the classical Side Channel Attacks, they recover the secret key in small chunks one at a time (divide-and-conquer), reducing the complexity of all necessary brute forcing. Simple Collision Attacks operate (extremely simplified) as follows:
- Many (millions!) measurements of the power consumption (only the attacked part of the cipher) have to be recorded. E.g. 20 million measurements of sbox(m XOR k).
- For all those measurements, try to find instances of sbox(m1 XOR k1) = sbox(m2 XOR k2) <=> m1 XOR k1 = m2 XOR k2 by comparing their power traces; if the computations are equal, the power traces should be equal (up to noise), and if the results are unequal, the power traces should be different. From each of these instances we learn something about the keys k1, k2: m1 XOR m2 = k1 XOR k2 (we already know m1 and m2).
- When a sufficient number of relations between the various k_{i,j} (i indexing the round, j the chunk of the round key) have been found, a fully determined system of equations can be built and solved to find the correct key used for en-/decryption by the DUA.
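To make the last step concrete, here is a hedged Python sketch of the key-recovery phase only, assuming the side-channel comparison has already identified which S-box computations collided; the 16-byte key, the number of simulated collisions and the random seed are all invented for illustration.

# Hedged sketch of the *key-recovery* step of a collision attack: assume the
# side-channel comparison has already told us which S-box computations collided.
# A collision sbox(m_i XOR k_i) == sbox(m_j XOR k_j) yields the linear relation
# k_i XOR k_j = m_i XOR m_j, and enough such relations pin the key down to a
# few guesses of one byte. All values below are made up for illustration.
import random

random.seed(1)
true_key = [random.randrange(256) for _ in range(16)]   # unknown AES-like key

# Simulated detected collisions: pairs of byte positions plus the XOR of the
# corresponding plaintext bytes (which the attacker knows).
relations = []
for _ in range(60):
    i, j = random.sample(range(16), 2)
    delta = true_key[i] ^ true_key[j]      # equals m_i XOR m_j for a collision
    relations.append((i, j, delta))

# Propagate relations from a guess of key byte 0; in the worst case the
# attacker repeats this for all 256 guesses and brute-forces what remains.
recovered = {0: true_key[0]}               # pretend the guess is right
changed = True
while changed:
    changed = False
    for i, j, delta in relations:
        if i in recovered and j not in recovered:
            recovered[j] = recovered[i] ^ delta
            changed = True
        elif j in recovered and i not in recovered:
            recovered[i] = recovered[j] ^ delta
            changed = True

print(len(recovered), "key bytes linked to byte 0; all correct:",
      all(recovered[i] == true_key[i] for i in recovered))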
So the next logical step is to explore the usefulness of higher-order moments in attacking implementations with these protections, and that is just what Amir does in his paper. He shows how to compute the comparisons using the variance, skewness, kurtosis and probability density functions of the traces. This isn't obvious -- you have to keep in mind that my above description of simple SC Collision Attacks is quite simplified. He then tested these theoretical results by attacking Threshold Implementations of AES and PRESENT implemented on a Xilinx Virtex II FPGA and a masked s-box implementation of AES on an Atmel microcontroller such as you might find on a smart card. All three implementations would have withstood simple SC Collision Attacks and first-order DPA, but using his new methods Amir was able (given enough traces) to recover the secret keys.
So this is indeed an impressive achievement and will most certainly influence the development and evaluation of security-certified devices such as credit cards or authentication tokens.
Eurocrypt 2012, Day 1: Unconditionally-Secure Robust Secret Sharing with Compact Shares
The last talk of the first day of Eurocrypt 2012 was given by Serge Fehr from CWI, who presented a simple but really cute result. The problem is related to secret sharing, one of the fundamental problems in Cryptography. In its original form, the (threshold) secret sharing problem is the following: there exists a dealer with a secret s, which he wants to share among n participants, say P1, ..., Pn, such that the following two properties are satisfied:
- Secrecy: any set of t or fewer parties receives no information about the secret. Moreover, this should hold even if the parties are computationally unbounded. This is the popular notion of unconditional security, also known as information-theoretic security.
- Reconstruction: any set of t+1 or more parties can correctly reconstruct the secret from their shares.
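To make these two properties concrete, here is a minimal sketch of Shamir's threshold scheme over a small prime field, with Lagrange interpolation for reconstruction; the tiny field, the secret and the parameters (t, n) are illustrative only.

# Minimal Shamir (t, n) secret sharing over a small prime field, to make the
# secrecy/reconstruction properties above concrete. The tiny prime and the
# parameters are illustrative only; a real scheme works over a large field.
import random

P = 2**13 - 1          # small prime field GF(8191)

def share(secret, t, n):
    # random degree-t polynomial with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]    # share of P_x is (x, f(x))

def reconstruct(shares):
    # Lagrange interpolation at 0 using any t+1 correct shares
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = (num * -x_j) % P
                den = (den * (x_i - x_j)) % P
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 7
shares = share(1234, t, n)
print(reconstruct(shares[:t + 1]))         # any t+1 shares recover 1234
print(reconstruct(shares[2:2 + t + 1]))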
However, a weakness of the above model is that it assumes that during reconstruction all parties provide their correct shares. This is a strong assumption, because in a real-life scenario a party may be actively corrupted and produce an incorrect share. The notion of robust secret sharing deals with this problem. The secrecy property of a robust secret sharing scheme is the same as in the secret sharing problem, but the reconstruction property is (informally) strengthened to the following robust reconstruction property:
- Robust reconstruction: given the set of n shares, of which at most t could be corrupted, there exists an efficient reconstruction algorithm, which identifies the correct shares and reconstructs the secret with very high probability.
Now a natural question is what overhead is involved with respect to the share size in a robust secret sharing scheme, where we (informally) define the overhead as the difference between the maximum size of a share and the size of the secret. McEliece and Sarwate first observed that if n = 3t+1 then the Shamir secret sharing scheme is already a robust secret sharing scheme. This is because in this case the set of Shamir shares is nothing but a Reed-Solomon (RS) code of dimension t+1 and distance 2t+1, which can correct t errors. So by applying a standard RS decoding algorithm to the list of n shares, of which at most t could be corrupted, we can easily reconstruct the original polynomial and hence the secret. Moreover, there is no overhead in the share size, as the size of each share is the same as the size of the secret. On the other hand, it is also known that if n < 2t then it is impossible to construct a robust secret sharing scheme: intuitively, if there is a dishonest majority, the corrupted parties can simulate the shares corresponding to an incorrect secret, and it is impossible to distinguish the correct shares from the incorrect ones.
The interesting case for robust secret sharing is when n/3 < t < n/2, specifically when n = 2t+1. In this setting it is possible to design robust secret sharing with a negligible error in reconstruction (i.e. the reconstruction may, with small probability, output an incorrect secret or NULL). However, there is now an additional overhead in the share size, and the reconstruction procedure involves additional steps (beyond standard Lagrange interpolation) to identify the incorrect shares. Suppose k is the security parameter and we want the error in the reconstruction to be at most 2^-k. The two extreme ends in this context are as follows:
- In a classical result, Rabin and Ben-Or presented a robust secret sharing scheme, where each share has an overhead of Ω(kn) in the share size and where the reconstruction algorithm takes time polynomial in n.
- In another significant work, Cramer, Damgard and Fehr presented a robust secret sharing scheme, where each share has an overhead of O(k + n) in the share size. However, the reconstruction algorithm requires exponential (in n) time.
In the Rabin and Ben-Or (RB) scheme, each share is authenticated towards the other parties with unconditionally secure MACs, and it is these keys and tags that account for the Ω(kn) overhead. The idea behind the current paper is to select the MAC keys from a smaller field, which implies shorter MAC keys and authentication tags and so reduces their size in comparison to the RB scheme. However, the resulting MAC no longer provides a negligible error probability: an incorrect share might now be accepted as valid during reconstruction with non-negligible probability. To deal with this problem, the authors suggest a new, more sophisticated reconstruction phase. Specifically, instead of blindly considering a share to be valid if it is approved with respect to the MAC keys of any t+1 parties, the reconstruction also considers the "status" of these t+1 parties: if the share of a party is found to be incorrect, then that party is discarded and is no longer considered when checking the validity of the other parties' shares. The idea is to create a win-win situation against the adversary: if the corrupted parties submit too many incorrect shares, then most of them will be identified and discarded, which makes it difficult for the remaining shares under the adversary's control to be considered valid. On the other hand, if the corrupted parties submit many correct shares, then the remaining corrupted shares may be considered valid, but these can be corrected easily by applying the RS decoding algorithm to the valid shares, as very few of the valid shares will then be corrupted.
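As a hedged illustration of the trade-off, here is a sketch of the standard one-time MAC over a field, tag = a*s + b with key (a, b): a forged tag passes with probability 1/|F|, so shrinking the field (as this paper does) shrinks keys and tags but makes the per-check error non-negligible. The field size and values below are illustrative, not taken from the paper.

# Hedged sketch of an information-theoretic MAC of the kind discussed above:
# tag = a*s + b over a field, with the key (a, b) held by the verifying party.
# Forging a tag for s' != s succeeds with probability 1/|F|, which is why a
# deliberately small field trades share size against a non-negligible
# per-check error. The field size here is illustrative.
import random

Q = 2**7 - 1                       # deliberately small prime field GF(127)

def mac_keygen():
    return (random.randrange(Q), random.randrange(Q))      # key (a, b)

def mac(key, s):
    a, b = key
    return (a * s + b) % Q

key = mac_keygen()
s = 42
tag = mac(key, s)

# A corrupted party passing off a different share s' must guess its tag; the
# guess is right with probability exactly 1/Q per verifier, which is why the
# new reconstruction procedure additionally tracks which parties were caught.
trials = 100_000
forgeries = sum(mac(key, 43) == random.randrange(Q) for _ in range(trials))
print(forgeries / trials, "should be close to", 1 / Q)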
Monday, April 16, 2012
EuroCrypt'12, Day 1: Provable Security for Key-Alternating Ciphers
If you XOR a key k0 to a message m, apply a random permutation to the XOR-sum and XOR the result with another key k1, then you get a one round key-alternating cipher. If you want to have 2 rounds, add another permutation and a key k2. If you want even more rounds, just follow this pattern. AES is an example for such a key-alternating cipher with 10 (in case of AES-128) rounds. For such a simple and prominent design, you would expect it to be very well understood, wouldn't you?
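For concreteness, here is a minimal sketch of this construction over 16-bit blocks, with random public permutations standing in for the (keyless) round function of a real cipher; the block size, round count and keys are all illustrative.

# A minimal sketch of the t-round key-alternating construction described above,
# over 16-bit blocks. The fixed public permutations here are toy stand-ins
# (in AES they are the keyless round function); everything is illustrative.
import random

N_BITS = 16
MASK = (1 << N_BITS) - 1

random.seed(0)
t = 3
# t fixed, public, independent random permutations of {0,1}^16
perms = []
for _ in range(t):
    p = list(range(1 << N_BITS))
    random.shuffle(p)
    perms.append(p)

def key_alternating_encrypt(keys, m):
    # keys = (k_0, k_1, ..., k_t): XOR a key, apply a permutation, repeat,
    # and finish with a final key XOR -- exactly the pattern in the text.
    assert len(keys) == t + 1
    x = m ^ keys[0]
    for i in range(t):
        x = perms[i][x] ^ keys[i + 1]
    return x

keys = [random.getrandbits(N_BITS) for _ in range(t + 1)]
print(hex(key_alternating_encrypt(keys, 0x1234 & MASK)))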
Indeed, if you ask an arbitrary cryptographer which area - block ciphers, public-key cryptography, hash functions or protocols - is the best understood, the answer would probably be block ciphers. We have a reliable armory of recipes for building secure block ciphers, and the theory for distinguishing a secure block cipher from an insecure one is well developed. Despite a few recent attacks on AES, AES stands pretty firm and may be used in many states for encryption of the most secret documents; all recent attacks were extremely involved and have had no significant impact on the practical security of AES. However, the gold standard for a cryptographic primitive nowadays is a security proof, i.e. a proof that breaking the block cipher is at least as difficult as solving a very difficult problem.
When AES (Rijndael by its original name) was designed and standardized, the field of provable security was still in its beginnings, and none of the AES finalists had such a security proof when Rijndael was chosen to be the next AES. With a reliable block cipher standardized, the focus of cryptographic research shifted to the other three areas - public-key cryptography, hash functions and protocols - and if you look, for example, at the current SHA-3 candidates, they all have a proof of security in their submission documentation. (This doesn't necessarily imply that we trust them as much as we trust AES.) But for AES, the trusted workhorse of cryptography, we still don't have such a security proof.
The authors of the paper "Key-Alternating Ciphers in a Provable Setting: Encryption Using a Small Number of Public Permutations (Extended Abstract)" (the full version is to appear in the Journal of Cryptology), A. Bogdanov, L. R. Knudsen, G. Leander, F.-X. Standaert, J. Steinberger and E. Tischhauser, stumbled across this oddity when they tried to design an AES variant that is secure against related-key attacks. They started from an old result which gives that a generic attack on a key-alternating cipher with just one round (permutation) requires at least 2^(n/2) operations, where n is the block size. One of the well-known rules in block-cipher design is that the security of a key-alternating cipher grows with the number of rounds, so in their paper the authors try to show an improved lower bound for arbitrarily many rounds. They show that for t >= 3 rounds a generic attack on such a cipher has to use at least 2^(3n/4) operations in order to break the cipher, and they conjecture that the precise lower bound is 2^(tn/(t+1)).
Can these results be applied to AES? No, at least not directly: we already know that AES is vulnerable to related-key attacks. Is the bound tight? All previous research suggests that the bound is only tight for the special case t = 2. What can it be used for? The authors propose the construction of a cipher that should be secure against related-key attacks (unless there is an AES-specific underlying vulnerability): a 2-round key-alternating cipher that uses AES with two different constant keys as its permutations.
From their work it becomes clear that the road towards provably secure block ciphers won't be an easy one, and that it is far more difficult to prove the security of a block cipher than, for example, that of public-key crypto schemes. However, the time might have arrived where the techniques of provable security have matured enough to tackle block ciphers.
Sunday, April 15, 2012
Zero Knowledge Proofs and Nuclear Disarmament
On the last (but not least) day of the workshop on Formal and Computational Cryptographic Proofs at the Newton Institute in Cambridge, Boaz Barak gave a talk on a connection between Zero Knowledge Proofs and Nuclear Disarmament. In Cryptography, we usually try to create digital analogs of physical concepts, such as locks and signatures. It was refreshing to see the converse for a change, as Boaz detailed how Zero Knowledge Proofs, a theoretical concept used in Cryptography, could be used in the physical process of verifying that a nuclear warhead is really a nuclear warhead.
The goal of nuclear disarmament is to reduce (and ultimately eliminate) nuclear weapons. Of course, this is only possible if everyone gets rid of their nuclear weapons, which has led to several treaties (mostly between the US and Russia) detailing how many nukes each side is allowed to keep; everything beyond that has to be dismantled. As trust is in short supply among powerful nations, the nations concerned would like to be able to verify that the others are dismantling their actual nuclear warheads rather than dummy warheads. However, no one wants to allow the others to look too closely at their warheads, lest they steal the top-secret design.
To verify that a warhead is an actual nuke of a given design, the US picks a random warhead of the same design as a template (to prevent Russia from giving them a dummy template) and performs measurements to see if the two are equivalent. It turns out that even seemingly innocuous measurements leak information about the design of the warheads. This has led to the intuition that "such measurements are useful and must therefore contain sensitive information".
As a result, the current approach uses a so-called information barrier: they put both warheads in a black box. This black box then compares the two by performing some measurements and outputs yes/no depending on whether the warhead is equivalent to the template warhead, without revealing any information about the design of the warheads. However, this information barrier approach is very cumbersome, because both parties need to be heavily involved in the design and construction of this black box.
So how can one work around this intuition of measurements containing sensitive information? This is exactly the problem that Zero Knowledge Proofs in Cryptography attempt to address! So all they need is a "physical" zero knowledge proof. How can this be done? Boaz first described a "cartoon version" of their proposed protocol. In this protocol, the Russian Anastasia has two pouches that each contain x ∈ [0, N) marbles, and she wants to prove to the American Bob that her pouches contain the same number of marbles, without Bob finding out how many marbles she is storing in her pouches. She proves this using the following steps:
1) Anastasia prepares 10 pairs of buckets by putting a random number Ri ∈ [0, N) of marbles in both buckets of pair i and presents the buckets to Bob.
2) Bob chooses 9 pairs and verifies that for each pair, both buckets contain the same number of marbles. He returns the remaining pair of buckets to Anastasia without counting the number of marbles inside.
3) Anastasia empties one bucket in both of her pouches and presents them to Bob.
4) Bob verifies that both pouches contain the same number of marbles.
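As a hedged aside, here is a tiny Python simulation of this cartoon protocol with a cheating Anastasia whose pouches differ: to pass the final check she must rig exactly one bucket pair and hope Bob returns that pair unopened, so her success rate comes out around 1/10. The pouch contents and parameters are invented for illustration.

# Toy simulation of the "cartoon" marble protocol above, with a cheating
# Anastasia whose two pouches differ. To pass the final check she must rig
# exactly one bucket pair and hope Bob hands that pair back unopened.
import random

def run_once(pairs=10, N=1000):
    x, y = 7, 4                               # unequal pouches: she is cheating
    rigged = random.randrange(pairs)          # the one pair she makes unequal
    buckets = []
    for i in range(pairs):
        r = random.randrange(N)
        if i == rigged:
            buckets.append((r, r + (x - y)))  # offsets the pouch difference
        else:
            buckets.append((r, r))
    opened = random.sample(range(pairs), pairs - 1)   # Bob opens 9 pairs
    if any(buckets[i][0] != buckets[i][1] for i in opened):
        return False                          # caught at step 2
    a, b = buckets[[i for i in range(pairs) if i not in opened][0]]
    return x + a == y + b                     # step 4: are the pouches now equal?

wins = sum(run_once() for _ in range(20000))
print(wins / 20000)                           # close to 0.1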
As Boaz mentioned, this would be very useful if both nations wanted to eliminate their stockpiles of marbles, but they are working with nuclear warheads instead. So how can you compare two warheads? It turns out that if you set up detectors around a warhead and fire neutrons at it, you will be able to measure a predictable pattern (according to computer simulations). However, knowing the full pattern would once again give you information about the design. Thus, they only measure certain random positions around the warheads and compare the number of neutrons here. The computer simulations suggest that the number of neutrons measured in a certain position should be roughly equal for both warheads. Still, this number might give away information as well. Anastasia and Bob will therefore go through the following protocol to hide this:
1) Anastasia prepares 10 pairs of neutron detectors by shooting a random number Ri ∈ [0, N) of neutrons at both detectors of pair i and presents the detectors to Bob.
2) Bob chooses 9 pairs and verifies that for each pair, both detectors have measured the same number of neutrons. He returns the remaining pair of detectors to Anastasia without reading out the number of measured neutrons.
3) Anastasia puts the detectors in the appropriate positions near the nuclear warheads and lets Bob fire neutrons at them.
4) Bob verifies that both detectors have measured the same number of neutrons.
The number of measured neutrons in the real protocol corresponds to the marbles in the cartoon protocol. For both protocols, Anastasia can cheat successfully with probability no more than 10%; equivalently, she is caught with probability at least 90% (which can be increased by increasing the number of buckets/detectors). Furthermore, the protocols are ε-Zero Knowledge, where ε is non-negligible. This means that Bob does learn a fraction ε about the distribution of the actual measured neutrons (although ε can be reduced by increasing N). Boaz noted that this ε could be reduced further if some extra physical assumptions were made about the warheads, but that other information would then leak instead.
Boaz also mentioned that experiments will start in a few months, noting that "if it works for coffee makers it will probably work for nuclear warheads". In the rest of the talk, Boaz answered questions about possible issues with the protocol. For example, Nigel suggested that these warheads might actually behave like Physically Unclonable Functions (PUF), in contrast to what the simulations suggest. On a positive sidenote, this would mean that they discovered a new version of PUFs. In conclusion, Boaz said that their work is only one component of a much larger process and it is exclusively based on unclassified information. Thus, he concluded that while many questions and concerns remain, it is interesting to apply theoretical cryptographic constructs in the physical world.
Saturday, April 14, 2012
Leakage-Resilient Zero Knowledge
On Thursday at the workshop on Formal and Computational Cryptographic Proofs, Amit Sahai from UC Los Angeles presented his work on leakage-resilient zero-knowledge proofs and their applications. He started by addressing some criticisms that leakage-resilient cryptography has received, stating that he is a theoretician and that the goal is not to make a system secure against all possible physical attacks, but rather to find out what the minimal physical assumptions are for doing interesting cryptography.
Leakage resilience (LR) cryptography is an attempt to construct systems which are, to some extent, secure against side-channel attacks. To model this, one augments an existing security notion by allowing the adversary to make leakage queries: the adversary picks a function and receives the image of the state of the system he is attacking under that function; it is assumed that the function has to be bounded in some sense not to completely rule out security. LR cryptography is a very active area, though the focus has so far been on non-interactive primitives. Sahai's talk on the other hand focused on interactive zero-knowledge proofs and their applications to LR universally composable multiparty computation.
In zero-knowledge (ZK) proofs, a prover tries to convince a verifier of the validity of a statement (for which he has a witness) without revealing anything else. This is formalised by requiring that there exist a simulator which simulates the view of the verifier without knowing a witness. The problem with defining a LR version of zero knowledge is that leakage of the witness violates the notion of ZK itself. The best we can hope for is therefore that the verifier learns nothing beyond what is leaked of the witness. Different relaxations of LR include the assumption that only computation leaks information or allowing a leakage-free pre-processing phase. In contrast, Sahai's guiding principle is: "leak any time anywhere", in particular on the entire state of the prover and throughout the entire protocol.
Leakage-resilient ZK is defined by requiring that there exist a simulator which, to the verifier, is indistinguishable from a prover. Moreover, in the ideal world this simulator is also allowed to make leakage queries. The difficulty is that the simulator has to simulate a view which remains consistent even after the leakage queries. There are parallels to "adaptive security without erasure" for ZK, where the adversary is allowed to corrupt the prover (or simulator) and thus learn its full state. However, for LR-ZK matters are even worse, as the simulator must continue its simulation after leakage queries.
A technique for achieving adaptive security is equivocable commitments, which can be opened to several values. Combining this with another trick, namely making the verifier commit to its challenge ahead of time (and letting the simulator extract this value at the beginning of the simulation), Sahai et al. construct LR-ZK proofs satisfying the following: if the adversary's bound on the leakage function is λ, then for any ε there exists a simulator which may make leakage queries bounded by (1+ε)λ. Sahai concluded by showing applications to LR multiparty computation and discussing additional issues one faces there.
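As a hedged illustration of what "can be opened to several values" means, here is a minimal sketch of a Pedersen-style trapdoor commitment, a textbook example of an equivocable commitment rather than necessarily the one used in this construction; the toy group parameters and values are illustrative.

# Whoever knows the trapdoor x = log_g(h) can open a Pedersen commitment to
# any value, which is the kind of property a simulator exploits. Standard
# textbook construction; the toy safe prime replaces realistic parameters.
p = 2 * 1019 + 1                      # toy safe prime: q = 1019, p = 2039
q = 1019
g = 4                                 # generator of the order-q subgroup
x = 777                               # trapdoor
h = pow(g, x, p)

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

m, r = 123, 456
C = commit(m, r)

# Equivocation: open C to a different message m2 by shifting the randomness.
m2 = 321
r2 = (r + (m - m2) * pow(x, -1, q)) % q
print(C == commit(m2, r2))            # True: same commitment, different opening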
Friday, April 13, 2012
Pairing-based Succinct Non-interactive Zero-Knowledge Arguments
Jens Groth gave the third talk on Wednesday with this descriptive title. After motivating the construction with the correctness requirement for encrypted votes in an election, the speaker explained the basic properties of zero-knowledge arguments, followed by the essential facts about his construction. It allows one to prove satisfiability of any circuit consisting of NAND gates, which is NP-complete. The argument is publicly verifiable and sublinear in the number of gates, but it requires a quadratic-size non-uniform CRS. The table of related results presented suggests that there is a trade-off between the size of the argument and the size of the CRS. In terms of assumptions, the argument is based on the non-standard knowledge-of-exponent assumption and on pairings, but not on the random oracle model. Finally, it achieves perfect completeness, computational soundness against an adaptive adversary, and perfect zero knowledge.
A key component of the construction is an extended Pedersen commitment, where there is one commitment for a vector of values. Together with a second commitment of the same form for a related generator, one can create a so-called knowledge commitment that can be verified with a pairing. The knowledge of exponent assumption implies the knowledge of the committed values if the verification is successful.
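Here is a hedged sketch of the extended (vector) Pedersen commitment itself and of the homomorphic property used later for the gate checks; the pairing-based knowledge-commitment verification is omitted because it needs a pairing-friendly group, and the toy prime-order group below is purely illustrative.

# Minimal sketch of an extended (vector) Pedersen commitment: one group
# element commits to a whole vector of values, and the commitment is
# homomorphic. The pairing-based "knowledge commitment" check is omitted.
import random

p, q = 2039, 1019                     # toy group: order-q subgroup of Z_p^*
gens = [pow(4, random.randrange(1, q), p) for _ in range(5)]  # g_1..g_4, h
gs, h = gens[:4], gens[4]

def commit(values, r):
    c = pow(h, r, p)
    for g_i, v in zip(gs, values):
        c = (c * pow(g_i, v, p)) % p
    return c

a = [3, 1, 4, 1]
b = [2, 7, 1, 8]
ra, rb = random.randrange(q), random.randrange(q)

# Homomorphism: commit(a) * commit(b) commits to the component-wise sum a + b,
# which is what lets the sub-arguments reason about committed wire values.
lhs = (commit(a, ra) * commit(b, rb)) % p
rhs = commit([x + y for x, y in zip(a, b)], (ra + rb) % q)
print(lhs == rhs)                     # True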
There are three sub-arguments used as building blocks: a restriction argument, a product argument, and a permutation argument. The restriction argument shows that the values of a given subset in the commitment are zero, the product argument shows that the committed values are the component-wise product of the values of two other commitments, and the permutation argument shows that the values of one commitment are a permutation of the values of another.
The zero-knowledge argument now works as follows: the prover commits, with three knowledge commitments, to the two inputs and the outputs of all NAND gates. The restriction argument is used to show that the redundant positions in the commitments are set to zero, and the product argument is used to show that the square of each committed value equals the value itself, which implies that all values are 0 or 1, i.e. bits. For a NAND gate, the product of the input values equals one minus the output value, so the product argument can be used together with the homomorphic property of the commitment scheme to show that all gates are computed correctly. Finally, the permutation argument is used to show that the values on the wires are consistent.
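The gate checks boil down to simple arithmetic constraints on the committed wire values; the hedged sketch below just verifies those constraints in the clear for a tiny invented circuit, to show what the sub-arguments are proving about the commitments.

# b*b = b forces each wire value to be a bit, and a NAND gate with inputs
# x, y and output z must satisfy x*y = 1 - z. The circuit here is made up.

def check_assignment(gates, wires):
    # gates: list of (in1, in2, out) wire labels for NAND gates
    ok = all(v * v == v for v in wires.values())            # bit constraints
    for x, y, z in gates:
        ok &= wires[x] * wires[y] == 1 - wires[z]           # NAND constraints
    return ok

# Tiny circuit: w3 = NAND(w1, w2), w4 = NAND(w3, w3)  (so w4 = w1 AND w2)
gates = [("w1", "w2", "w3"), ("w3", "w3", "w4")]
print(check_assignment(gates, {"w1": 1, "w2": 1, "w3": 0, "w4": 1}))  # True
print(check_assignment(gates, {"w1": 1, "w2": 1, "w3": 1, "w4": 0}))  # False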
How Provably Secure are Cryptographic Primitives used in Practice?
For the second time this year, many of the group have made the short trip to Cambridge for the Newton Institute workshop titled Formal and Computational Cryptographic Proofs. In the spirit of the workshop the focus of Eike Kiltz's talk was provable security of RSA-FDH (Full Domain Hash) signatures.
When looking at provable security it is of course vital to consider the following process:
1. Define the security model required: Unforgeability under Chosen Message Attack (UF-CMA)
2. Describe the scheme: [RSA-FDH]
3. Define and analyse a 'hard problem': the RSA assumption
4. Create a security reduction from the scheme to the hard problem.
So in this case one needs to show that breaking the UF-CMA property implies breaking the RSA assumption, in order to conclude that the scheme is secure. First off, the signature scheme is (t_F, ε_F, q_s, q_h)-secure in the random oracle model if for all forgers F running in time t_F and making at most q_s signature queries and q_h random oracle queries, Pr[F wins] < ε_F, where the event "F wins" means that F creates a forgery on a message not previously asked.
In RSA-FDH the public key is (N, e, H: {0,1}^k → Z*_N) and the secret key is d = 1/e mod φ(N). The unique signature is s = H(m)^(1/e) mod N, and verification holds if s^e = H(m) mod N.
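Here is a hedged, runnable sketch of the scheme with toy parameters: the tiny Mersenne-prime modulus and the SHA-256 counter-based domain hash are stand-ins chosen purely for illustration, not parameters anyone would deploy.

# Toy RSA-FDH: hash the message into Z_N ("full domain"), sign with the
# trapdoor d, verify by re-hashing. Illustrative parameters only.
import hashlib

p, q = 2**31 - 1, 2**61 - 1        # toy primes (Mersenne), illustrative only
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    # "full domain" hash: expand SHA-256 with a counter and reduce mod N
    out, ctr = b"", 0
    while len(out) < N.bit_length() // 8 + 8:
        out += hashlib.sha256(ctr.to_bytes(4, "big") + msg).digest()
        ctr += 1
    return int.from_bytes(out, "big") % N

def sign(msg):        return pow(H(msg), d, N)       # s = H(m)^(1/e) mod N
def verify(msg, s):   return pow(s, e, N) == H(msg)  # accept iff s^e = H(m)

s = sign(b"attack at dawn")
print(verify(b"attack at dawn", s), verify(b"attack at dusk", s))  # True False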
A variant of this scheme is used in practice, and it gives unique signatures from a deterministic algorithm. In 2000, Coron [pdf] showed that if we assume RSA is (t_A, ε_A)-hard to invert, then RSA-FDH is (t_F, ε_F, q_s, q_h)-secure in the random oracle model, where ε_F = q_s · ε_A and t_F ≈ t_A. The reduction is not tight, as it has a 'security loss' of q_s. Thus, for 80-bit security of RSA-FDH, the work factor of F is t_F/ε_F ≥ 2^80 with q_s = 2^30, requiring hardness of RSA against an adversary A with work factor t_A/ε_A = (q_s · t_F)/ε_F = 2^110. Thus the size of N is required to be approximately 2000 bits, rather than the 1000 bits that would suffice if the reduction were tight.
In 2002, Coron [pdf] showed that if there exists a reduction R from the security of RSA-FDH to the hardness of RSA losing a factor smaller than q_s, then this reduction R can itself be used to break RSA. Given the reduction R, we simulate a forger F_sim that makes hash queries on a set M of q_s messages and signing queries on a subset Q of M. The simulated forger submits M and receives the hashes of the elements of M; it then selects an m* in M but not in Q and gets back σ* = sign(m*); it then submits Q and receives signatures on the elements of Q, and finally rewinds R to before m* was queried, so that after the rewind the reduction does not know what happened and m* is not in Q, meaning σ* still counts as a valid forgery. This simulates the view of a forger being used to break a hard problem.
Let Freal be a real forger that makes the same queries Q and outputs a forgery σ*real on m*. Then by definition:
Pr[R^Freal(N, e, x^e) = x] = εR
Pr[R^Fsim(N, e, x^e) = x] = εI
|Pr[R^Freal(N, e, x^e) = x] - Pr[R^Fsim(N, e, x^e) = x]| ≤ Pr[σ* incorrect AND S correct AND ¬Fail] ≤ εF/qs
so we have εI ≥ εR - εF/qs.
RSA is referred to as 'certified' if, given (N, e), one can verify in polynomial time that f_(N,e)(x) := x^e mod N defines a permutation over Z*_N.
A folklore result states that for e > N and e prime, RSA is certified. In a paper of Kiltz and Kakvi appearing next week at Eurocrypt it is shown that RSA is certified if N^(1/4) < e < N and e is prime. Small-exponent RSA, where e < N^(1/4), is not certified as it is 'lossy'. This work shows that it is possible to create a tight reduction for RSA-FDH using the Phi-Hiding assumption, with only a factor-of-2 loss in the reduction. Thus if Phi-Hiding and RSA are equally difficult assumptions (assumed to be the case) then we have a tight reduction.
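A small helper, written only to make the quoted range concrete (the function name and the use of sympy's primality test are my own choices, not from the paper): it checks, up to integer rounding, whether a public exponent lies in the certified range N^(1/4) < e < N.

from math import isqrt
from sympy import isprime    # any primality test would do here

def in_certified_range(N: int, e: int) -> bool:
    # e prime and N^(1/4) < e < N, up to integer rounding of the fourth root
    fourth_root = isqrt(isqrt(N))
    return isprime(e) and fourth_root < e < N

# for a modulus of roughly 2048 bits (only the size matters here), e = 65537 sits far
# below N^(1/4): the 'lossy' small-exponent regime
assert not in_certified_range(1 << 2048, 65537)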
In the standard model, Dodis et al. [pdf] gave the following theorem: if f is a homomorphic trapdoor permutation, there does not exist any black-box reduction R from the security of f-FDH to any non-interactive hard problem of f. There is however a positive result due to programmable hash functions (Hofheinz, Kiltz 2008 [pdf]). A brief discussion of programmable hash functions included the following: if H is an (m,1)-programmable hash function and RSA is hard to invert, then RSA-FDH is a secure m-time signature scheme, admittedly in a weaker model.
In randomized RSA-FDH, the public key includes two hash functions H and P, and signatures are defined as H(m)^(1/P(r)), where r is a random t-bit string. The security is discussed in the paper of Hofheinz, Jager and Kiltz [pdf].
Thursday, April 12, 2012
Garbled Circuits
The first talk on Wednesday was given by Phillip Rogaway on Garbled Circuits. GCs find their motivation in the Two Millionaires' Problem, generalized to the (secure) evaluation of any given boolean circuit between two players. GCs have been around in the literature for a long time, since Yao published his six-page paper in 1986. Surprisingly, the protocol is not described in that paper but is based on Yao's oral presentations of his work. Foundations were not worked out back then: GCs have been seen as a tool or technique rather than an object of study in themselves. In the talk, Rogaway first gave a brief overview of several constructions proposed over the last twenty years. The first instantiations embedded some information within the circuit's gates and made use of the XOR operation to go through the garbled gates and eventually compute an encoding of the circuit's output. The legitimate user can then recover the real output value from this encoding. The problem with this approach is that for certain configurations of the circuit (concretely, when several garbled gates share a common input wire) hidden values in the garbled gates can be recovered (see this paper).
The first attempt to establish its security appeared in 2009, by Lindell and Pinkas. It is based on the protocol described in the paper 'How to play any mental game'. This protocol uses double encryption to garble the gates. The symmetric encryption used in the garbling has the following three properties: 1) it is IND-CPA secure; 2) it has an elusive range, that is, the ranges of the encryption function under two different keys (the values of a gate's input wires) are disjoint with high probability; 3) it has an efficiently verifiable range (to move efficiently along the garbled gates).
On the other hand, many implementations have been proposed (e.g. Kolesnikov-Schneider 2008, Pinkas-Schneider-Smart-Williams 2009), but none of them fall within the Lindell-Pinkas paradigm, since they use hash functions and therefore belong to the random-oracle world.
The goal that Rogaway wanted to achieve was to define a garbling scheme in a way that is suitable for most applications and is representation-independent (boolean or arithmetic circuits, lookup tables, etc.). Here is how the formalization goes: we have a function f, seen as a string, and want to securely evaluate it. We split f into a triple (e,F,d), where e (respectively d) is an encoding (decoding) of the input (output) and F is the garbled version of f. A garbling scheme then consists of a 5-tuple G=(Gb,En,De,Ev,ev). Gb generates (e,F,d), and Ev evaluates F on the encoded input X given by En and outputs Y, from which y=f(x) is derived using De. It can be the case that G leaks information about f. He defined a function L which, on input f, measures the leakage of the scheme (sometimes f itself is known, sometimes only its topology or structure); L is supposed to be poly-time computable. Using this notion he defines several properties (or rather, formalizes concepts used in the literature). For example, input privacy holds if from (F,X,d) we learn nothing but L(f). Similarly, obliviousness holds if we learn nothing but L(f) from (F,X).
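To make the five algorithms concrete, here is a minimal Python sketch of the syntax G = (Gb, En, De, Ev, ev), instantiated with a deliberately trivial and completely insecure 'identity' garbling; it exists only to exercise the interface and the correctness condition De(d, Ev(F, En(e, x))) = ev(f, x). All names are illustrative, not taken from the paper.

from typing import Any, Callable, Tuple

Func = Callable[[Tuple[int, ...]], Tuple[int, ...]]   # f, viewed here as a callable

def ev(f: Func, x: Tuple[int, ...]) -> Tuple[int, ...]:
    return f(x)                       # plain evaluation of f on x

def Gb(f: Func):
    # "garble" f into (e, F, d): input-encoding info, garbled function, output-decoding info
    e = None                          # the identity encoding needs no key material
    F = f                             # the "garbled" function is f itself -- insecure, interface only
    d = None                          # identity decoding
    return e, F, d

def En(e: Any, x: Tuple[int, ...]):   # encode the input under e
    return x

def Ev(F: Func, X):                   # evaluate the garbled function on the encoded input
    return F(X)

def De(d: Any, Y):                    # decode the garbled output
    return Y

# correctness check, here for a single AND gate
f = lambda x: (x[0] & x[1],)
e, F, d = Gb(f)
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert De(d, Ev(F, En(e, x))) == ev(f, x)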
He also talked about game-based security, giving two different notions: indistinguishability and simulation. In the IND game an attacker A has access to a garbling oracle which, on a query (f,x,f',x') from A, returns a random garbled instance (F,X,d) of either (f,x) or (f',x'). The goal of the attacker is to guess which function/input was garbled; here we must assume that L(f)=L(f'). The simulation scenario is similar, except that the simulator makes up its instance (F,X,d) using only L(f). He noted that SIM implies IND, but the converse is only true when (L,ev) is efficiently invertible (that is, given a pair (s=L(f*), y=ev(f*,x*)), one can find (f,x) such that s=L(f) and y=ev(f,x)).
Lastly, he stressed that garbling schemes find applications in the area of Verifiable Computation (introduced by Gennaro, Gentry and Parno in 2010), since a garbled circuit can be seen as a one-time verifiable computation scheme.
Wednesday, April 11, 2012
Metareductions
Marc Fischlin gave a talk on metareductions today. Metareductions are a technique to prove that a notion can not be derived from another in a "black box" manner. The technique itself has been known for a long time and crops up every now and then in different forms, but the name metareduction is comparatively new and the technique has been enjoying a lot of interest recently.
Among other results, Marc has identified the "habitat" of metareductions in an environment bounded by different properties of reductions, such as whether they treat the adversary, the underlying primitive or the construction as black boxes or not. A first description of this "environment" was given in the RTV framework; Marc introduced a new nomenclature, which he calls CAP, that directly mirrors the relevant properties. Metareductions live in the "*BN" layer, that is, around reductions that treat the adversary as a black box but not the primitive in question.
The basic idea is the following. Security of a construction is defined by a game G, as in the statement that for any adversary A something holds for the interaction A-G.
To prove this statement under an assumption defined by a game C, one gives a reduction R and switches from the game A-G to A-R-C, arguing that A cannot tell -G from -R-C and that if A- breaks G then (A-R)- breaks C.
To prove that no such R exists, first choose an A. It is common to choose an "all-powerful" A that can extract discrete logarithms in one step, for example.
Assume that R exists: then A-R- breaks C, which is not surprising.
Now construct a metareduction M and change the game from A-R-C to M-R-C. M is no longer all-powerful but can rewind R when necessary to make up for this. As long as R cannot detect that it is playing with M instead of A, (M-R)- now breaks C without the help of A. This is a contradiction unless C was easy to break in the first place.
Unfortunately there is a lot more to constructing M than this. In many reductions one can get away with just considering negligible versus non-negligible functions or giving generous estimates. To get a metareduction argument to work at all, one needs a detailed knowledge of the probabilities involved and usually extra tools like the Chernoff bound.
Intuitively, what is going on is that R is reducing a scheme S to a "slightly simpler" assumption C; the smaller this difference in complexity, the easier the job of R and the harder it is for M to exploit the difference.
Multikey Fully Homomorphic Encryption and Applications
Once again this year I find myself back in sunny Cambridge; this time for another workshop consisting of a series of invited talks, followed of course by Eurocrypt next week. The second talk of the first day was by Vinod Vaikuntanathan, presenting a new work on Multikey Fully Homomorphic Encryption and its applications.
The application scenario is one of outsourcing computation, whilst preserving privacy of inputs. The standard way to achieve this in a single user setting is with FHE. However, when faced with a large pool of users who wish to compute a function on all their private inputs, FHE cannot be applied so easily.
They create a new primitive, called Multi-key Fully Homomorphic Encryption, to deal with this situation. Each client has a key pair (ski, pki), and there is a single high-powered server used to evaluate the desired function, f. When given some ciphertexts c1 = Enc(pk1, x1), ..., cn = Enc(pkn, xn), the server can compute y = Eval(f,c1,...,cn). The clients then perform an interactive decryption protocol to jointly recover f(x1,...,xn) = Dec(sk1,...,skn,y). It is stressed that this decryption phase is the only part of the process requiring interactivity. The number of clients and the function to be evaluated are completely dynamic and can be chosen on-the-fly. This is pretty much the best outcome we can hope for, as it was shown recently (HLP'11) that a completely non-interactive solution to the problem is impossible.
The construction itself is based on the NTRU cryptosystem. Specifically, they adapt a recent variant due to Stehle and Steinfeld (2011) that guarantees hardness based on worst-case problems in ideal lattices. With a minor adjustment to this scheme, the speaker explained how NTRU is in fact multi-key somewhat homomorphic. Those interested in the specifics should look at the paper, but I must say that the technique for transforming it in this way is surprisingly elegant: given two ciphertexts encrypted under different secret keys, they can simply be added or multiplied together, giving a new ciphertext decryptable under the product of the two keys. (Note that, regardless of whether you add or multiply the ciphertexts, the keys are always multiplied - the final decryption key is simply the product of all secret keys.)
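A toy sketch of this 'keys always multiply' behaviour is given below, assuming an LTV/NTRU-style scheme but collapsing the polynomial ring to plain integers; it shows only the algebra (secret keys congruent to 1 mod 2, ciphertexts c = h·s + 2e + m, joint decryption with the product of the keys) and says nothing about security or real parameters. All names and parameter choices are my own illustration.

import random

q = (1 << 61) - 1                  # a large odd modulus; all noise stays far below q/2

def small() -> int:
    return random.randint(1, 15)   # tiny values standing in for "small" ring elements

def keygen():
    f = 2 * small() + 1                 # secret key, congruent to 1 mod 2
    g = small()
    h = (2 * g * pow(f, -1, q)) % q     # public key h = 2g/f mod q
    return f, h

def enc(h: int, m: int) -> int:    # encrypt a bit m under public key h
    s, e = small(), small()
    return (h * s + 2 * e + m) % q

def dec(F: int, c: int) -> int:    # decrypt with a (possibly joint) secret key F
    v = (F * c) % q
    if v > q // 2:                 # centred reduction
        v -= q
    return v % 2

f1, h1 = keygen()
f2, h2 = keygen()
m1, m2 = random.randint(0, 1), random.randint(0, 1)
c1, c2 = enc(h1, m1), enc(h2, m2)

F = f1 * f2                        # the joint key is the *product* of the secret keys,
                                   # whether the ciphertexts are added or multiplied
assert dec(F, (c1 + c2) % q) == (m1 ^ m2)   # homomorphic addition (XOR of the bits)
assert dec(F, (c1 * c2) % q) == (m1 & m2)   # homomorphic multiplication (AND of the bits)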
Vinod used a nice "reservoir analogy" to explain how to deal with the increase of error terms after homomorphic operations. By adapting the modulus switching technique from previous FHE schemes, the reservoir level can be kept nicely under control for an a priori bounded depth of circuits. To achieve true (unbounded depth) FHE, they give a multi-key analogue of Gentry's bootstrapping theorem.
However, it must be mentioned that allowing homomorphic operations comes at an added cost: a random polynomial chosen during encryption has to be much smaller than the lower bound in the security proof of Stehle & Steinfeld. This introduces an additional assumption (on top of ring-LWE), namely that the quotient of two small polynomials is indistinguishable from uniform. For large enough polynomials this is actually statistically true, but for the parameters at hand the hardness of this problem is very much open.
Two other open issues with the scheme were mentioned: firstly, security is only guaranteed against honest-but-curious adversaries. It would be interesting to see how this could be extended to the malicious case. Secondly, the final ciphertext must be decrypted in 'layers', passed around for each client to unpeel one layer of encryption. It would be nicer to have a true threshold decryption to reduce communication overhead.
Overall, this is a groundbreaking work in both the fields of MPC and FHE. The flexibility of the resulting protocol seems to have good potential, and the construction of FHE from NTRU is also an exciting new result. NTRU is known for its efficiency, and although of course the fully homomorphic transformation incurs significant overhead, it will be interesting to see how it stacks up against the current state of the art.
Tuesday, April 10, 2012
Newton Institute - Formal and Computational Cryptographic Proofs
Formal and Computational Cryptographic Proofs
Newton Institute, 10-13 April 2012
This week many of our Bristol students are in Cambridge for a seminar organised by our very own Nigel Smart at the Newton Institute as part of the Turing year.
In one of today's talks, Bogdan spoke about composition of game-based proofs. Games and simulations are the two main means of reasoning about security of protocols. Simulation-based notions claim to be conceptually simpler and especially to offer composition for free - for example the well-known Universal Composability (UC) framework.
Bogdan gave the example of a key-exchange protocol: just having a key is of little use on its own unless you can use that key, for example to transmit encrypted messages. So it is a very natural requirement that a key exchange protocol be composable with other protocols.
Simulation-based approaches have many disadvantages once one takes a closer look. Once one actually has to read or write a proof of a concrete protocol, any suggestion of simplicity quickly evaporates. Consequently, real-world protocols such as TLS are rarely proved secure in simulation-based models.
Among the disadvantages of simulation, besides the number of conflicting frameworks (UC, JUC, RS, IITM, GNUC just to name some) Bogdan pointed out that the frameworks themselves duplicate features of the protocols such as session identifiers. There are also complexities involving joint state between sessions and particularly with adaptive corruption of parties.
Instead, Bogdan (joint work with several others) proposed to look at composition of game-based notions. Security games are in fact not wholly unamenable to compositional proofs, even generic ones: Bogdan gave a theorem that allowed composition of a key-exchange protocol secure in the Bellare-Rogaway (game-based) model with any other protocol as long as the key exchange has the property that partnered sessions are easy to identify.
This covers SSH, IKE and a few others but not TLS (which is not BR-secure). The reason is that the TLS key establishment protocol already uses the derived session key for an encryption, disallowing any clean separation between establishment and use of the session key.
Even this problem can be solved though and the technique is to consider protocols proved secure from primitives by means of a so-called "key independent reduction". This essentially means that the protocol only uses the key via the primitive, allowing the triple of key agreement, primitive and protocol in TLS (the primitive is length-hiding authenticated encryption with a minor restriction) to be proved secure in a generic and modular way from game-based notions.
I think this work is important in several ways. It is obvious that we need composable security notions. The claim that simulation-based notions are the best (or even only) way to reliably get composition has never convinced me personally. My main reason for liking game-based proofs is that they feel much more natural and intuitive to me: not only do I find it much easier both to read and write proofs in game-based models but they also provide better conceptual links between the techniques in the proofs themselves and informal or "intuitive" notions of security.
The techniques employed in game-based proofs, from reductions to hybrid arguments, are often compositional in nature (in a recent paper for example we composed DAA out of various cryptographic "building blocks" and proved the DAA scheme secure in a modular and compositional manner) - but until now each composition had to be done by hand. The line of work presented today could, so I hope, lead to much easier notions of composing game-based proofs.
Thursday, April 5, 2012
TCC Invited Talk II
The second invited talk at TCC 2012 was given by Sergey Yekhanin from Microsoft Research. The talk was on Locally Decodable Codes (LDCs), an important research topic in coding theory as well as in cryptography. In the talk the speaker brilliantly explained the concept of LDCs and their relation to one of the fundamental cryptographic primitives, namely Private Information Retrieval (PIR). This was followed by some technical constructions of LDCs.
To see the motivation for LDCs, consider the well known problem of data storage, where we want to store some data reliably and, at the same time, would like the data to be readily available to users. One simple and common approach is to replicate the data in several places. This provides local recovery, as the data can be accessed locally from any one of the places where it is stored, but it is overkill for large amounts of data. An alternative approach is erasure coding: we encode the data using an erasure-correcting code and store the various components of the codeword in various places. Though this approach is cheaper than replicating the whole data, it no longer provides the local recovery property, because to access the data a user has to retrieve several components of the codeword and perform the decoding. What we would like is an erasure code that also allows local decoding.
Consider the following example: let x be the message, consisting of 3 bits x1, x2 and x3. Suppose we encode the message into the 7-bit codeword C = (x1, x2, x3, x1 ⊕ x2, x1 ⊕ x3, x2 ⊕ x3, x1 ⊕ x2 ⊕ x3). This is nothing but the well known Hadamard code. An interesting feature of this code is that any message bit can be recovered with locality 2. For example, suppose we want to recover the bit x1: we can do so as long as at least one of the 4 decoding tuples for x1 listed below is not erased. The decoding tuples for x2 and x3 are listed as well (a short decoding sketch follows the list):
Decoding tuples for x1:
- {x1}
- {x1 ⊕ x2 , x2}
- {x1 ⊕ x3 , x3}
- {x2 ⊕ x3 , x1 ⊕ x2 ⊕ x3}
Decoding tuples for x2:
- {x2}
- {x1 ⊕ x2 , x1}
- {x2 ⊕ x3 , x3}
- {x1 ⊕ x3 , x1 ⊕ x2 ⊕ x3}
Decoding tuples for x3:
- {x3}
- {x1 ⊕ x3 , x1}
- {x2 ⊕ x3 , x2}
- {x1 ⊕ x2 , x1 ⊕ x2 ⊕ x3}
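Here is the short decoding sketch: a few lines of Python (my own illustrative encoding of the positions, not from the talk) that recover a message bit from a partially erased codeword by trying its decoding tuples.

from functools import reduce

# positions of the 7-bit codeword, written as the sets of message bits they XOR together
POSITIONS = [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]

def encode(x):
    # x is a dict {1: x1, 2: x2, 3: x3}; returns the 7-bit codeword
    return [reduce(lambda acc, i: acc ^ x[i], S, 0) for S in POSITIONS]

def decode_bit(codeword, i):
    # locally decode bit i from a codeword whose erased positions are None,
    # by trying the decoding tuples for bit i until one of them survives
    n = len(POSITIONS)
    for a in range(n):
        if POSITIONS[a] == {i} and codeword[a] is not None:
            return codeword[a]                         # the systematic position survived
        for b in range(a + 1, n):
            if POSITIONS[a] ^ POSITIONS[b] == {i}:     # the pair XORs to exactly bit i
                if codeword[a] is not None and codeword[b] is not None:
                    return codeword[a] ^ codeword[b]
    return None                                        # every decoding tuple was erased

x = {1: 1, 2: 0, 3: 1}
c = encode(x)
c[0], c[3] = None, None             # erase the positions for x1 and x1 xor x2
assert decode_bit(c, 1) == x[1]     # x1 is still recoverable, here from {x1 xor x3, x3}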
Another interesting property of the above LDC is the following: each component of the codeword occurs uniformly across the decoding tuples of the message bits, that is, each component occurs the same number of times in the decoding tuples of each message bit. For example, consider the fourth component of the codeword, namely x1 ⊕ x2: it occurs once in the decoding tuples of x1, once in those of x2 and once in those of x3, and the same holds for every other component of the codeword. This property of LDCs, called the smoothness property, makes them an important tool for cryptographic primitives like PIR.
The important parameters of an LDC are the codeword length n and the query complexity r. It is desirable to have both of these parameters as small as possible; however, there is a trade-off between them, and determining the true shape of this trade-off is a major open problem in the area.
Now let's see how LDCs can be used to design an information-theoretically secure PIR scheme in the multiple-server setting, again using the above example. First, let me informally state the problem of PIR: there is a public database, replicated across several servers and available for user access. A user can access any item in the database by querying the servers; however, we would like to preserve the user's privacy, in that each server should be completely oblivious of the data accessed by the user (we assume that the servers do not collude with each other). This problem can be considered a weaker form of another important cryptographic primitive, namely Oblivious Transfer (OT). In OT we additionally require that the user learns nothing about the data items he has not accessed, whereas in a PIR scheme the user may retrieve other items along with the item he wants to access.
We can design a 2-server PIR scheme using the above example: let x be the database, containing 3 bits, and let S1 and S2 be the servers. Each server applies the LDC encoding to x and stores the codeword. Now suppose a user wants to access a particular data bit, say x3. The user randomly selects one of the 4 decoding tuples for x3, say {x2 ⊕ x3, x2}, and queries the server S1 for the component x2 ⊕ x3 and the server S2 for the component x2. Once he obtains them, he can recover x3 (notice that in this process he also obtains x2, but that is allowed in PIR). On the other hand, each server learns nothing about which data bit the user has retrieved: from the viewpoint of S1, the component x2 ⊕ x3 might have been queried for x1, x2 or x3 with the same probability, and similarly, from the viewpoint of S2, the component x2 might have been queried for x1, x2 or x3 with the same probability.
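The following Python sketch implements this 2-server idea in its standard Hadamard-code form. For simplicity it uses the full encoding with one position per subset (the empty set storing 0) rather than the punctured 7-bit codeword above, so that each individual query is a uniformly random subset; names and structure are my own illustration.

import itertools
import random
from functools import reduce

def hadamard_encode(bits):
    # one codeword position per subset S of {0,...,k-1}, storing the XOR of the bits in S
    k = len(bits)
    code = {}
    for r in range(k + 1):
        for S in itertools.combinations(range(k), r):
            code[frozenset(S)] = reduce(lambda acc, i: acc ^ bits[i], S, 0)
    return code

class Server:
    def __init__(self, bits):
        self.code = hadamard_encode(bits)   # each server stores the encoded database
    def query(self, S):
        return self.code[frozenset(S)]      # answer with a single codeword component

def pir_read(servers, k, i):
    # retrieve bit i with one query to each server; each query on its own is a
    # uniformly random subset, so neither server learns anything about i
    S = {j for j in range(k) if random.random() < 0.5}
    T = S ^ {i}                             # symmetric difference: flip membership of i
    return servers[0].query(S) ^ servers[1].query(T)

database = [1, 0, 1]
servers = [Server(database), Server(database)]
for i in range(len(database)):
    assert pir_read(servers, len(database), i) == database[i]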
There are several interesting open problems in the area of LDCs; interested readers can refer to Sergey's PhD thesis (recently published as a book).