The last talk of the first day of Eurocrypt 2012 was given by Serge Fehr from CWI, who presented a simple but really cute result. The problem is related to secret sharing, one of the fundamental problems in cryptography. In its original form, the (threshold) secret sharing problem is the following: a dealer holds a secret s, which he wants to share among n participants, say P1, ..., Pn, such that the following two properties are satisfied:
- Secrecy: any set of t or fewer parties learns no information about the secret. Moreover, this should hold even if the parties are computationally unbounded. This is the popular notion of unconditional security, also known as information-theoretic security.
- Reconstruction: any set of t+1 or more parties can correctly reconstruct the secret from their shares.
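To make these two properties concrete, here is a minimal sketch of Shamir's scheme in Python; the prime P and the evaluation points 1, ..., n are illustrative choices, not mandated by the scheme.

```python
# A minimal sketch of Shamir's (t, n) secret sharing over GF(P).
import random

P = 2**61 - 1  # an illustrative prime, large enough to embed the secret

def share(secret, t, n):
    """Deal n shares so that any t+1 of them determine the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    # Party i receives the evaluation f(i) of the degree-t polynomial f.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate f(0) from any t+1 correct shares."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=2, n=7)
assert reconstruct(shares[:3]) == 42  # any 3 = t+1 shares suffice
```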
However, a weakness of the above model is the assumption that during the reconstruction all parties provide their correct shares. This is a strong assumption because, in a real-life scenario, a party may be actively corrupted and may produce an incorrect share. The notion of robust secret sharing deals with this problem. The secrecy property of a robust secret sharing scheme is the same as in ordinary secret sharing, but the reconstruction property is (informally) strengthened to the following robust reconstruction property:
- Robust reconstruction: given the set of n shares, of which at most t may be corrupted, there is an efficient reconstruction algorithm that identifies the correct shares and reconstructs the secret with very high probability.
Now a natural question is what overhead in share size is involved in a robust secret sharing scheme, where we (informally) define the overhead as the difference between the maximum size of a share and the size of the secret. McEliece and Sarwate first observed that if n = 3t+1 then the Shamir secret sharing scheme is already robust. This is because in this case the set of Shamir shares is nothing but a codeword of a Reed-Solomon (RS) code of dimension t+1 and minimum distance 2t+1, which can correct t errors. So by applying a standard RS decoding algorithm to the list of n shares, of which at most t can be corrupted, we can reconstruct the original polynomial and hence the secret. Moreover, there is no overhead in the share size, as the size of each share is the same as the size of the secret. On the other hand, it is also known that if n ≤ 2t then it is impossible to construct a robust secret sharing scheme. Intuitively, this is because with a dishonest majority the corrupted parties can simulate the shares corresponding to an incorrect secret, and it is impossible to distinguish the correct shares from the incorrect ones.
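For concreteness, here is a sketch of the n = 3t+1 case in Python, using the classical Berlekamp-Welch decoder (one standard choice of RS decoding algorithm; the observation above works with any decoder correcting t errors). The helper names and the Gaussian-elimination solver are illustrative.

```python
def solve_linear(A, b, p):
    """Gaussian elimination over GF(p); returns one solution of A x = b."""
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    piv_cols, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [v * inv % p for v in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(vi - f * vr) % p for vi, vr in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
    x = [0] * n                       # free variables set to zero
    for row, c in enumerate(piv_cols):
        x[c] = M[row][n]
    return x

def rs_decode(points, t, p):
    """Recover the degree-<=t polynomial f from n = 3t+1 points (x, y),
    at most t of which are wrong (Berlekamp-Welch): solve for a monic
    error locator E of degree t and a Q of degree <= 2t satisfying
    Q(x) = y * E(x) at every point, then f = Q / E."""
    A, b = [], []
    for x, y in points:
        row = [y * pow(x, j, p) % p for j in range(t)]          # e_0..e_{t-1}
        row += [(-pow(x, j, p)) % p for j in range(2 * t + 1)]  # q_0..q_{2t}
        A.append(row)
        b.append(-y * pow(x, t, p) % p)
    sol = solve_linear(A, b, p)
    E, Q = sol[:t] + [1], sol[t:]
    f = [0] * (t + 1)
    for i in range(t, -1, -1):        # synthetic division Q / E
        f[i] = Q[i + t]               # E is monic, so no inverse needed
        for j in range(t + 1):
            Q[i + j] = (Q[i + j] - f[i] * E[j]) % p
    return f                          # the shared secret is f[0]
```

Note that with n = 3t+1 points there are exactly 3t+1 unknowns (t for E, 2t+1 for Q), which is why t errors is precisely the correction capacity here.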
The interesting case for robust secret sharing is when n / 3 < t < n / 2, and specifically when n = 2t+1. In this setting it is possible to design robust secret sharing schemes with negligible error in the reconstruction (i.e., the reconstruction may output an incorrect secret, or NULL, only with negligible probability). However, there is now an additional overhead in the share size, and the reconstruction procedure involves additional steps (beyond standard Lagrange interpolation) to identify the incorrect shares. Suppose k is the security parameter and we want the error in the reconstruction to be at most 2^-k. The two extreme ends in this context are as follows:
- In a classical result, Rabin and Ben-Or (RB) presented a robust secret sharing scheme with an overhead of Ω(kn) per share, whose reconstruction algorithm runs in time polynomial in n (see the sketch after this list).
- In another significant work, Cramer, Damgård and Fehr presented a robust secret sharing scheme with an overhead of only O(k + n) per share. However, its reconstruction algorithm requires time exponential in n.
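To see where the Ω(kn) overhead in the RB scheme comes from, here is a sketch of its dealing phase, built on the Shamir share() sketch above and on the standard unconditionally secure one-time MAC tag = a*s + b over GF(p) (forgery probability 1/p, so p is taken to be a k-bit prime). The data layout and function names are illustrative choices.

```python
import random

def mac(key, s, p):
    """One-time unconditionally secure MAC over GF(p): tag = a*s + b."""
    a, b = key
    return (a * s + b) % p

def deal_rb(shamir_shares, p):
    """Augment each Shamir share s_i with pairwise authentication data."""
    n = len(shamir_shares)
    # keys[j][i] is party j's verification key for party i's share.
    keys = [[(random.randrange(p), random.randrange(p)) for _ in range(n)]
            for _ in range(n)]
    # tags[i][j] is the tag on s_i under party j's key.
    tags = [[mac(keys[j][i], shamir_shares[i][1], p) for j in range(n)]
            for i in range(n)]
    # Party i stores its Shamir share, the n tags on its own share, and
    # its n verification keys: roughly 3n field elements, i.e. O(kn) bits
    # of overhead when field elements are k bits long.
    return [(shamir_shares[i], tags[i], keys[i]) for i in range(n)]
```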
The idea behind the current paper is to select the MAC keys from a smaller field, which means shorter MAC keys and authentication tags than in the RB scheme. However, the resulting MAC no longer has negligible forgery probability: an incorrect share might now be accepted as valid during reconstruction with non-negligible probability. To deal with this, the authors propose a more sophisticated reconstruction phase. Instead of blindly declaring a share valid if it is approved under the MAC keys of any t+1 parties, the reconstruction also takes into account the "status" of those t+1 parties: if the share of a party is found to be incorrect, that party is discarded and is no longer counted when checking the validity of the shares of other parties. The idea is to create a win-win situation for the honest parties: if the corrupted parties submit too many incorrect shares, then most of them will be identified and discarded, making it hard for the remaining adversarially controlled shares to be accepted as valid. If, on the other hand, the corrupted parties submit mostly correct shares, then the few remaining corrupted shares may well be accepted as valid; but these can then be corrected easily by applying the RS decoding algorithm to the valid shares, since very few of the valid shares are corrupted.
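Here is a much-simplified sketch of that accept-or-discard idea, continuing the data layout of the RB sketch above but with keys and tags now drawn from a smaller field GF(p). It is meant to convey the win-win argument, not the paper's exact algorithm; the acceptance threshold and the iteration order below are simplifying assumptions.

```python
def robust_reconstruct_shares(full_shares, t, p):
    """Return the shares that survive the accept/discard process; the
    secret is then obtained by RS decoding / interpolation on them."""
    alive = set(range(len(full_shares)))

    def approves(j, i, s_i):
        a, b = full_shares[j][2][i]          # j's key for i's share
        return (a * s_i + b) % p == full_shares[i][1][j]

    changed = True
    while changed:                           # iterate until stable
        changed = False
        for i in list(alive):
            (_, s_i), _, _ = full_shares[i]
            votes = sum(1 for j in alive if j != i and approves(j, i, s_i))
            if votes < t:                    # too few vouchers: discard i
                alive.discard(i)
                changed = True
    # The t+1 honest parties always survive (each is approved by the other
    # t honest parties), while a surviving incorrect share must have fooled
    # at least one honest party's key, which happens with probability ~1/p.
    return [full_shares[i][0] for i in alive]
```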