The last talk of the first day of Eurocrypt 2012 was given by Serge Fehr from CWI, who presented a simple but really cute result. The problem is related to secret sharing, one of the fundamental problems in cryptography. In its original form, the (threshold) secret sharing problem is the following: there exists a dealer with a secret s, which he wants to share among n participants, say P_{1}, ..., P_{n}, such that the following two properties are satisfied:

- Secrecy: any set of t or fewer parties receives no information about the secret. Moreover, this should hold even if the parties are computationally unbounded. This is the popular notion of unconditional security, also known as information-theoretic security.
- Reconstruction: any set of t+1 or more parties can correctly reconstruct the secret from their shares.

To recall the Shamir secret sharing scheme, let F be a finite field with more than n elements, and associate the distinct non-zero evaluation points 1, ..., n with the parties P_{1}, ..., P_{n} respectively. Furthermore, without loss of generality, assume that the secret s is selected uniformly at random from F. To share s, the dealer selects a random polynomial f(x) over F (i.e., whose coefficients are from F) of degree at most t, with s as the constant term. Then to party P_{i}, the dealer assigns the share Sh_{i} = f(i). It is easy to see that this scheme satisfies the secrecy and the reconstruction properties. Moreover, it also has an important property with respect to the share size: the size of each share is the same as the size of the secret. It is well known that in any information-theoretically secure scheme, the size of each share must be at least the size of the secret. In that sense, Shamir secret sharing has optimal share size.

However, a weakness in the above model is the assumption that during the reconstruction all parties will provide their correct shares. This is a strong assumption because in a real-life scenario a party may be actively corrupted and may produce an incorrect share. The notion of robust secret sharing deals with this problem. The secrecy property in a robust secret sharing scheme is the same as in the plain secret sharing problem, while the reconstruction property is (informally) strengthened to the following robust reconstruction property:

- Robust reconstruction: given the set of n shares, of which at most t could be corrupted, there exists an efficient reconstruction algorithm, which identifies the correct shares and reconstructs the secret with very high probability.
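The basic (non-robust) Shamir scheme described above can be sketched in a few lines; the prime field and the parameters t and n below are illustrative choices, not values from the paper.

```python
# A minimal sketch of Shamir secret sharing over a small prime field.
import random

p = 2_147_483_647  # a prime, standing in for the field F
t, n = 2, 7        # degree bound t; parties P_1, ..., P_n

def share(s):
    """Dealer: pick a random degree-<=t polynomial f with f(0) = s,
    and give party P_i the share Sh_i = f(i)."""
    coeffs = [s] + [random.randrange(p) for _ in range(t)]
    return [(i, sum(c * pow(i, j, p) for j, c in enumerate(coeffs)) % p)
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from any t+1 shares."""
    s = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, p - 2, p)) % p
    return s

secret = 123456
shares = share(secret)
assert reconstruct(shares[:t + 1]) == secret
```

Any t+1 of the n shares determine the polynomial, while any t shares are consistent with every possible secret, which is exactly the secrecy property above.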

Now a natural question is what overhead is involved, with respect to the share size, in a robust secret sharing scheme, where we (informally) define the overhead as the difference between the maximum size of a share and the size of the secret. McEliece and Sarwate first observed that if n = 3t+1 then the Shamir secret sharing scheme is already a robust secret sharing scheme. This is because in this case the vector of Shamir shares is nothing but a Reed-Solomon (RS) codeword of a code of dimension t+1 and minimum distance 2t+1, which can correct t errors. So by applying a standard RS decoding algorithm to the list of n shares, of which at most t could be corrupted, we can efficiently recover the original polynomial and hence the secret. Moreover, there is no overhead in the share size, as the size of each share is the same as the size of the secret. On the other hand, it is also known that if n ≤ 2t, then it is impossible to construct a robust secret sharing scheme. Intuitively, this is because with a dishonest majority the corrupted parties can simulate the shares corresponding to an incorrect secret, and it is impossible to distinguish the correct shares from the incorrect ones.
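The distance argument can be made concrete with a deliberately naive decoder (not the efficient RS decoding algorithm referred to above): when n = 3t+1 and at most t shares are corrupted, the correct degree-≤t polynomial agrees with at least 2t+1 shares, while any other such polynomial agrees with at most 2t, so exhaustive search over (t+1)-subsets finds it. The toy field and polynomial below are illustrative.

```python
# Brute-force robust reconstruction for n = 3t+1, purely for intuition.
from itertools import combinations

p = 97            # toy prime field
t = 2
n = 3 * t + 1     # n = 7

def interp_at(points, x):
    """Evaluate at x the unique degree-<=t polynomial through `points`."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def robust_reconstruct(shares):
    for subset in combinations(shares, t + 1):
        agree = sum(1 for x, y in shares if interp_at(subset, x) == y)
        if agree >= 2 * t + 1:           # uniqueness: only f reaches this
            return interp_at(subset, 0)  # the secret f(0)
    return None

# f(x) = 5 + 3x + 2x^2 over F_97, secret 5; corrupt t = 2 of the 7 shares
f = lambda x: (5 + 3 * x + 2 * x * x) % p
shares = [(i, f(i)) for i in range(1, n + 1)]
shares[0] = (1, (f(1) + 1) % p)
shares[3] = (4, (f(4) + 7) % p)
assert robust_reconstruct(shares) == 5
```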

The interesting case for robust secret sharing is when n/3 < t < n/2; specifically, when n = 2t+1. In this setting it is possible to design robust secret sharing schemes with a negligible error probability in the reconstruction (i.e., with negligible probability the reconstruction may output an incorrect secret or NULL). However, there is now an additional overhead in the share size. Moreover, the reconstruction procedure involves additional steps (beyond the standard Lagrange interpolation) to identify the incorrect shares. Suppose k is the security parameter and we want the error in the reconstruction to be at most 2^{-k}. The two extreme ends in this context are as follows:

- In a classical result, Rabin and Ben-Or presented a robust secret sharing scheme where each share has an overhead of Ω(kn) in the share size and where the reconstruction algorithm takes time polynomial in n.
- In another significant work, Cramer, Damgård and Fehr presented a robust secret sharing scheme where each share has an overhead of O(k + n) in the share size. However, the reconstruction algorithm requires exponential (in n) time.

In the Rabin and Ben-Or (RB) scheme, each party P_{i} is given the usual Shamir share. In addition, each such share Sh_{i} is authenticated using information-theoretically secure message authentication codes (MACs). Specifically, the share Sh_{i} is authenticated under n independent MAC keys, one corresponding to each party, where party P_{j} holds the j^{th} MAC key and party P_{i} holds the authentication tag corresponding to this key. The idea is that if P_{i} is corrupted and P_{j} is honest, then P_{i} cannot later produce an incorrect share without being identified by the honest P_{j} with very high probability. On the other hand, if P_{i} is honest and P_{j} is corrupted, then even after learning the MAC key, a corrupted P_{j} learns nothing about the share of the honest P_{i}. During the reconstruction phase, the parties produce the shares, the MAC keys and the authentication tags (we assume a non-rushing adversary; however, a rushing adversary can be tolerated by asking the parties to first reveal the Shamir shares, followed by the MAC keys and the authentication tags). Now a share is considered valid if it is approved with respect to the MAC keys of at least t+1 parties. It is easy to see that all correct shares will be valid, while an incorrect share will be considered valid with only negligible probability. Notice that, in addition to a Shamir share, each party now holds n MAC keys and n authentication tags. To bound the error probability by 2^{-k}, the field size in the RB construction is selected to be 2^{k}, and the authentication tags and the MAC keys are selected from this field. Each field element can be represented by k bits; thus the overhead in the share size is nk bits for a k-bit secret.

The idea behind the current paper is to select the MAC keys from a smaller field, which implies shorter MAC keys and authentication tags. This reduces the size of the MAC keys and the authentication tags in comparison to the RB scheme.
However, the resulting MAC no longer provides a negligible error probability; rather, it has non-negligible error. That is, an incorrect share might now be accepted as valid during the reconstruction procedure with non-negligible probability. To deal with this problem, the authors propose a new, more sophisticated reconstruction phase. Specifically, instead of blindly considering a share valid if it is approved with respect to the MAC keys of any t+1 parties, we now also consider the "status" of these t+1 parties: if the share of a party is found to be incorrect, then that party is discarded and is no longer counted when checking the validity of the shares of other parties. The idea is to put the adversary in a lose-lose situation: if the corrupted parties submit too many incorrect shares, then most of them will be identified and discarded, making it difficult for the remaining shares under the adversary's control to be accepted as valid. On the other hand, if the corrupted parties submit many correct shares, then the remaining corrupted shares may well be accepted as valid; but these can then be corrected easily by applying the RS decoding algorithm to the valid shares, since very few of the valid shares are corrupted.
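The pairwise authentication in both the RB scheme and the new scheme can be illustrated with the standard one-time information-theoretic MAC of the form tag = a·m + b over a field; the field size q below is an illustrative small choice, not a parameter from the paper, picked to show why a short key gives a non-negligible forgery probability.

```python
# One-time information-theoretically secure MAC: tag = a*m + b over F_q.
import random

q = 8191          # a small prime field: short keys and tags, but an
                  # incorrect share is accepted with probability 1/q,
                  # which is non-negligible -- this is the error the new
                  # reconstruction phase must cope with

def keygen():
    return (random.randrange(q), random.randrange(q))  # key (a, b)

def mac(key, m):
    a, b = key
    return (a * m + b) % q

def verify(key, m, tag):
    return mac(key, m) == tag

key = keygen()
m = 1234          # a share, encoded as a field element
tag = mac(key, m)
assert verify(key, m, tag)
# Without knowing (a, b), a forged tag for m' != m is accepted only if
# the adversary guesses a*(m' - m) correctly: probability exactly 1/q.
```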
