As the flagship event for the principles of distributed computing, PODC 2012 devoted most of its talks to "core" distributed computing: transactional memory, shared memory, distributed algorithms, failure detectors, and so on. However, there were also a few interesting talks related to Byzantine fault-tolerant distributed computing protocols.
There were two talks related to improving the resilience of the Byzantine agreement problem, one of the fundamental problems in distributed computing. In a nutshell, a Byzantine agreement (BA) protocol involving n parties allows the parties to agree on a common bit, even if t out of the n parties behave maliciously in an arbitrary manner in order to prevent the honest parties from agreeing on a common bit. There are well-known bounds on when agreement is possible; for example, n > 3t is known to be a necessary and sufficient condition for the existence of any BA protocol. A very natural question is whether we can get rid of this bound at the expense of additional primitives/assumptions. The two related talks considered this problem. Before going into their details, it is important to note that the main reason for the bound n > 3t is the ability of a corrupted party to "equivocate"; i.e., a corrupted party can send different messages to two different honest parties and make them disagree, even when it is supposed to send the same message to both. If we can somehow get rid of this problem and prevent the corrupted parties from equivocating, then intuitively we can improve the number of faults that can be tolerated. The two talks looked into two different approaches for enforcing non-equivocation.
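To make the equivocation problem concrete, here is a toy illustration (not any protocol from the talks, and with made-up party names) of how a single corrupted sender splits two honest parties with n = 3 and t = 1: it tells each honest party a different bit, and neither receiver can tell which copy is the "real" one.

```python
# Toy illustration of equivocation: a corrupted sender delivers a
# different bit to each honest party over point-to-point channels.

def corrupted_sender():
    # Equivocation: different messages to different honest receivers.
    return {"alice": 0, "bob": 1}

def honest_view(messages, me):
    # Over point-to-point channels, each honest party only sees the
    # bit addressed to it and cannot compare notes reliably.
    return messages[me]

msgs = corrupted_sender()
alice_bit = honest_view(msgs, "alice")
bob_bit = honest_view(msgs, "bob")
assert alice_bit != bob_bit  # the honest parties are driven to disagree
```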
The first proposal, by Allen Clement, Flavio Junqueira, Aniket Kate and Rodrigo Rodrigues, considered a setting where, apart from the traditional point-to-point channels between the parties, a very small trusted hardware component is embedded inside each processor (party), forcing the parties not to equivocate. This idea was proposed a few years ago in the security community, and it was believed that doing so would certainly improve the resilience of distributed fault-tolerant primitives. The major contribution of the paper is to show that non-equivocation alone is not sufficient to improve the bound n > 3t; in addition, one also requires transferable authentication, such as digital signatures. Given the non-equivocation primitive plus digital signatures, it is possible to design a BA protocol with n > 2t.
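One way to picture the trusted-hardware primitive is as a small module that binds every outgoing message to a unique, monotonically increasing sequence number and attests to the binding. The sketch below is my own minimal illustration (class and method names are hypothetical, and a hash stands in for a real signature): since the module never reuses a sequence number, a party cannot attach the same sequence number to two different messages, which is exactly non-equivocation; the attested tuple can also be forwarded, giving the transferable-authentication flavor the paper shows is additionally needed.

```python
import hashlib
import itertools

class TrustedModule:
    """Toy non-equivocation module: one attestation per sequence number."""

    def __init__(self, owner):
        self.owner = owner
        self._seq = itertools.count()  # strictly increasing, never reused

    def attest(self, msg):
        seq = next(self._seq)
        # Stand-in for a real signature over (owner, seq, msg).
        tag = hashlib.sha256(f"{self.owner}|{seq}|{msg}".encode()).hexdigest()
        return (self.owner, seq, msg, tag)

tm = TrustedModule("p1")
a1 = tm.attest("commit x=5")
a2 = tm.attest("commit x=7")
# Two different messages can never carry the same sequence number,
# so the receiver of a forwarded attestation can detect equivocation.
assert a1[1] != a2[1]
```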
The second proposal, by Alexander Jaffe, Thomas Moscibroda and Siddhartha Sen, considered a setting where, apart from the point-to-point channels, we also have "several" 3-cast channels among triples of parties; a 3-cast channel among 3 parties is like a hyperedge or a multicast channel among them, allowing any of the three parties to send a message identically to the remaining two (who are connected through the channel). It is well known that if a common broadcast channel is provided to all n parties, then BA is possible with n > 2t, while in the absence of a common broadcast channel, n > 3t is required. Unfortunately, nothing was known about the possibility of BA in the interval between n = 2t+1 and n = 3t+1. The major contribution of the paper is to show the possibility of BA for n = 2t + h, for any 1 <= h <= t+1, provided a "sufficient" number of 3-cast channels is available; this sufficient number is a function of h. The authors presented tight bounds on the number of 3-cast channels, as a function of h, required for solving the BA problem with n = 2t + h parties. An obvious and interesting research direction is to see whether we can use the above two approaches to improve the fault tolerance of general secure multi-party computation (MPC) protocols.
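The defining property of a 3-cast channel is that the sender's message reaches both other endpoints identically, so the sender cannot equivocate within that triple. A hedged sketch of the abstraction (my own toy model, not code from the paper):

```python
class ThreeCast:
    """Toy 3-cast channel: one sender's message is delivered
    identically to the other two endpoints."""

    def __init__(self, parties):
        assert len(parties) == 3
        self.parties = set(parties)
        self.inbox = {p: [] for p in parties}

    def send(self, sender, msg):
        assert sender in self.parties
        # The channel itself enforces identical delivery: there is no
        # way to hand different copies to the two receivers.
        for p in self.parties - {sender}:
            self.inbox[p].append((sender, msg))

ch = ThreeCast(["p1", "p2", "p3"])
ch.send("p1", 0)
assert ch.inbox["p2"] == ch.inbox["p3"] == [("p1", 0)]
```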
Another interesting paper, by Guanfeng Liang and Nitin Vaidya, was again related to the BA problem. Traditionally, while designing BA (and general MPC) protocols, we do not consider the capacity/bandwidth of a channel. That is, we assume that every channel has the same capacity and that "any" amount of information can be exchanged between two pairwise-connected parties. However, this is far from reality: in real life, channels may not all have the same capacity, and so the amount of communication allowed over a specific channel may be bounded by its capacity. The contribution of the paper is to design BA protocols over networks where different channels may have different capacities. In such a setting, an obvious way to measure the overall output is the number of bits agreed upon per unit time, leading to notions of capacity and throughput for BA protocols. Though the authors propose a lower bound on the throughput of BA protocols, they could not meet this bound, and designing a BA protocol with optimal throughput is left as an open problem. Again, a natural question would be to look at MPC protocols in the same spirit, where we may define constraints on the individual channel capacities and then try to design MPC protocols.
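The throughput notion itself is simple to state: if a protocol lets the parties agree on b bits in T time units over the given capacitated network, its throughput is b / T. A back-of-the-envelope illustration (the numbers below are made up, not from the paper):

```python
def throughput(bits_agreed, time_units):
    """Bits agreed upon per unit time over a capacitated network."""
    return bits_agreed / time_units

# E.g., agreeing on a 1024-bit value in 8 time units of execution:
rate = throughput(1024, 8)
assert rate == 128.0  # bits per unit time
```

The interesting part of the paper is, of course, how the per-channel capacities constrain the achievable value of this quantity, not the ratio itself.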
The last interesting paper (for me) was by Varsha Dani, Valerie King, Mahnush Movahedi and Jared Saia. Currently, in all information-theoretically secure MPC protocols (secure against a computationally unbounded adversary), evaluating a functionality of size m (consisting of m gates) with an n-party protocol requires communication complexity O(nm), making the communication complexity dependent on the size of the circuit. This is in contrast to the computational setting, where, using FHE, we can design MPC protocols whose total communication complexity is independent of the circuit size. This paper claims to present an information-theoretically secure MPC protocol whose communication complexity is independent of the circuit size in the amortized sense. The key idea is to eliminate the need for each party to communicate with every other party during the evaluation of a gate. Instead, the protocol ensures that, during the evaluation of a gate, a party has to communicate only with a quorum of parties of size log n, such that at most a third of the parties in the quorum can be corrupted. The quorum construction is in turn based on the seminal result of King et al. from PODC 2010, where they designed a BA protocol breaking the traditional O(n^2) communication complexity barrier for reaching agreement. The proposed MPC protocol is in the synchronous setting, and it would be interesting to see an asynchronous version of the protocol.
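The quorum idea can be sketched as follows: each gate of the circuit is mapped to a small set of roughly log n parties, and only those parties talk to each other when the gate is evaluated. The toy assignment below (my own illustration, using a simple hash) only shows the bookkeeping; the actual construction, built on King et al.'s PODC 2010 result, is what guarantees that fewer than a third of each quorum is corrupted, which this sketch does not attempt.

```python
import hashlib
import math

def quorum_for_gate(gate_id, n):
    """Toy assignment of a gate to a quorum of ~log n party indices."""
    k = max(1, math.ceil(math.log2(n)))  # quorum size ~ log n
    h = int(hashlib.sha256(str(gate_id).encode()).hexdigest(), 16)
    start = h % n
    # k consecutive party indices (mod n), just for illustration;
    # the real construction samples quorums with a corruption guarantee.
    return [(start + i) % n for i in range(k)]

n = 64
q = quorum_for_gate("gate-17", n)
assert len(q) == 6 and all(0 <= p < n for p in q)  # log2(64) = 6
```

The payoff is that per-gate communication involves only the quorum members rather than all n parties, which is what makes the amortized communication independent of the circuit size.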
Hi,
Thanks for the update. I am curious to know: are there any multiparty FHE schemes? All the literature I came across considers the two-party setting.
Thanks for your time.
Regards,
Sashank
FHE is really about a single party computing on encrypted data. If you want multiple parties to compute on their shared data, then the "multikey FHE" scheme based on NTRU is really what you are after. So the answer really depends on what you mean by "multiparty FHE".