The third talk on Wednesday was given by Aggelos Kiayias (University of Connecticut). Protocols in the MPC setting are designed to allow a group of n players to compute a function on n arbitrary inputs. The protocol must at least ensure input privacy and output delivery to the clients. An adversary wishing to make the protocol fail could simply break the assumption on which it relies, but most likely it will try to corrupt more than t players (for some security threshold t). How is this corruption actually performed? Theoreticians usually assume it is done instantly and comes at no cost. The reality is rather different: each player may have different resources or security parameters.
The question Aggelos posed was whether it is possible to come up with an appropriate model. He proposed to use resource-based corruptions, where a symbolic amount of coins (or resources) is assigned to each player; e.g. an adversary will spend less time breaking weak passwords in dictionary attacks. The adversary can corrupt a player if it spends at least s coins on that operation. Things are more complex when all the players look identical to external observers (hidden diversity), which can be achieved by e.g. randomly permuting the position of the players in the cloud.
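A minimal sketch of this idea, under my own simplifying assumptions (the coin costs and the greedy strategy are illustrative, not the talk's formal model): each player i has a corruption cost s_i, and a budget-limited adversary corrupts a player once it has spent at least s_i coins on it.

```python
# Illustrative sketch of resource-based corruption (assumed toy model):
# each player has a coin cost; the adversary corrupts a player by
# spending at least that many coins, and naturally attacks cheap
# (e.g. weakly protected) players first.

def corrupted_players(costs, budget):
    """Greedy adversary: corrupt the cheapest players first.

    Returns the indices of corrupted players, in order of corruption.
    """
    corrupted = []
    for i, s in sorted(enumerate(costs), key=lambda p: p[1]):
        if budget >= s:
            budget -= s          # spend s coins to corrupt player i
            corrupted.append(i)
        else:
            break                # cannot afford any remaining player
    return corrupted

# Hypothetical costs: player 2 (cost 1) is the weakest and falls first.
print(corrupted_players([3, 10, 1, 7], budget=12))  # → [2, 0, 3]
```

With hidden diversity the adversary could not tell which index carries which cost, so this greedy ordering would no longer be available to it.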
He explained a game to measure the number of coins that the adversary will need in order to corrupt (more than t) players. This is what he called the 'Combinatoric Game of Anonymity': we have a system made up of buckets and balls. Buckets are players, and their size is the number of balls required to fill them up (corrupt them). Assuming the only feedback we get from the system is whether or not a bucket is filled, how many balls are required to collapse the entire system? Clearly, if we know the exact number of balls, then we are back in the classical situation, and the players can be considered instantly corrupted by a sufficiently resource-powered adversary. There are intermediate states of ignorance, though. For example, the adversary may only know n and that some (unknown) bucket has size s (size-only), or n and the maximal size (max-only).
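The cost of this ignorance can be seen in a small simulation (my own toy version of the game, not the talk's formal definition): with hidden sizes and only filled/not-filled feedback, a natural strategy is to throw balls round-robin, which fills the smallest buckets first but wastes balls on large ones.

```python
# Toy version of the buckets-and-balls game (assumed model): bucket sizes
# are hidden, and after each throw the adversary only learns whether that
# bucket is now full. Round-robin throwing corrupts the cheapest players
# first without knowing which ones they are.

def round_robin_balls(sizes, target):
    """Balls thrown, round-robin, until `target` buckets are full."""
    n = len(sizes)
    thrown = [0] * n
    full = set()
    balls = 0
    i = 0
    while len(full) < target:
        if i not in full:
            thrown[i] += 1
            balls += 1
            if thrown[i] >= sizes[i]:  # feedback: bucket i reports "full"
                full.add(i)
        i = (i + 1) % n
    return balls

# Filling 2 of 3 hidden buckets of sizes [2, 5, 3] costs 8 balls,
# versus 2 + 3 = 5 balls for an adversary who knows every size.
print(round_robin_balls([2, 5, 3], target=2))  # → 8
```

The gap between 8 and 5 balls is exactly the price of not knowing the bucket sizes, which is the quantity the game is meant to capture.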
Another interesting question was whether we can discretize the amount of work needed to break a cryptosystem. He explained the concept of exact hardness for some threshold value epsilon. It is used to compare functions according to their inversion: it gives the number of steps needed to successfully carry out the inversion with probability greater than epsilon. How easily can the exact hardness be computed? Bounds can be provided in the random oracle model, and ranges in the standard model.
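As a back-of-the-envelope illustration (my reading of the notion, not a formal definition from the talk): for exhaustive search over a one-to-one function with domain size N, q distinct queries invert a random image with probability q/N, so the exact hardness at threshold epsilon would be the smallest q with q/N > epsilon.

```python
import math

# Illustrative exact hardness of brute-force inversion (assumed setting:
# one-to-one function, domain of size N, uniformly random target image).
# Success probability after q distinct queries is q/N, so we need the
# smallest q with q/N > epsilon.

def exact_hardness_bruteforce(N, epsilon):
    return math.floor(epsilon * N) + 1

# For a 20-bit domain and epsilon = 0.5, just over half the domain
# must be searched.
print(exact_hardness_bruteforce(2**20, 0.5))  # → 524289
```

Real functions only admit bounds rather than such a closed form, which is where the random oracle versus standard model distinction above comes in.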
There are several interesting properties. For example, given a family of functions, find a value T such that it is not possible to amortize the exact hardness of any function in the family beyond T (that is, the exact hardness of any function in the family, for any given epsilon, is greater than T times any finite linear combination of the exact hardnesses of arbitrary functions in the family). Another property is indistinguishability, where an adversary cannot infer which function is better to attack first.