Friday, May 15, 2015

52 Things: Number 32: difference between game-based and simulation-based security definitions

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know to do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. In this post we outline the difference between a game-based and a simulation-based security definition.

In a game-based security definition, security is defined, unsurprisingly, by a game. This game revolves around some generic primitive and is usually played between a challenger and an adversary, where the challenger poses a challenge to the adversary with a certain 'goal' in mind. The adversary may additionally have access to a number of oracles, and it is said to 'win' if it achieves its goal, which usually means providing some 'correct' output depending on the challenge. The advantage of an adversary is a number that roughly measures how much 'better' the adversary does at the game than a trivial adversary that just guesses its output. For example, if the adversary must output the value of a uniformly random bit, its advantage measures how much it improves on a success probability of one half. A cryptographic scheme is then said to satisfy the security definition if and only if no 'efficient' adversary can achieve a substantial advantage when the generic primitive is instantiated by the scheme.
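To make the advantage notion concrete, here is a minimal sketch of a bit-guessing game in Python. All names (`play_game`, `scheme`, `adversary`) are illustrative, not from any library: the challenger samples a secret bit, builds a challenge from it, and the adversary's advantage is estimated as its distance from a coin-flip success rate.

```python
import secrets

def play_game(adversary, scheme, trials=10_000):
    """Estimate an adversary's advantage in a simple bit-guessing game.

    The challenger samples a uniform bit b, derives a challenge from it
    via `scheme`, and the adversary wins if it outputs b.
    Advantage is estimated as |Pr[win] - 1/2|.
    """
    wins = 0
    for _ in range(trials):
        b = secrets.randbelow(2)        # challenger's secret bit
        challenge = scheme(b)           # e.g. an encryption of message m_b
        if adversary(challenge) == b:   # adversary tries to recover b
            wins += 1
    return abs(wins / trials - 0.5)

# A totally leaky 'scheme' hands the bit to the adversary: advantage 1/2.
leaky = play_game(lambda c: c, lambda b: b)

# Against a challenge that is independent of b, guessing gives advantage ~0.
blind = play_game(lambda c: 0, lambda b: None)
```

A real definition would of course quantify over all efficient adversaries and demand a negligible advantage, rather than estimating it empirically; the sketch only illustrates the shape of the game.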

Informally, one may think of the challenger as a legitimate user that wants to use a cryptographic scheme and of the adversary as the bad guy that wants to achieve something against the wishes of the legitimate user, where this 'achievement' corresponds to the goal of the adversary. Generally, the challenger will have access to all secret parameters (think secret or signing key), whereas the adversary only has access to some oracles (think public hash functions or public key encryption) plus whatever it is given by the challenger during the game (think public parameters and the challenge).

Security proofs in this paradigm rely on two important concepts. To link security to computationally hard problems they use reductions, which lead to statements of the following form: 'if an adversary wins the game with non-negligible advantage, then one can construct an algorithm that uses the adversary as a subroutine to solve some hard problem efficiently.' The other concept is game hopping through a sequence of games. Here, one relates the event of the adversary winning the original game to corresponding events in a sequence of modified games. Each game is close to the previous one in the sense that the adversary cannot tell the two apart unless it can solve some hard problem, or unless some event of negligible probability occurs.
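The game-hopping argument rests on a simple triangle-inequality bound. Writing \(W_i\) for the event that the adversary wins Game \(i\), one may sketch it as:

```latex
\left|\Pr[W_0] - \Pr[W_n]\right| \;\le\; \sum_{i=0}^{n-1} \left|\Pr[W_i] - \Pr[W_{i+1}]\right|
```

If each hop's difference is negligible (because distinguishing two consecutive games would solve a hard problem, or hinges on an event of negligible probability), then the adversary's advantage in the original game differs only negligibly from its advantage in the final, trivially analysable game.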

The previous five blog posts in this series contain four game-based security definitions and one example of a game-based proof with a sequence of games, so we will not consider any specific examples here.


In a simulation-based security definition, security is defined by the existence of a simulator relative to some ideal 'functionality'. Consider a cryptographic scheme in the real world and imagine how you would like it to behave in an ideal world. For example, in a voting scheme it would be nice to have a trusted third party with secure channels to all voters, which receives all the votes over these channels and publishes the result and nothing else. A cryptographic scheme is now secure if, for every adversary against the scheme in the real world, there exists a simulator that interacts only with the ideal 'functionality' in the ideal world and yet produces output indistinguishable from that of the real-world adversary. This means that any 'attack' possible in the real world can also be mounted against the ideal functionality in the ideal world. Conversely, if the ideal functionality resists an attack in the ideal world, then the real scheme resists that attack in the real world as well.
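The ideal voting functionality described above can be sketched as follows. This is an illustrative toy, not any library's API: the 'secure channels' are direct method calls, the ballots stay hidden inside the object, and only the tally is ever published.

```python
from collections import Counter

class IdealVotingFunctionality:
    """Toy trusted third party for voting (illustrative names).

    Each voter submits a ballot over a 'secure channel' (here, a method
    call); the functionality publishes the tally and nothing else.
    """
    def __init__(self):
        self._ballots = []           # kept secret inside the functionality

    def cast(self, vote):
        self._ballots.append(vote)   # secure channel: no one else sees this

    def publish_result(self):
        # Output only the tally: no voting order, no individual ballots.
        return dict(Counter(self._ballots))

f = IdealVotingFunctionality()
for v in ["yes", "no", "yes"]:
    f.cast(v)
result = f.publish_result()          # {'yes': 2, 'no': 1}
```

A real scheme is then secure if whatever an adversary learns by attacking it could have been simulated from this interface alone, i.e. from the published tally.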

The notion first appears in a paper by Goldreich, Micali, and Wigderson, who show that you can play any game (i.e., some joint computation by multiple parties) such that at any step, any group of fewer than half the players knows nothing more than it would in an ideal execution of the game with a trusted party. More recently, simulation-based security appeared in the paper by Ran Canetti introducing Universal Composability. The paradigm is mostly used in the setting of multi-party computation.


So what is the difference? In the game-based approach, each notion of security has its own game. If this notion correctly captures the real-world properties you would like your system to have, then you are done. If your scheme needs to satisfy several notions, you have to prove security in each of the corresponding games. However, in some cases there is a known hierarchy, e.g., IND-CCA security implies IND-CPA security.
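The hierarchy claim is worth unpacking: IND-CCA security implies IND-CPA security because any CPA adversary is also a CCA adversary that simply never uses the decryption oracle. A hypothetical sketch of this trivial reduction (the function names are illustrative, not from any library):

```python
def cca_adversary_from_cpa(cpa_adversary):
    """Lift a CPA adversary into the CCA game (illustrative sketch).

    The CCA game grants an extra decryption oracle; the lifted adversary
    simply ignores it, so it wins the CCA game exactly as often as the
    original adversary wins the CPA game.
    """
    def cca_adversary(challenge, decryption_oracle):
        # The decryption oracle is never queried.
        return cpa_adversary(challenge)
    return cca_adversary

# Any CPA strategy carries over unchanged, oracle or not.
lifted = cca_adversary_from_cpa(lambda challenge: challenge % 2)
```

So an IND-CCA-secure scheme leaves every such lifted adversary with negligible advantage, and hence every CPA adversary too.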

Conversely, in the simulation-based approach, security is modeled by the ideal functionality. Conceptually, your scheme is then secure against every attack that does not also succeed against the ideal functionality. This means that a single ideal functionality can capture several security notions at once.

For further reading, there are good discussions of this question on the Cryptography Stack Exchange.
