Friday, January 20, 2012

The Dolev-Yao Model and Formal Verification

The topic of the first study group of the new year was cryptographic protocol verification. Anna-Lisa and Peter were tasked with introducing the Dolev-Yao model and some of the tools used to verify properties of cryptographic protocols.

The well known paper of Dolev and Yao [pdf] provided a simple model for verification, and many of the main principles are still used today. The original motivation for the paper was to verify public key protocols against active attackers with considerable power. In their framework the following assumptions are made:

  1. The underlying public key system is 'perfectly secure'
    • one-way functions are unbreakable
    • the public directory is secure and cannot be tampered with
    • everyone has access to every user X's public encryption function EX
    • only X knows the corresponding decryption function DX

  2. Adversary has complete control over the entire network
    • acts as a legitimate user, and can obtain any message from any party
    • can initiate the protocol with any party, and can be a receiver to any party in the network

  3. Concurrent executions of the protocol can occur

The model has the following features, which can be viewed as either benefits or restrictions depending on what you are trying to do:

  • It is simple to describe protocols in this model
  • The adversary has unlimited power, so although this is a conservative approach it may not be realistic
  • Protocols have a 'black-box' nature, which makes composing individual protocols with others extremely difficult

As an example Peter introduced the Needham-Schroeder protocol [pdf], which is used to establish a shared secret between two parties, A and B. This example shows the motivation for using verification tools to find attacks. The protocol runs as follows:

  • A → B : {NA, A}B
  • B → A : {NA, NB}A
  • A → B : {NB}B

In the first step, A creates a nonce NA and encrypts it, along with its own identity, under the public key of B. Party B then creates its own nonce NB, and sends it along with A's nonce, encrypted under A's public key, back to A. In the final step, A returns B's nonce NB encrypted under B's public key, confirming the exchange. In 1996 Lowe presented a simple man-in-the-middle attack [pdf] on this protocol and an amendment which fixes the vulnerability. The attack involves an impostor (I) who convinces B that they are in fact communicating with party A. We write I(A) to indicate I posing as A. The attack runs as follows:

  • A → I : {NA, A}I
  • I(A) → B : {NA, A}B
  • B → I(A) : {NA, NB}A
  • I → A : {NA, NB}A
  • A → I : {NB}I
  • I(A) → B : {NB}B

At the final step, B thinks he has established a secret with A. The fix suggested by Lowe involves changing the second message B → A : {NA, NB}A to include an identity check: B → A : {NA, NB, B}A, so that A notices the reply comes from B rather than from its intended partner I.
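The attack trace above can be replayed in a small symbolic sketch (illustrative only: 'encryption' is modelled as a tagged tuple that only the named key holder can open, and all variable names are hypothetical):

```python
# Symbolic replay of Lowe's man-in-the-middle attack on Needham-Schroeder.

def enc(payload, key):          # {payload}key
    return ('enc', key, payload)

def dec(ct, key):               # only the owner of `key` can open the ciphertext
    assert ct[0] == 'enc' and ct[1] == key, "wrong key"
    return ct[2]

NA, NB = 'NA', 'NB'

# Message 1: A -> I, an honest run started with the intruder.
msg1 = enc((NA, 'A'), 'I')

# Message 2: I(A) -> B, I opens msg1 with its own key and re-encrypts for B.
msg2 = enc(dec(msg1, 'I'), 'B')

# Messages 3/4: B -> I(A) -> A. I cannot read {NA, NB}A, but forwards it.
msg3 = enc((NA, NB), 'A')

# Message 5: A -> I. A sees its nonce NA and replies with NB, encrypted for I.
na, nb = dec(msg3, 'A')
msg5 = enc(nb, 'I')

# Message 6: I(A) -> B. I re-encrypts NB for B, completing B's run "with A".
msg6 = enc(dec(msg5, 'I'), 'B')
print(dec(msg6, 'B'))  # 'NB' -- B now believes NB is shared only with A
```

Under Lowe's fix, message 3 becomes {NA, NB, B}A: A, believing it is talking to I, would see B's identity in the reply and abort the run.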

The Dolev-Yao paper covers cascade protocols, in which the only operations allowed on messages are encryption and decryption, and thus DX EX = EX DX = id. Note that textbook RSA satisfies this property, but in practice no real systems do. It is however a useful and simple class of protocols that can be formally verified. In this model we have two parties A and B and n functions f1,...,fn such that at the start of the protocol A knows a message M and applies f1 to it; thereafter the functions fi for odd i are applied by A and those for even i by B. An adversary Z has all of the fi, EX for every user X, DZ and f1(M). The protocol is said to be insecure if Z can construct a function g such that g ⋅ f1 = id.

Note that in a Diffie-Hellman key exchange procedure modified for this framework we have f1 = EB ⋅ DA and f2 = EA ⋅ DB, so an adversary can simply set g = f2: then g ⋅ f1 = EA ⋅ DB ⋅ EB ⋅ DA = EA ⋅ DA = id, proving the protocol insecure. Although this is a trivial example, it should indicate how these processes work for more complex systems.
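The cancellation argument can be mechanised with a tiny sketch: words are sequences of EX/DX operators, and adjacent inverse pairs cancel. (This is a simplified illustration of the idea, not the full decision procedure from the paper.)

```python
# Symbolic insecurity check for a cascade protocol: reduce a composed word
# of encryption/decryption operators using D_X E_X = E_X D_X = id.

def reduce_word(word):
    """Cancel adjacent inverse pairs such as ('E', 'A') followed by ('D', 'A')."""
    out = []
    for op in word:
        # Same key owner, opposite operation -> the pair cancels to identity.
        if out and out[-1][1] == op[1] and out[-1][0] != op[0]:
            out.pop()
        else:
            out.append(op)
    return out

# The example from the post, written left-to-right:
# f1 = EB . DA and f2 = EA . DB.
f1 = [('E', 'B'), ('D', 'A')]
f2 = [('E', 'A'), ('D', 'B')]

# The adversary sets g = f2 and applies it to f1(M):
g_f1 = reduce_word(f2 + f1)
print(g_f1)  # [] -- the identity, so Z recovers M and the protocol is insecure
```

An empty reduced word means g ⋅ f1 = id, which is exactly the insecurity condition above.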

The Dolev-Yao paper also includes so-called Name-Stamp protocols that include identities of the parties, and explains how the verification process works in this setting. The paper includes proofs that verification of both name-stamp protocols and cascade protocols run in polynomial time.

Next it was Anna-Lisa's turn to discuss some of the ideas behind ProVerif, introduced by Bruno Blanchet in 2008 [pdf], a tool that is used to check security properties such as secrecy and authenticity.

The problem of verifying the security of a protocol is undecidable in general, and tools like ProVerif aim to provide answers wherever possible. One source of the undecidability is the fact that an unlimited number of sessions can occur. To deal with this there are two approaches:

  1. Consider some restriction, for example putting a bound on the number of sessions
    • useful for finding attacks
    • not able to prove correctness as attacks may be missed
    • the analysis will always terminate

  2. Over-approximations (as used in ProVerif)
    • used to prove correctness
    • could potentially find false attacks
    • in some cases this will be non-terminating

Informally, secrecy of a message holds if it is kept secret from a Dolev-Yao adversary during the execution of the protocol. Authenticity means that if party A thinks that he is talking to B then this is in fact the case.

Much work has gone into formal verification in recent years, and ProVerif has been used in the following scenarios: Abadi and Blanchet verified a certified email protocol [pdf], then along with Fournet analysed the Just Fast Keying protocol [pdf], and Bhargavan et al. analysed the TLS protocol [pdf], amongst others.

In ProVerif, protocols are defined by means of the pi-calculus, a formalism for describing concurrent systems. In the first stage, the protocol (written in the pi-calculus), the cryptographic primitives and the property that we wish to prove are given as inputs to ProVerif. An automatic translator then turns these inputs into Horn clauses and derivability queries (an expression of the processes in an abstract form), and resolution produces one of three outputs:

  • Protocol is Correct
  • There is a Potential Attack
  • Does Not Terminate

The syntax of the pi-calculus can be found on page 6 of [pdf]. The abstraction into Horn clauses is crucial to the treatment of an unbounded number of sessions. Due to this abstraction, the tool is successful at proving security in many cases, but can fail on correct protocols.

Problems can occur in protocols where certain values that are initially secret become public after a length of time: the Horn clause model considers that such a value can be reused at the beginning of the protocol, and so may report an attack that cannot actually occur.
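The root of this over-approximation is that Horn clauses are monotone: derived facts are never retracted, so "k becomes public later" is indistinguishable from "k was public all along". A toy forward-chaining sketch makes this visible (heavily simplified: ProVerif resolves clauses over terms with unification, whereas here everything is a ground fact, and the specific facts and rules are invented for illustration):

```python
# Toy forward chaining over ground Horn clauses, in the style of a
# derivability query: can the fact attacker(secret) be derived?

def saturate(facts, rules):
    """Apply rules (premises -> conclusion) until no new fact is derivable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# attacker(x) means the Dolev-Yao adversary knows x.
facts = {'attacker(c1)'}                   # c1 = {secret}k, observed on the network
rules = [
    # Decryption: knowing a ciphertext and its key yields the plaintext.
    ({'attacker(c1)', 'attacker(k)'}, 'attacker(secret)'),
    # The protocol releases k at the END of a session -- but the monotone
    # model cannot express "only afterwards", so k is simply derivable.
    (set(), 'attacker(k)'),
]
print('attacker(secret)' in saturate(facts, rules))  # True
```

If the real protocol only ever sends c1 before k is released in a later, unrelated phase, this derivation corresponds to one of the false attacks mentioned above: the abstraction has lost the temporal ordering between the two events.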
