Friday, January 27, 2012

Verifiable Computation

Today's study group was on Verifiable Computation and was given by Ashish and Enrique.
Verifiable Computation (VC) allows a computationally-limited client to outsource an expensive computation on some input to a computationally-powerful server. The main issue in such delegation is for the client, despite being computationally weak, to be able to verify the correctness of the result returned by the server. An obvious application of such protocols is cloud computing.

We will denote the function that needs to be computed by F, the input of the client on which he wishes to evaluate the function by x and the output of such a computation by y where y=F(x). The key point is that the verification of the correctness of the returned result should be substantially less expensive than re-computing F(x).

Generally, a VC protocol consists of three phases, which are as follows:
  • Preprocessing: This is only required once. In this phase, the client generates some auxiliary information (private & public) related to the function F.
  • Input Preparation: This phase is required for every input x: the client generates some auxiliary information (private & public) related to the input x.
  • Computation and Verification: The public information related to both the function F and the input x are given to the server. The server then returns to the client a value σy which is an encoding of y=F(x). The client can then recover F(x) and verify its correctness.
A VC protocol consists of the following algorithms (a minimal interface sketch in code follows the list):
  • KeyGen: This algorithm is run by the client and takes a security parameter λ and the function F. It outputs a pair of public/secret keys (pk,sk). The secret key is only known to the client while the public key is sent to the server.
  • InputGen: This algorithm uses the secret key sk to generate secret information τx and public information σx which encode the input x. The public information σx is given to the server. 
  • Compute: This algorithm is given the public key pk and the public information σx and it generates an encoded version σy of the function output on x, i.e. y=F(x).
  • Verify: This algorithm uses the secret key sk and the secret information τx to recover y from σy and outputs Accept/Reject depending on whether or not it is correct. 
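
To make the syntax concrete, here is a minimal interface sketch in Python. It is only a skeleton of the four algorithms above; the class and function names are illustrative and any concrete scheme (such as the constructions below) would have to fill in the bodies.

```python
from abc import ABC, abstractmethod

class VCScheme(ABC):
    """Skeleton of the four VC algorithms; names follow the notation above."""

    @abstractmethod
    def keygen(self, security_param, F):
        """One-off preprocessing: return (pk, sk), an encoding of the function F."""

    @abstractmethod
    def inputgen(self, sk, x):
        """Per-input preparation: return (sigma_x, tau_x), an encoding of the input x."""

    @abstractmethod
    def compute(self, pk, sigma_x):
        """Run by the server: return sigma_y, an encoding of y = F(x)."""

    @abstractmethod
    def verify(self, sk, tau_x, sigma_y):
        """Recover y from sigma_y; return y on acceptance or None on rejection."""

def delegate(scheme: VCScheme, server, security_param, F, x):
    pk, sk = scheme.keygen(security_param, F)      # offline, once per function
    sigma_x, tau_x = scheme.inputgen(sk, x)        # online, once per input
    sigma_y = server(pk, sigma_x)                  # the expensive step, outsourced
    return scheme.verify(sk, tau_x, sigma_y)       # cheap check by the client
```
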
There are a number of desirable properties such a protocol may offer, which we briefly summarize as follows:
  • Correctness: If both the client and the server behave honestly, then the client should accept the result returned by the server.  
  • Soundness: The probability that the server returns an incorrect result and the client fails to spot that is negligible. 
  • Input/Output Privacy: The server does not learn the input x and/or the output F(x). 
  • Efficiency: This concerns the efficiency of both the client and the server sides of the computation as well as the number of rounds involved. 
  • Public Verifiability: Third parties (not just the delegating client) can verify the correctness of the computation. 
  • Public Delegation: Delegating an input requires no secret pre-computation by the client. 
  • Generality of the Function: How general a class of functions such a protocol can evaluate.  
  • Assumptions: What type of cryptographic assumptions the security of the protocol rests on. 
 It is still an open problem to have a protocol that satisfies all of the above properties simultaneously.

Three construction approaches were discussed:
The first one is by Gennaro, Gentry and Parno from Crypto 2010 [pdf]. This approach makes use of Yao garbled circuits [pdf] and a Fully Homomorphic Encryption (FHE) scheme. The function F is represented by a Boolean circuit C (hence this construction works for any function) and then the gates are garbled so as to hide the structure of their underlying truth tables.

In this construction, the client is allowed to run expensive computation during the preprocessing phase (possibly through a powerful trusted third party) but it is computationally bounded in the online phase.

To garble a gate G where w_z = G(w_a, w_b), we choose random labels k_a^i and k_b^i for i = 0,1 (i.e. for the two possible values of each input wire). The row of the gate's truth table corresponding to inputs (i,j) is then double-encrypted as γ_{i,j} = E_{k_a^i}(E_{k_b^j}(G(i,j))) for all i, j ∈ {0,1}. This yields one ciphertext per row of the gate's truth table; the four ciphertexts are then shuffled to obtain the garbled gate. (A toy code sketch follows the list of properties below.)

For the Yao construction to be considered secure, the symmetric encryption function E must have three properties:
  • Indistinguishability of Ciphertexts: It is computationally infeasible to distinguish the encryptions of two vectors of messages. 
  • Elusive Range: Without knowledge of a key, it is hard to produce a ciphertext lying in the range of encryption under that (randomly chosen) key; intuitively, ciphertexts under different keys fall into different ranges. 
  • Efficiently Verifiable Range: Given a key and a ciphertext, one can efficiently verify if the ciphertext is in the correct range for the given key. 
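
To make the garbling step concrete, here is a toy Python sketch. The HMAC-based cipher below is only a stand-in for E (its zero-byte redundancy plays the role of the efficiently verifiable range), and, unlike the formula above, each row encrypts the output-wire label rather than the output bit so that gates can be chained; all names are illustrative, not the paper's construction.

```python
import os, hmac, hashlib
from itertools import product
from random import shuffle

def _stream(key, nonce, n):
    out, ctr = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def E(key, msg):
    nonce = os.urandom(16)
    padded = msg + b"\x00" * 16                    # redundancy: lets D detect a wrong key
    ks = _stream(key, nonce, len(padded))
    return nonce + bytes(a ^ b for a, b in zip(padded, ks))

def D(key, ct):
    nonce, body = ct[:16], ct[16:]
    padded = bytes(a ^ b for a, b in zip(body, _stream(key, nonce, len(body))))
    return padded[:-16] if padded[-16:] == b"\x00" * 16 else None

def garble_gate(gate, k_a, k_b, k_z):
    """gate(i, j) -> output bit; k_a, k_b, k_z map a bit to a 16-byte wire label."""
    rows = [E(k_a[i], E(k_b[j], k_z[gate(i, j)])) for i, j in product((0, 1), repeat=2)]
    shuffle(rows)                                  # hide which row is which
    return rows

def eval_garbled_gate(rows, label_a, label_b):
    for row in rows:
        inner = D(label_a, row)
        out = D(label_b, inner) if inner is not None else None
        if out is not None:
            return out                             # the output-wire label
    raise ValueError("no row decrypted under the supplied labels")

# usage:
#   k_a, k_b, k_z = ({b: os.urandom(16) for b in (0, 1)} for _ in range(3))
#   g = garble_gate(lambda i, j: i & j, k_a, k_b, k_z)
#   assert eval_garbled_gate(g, k_a[1], k_b[1]) == k_z[1]
```
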

We can summarize the VC construction as follows:
  • KeyGen: The circuit C representing the function F is garbled and the secret key is the set of labels used for the wires, i.e. sk = ∪_i {k_i^0, k_i^1} over all wires i, whereas the public key is the set of shuffled ciphertexts, i.e. pk = ∪_g {γ_g^00, γ_g^01, γ_g^10, γ_g^11} over all gates g. 
  • InputGen: A pair of secret/public keys (skFHE,pkFHE) for the FHE scheme is generated. Let w1,…,wk be the bits of the input x; the labels corresponding to those input-wire values are encrypted under pkFHE to get the ciphertexts c1,…,ck. The public information is then set to σx=(pkFHE,{c1,…,ck}), whereas the secret information is τx=skFHE.
  • Compute: Using the homomorphic property of the FHE scheme, the server evaluates the garbled circuit under the encryption and obtains ciphertexts containing the labels of the circuit's final output wires. Letting w̃i for i=1,…,n denote the labels of those final output wires (i.e. the bits of y), the server returns σy={c̃1,…,c̃n}. 
  • Verify: Use skFHE to decrypt each c̃i and then use the wire labels stored in sk to map the recovered label to the corresponding bit i of y. Reject if either the decryption or the mapping fails. 
 The above protocol offers input/output privacy but the offline phase is not efficient.

The second construction is by Chung, Kalai and Vadhan from Crypto 2010 [pdf]. In their paper, they give different variants of VC protocols. Their approach works by allowing the client in the preprocessing phase to sample a random input r from the same distribution as the input x and compute F(r). Then in the online phase, the client sends (r,x) in an arbitrary order (unknown to the server) to the server. The server evaluates the function F on both inputs and then returns the results to the client, who compares the F(r) returned by the server with the value already computed in the preprocessing phase and accepts if they are the same. The soundness error of this protocol is 1/2.

Note here that if the server learns x, it can easily cheat and therefore x needs to be hidden from the server. In another variant of the protocol, a deterministic FHE scheme is used to encrypt the input x by computing x̂=EFHE(pkFHE,x). Also, during the preprocessing phase the client computes r̂=EFHE(pkFHE,0…0) and, exploiting the homomorphic property of the FHE scheme, the client computes F(r̂). Instead of being given (x,r) in the clear, the server is given (x̂,r̂) and, in a similar way to the previous variant, correctness is determined by comparing the precomputed F(r̂) with the value returned by the server. Again, the soundness error of this protocol is 1/2. To further reduce the soundness error, many instances x̂i and r̂i could be used instead of a single instance. This makes the soundness error exponentially small in the number of instances used.
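
The bookkeeping of the multi-instance idea can be sketched as follows in Python. This is only the plaintext skeleton: in the real protocol every entry in the batch is a fresh FHE ciphertext, which is what prevents the server from telling the decoys and the copies of x apart; here only the mixing and checking logic is illustrated, and all names are invented for the example.

```python
import random

def offline(F, sample_input, t):
    """Client precomputes F on t decoy inputs drawn from the input distribution."""
    decoys = [sample_input() for _ in range(t)]
    return decoys, [F(r) for r in decoys]          # heavy work, done once, offline

def make_batch(x, decoys):
    t = len(decoys)
    tagged = [("decoy", i, r) for i, r in enumerate(decoys)] + \
             [("real", i, x) for i in range(t)]
    random.shuffle(tagged)                         # server must not learn which slots hold x
    return [v for (_, _, v) in tagged], tagged

def verify(tagged, results, precomputed):
    real_answers = []
    for (kind, i, _), y in zip(tagged, results):
        if kind == "decoy" and y != precomputed[i]:
            return None                            # wrong answer on a decoy: reject
        if kind == "real":
            real_answers.append(y)
    if len(set(real_answers)) != 1:
        return None                                # inconsistent answers on copies of x: reject
    return real_answers[0]

# usage (an honest server just maps F over the batch):
#   F = lambda n: pow(3, n, 2**61 - 1)
#   decoys, pre = offline(F, lambda: random.getrandbits(32), t=10)
#   batch, tagged = make_batch(x, decoys)
#   y = verify(tagged, [F(v) for v in batch], pre)
```
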

All the variants we mentioned so far are one-time, i.e. the values r̂i cannot be reused. To convert the previous variant to a many-time use protocol, a double encryption is used where the values {pkFHE,{r̂i},{x̂i}} are further encrypted using a different public key pk'FHE.

The four variants we mentioned so far require the client to perform heavy computation in the offline phase. The authors also propose another variant in which the offline phase is made more efficient but involves interaction. The idea is that the computation in the offline phase is also delegated.

The final construction is by Canetti, Riva and Rothblum from ACM CCS 2011 [pdf]. The protocol involves two servers, S0 and S1, of which at most one is dishonest; the client, however, does not know which is which.

The servers are modeled as deterministic Turing Machines (i.e. they do not have a random tape). The client sends the same input to both servers and, if the returned results differ, searches through the sequence of intermediate states of the two executions (a binary search can be used for this task) for a step at which the two servers agree on state_i but disagree on state_{i+1}. At that point, the client computes the transition from state_i to state_{i+1} itself to identify which of the two servers returned the incorrect result.
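
The search for the diverging step can be sketched as follows, assuming (for simplicity of illustration) that the client already holds the full state traces claimed by the two servers; in the actual protocol the states are fetched from the servers on demand, and the names below are invented for the example.

```python
def find_cheater(states0, states1, transition):
    """Return the index (0 or 1) of the server whose trace contains an invalid step.
    states0/states1: the state sequences claimed by the servers, equal at the start,
    different at the end. transition: one step of the (deterministic) machine."""
    assert states0[0] == states1[0] and states0[-1] != states1[-1]
    lo, hi = 0, len(states0) - 1            # invariant: traces agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if states0[mid] == states1[mid]:
            lo = mid
        else:
            hi = mid
    # the client recomputes just this one transition itself, which is cheap
    correct_next = transition(states0[lo])  # states0[lo] == states1[lo] by the invariant
    return 1 if states0[hi] == correct_next else 0
```
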

This approach could be extended to more than two servers. This construction does not satisfy input/output privacy. On the other hand, it satisfies public verifiability and public delegation. Also, this approach does not require any preprocessing from the client.

Sunday, January 22, 2012

Dagstuhl Symmetric Cryptography Seminar, Latter Half

After the Monday and the Tuesday, the Dagstuhl seminar surprisingly continued on Wednesday, but for the sake of symmetry, I'll concentrate on what I considered the highlights of the last two days of the seminar (being the Thursday and the Friday) in this blog entry. My apologies to those who spoke but are not covered here.

Tetsu Iwata presented his recent attack on EAX-prime. Tetsu gave a wonderful talk and modestly apologised for the simplicity of the attack. It is always instructive to see how a relatively small change (from EAX to EAX-prime) can cause such a huge difference (from provably secure to rather broken).

Orr Dunkelman continued his Crypto'11 rump session talk, describing the effectiveness of meet-in-the-middle attacks on IDEA. For the initial attack, already an improvement on prior art, all that seemed to matter was IDEA's key scheduling. Since, in the first few rounds, the key can be split up into a part that is used only in the first round and another part that is used only later on, a meet-in-the-middle attack becomes possible. Orr and his coauthors then went further by combining this new idea with known techniques, such as the Biryukov-Demirci relation and splice-and-cut.

In the late afternoon session, Antoine Joux presented new results on bit splitting for knapsacks and codes. The idea arises from his Eurocrypt'10 work with Howgrave-Graham on finding knapsack solutions by representing the solution in many different ways and applying Wagner's generalized birthday algorithm to find one of the (by now many) representations. Last year the method for knapsacks was improved and independently applied to the problem of decoding random linear codes. By combining these latter two ideas, a further improvement is possible, giving rise to the asymptotically fastest decoding algorithm to date.

Elena Andreeva gave two short talks welded together. The first part was an update on provable results for the remaining five SHA-3 candidates. The second part was more tentative, as she gave a new definition to measure blockcipher security. The hope is that this definition will be useful to make more concrete what it means for a cipher to be secure against (or succumb to) a related-key or known-key attack. It was still very much work in progress, but I really liked the approach and it was obvious that I was not the only one!

The final talk on Thursday was given by Phil Rogaway. He presented a completely new approach to designing a blockcipher. Here a single round of the blockcipher defines a keyed involution with a particular structure. Due to this structure, Phil and his coauthors were able to prove how many rounds were sufficient to achieve provable security against IND-CCA type attacks. The work was information-theoretic in nature (so the round keys for the involutions were horrendously large), but one can argue that such a design still provides a heuristic once one makes the design more efficient (with the proof then given in a private random oracle model).

Thursday evening there were no talks, but apparently there was an opportunity to watch Enigma, the movie with Kate Winslet. I'd already seen it once, so instead I played some pool with the other attendees. It's always fun; while some were better than others, the matches were all reasonably even. On my way to the wine, I also noticed Ewan Fleischmann was playing chess against Christian Forler. Ewan did fairly well while I was watching, but seemed to have screwed up a bit in between bottles.

The last day only had a morning program, before people would disband after lunch. John Steinberger presented a continuation of his talk of Monday, but in a relatively standalone fashion. John's ability to condense problems from probability theory to their core keeps amazing me.

All in all, I enjoyed the seminar a lot and after my first semester of teaching, it was good and inspiring to see the forefront of research again. So I'd like to take this opportunity to thank the organizers, Frederik Armknecht, Stefan Lucks, Bart Preneel, and Phil Rogaway, for the wonderful job they did, which included finding last-minute replacements for last-minute cancellations. Special thanks also to Kenny Paterson for travelling to Dagstuhl together! In about a week's time there will be the workshop on the practical relevance of cryptographic theory. Since a large part of our group will go there, expect regular updates on this blog!

Saturday, January 21, 2012

Dagstuhl Symmetric Cryptography Seminar

Last week the by now traditional January retreat on symmetric cryptography took place in Schloss Dagstuhl. The intention is to have a retreat there every other year, with a sibling retreat in Luxembourg in the odd years in January. I had been to the past two Luxembourg events, but this was the first time I had the honour and pleasure to visit Dagstuhl. Recently there has been a real boom in security and cryptography there, as for some this symmetric crypto seminar was the fourth opportunity in the last half year to visit (earlier crypto-related workshops were on Verifiable Elections and the Public, quantum cryptanalysis, public key cryptography, secure computing in the cloud, and Privacy and Security in Smart Energy Grids).

I had heard a lot about Dagstuhl (and its mathematical cousin Oberwolfach) in the past, so I was quite excited about going. The organizers had done a sterling job in picking a good mix of cryptographers, so that symmetric crypto could be covered from all relevant angles. The program was representative of this and what I really liked was the blend of presentations: there was work in various stages of the lifecycle (recently published work, accepted work awaiting publication, submitted work, and work still very much in progress); there were slick powerpoint/beamer presentations as well as black board presentations (prepared or impromptu); and the topics represented the attendees well.

My own talk was on the first day, which, due to poor planning on my part, meant skipping the first session. In this session Kenny Paterson talked about his Asiacrypt'11 paper with Tom Ristenpart and Tom Shrimpton on some recent attacks on and proofs for the TLS record protocol. While I missed this particular talk, Kenny and Tom S. had discussed the problem with me in the past. The attack is quite cute and the proof, I've been told, is somewhat subtle, so I am looking forward to a full version of the paper with all the details. Another talk of the morning session was by John Steinberger. He gave a blackboard talk on an upcoming Eurocrypt paper and the buzz afterwards was absolutely astounding, so I was quite bummed to have missed his talk.

In the afternoon, the talk that probably stood out most was that by Dan Bernstein on authenticated ciphers. The main thrust of his talk was that, with the SHA-3 competition soon coming to a close, cryptographers would need a new target to focus on. His suggestion was to start a competition for an authenticated cipher design and his choice was well motivated: it allowed innovations in both modes-of-operations and in primitives; moreover, most symmetric primitives known to man could potentially be brought into the fold. There was a later discussion on Wednesday, where several criticisms were raised. Orr Dunkelman suggested it was too soon for such a competition: it is a commonly held belief that the SHA-3 competition drew attention away from SHA-1 and SHA-2 and perhaps it was time for a break from competitions to reevaluate the current crop of primitives. Some suggested the competition was too broad and should focus more (e.g. on just MACs), whereas others held that widening the scope and looking at secure channels (or storage) might be a better approach. Yet overall the feeling for Dan's initiative was positive. There is an ECRYPT II workshop on authenticated ciphers in July which will undoubtedly lead to a clearer view of what an AEAD competition could look like.

On Tuesday the afternoon was reserved for a wonderful hike. In the morning Alex Biryukov talked about Cesar, which was an acronym for a very large scale AES cracker based on related keys. He had computed the absolutely staggering costs of a machine to recover an AES key. The number of chips required was so large that current production facilities would be a serious bottleneck. As a remedy, Alex had included the cost of building several 1 billion dollar fabs into his estimates. I was a bit disappointed that he did not compute the cost of the power plants needed to power his device (as running the attack would consume around the same amount of energy as the entire USA does in a year).

The second talk of the morning was a wonderful talk by Itai Dinur, Orr Dunkelman and Adi Shamir on multiple results on multiple encryption. Adi started by laying out the problem and whetting our appetite. Orr followed up by giving the details of the algorithms they had discovered, assisted from the sidelines by Adi. Finally Itai demonstrated the generality of their approach by giving an application, quite surprising to me, to another problem, improving on certain tradeoffs recently established in the literature.

I also really enjoyed Gregor Leander's talk, which dealt with the rigour displayed in linear cryptanalysis papers. For these results, dependent bits are often treated as if they were independent, which renders all results heuristic, and possibly wrong (as one cannot easily check whether (say) a 2^126.42 attack really has the advertised running time). There are various attempts in the literature to tackle this lack of formalism. As Gregor explained:

  • The authors ignore the issue altogether.

  • The authors acknowledge the issue, and continue as before.

  • The authors acknowledge the issue, assume the bits are independent (even though they clearly are not), and continue as before.

  • The authors acknowledge the issue, assume the bits behave as if independent (where the behaviour can be restricted to that relevant for the attack), and continue as before.

  • As above, but with experimental back-up that the behaviour is as expected for small parameters.


It did put a smile on my face, in no small part due to Gregor's wonderful delivery. He also had a more serious continuation, where he showed three completely distinct counterexamples to a related lemma by Daemen and Rijmen, who tried to give sufficient conditions for a formal treatment of linear biases.

Friday, January 20, 2012

The Dolev-Yao Model and Formal Verification

The topic of the first study group of the new year was cryptographic protocol verification. Anna-Lisa and Peter were tasked with introducing the Dolev-Yao model and some of the tools used to verify properties of cryptographic protocols.

The well known paper of Dolev and Yao [pdf] provided a simple model for verification, and many of the main principles are still used today. The original motivation for the paper was to verify public key protocols against active attackers with considerable power. In their framework the following assumptions are made:

  1. The underlying public key system is 'perfectly secure'
    • one-way functions are unbreakable
    • public directory is secure and cannot be tampered with
    • everyone has access to all the public encryption functions E_X
    • only X knows the corresponding decryption function D_X

  2. Adversary has complete control over the entire network
    • acts as a legitimate user, and can obtain any message from any party
    • can initiate the protocol with any party, and can be a receiver to any party in the network

  3. Concurrent executions of the protocol can occur

The model has the following features, which can be viewed as either benefits or restrictions depending on what you are trying to do:

  • It is simple to describe protocols in this model
  • The adversary has unlimited power; this is a conservative approach, but may not always be realistic
  • Protocols have a 'black-box' nature, which means that composing individual protocols with others is extremely difficult

As an example Peter introduced the Needham-Schroeder protocol [pdf], which is used to establish a shared secret between two parties, A and B. This example shows the motivation for using verification tools to find attacks. The protocol runs as follows:

  • A → B : {N_A, A}_B
  • B → A : {N_A, N_B}_A
  • A → B : {N_B}_B

In the first step, A creates a nonce N_A and encrypts it, along with its own identity, under the public key of B. Party B then creates its own nonce N_B and sends it, along with A's nonce, encrypted under A's public key, back to A. In the final step, A returns N_B encrypted under B's public key, demonstrating that it was able to decrypt B's message. In 1996 Lowe presented a simple man-in-the-middle attack [pdf] on this protocol and an amendment which fixes the vulnerability. The attack involves an impostor (I) who convinces B that they are in fact communicating with party A. We write I(A) to indicate I posing as A. The attack runs as follows:

  • A → I : {N_A, A}_I
  • I(A) → B : {N_A, A}_B
  • B → I(A) : {N_A, N_B}_A
  • I → A : {N_A, N_B}_A
  • A → I : {N_B}_I
  • I(A) → B : {N_B}_B

After the final step, B believes he has established a shared secret with A, whereas in fact both nonces are known to I. The fix suggested by Lowe changes the second protocol message B → A : {N_A, N_B}_A to include an identity check: B → A : {N_A, N_B, B}_A.
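
Lowe's attack can be replayed symbolically with a few lines of Python; the helpers below are a toy Dolev-Yao-style model (ciphertexts are tuples that only the named recipient may open) and all names are illustrative.

```python
def enc(recipient, *payload):
    return ("enc", recipient, payload)        # {payload}_recipient

def dec(me, ct):
    tag, recipient, payload = ct
    assert tag == "enc" and recipient == me, "only the intended recipient can decrypt"
    return payload

N_A, N_B = "nonce_A", "nonce_B"               # stand-ins for fresh random nonces

msg1 = enc("I", N_A, "A")                     # A -> I    : {N_A, A}_I
na, sender = dec("I", msg1)                   # I opens A's message ...
msg2 = enc("B", na, sender)                   # I(A) -> B : {N_A, A}_B
msg3 = enc("A", dec("B", msg2)[0], N_B)       # B -> I(A) : {N_A, N_B}_A
msg4 = msg3                                   # I -> A    : forwarded unchanged (I cannot open it)
msg5 = enc("I", dec("A", msg4)[1])            # A -> I    : {N_B}_I
msg6 = enc("B", dec("I", msg5)[0])            # I(A) -> B : {N_B}_B
assert dec("B", msg6)[0] == N_B               # B accepts, believing it shares N_A, N_B with A
print("I knows both nonces:", na, dec("I", msg5)[0])
```
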

The Dolev-Yao paper covers cascade protocols, in which the only operations allowed on messages are encryption and decryption, and thus D_X E_X = E_X D_X = id. Note that textbook RSA satisfies this property, but in practice no real systems do. It is, however, a useful and simple class of protocols that can be formally verified. In this model we have two parties A and B and n functions f_1,...,f_n such that, at the start of the protocol, A knows a message M and f_1 is applied to M; for even i the functions f_i are applied by B and for odd i the functions f_i are applied by A. An adversary Z has all of the f_i, E_X for all users X, D_Z and f_1(M). The protocol is said to be insecure if Z can construct a function g such that g ∘ f_1 = id.

Note that in a Diffie-Hellman key exchange procedure modified for this framework we have f_1 = E_B ∘ D_A and f_2 = E_A ∘ D_B; an adversary can simply set g = f_2, since g ∘ f_1 = E_A ∘ D_B ∘ E_B ∘ D_A = id, proving the protocol insecure. Although this is a trivial example, it should indicate how these processes work for more complex systems.
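
The cancellation check performed here can be written as a tiny symbolic sketch (illustrative only): words over the symbols E_X and D_X are freely reduced with the rule E_X D_X = D_X E_X = id.

```python
def reduce(word):
    """Free reduction: cancel adjacent E_X / D_X pairs for the same party X."""
    out = []
    for sym in word:
        if out and sym[0] != out[-1][0] and sym[1:] == out[-1][1:]:
            out.pop()                     # E_X next to D_X (either order) cancels
        else:
            out.append(sym)
    return out

# a word is applied right-to-left, so ["EB", "DA"] applied to M means E_B(D_A(M))
f1 = ["EB", "DA"]                         # f_1 = E_B o D_A
f2 = ["EA", "DB"]                         # f_2 = E_A o D_B
g = f2                                    # the adversary's choice
print(reduce(g + f1))                     # [] : g o f_1 is the identity, so the protocol is insecure
```
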

The Dolev-Yao paper also includes so-called Name-Stamp protocols that include identities of the parties, and explains how the verification process works in this setting. The paper includes proofs that verification of both name-stamp protocols and cascade protocols run in polynomial time.

Next it was Anna-Lisa's turn to discuss some of the ideas behind ProVerif, introduced by Bruno Blanchet in 2008 [pdf], a tool that is used to check security properties such as secrecy and authenticity.

The problem of verifying the security of a protocol is undecidable in general, and tools like ProVerif aim to provide answers wherever possible. One source of the undecidability is the fact that an unlimited number of sessions can occur. To deal with this there are two approaches:

  1. Consider some restriction, for example putting a bound on the number of sessions
    • useful for finding attacks
    • not able to prove correctness as attacks may be missed
    • the analysis will always terminate

  2. Over-approximations (as used in ProVerif)
    • used to prove correctness
    • could potentially find false attacks
    • in some cases this will be non-terminating

Informally, secrecy of a message holds if it is kept secret from a Dolev-Yao adversary in the execution of the protocol. Authenticity means that if party A thinks that he is talking to B then this is in fact the case.

Much work has gone into formal verification in recent years, and ProVerif has been used in the following scenarios: Abadi and Blanchet verified a certified email protocol [pdf], then along with Fournet analysed the Just Fast Keying protocol [pdf], and Bhargavan et al. analyzed the TLS protocol [pdf], amongst others.

In ProVerif, protocols are defined by means of the pi-calculus, which is a formalism for describing concurrent systems. In the first stage, the protocol written in the pi-calculus and the cryptographic primitives act as inputs to ProVerif, along with the property that we wish to prove. An automatic translator then turns these inputs into Horn clauses and derivability queries (an expression of the processes in an abstract form), and a resolution procedure produces one of three outputs:

  • Protocol is Correct
  • There is a Potential Attack
  • Does Not Terminate

The syntax of the pi-calculus can be found on page 6 of [pdf]. The abstraction into Horn clauses is crucial to the treatment of an unbounded number of sessions. Due to this abstraction, the tool is successful at proving security in many cases, but can fail on correct protocols.
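
To give a flavour of the Horn-clause abstraction (this is not ProVerif's syntax or its resolution algorithm, just a hand-rolled illustration), one can think of facts as terms the attacker knows, rules as ways of deriving new knowledge, and a derivability query as asking whether a secret ever becomes known:

```python
from itertools import product

def senc(m, k): return ("senc", m, k)      # symbolic symmetric encryption

def depth(t):
    return 1 + max(map(depth, t[1:])) if isinstance(t, tuple) else 1

def saturate(facts):
    facts = set(facts)
    while True:
        new = set()
        for a, b in product(facts, repeat=2):
            # attacker(senc(m, k)) & attacker(k)  ->  attacker(m)
            if isinstance(a, tuple) and a[0] == "senc" and a[2] == b:
                new.add(a[1])
            # attacker(m) & attacker(k)  ->  attacker(senc(m, k))
            new.add(senc(a, b))
        new = {t for t in new if depth(t) <= 3}   # bound term size so this toy saturation is finite
        if new <= facts:
            return facts
        facts |= new

# messages observed on the network: {secret}_k1, and later k1 leaked in the clear
knowledge = saturate({senc("secret", "k1"), "k1"})
print("secret" in knowledge)               # True: the derivability query succeeds
```
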

Problems can occur in protocols where certain values that are initially secret become public after a length of time: the Horn clause model considers that such a value can be reused at the beginning of the protocol, which can lead to a false attack being reported.


Monday, January 16, 2012

VI-MSS workshop: Mathematical and Statistical Aspects of Cryptography

The Virtual Institute for Mathematical and Statistical Sciences (VI-MSS) hosted a workshop on the Mathematical and Statistical Aspects of Cryptography from the 12th to the 14th of January. It took place at the Indian Statistical Institute (ISI) in Kolkata, India. The main focus of the workshop was public key cryptography and this resulted in talks on a broad range of subjects such as lattices, elliptic curves, RSA and even braid cryptography. These talks ranged from practical (the bit-security of ECC and parallel hardware implementations for cryptanalysis) to theoretical (weakly-programmable random oracles and non-malleable commitments). A sizeable portion of the talks was about lattice-based cryptography and the disparity between theory and practice in this area was a general theme throughout the talks. This led to discussions on how to bridge this gap between theory and practice.

One good example of how to do this was illustrated by Damien Stehle in his talk on making NTRU as secure as worst-case problems over lattices (pdf). While this paper describes some changes to the NTRU cryptosystem that lead to theoretical proofs of security, Damien explained that the point was not that these changes should be made in practice, as the result will not be as efficient as desired. Rather, his point was that NTRU is already quite close to something that is provably secure, which should inspire confidence in the system. Additionally, some minor changes might improve the security while retaining efficiency.

Another interesting talk was given by Nadia Heninger on approximate divisors via lattices (pdf). The approximate integer common divisor problem was first posed by Howgrave-Graham (pdf), who also proposed an algorithm to solve it using lattice reduction. The problem arises when factoring numbers while having some information on one of the factors, which is of interest in the RSA setting, where the modulus can be factored if enough bits from one of the primes have leaked. Recently, a multivariate version of the problem was used as a hardness assumption for the construction of fully homomorphic encryption over the integers (pdf) and for an improved version of this construction (pdf). In the talk, Nadia showed how to extend Howgrave-Graham's approach to this multivariate version, the partial approximate common divisor problem. An issue that arises in the analysis of this problem is that current lattice reduction methods such as the LLL algorithm have approximation factors that are exponential in the dimension of the lattice. If these approximation factors were even slightly subexponential in the dimension, this would lead to a polynomial-time break of the fully homomorphic encryption schemes based on approximate common divisor problems. Nadia also showed an analogue of approximate common divisor problems for polynomials, which leads to applications in coding theory such as Reed-Solomon codes and Parvaresh-Vardy codes. An interesting open problem proposed in the talk was to extend the approach to number fields without having to perform lattice basis reduction on the canonical embedding. This would allow for more efficient attacks on fully homomorphic encryption over ideal lattices, as using the canonical embedding leads to a gigantic blowup of the exponential factor for lattice basis reduction methods in this case.

Tuesday, January 10, 2012

Turing Year

Yesterday was the first meeting in Cambridge of the six-month workshop devoted to the legacy of Alan Turing. The talks were all introductory in nature; and rather than discuss here what they were about, I thought it might be worth looking at Turing's legacy in crypto somewhat.

The first thing the layman would think of is the second world war work on Enigma. In this Turing took the early work of the Polish cryptographers and helped design (with Welchman) the Bombe. This was an electro-mechanical machine used for finding wheel settings of the Enigma machines. It was not a computer! The key to breaking Enigma was that the Enigma machine never encrypted a letter to itself, and if (for a particular setting) A encrypted to E, then E would encrypt to A. It was this last property that allowed Welchman's diagonal board to improve the Bombe's performance.

The layman also often confuses Enigma with Tunny (a.k.a. Lorenz). The breaking of Lorenz was by far the most impressive bit of work done by Bletchley Park: the initial mathematics was done by Bill Tutte, and the design of the resulting machine, Colossus, was the work of Tommy Flowers. However, Colossus was a real computer; it was digital, had input and output devices, an "ALU" and could be programmed (sort of). Thus whilst Turing may not have had a hand in the design of Colossus, one can see that Colossus follows on from Turing's early paper on the decision problem.

In fact it is Turing's work on the decision problem which has had the most lasting impact on cryptography. In that paper Turing imagined a computing machine (now called a Turing machine) which was able to perform any computational task. The key idea was that there was one machine, now called the Universal Turing Machine, which could be "programmed" to perform any specific computational task. This is quite a revolutionary idea; it is the reason why you only need to buy one computer and it can do word-processing, games, internet access etc. The computer is "general purpose". Compare this to your kitchen where you have a toaster for toasting, a kettle for boiling water, a stove for boiling saucepans, and an oven for baking.

Imagine if you needed to buy a different device for each computational task you wanted to do. For example one device to read books (a Kindle), one device to play games (an XBox), one device to listen to music (an iPod). Hmmm, we seem to be going backwards these days!!!

Anyway back to the plot. The Turing machine is not only important as it is basically what a modern computer is; it is also important as it models what an arbitrary computer can do. Thus in cryptography we always consider our adversaries to be Turing machines; or more specifically probabilistic Turing machines whose run-times are bounded by some polynomial function of the length of their input (a PPT, or Probabilistic Polynomial-time Turing machine).  I would say Turing's name is mentioned more often than any other name in cryptographic talks; even if it is just by the TLA of a PPT.

Finally, there is one other philosophical contribution Turing made to cryptography, in my opinion. The "Turing Test" in Artificial Intelligence is about whether someone can tell the difference between a real human and a computer by simply asking questions via a remote link. In this test the computer's goal is to simulate the human so that an observer is unable to tell the difference between the human and the simulator. This is exactly what we do in many security proofs in cryptography. For a specific algorithm (having access to some special bit of information or functions) we define a simulator (which does not have access to the information or functions). We can then argue that if someone cannot tell the difference between the simulator and the real algorithm, then the algorithm is "secure".

Anyway, welcome to the Turing Year of 2012