The latest study group, on signature schemes from standard assumptions, was given by Florian Böhl, who is currently visiting Bristol. He discussed three schemes based on the (bilinear) Computational Diffie-Hellman assumption: the scheme by Waters from Eurocrypt '05, the scheme by Hohenberger and Waters from Eurocrypt '09 and finally a new scheme which is to appear at Eurocrypt '13. Theoretically, digital signature schemes are a weaker primitive than public-key encryption, as signatures are equivalent to one-way functions whereas PKE requires something stronger. On the other hand, it seems much more difficult to construct practical signature schemes than practical PKE schemes. The most efficient known signature schemes rely on "strong" assumptions, where the adversary only needs to break one instance of the system out of many possible instances. The schemes that Florian discussed are based on simpler assumptions, such as the standard (bilinear) Computational Diffie-Hellman assumption and the standard RSA assumption.
The security notions for signature schemes are slightly different from those for encryption schemes. Compare with message indistinguishability for encryption, where the adversary chooses two messages, gets the encryption of one of them and has to guess which message was encrypted. For signatures, the adversary instead submits several messages and receives corresponding signatures, and must finally output a valid signature on a new message of his choosing. The only restriction on this message is that it was not queried before, which leaves the adversary many options in choosing his challenge.
An important notion of security for digital signatures is Existential Unforgeability under Chosen Message Attack, which can be non-adaptive (EUF-naCMA) or adaptive (EUF-CMA). In the non-adaptive case, the adversary chooses q messages and sends them to the challenger. He then receives the public parameters and valid signatures for these q messages and has to output a fresh message with a valid signature for this message. In the adaptive case, the adversary receives the public parameters at the start and can adaptively query q messages one by one, receiving a valid signature every time. As in the non-adaptive case, he needs to output a fresh message with a valid signature.
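To make the adaptive game concrete, here is a minimal sketch of the EUF-CMA experiment in Python. The keygen/sign/verify/adversary callables are placeholders for some concrete scheme and attacker, not any particular construction discussed in this post.

```python
def euf_cma_experiment(keygen, sign, verify, adversary, q):
    """Adaptive EUF-CMA experiment (a sketch).  The adversary receives the
    public key and a signing oracle it may query up to q times, and wins if
    it outputs a valid signature on a message it never queried."""
    pk, sk = keygen()
    queried = set()

    def signing_oracle(message):
        if len(queried) >= q:
            raise RuntimeError("signing-query budget exhausted")
        queried.add(message)
        return sign(sk, message)

    message, signature = adversary(pk, signing_oracle)
    # A forgery must be on a fresh message and must verify.
    return message not in queried and verify(pk, message, signature)
```

The non-adaptive (EUF-naCMA) variant would instead take the q messages from the adversary up front, before handing over the public key and the signatures.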
It is possible to turn a non-adaptively secure scheme into an adaptively secure one using a Chameleon Hash Function. A Chameleon Hash Function (CHF) is a special hash function with a trapdoor that allows you to find collisions. Specifically, it takes a message and some randomness as input, and given a new message the trapdoor allows you to find new randomness such that the chameleon hash of the first message and the first randomness equals that of the second message and the second randomness. The construction signs the chameleon hash with the non-adaptive scheme; in the security proof, the reduction can generate signatures on q random hash values in advance and then use the trapdoor to find the appropriate randomness for each of the adversary's adaptive queries. The randomness must also be published as part of the signature to allow verification. An adversary breaking the new scheme must either break the collision-resistance of the CHF or produce a forgery for the non-adaptive scheme.
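As an illustration, here is a toy Pedersen-style (discrete-log based) chameleon hash in Python. The group parameters are far too small to be secure and are only meant to show how the trapdoor yields collisions; this is one standard instantiation of a CHF, not necessarily the one used in the schemes above.

```python
import secrets

# Toy parameters: q | p-1, and g generates the order-q subgroup of Z_p^*.
p, q, g = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1     # trapdoor
    h = pow(g, x, p)                     # public hash key
    return x, h

def chameleon_hash(h, m, r):
    # CH(m, r) = g^m * h^r mod p, with m, r in Z_q
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def find_collision(x, m, r, m_new):
    # Using the trapdoor x, find r_new with CH(m, r) == CH(m_new, r_new):
    # r_new = r + (m - m_new) * x^{-1} mod q
    return (r + (m - m_new) * pow(x, -1, q)) % q

x, h = keygen()
m, r = 123, secrets.randbelow(q)
m_new = 456
r_new = find_collision(x, m, r, m_new)
assert chameleon_hash(h, m, r) == chameleon_hash(h, m_new, r_new)
```

Roughly, the signer publishes h and signs chameleon_hash(h, m, r) with the non-adaptive scheme; only the security reduction ever needs the trapdoor x.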
At Eurocrypt '05, Waters presented an Identity-Based Encryption scheme with a security reduction to the decisional Bilinear Diffie-Hellman assumption. It is possible to construct a signature scheme from any Identity-Based Encryption scheme using a generic conversion (an observation attributed to Naor in the Boneh-Franklin paper). An IBE scheme is an encryption scheme where the public key is someone's identity (such as their email address) and a master secret key can be used to derive a secret key for each identity. To turn this into a signature scheme, one views the message to be signed as an 'identity' and gives the associated secret key as the signature of the message. To verify a signature, the verifier can encrypt a random value to the message (the 'identity' in the IBE scheme) and then decrypt it using the signature (the secret key associated to this identity). The security of this signature scheme is equivalent to the security of the underlying IBE scheme, which in Waters' case is decisional BDH. However, Waters shows that it is possible to use the bilinear map to verify signatures deterministically, which results in a signature scheme with a security reduction to the weaker Computational Diffie-Hellman assumption.
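The conversion itself is purely generic. Below is a schematic Python sketch assuming a hypothetical IBE object exposing setup/extract/encrypt/decrypt methods (this interface is illustrative, not a real library API); in Waters' scheme the probabilistic check is replaced by the deterministic pairing-based verification mentioned above.

```python
import secrets

def ibe_to_signature_scheme(ibe):
    """Generic IBE-to-signature conversion (a sketch).  `ibe` is assumed to
    provide setup(), extract(msk, identity), encrypt(mpk, identity, plaintext)
    and decrypt(user_key, ciphertext); the interface is hypothetical."""

    def keygen():
        mpk, msk = ibe.setup()
        return mpk, msk                  # verification key, signing key

    def sign(msk, message):
        # The signature is the IBE user secret key for identity = message.
        return ibe.extract(msk, message)

    def verify(mpk, message, signature):
        # Probabilistic check: encrypt a random value to identity = message
        # and see whether the claimed secret key decrypts it correctly.
        challenge = secrets.token_bytes(16)
        ciphertext = ibe.encrypt(mpk, message, challenge)
        return ibe.decrypt(signature, ciphertext) == challenge

    return keygen, sign, verify
```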
It turns out that Waters' scheme implicitly contains a Programmable Hash Function, a concept later formalized by Hofheinz and Kiltz (Crypto '08), which will be the topic of a study group in the near future. The PHF implicit in Waters' scheme has some adaptive properties, and hence the scheme does not require a CHF to be adaptively secure. Unfortunately this particular PHF is not very efficient, and therefore neither is the resulting signature scheme. In their scheme from Eurocrypt '09, Hohenberger and Waters instead use a slightly weaker notion due to Boneh and Boyen, taken from their selectively-secure signature scheme. The security of the resulting scheme is based on the RSA assumption, and the scheme is stateful, in the sense that the signer must maintain a state that counts the number of signatures issued. Hohenberger and Waters state that they believe the statefulness can be removed and that their scheme is a step towards realizing stateless signature schemes.
Finally, the new paper discussed at the study group, which is to appear at Eurocrypt '13, shows how to convert such stateful schemes into efficient stateless schemes. It uses a CHF and a 'weakly programmable hash function' as in the scheme by Hohenberger and Waters. The authors provide new and more efficient schemes based on the RSA, bilinear CDH and SIS assumptions.
Monday, March 18, 2013
2012 Turing Award Goes to Cryptographers
The Turing Award (a kind of Nobel Prize for Computer Science) has been awarded for 2012 to two cryptographers: Shafi Goldwasser and Silvio Micali. The announcement of this award came last week and is for their pioneering work on the foundations of cryptography. Over the years the list of Turing Award winners reads like a who's who of famous computer scientists (Hamming, Dijkstra, Knuth, Ritchie, etc.) and has included work in cryptography before: Andrew Yao (2000) and Rivest, Shamir and Adleman (2002).
Goldwasser and Micali's work forms the foundation of almost everything that modern cryptographers do. In a groundbreaking series of papers in the 1980s they defined the notion of semantic security, which is the de facto modern security definition for encryption schemes. They also presented the first encryption scheme which met this definition (the Goldwasser-Micali scheme). This public-key scheme is not very efficient, but the ideas behind its construction form the basis of all modern encryption algorithms.
With Rivest they produced the GMR security definition for digital signature schemes, otherwise known as existential unforgeability under chosen message attack (EUF-CMA). This definition again forms the basis of all modern constructions of digital signatures, such as the ones used to verify the validity of digital certificates in your browser or chip-and-pin cards.
However, their most interesting contribution is the idea of zero-knowledge. In many areas of computer science one is interested in a computer learning some information, say about a data set in biology. Areas such as machine learning try to develop algorithms which take data and learn from that data. In security we are, to some extent, interested in the opposite: we want to know that an algorithm cannot learn certain information (for example a secret key). To formalise this notion Goldwasser and Micali introduced zero-knowledge; that is, they formally defined what it means for an algorithm or process to learn nothing.
This is of great practical importance. When you go into a shop to purchase something with your chip-and-pin card, the card is used to authenticate you to the shop, and it is also used to authenticate the transaction. These authentication steps require using the secret key embedded in the card. However, we do not want the shop to learn the underlying secret, even if it sees a large number of transactions performed with your card. This is where zero-knowledge protocols come in. They guarantee that no matter how many transactions the merchant sees, it cannot use them to validate an invalid transaction. They also form the basis of protocols used in electronic locks, for example the key fob for opening your car door.
These three topics (semantic security, GMR security and zero-knowledge) form the basis of the cryptography teaching at Bristol University, and none of this would have been possible without the groundbreaking work of Goldwasser and Micali. It is also fitting that the 2012 award should go to cryptographers; after all, 2012 was the 100th anniversary of the birth of Alan Turing, and in a previous blog post we have discussed Turing's central contributions to cryptography.
Thursday, March 7, 2013
Study Group: A New Approach to Practical Active-Secure Two-Party Computation
This week's study group, given by Nigel, concerned the paper A New Approach to Practical Active-Secure Two-Party Computation (CRYPTO 2012) by J.B. Nielsen et al. In this paper the authors propose a 2-party UC protocol, computationally secure against a malicious (active) adversary in the random oracle model, in which two parties, Alice and Bob, with private input bits x and y respectively, jointly compute a boolean function f(x,y). The main motivation for the paper is to give a practical, efficient 2-party protocol with the standard requirements of 1) correctness (the output of the computation is correct) and 2) privacy (nothing is revealed beyond the output). To this end they use a new approach based on Oblivious Transfer (OT), actually getting efficiency from a traditionally expensive tool. In particular they introduce a new way of extending OTs: from a small number of seed OTs they derive, in an actively secure way, a large number of committed OT-like primitives.
Their approach is based on the passive-secure 2-party protocol of [GMW87], which makes extensive use of OT. The input bits x and y are secret shared so that Alice holds x_A, y_A and Bob holds x_B, y_B, with x = x_A ⊕ x_B and y = y_A ⊕ y_B. The boolean circuit computing f(x,y) is then evaluated gate by gate. Suppose the parties want to compute an AND gate: they need a random sharing z_A, z_B of z = xy = x_A·y_A ⊕ x_A·y_B ⊕ x_B·y_A ⊕ x_B·y_B. They locally compute t_A = x_A·y_A and t_B = x_B·y_B, and then, using OT, they compute shares of the cross products u = x_A·y_B and v = x_B·y_A, so that xy = t_A ⊕ t_B ⊕ u ⊕ v (a toy sketch of this gate evaluation is given below). To achieve active security the authors attach information-theoretic Message Authentication Codes (MACs) to all the shares. The first step is the oblivious authentication of bits (aBit): one party (for example Bob) holds a global key Δ_A ∈ {0,1}^k, where k is the security parameter, and, for each of Alice's bits x, a local key K_x ∈ {0,1}^k. Alice inputs x into an OT and receives the MAC M_x = K_x ⊕ x·Δ_A. Notice that if y is another bit and M_y = K_y ⊕ y·Δ_A its MAC, then Alice can locally compute M_x ⊕ M_y = (K_x ⊕ K_y) ⊕ (x ⊕ y)·Δ_A, i.e. a valid MAC on x ⊕ y. It is important that Bob uses the same global key Δ_A in every OT. To ensure this, a cut-and-choose-like technique is proposed: one authenticates twice as many bits as needed and then sacrifices half of them to check that they were all authenticated under the same global key. Building on aBit, the authors describe Authenticated AND (aAnd) and Authenticated OT (aOT): aAnd allows the parties to compute an authenticated bit c = ab from authenticated random bits a and b, and aOT performs an OT on previously authenticated bits and produces an authentication of the result.
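Here is the toy sketch referred to above: a GMW-style AND gate on XOR-shared bits, written in Python with the OT replaced by an idealized local function. It only illustrates correctness, not security, and none of the MAC machinery is included.

```python
import itertools, secrets

def ideal_ot(sender_msgs, choice_bit):
    """Idealized 1-out-of-2 OT: the receiver learns sender_msgs[choice_bit]
    and nothing else.  Here it is just a local lookup standing in for a real
    (actively secure) OT protocol."""
    return sender_msgs[choice_bit]

def and_gate(x_A, y_A, x_B, y_B):
    """Return XOR-shares z_A, z_B of z = (x_A ^ x_B) & (y_A ^ y_B).
    Alice holds (x_A, y_A), Bob holds (x_B, y_B)."""
    t_A = x_A & y_A                      # local products
    t_B = x_B & y_B
    s = secrets.randbelow(2)             # Alice's mask for u = x_A & y_B
    u_B = ideal_ot((s, s ^ x_A), y_B)    # Bob learns s ^ (x_A & y_B)
    r = secrets.randbelow(2)             # Bob's mask for v = x_B & y_A
    v_A = ideal_ot((r, r ^ x_B), y_A)    # Alice learns r ^ (x_B & y_A)
    z_A = t_A ^ s ^ v_A                  # Alice's output share
    z_B = t_B ^ r ^ u_B                  # Bob's output share
    return z_A, z_B

# Sanity check: the shares always recombine to the AND of the shared inputs.
for x_A, y_A, x_B, y_B in itertools.product((0, 1), repeat=4):
    z_A, z_B = and_gate(x_A, y_A, x_B, y_B)
    assert z_A ^ z_B == (x_A ^ x_B) & (y_A ^ y_B)
```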
Summing up, the authors describe a secure 2-party protocol for boolean functions in the preprocessing model. During the preprocessing phase they prepare random authenticated bits, random authenticated multiplication triples and random authenticated OTs, using aBit, aAnd and aOT. In particular, aAnd is used to get MACs on t_A = a_A·b_A and t_B = a_B·b_B, and aOT to securely compute the cross products u = a_A·b_B and v = a_B·b_A in c = ab = a_A·b_A ⊕ a_A·b_B ⊕ a_B·b_A ⊕ a_B·b_B; these will be consumed by the AND gates during the online phase, where the preprocessed data is used to actually evaluate the circuit. The efficiency is guaranteed by the OT-extension technique, which from a small number of long MACs on Bob's random bits derives many short MACs on Alice's random bits. Intuitively the OT-extension works as follows (a toy sketch follows the three steps):
1) Long MACs for Bob: this first step generates a few aBits with very long keys. Bob inputs random bits y_j, j = 1, ..., h, into OTs and receives
N_j = L_j ⊕ y_j·Γ_B,
for j = 1, ..., h, where N_j, L_j, Γ_B ∈ {0,1}^n, with h small (around 128) and n very big. This step is also called LaBit (Leaking Bits), as the key holder (Alice in this case) may learn a few of Bob's bits y_j.
2) Short MACs for Alice: this step generates many aBits with short keys.
- Rewrite the MACs created in 1) bitwise: writing L_j[i] for the i-th bit of L_j (and similarly for N_j and Γ_B), step 1) says N_j[i] = L_j[i] ⊕ y_j·Γ_B[i] for i = 1, ..., n and j = 1, ..., h, or in matrix form, with the vectors as columns,
[ N_1 | ... | N_h ] = [ L_1 | ... | L_h ] ⊕ [ y_1·Γ_B | ... | y_h·Γ_B ].
- Swap the roles of the MACs and the keys:
[ L_1 | ... | L_h ] = [ N_1 | ... | N_h ] ⊕ [ y_1·Γ_B | ... | y_h·Γ_B ].
- Flip rows and columns, i.e. read the matrices row by row rather than column by column: for i = 1, ..., n,
( L_1[i], ..., L_h[i] ) = ( N_1[i], ..., N_h[i] ) ⊕ Γ_B[i]·( y_1, ..., y_h ).
- Rename M_i = ( L_1[i], ..., L_h[i] ), K_i = ( N_1[i], ..., N_h[i] ), x_i = Γ_B[i] and Γ_A = ( y_1, ..., y_h ), from which we get the n short MACs
M_i = K_i ⊕ x_i·Γ_A,
for i = 1, ..., n, with M_i, K_i, Γ_A ∈ {0,1}^h and h of the order of the security parameter k.
This step is called WaBit (Weak Bits), as a few bits of the global key Γ_A may leak to the MAC holder.
3) Fix these leakage problems, obtaining a fully secure global key Δ_A.
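The toy sketch promised above: the following Python snippet mimics steps 1) and 2) with random data and checks that, after the flip and renaming, the short MAC relation M_i = K_i ⊕ x_i·Γ_A indeed holds. The leakage issues and the fix of step 3) are ignored, and the sizes are purely illustrative.

```python
import secrets

h, n = 8, 32   # toy sizes; in the protocol h is around 128 and n is large

# Step 1 (long MACs for Bob): Alice holds the long global key Gamma_B and long
# local keys L_j; Bob inputs bits y_j and receives N_j = L_j XOR y_j * Gamma_B.
Gamma_B = [secrets.randbelow(2) for _ in range(n)]
y = [secrets.randbelow(2) for _ in range(h)]
L = [[secrets.randbelow(2) for _ in range(n)] for _ in range(h)]  # L[j][i] = i-th bit of L_j
N = [[L[j][i] ^ (y[j] & Gamma_B[i]) for i in range(n)] for j in range(h)]

# Step 2 (short MACs for Alice): flip rows and columns and rename.
Gamma_A = y                                           # Bob's new (short) global key
x = Gamma_B                                           # Alice's n authenticated bits
M = [[L[j][i] for j in range(h)] for i in range(n)]   # Alice's short MACs
K = [[N[j][i] for j in range(h)] for i in range(n)]   # Bob's short local keys

# Check the n short MAC relations M_i = K_i XOR x_i * Gamma_A.
for i in range(n):
    assert M[i] == [K[i][j] ^ (x[i] & Gamma_A[j]) for j in range(h)]
print("all", n, "short MACs verify")
```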
Using these ideas the authors obtain the first practical 2-party computation protocol based on OT, running at a rate of about 20,000 gates per second.
Monday, March 4, 2013
Crypto is Dead; Long Live Crypto!
At last week's RSA conference, in the Cryptographers' Panel, Adi Shamir said that we need to prepare for a post-crypto world. The same basic idea was put forward by Jonathan Zittrain in his invited talk at last year's CRYPTO conference, which was entitled "The End of Crypto". The basic idea behind both statements is that in the modern world the attacker is already inside the box, so no matter how much encryption you use the bad guy will always get your data.
Let us elaborate on this idea a bit more. In the consumer field you may use your smartphone to store all sorts of important data (email addresses, passwords, banking data, etc.), but most users also download apps. Very few of these apps have had any form of code or security review, and so they could contain malicious code which compromises your phone and your security.
In the enterprise space the problem is even more acute. The existence of APTs (Advanced Persistent Threats) means that enterprises need to assume that the attacker is already within their perimeter. This has always been true, in that attacks often come from the inside, but now we have seen external attacks based on dormant code, lying in wait, within an enterprise's security perimeter.
The usual cryptographic model is that there is at least one honest person in a protocol. This is still true: the user of a system may be honest, but if we cannot trust the system they are on not to be compromised, we can trust essentially nothing. What is the point of encrypting data if the attacker already has the key? Even if the data has been encrypted successfully, at some point it needs to be decrypted so as to process it. At the point of decryption, if the attacker controls the machine, he also controls the decrypted data. So with such pervasive threats, how can crypto be of any use whatsoever? In some sense this means "Crypto is Dead!".
Luckily, however, cryptography is about more than just encryption and authentication. Modern cryptography gives us a number of tools which we can use to secure systems. In Bristol we have been working for a number of years on turning a theoretical technology called multi-party computation (MPC) into a practical reality. The basic idea behind MPC is the following: suppose a set of n parties have their own secret inputs, with party i holding input xi, and suppose they want to compute some function of these inputs, say
f(x1,...,xn).
MPC allows them to do this, via a protocol, in such a way that the parties learn the output of the function, but no one learns anything about the other parties' inputs. Traditional use-cases which have been proposed for MPC are the sharing of databases and the like.
However, MPC also allows one to solve the problem of what hardware to trust in an enterprise environment such as that described above. In other words, we can use MPC as a threat-mitigation strategy against APTs and other malware on our systems. We do this by altering the standard use case of MPC: instead of having multiple parties, we have one party who just happens to be using multiple computers.
Suppose we expect that an attacker is inside our network. If we put in place enough protection to ensure he is not on all of our computers at once, then we may be able to hold a secret key on an uncompromised computer; the trouble is that we do not know which computer is uncompromised. Using MPC we can get around this. Consider the simplified situation where we have two computers, where it is likely that one is compromised and the other is not. We now "split" the encryption key between the two servers, for example via a one-time pad. Each server then holds what is essentially a random string, but the two strings combined give the secret key. We treat the two random strings as the inputs x1 and x2 above, and now consider f as the encryption function. In this way the two servers can perform the encryption without anybody ever knowing the underlying secret key itself.
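As a minimal illustration of the key-splitting step (only the sharing, not the MPC evaluation itself), one could do something like the following in Python:

```python
import secrets

def split_key(key: bytes):
    """Split a secret key into two one-time-pad shares.  Each share on its
    own is a uniformly random string and reveals nothing about the key."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    # Shown only to check correctness; in the MPC setting the shares are never
    # brought back together -- the servers instead run an MPC protocol that
    # evaluates the encryption function on the shared key directly.
    return bytes(a ^ b for a, b in zip(share1, share2))

key = secrets.token_bytes(16)
s1, s2 = split_key(key)
assert recombine(s1, s2) == key
```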
We could use the same idea for messages. Instead of having one server decrypt some ciphertext, with the obvious problem of what happens when that server is compromised, a combination of servers could use MPC to decrypt the ciphertext and then use MPC to process the plaintext. In this way no single party ever sees the underlying key or the underlying plaintexts at all.
This might seem like pie in the sky, but we can actually perform such computations today, using protocols developed in Bristol and at other places around the world. For example, when we first implemented the AES functionality in a covertly secure manner back in 2009, we could evaluate a single AES block encryption between two parties in 60 seconds. Now this can take as little as 0.012 seconds, and we can process around 1000 such encryptions per second. With these latencies and throughputs we can actually implement solutions which help mitigate the problem of servers being compromised by APTs.
Of course MPC only acts as a threat-mitigation defence in such situations. If all machines are compromised, then all bets are off. But that is the same as with all security mechanisms: one needs to combine them to provide a secure system. The key message is that with MPC one no longer needs to place one's secrets all in one place. So whilst the traditional cryptographic model may be dead in the modern world, the application of cryptography to address security problems is far from dead. So Long Live Crypto!