Friday, August 31, 2012

FPL 2012, First day


In the side-channel session of this year's FPL conference in Oslo, Le Masle et al. presented a novel framework to detect power attacks on crypto-systems implemented on reconfigurable hardware. Roughly speaking, rather than putting expensive countermeasures in place, their strategy is to shut down the device as soon as a power analysis attack is mounted. Detection of a power analysis attack is done by monitoring the device supply voltage with a ring oscillator-based on-chip power monitor. With such a power monitor in place, it is possible to observe the power supply voltage and infer whether a power measurement circuit has been inserted into the device's power rail. The entire framework is relatively cheap to implement in hardware: it takes only 3300 LUTs of a Spartan-6 LX45 FPGA, which is 12% of the total area available.
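To give a rough idea of such detection logic (this is my own illustrative sketch, not the authors' circuit, which lives entirely in FPGA fabric): a ring oscillator's frequency tracks the supply voltage, so an oscillator count per sampling window that drifts too far from a calibrated baseline can be taken as a hint that something has been inserted on the power rail. The baseline, threshold and sample values in the Python snippet below are invented purely for illustration.

# Toy sketch of threshold-based tamper detection from ring-oscillator counts.
# The real monitor is an on-chip circuit; all values here are hypothetical.

BASELINE_COUNT = 10_000     # calibrated oscillator count per window at nominal voltage
THRESHOLD = 0.03            # relative deviation considered suspicious (assumed)

def tampering_detected(window_counts):
    """Return True if any sampled count deviates too much from the baseline."""
    for count in window_counts:
        deviation = abs(count - BASELINE_COUNT) / BASELINE_COUNT
        if deviation > THRESHOLD:
            return True
    return False

# A shunt resistor inserted for power measurement would lower the supply
# voltage slightly, and hence the oscillator count, in some windows.
samples = [10_010, 9_995, 9_600, 10_002]   # hypothetical measurements
if tampering_detected(samples):
    print("possible power-analysis setup detected: shutting down")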

In the second paper of the session, Mane et al. discussed a “new” approach to efficiently and “securely” implement block ciphers, taking AES as an example. Their approach is more or less a version of hiding, where the goal is to keep the power consumption constant by keeping the number of bit flips in each cycle constant. For their AES implementation, they use a second look-up table that holds the complementary bits of the original AES look-up table. In addition, they use a special data format, where each 32-bit word is split into two balanced half-words, and each bit from the original word is interleaved with an associated complementary bit. In theory, such a solution should result in a constant number of bit flips; in practice, however, everyone who has attempted to implement hiding countermeasures knows how difficult it is to do this in a secure way. The main challenge is to perfectly balance the circuits for each data bit with its complementary counterpart, a task that is almost impossible to realize on an FPGA device.
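As a rough illustration of the data format (my own sketch, not the authors' exact encoding or implementation), the Python snippet below interleaves each bit of a 32-bit word with its complement, so every encoded word has Hamming weight exactly 32; the idea behind such dual-rail-style encodings is that, together with suitable precharging, the switching activity should no longer depend on the processed data.

def encode_balanced(word: int) -> int:
    """Interleave each bit of a 32-bit word with its complement (64-bit result)."""
    out = 0
    for i in range(32):
        b = (word >> i) & 1
        out |= b << (2 * i)            # original bit
        out |= (b ^ 1) << (2 * i + 1)  # complementary bit right next to it
    return out

# Every encoded word is balanced: exactly 32 of the 64 bits are set,
# regardless of the value being processed.
for w in (0x00000000, 0xFFFFFFFF, 0xDEADBEEF, 0x12345678):
    assert bin(encode_balanced(w)).count("1") == 32

Achieving that property electrically, i.e. keeping the two rails perfectly balanced in the FPGA fabric, is exactly the hard part mentioned above.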

Friday, August 24, 2012

Crypto 2012, Day Four

Today was already the last day of Crypto'12. It was a thoroughly enjoyable conference, both for its many interesting talks and socially. On the symmetric side of the cryptologic spectrum, I noticed that many of the accepted papers had previously been presented as work in progress at the Dagstuhl seminar. The result by Itai Dinur, Orr Dunkelman, Nathan Keller, and Adi Shamir on Efficient Dissection of Composite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems was no exception (full version available as IACR eprint 2012/217). At the time, Shamir had asked the participants not to broadcast the results (hence the obfuscated blog coverage at the time), but now everything is public. The program committee awarded this paper the predicate ``Best''. As customary for the recipient of the Best Paper Award, the authors were granted additional speaking time, and likewise some extra attention to the wonderful talk by Itai Dinur seems in order.

It all begins with the simple and elementary observation that for a single blockcipher E with a k-bit key operating on k-bit blocks, full key recovery by exhaustive search takes 2^k time and virtually no memory. When the blockcipher is DES (the international standard for a long, long time) k=56 and exhaustive search is an issue. In order to increase resistance against this attack, a natural solution would be to repeat the blockcipher with two distinct keys, i.e. to compute E_L(E_K(X)). The resulting key (K,L) has length 2k, so one might hope that the best generic attack would take time 2^2k. Alas, a meet-in-the-middle attack allows much faster recovery. First compute and store a table of E_K(X) for all K, and subsequently compute E_L^-1(Y) for all L in the hope of hitting a tabulated element (resulting in a complete candidate key; one expects to find around 2^k candidate keys this way). For this reason, double encryption is hardly ever seen; instead one moves directly to triple encryption. The best known generic attack on triple encryption takes time 2^2k and memory 2^k. From these three examples, one might be tempted to conjecture that the time*memory complexity is lower bounded by the size of the keyspace. It turns out this conjecture is incorrect, as demonstrated by Dinur and his coauthors. Already for quadruple encryption, a better trade-off is possible by guessing the ``middle'' value (after double encryption). For a given input-output pair (X,Z) of quadruple encryption, and each guess Y, one can compute both a 2k-bit key that connects X with Y and one that connects Y with Z using the standard meet-in-the-middle. Thus in time 2^2k and memory 2^k one finds 2^2k full 4k-bit candidate keys (by trying more input-output pairs a unique key can be retrieved). Note that the total time*memory complexity of this attack is only 2^3k, constituting an improvement over the prior art.
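To make the meet-in-the-middle step concrete, here is a minimal Python sketch on a toy 16-bit blockcipher with 12-bit keys (both the cipher and all parameters are invented for illustration, far smaller than anything real): tabulate E_K(P) for all K, run E_L^-1(C) over all L, and filter the resulting candidate key pairs with a second plaintext-ciphertext pair.

from collections import defaultdict

MASK, KEYBITS = 0xFFFF, 12           # toy parameters for a quick demonstration

def rotl(x, r): return ((x << r) | (x >> (16 - r))) & MASK
def rotr(x, r): return ((x >> r) | (x << (16 - r))) & MASK

def toy_enc(k, x):                   # invented 4-round add/rotate/xor toy cipher
    for r in range(4):
        x = (x + k) & MASK
        x = rotl(x, 5) ^ k
        x = (x + r) & MASK
    return x

def toy_dec(k, y):                   # exact inverse of toy_enc
    for r in reversed(range(4)):
        y = (y - r) & MASK
        y = rotr(y ^ k, 5)
        y = (y - k) & MASK
    return y

# Unknown double-encryption key (K*, L*) and two known plaintext/ciphertext pairs.
K_star, L_star = 0x3A7, 0x9C2
plaintexts = (0x1234, 0xBEEF)
(p1, c1), (p2, c2) = [(p, toy_enc(L_star, toy_enc(K_star, p))) for p in plaintexts]

# Phase 1: tabulate the middle values E_K(p1) for every key K.
middle = defaultdict(list)
for K in range(1 << KEYBITS):
    middle[toy_enc(K, p1)].append(K)

# Phase 2: meet in the middle from the ciphertext side; a second pair
# weeds out the false matches among the candidate keys.
solutions = []
for L in range(1 << KEYBITS):
    for K in middle.get(toy_dec(L, c1), []):
        if toy_enc(L, toy_enc(K, p2)) == c2:
            solutions.append((K, L))

print(solutions)   # contains (K*, L*); roughly 2^k work instead of 2^2k, using 2^k memory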

This improvement can also be used for any higher number of rounds, but every so often a further improvement can be made by splitting the problem conveniently into two parts. These splits are not always symmetric, and the magic sequence at which a further improvement takes place turns out to be 1, 2, 4, 7, 11, 16, 22, 29, 37, 46, 56, ... (also known as A000124 and related to Floyd's triangle). So far, this is all beautiful theory, but there is a remarkable absence of multiple encryption beyond triple in the wild, limiting the impact of the attacks. Amazingly, many other problems can be rephrased into one that allows the techniques to be applied. Foremost is the knapsack problem, for which new attacks are developed that in many cases beat existing ones by Becker, Coron, and Joux, and also by Howgrave-Graham and Joux (though the most promising attack to me still seems to be one from these earlier works).
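For reference, A000124 has the closed form n(n+1)/2 + 1, so a one-line Python check reproduces the sequence quoted above (the interpretation as the numbers of encryptions at which a new trade-off improvement appears is taken from the talk).

# Central polygonal numbers (OEIS A000124): a(n) = n*(n+1)//2 + 1
print([n * (n + 1) // 2 + 1 for n in range(11)])
# -> [1, 2, 4, 7, 11, 16, 22, 29, 37, 46, 56]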

There were of course more talks on Thursday, but I will skip straight to the very last session. The penultimate talk of the conference was on the Homomorphic Evaluation of the AES Circuit. This is work by Gentry, Halevi, and Smart, and the latter gave the talk in his inimitable style. At some point, he asked out loud "Why AES?", which prompted the answer "Because it is out there, and we can", in analogy with the question "Why climb the mountain?", to which the answer was simply "Because it is there". I will leave a technical exposition of the work to Nigel, but one of the interesting things to see is how low-level implementation ideas (think SIMD instructions) and high-level ideas from fully homomorphic encryption (e.g. number-theoretic noise-cancelling tricks) form a blend that should appeal to a very large part of the crypto community. My only criticism would be that the philosophy of mountain climbing is slightly more complicated, but it was a highly entertaining talk nonetheless. It prompted the session chair Daniele Micciancio to exclaim at the end "You just experienced Nigel Smart".

Zvika Brakerski continued the FHE session with a talk on how to remove modulus switching and still get an efficient scheme. For the details I will have to refer to the paper, but I would like to take this spot to advertise the two tutorials that were mentioned at the end of the talk.

And that, dear cryptoblog readers, was the end of Crypto.

Thursday, August 23, 2012

CRYPTO 2012: Day Three

Perhaps the most anticipated talk of the conference was also the one with the shortest and most generic title: Public Keys. The authors collected more than 11 million public keys openly accessible on the Internet and searched for duplicates and common factors of RSA moduli. As for duplicates, they found only a few ElGamal and DSA keys, but 270000 duplicate RSA moduli, not all of which seem to be used by unrelated entities. For the distinct moduli, the authors managed to factor 10 moduli of 2048 bits and 13000 moduli of 1024 bits, of which 5000 are still in use, in a few hours on a laptop. Plotting the broken moduli in graphs shows clusters representing different vendors. Most notably, an IBM product generated 687 keys using all possible combinations of only 9 primes. As a conclusion, the authors consider RSA to be riskier than other public-key cryptosystems that do not offer the possibility of breaking keys by GCD computations. The speaker also mentioned the idea of a central directory for public keys, which would allow the testing of new public keys. However, this does not prevent the actual problem of keys becoming compromised by the same randomness being available to another entity; it would only help to take the unavoidable action of revoking all public keys generated by such a faulty algorithm at an earlier point in time. After the presentation, it was pointed out in a comment that weak PRNGs affect all cryptosystems equally, because one can always try to replicate the key generation given access to the algorithm. Another comment, proposing a public directory for private keys instead of public keys, was met with amusement. On a side note, the somewhat controversial earlier title of the paper was shown to be a tongue-in-cheek suggestion by Adi Shamir.
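The attack behind these numbers is conceptually very simple: two RSA moduli generated with bad randomness may share a prime factor, and a single GCD computation then factors both. The Python sketch below illustrates this with made-up small "moduli" and a naive pairwise scan; for millions of real keys a much more efficient batch (product-tree) GCD approach is needed, which is not shown here.

from math import gcd

# Hypothetical small "moduli"; two of them accidentally share the prime 10007.
moduli = [10007 * 10009, 10037 * 10039, 10007 * 10061]

# Naive pairwise scan: any non-trivial GCD immediately factors both moduli.
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if 1 < g < moduli[i]:
            print(f"moduli {i} and {j} share the prime {g}:")
            print(f"  {moduli[i]} = {g} * {moduli[i] // g}")
            print(f"  {moduli[j]} = {g} * {moduli[j] // g}")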

In the session on secure computation, it was interesting to see how three state-of-the-art MPC schemes encounter the same problem – one could say the core problem of MPC – and tackle it in three different ways. This problem is how to securely multiply two secret numbers. All three schemes involve the generation of so-called multiplication triples with information-theoretic MACs during a pre-processing phase, where the inputs to the MPC are not yet known. Multiplication triples are secret sharings of (a,b,ab), where a and b are random numbers. It is well known that, due to the properties of linear secret sharing, multiplication triples suffice to compute any arithmetic or binary circuit in MPC, depending on the field of the secret sharing scheme. The scheme can be made secure against active adversaries if the triples are authenticated by MACs, which are essentially secret sharings of the product of the values with a MAC key. In conclusion, the core problem appears twice: first when computing the triples and second when computing the MACs. Two papers in the session solved the problem with computational security, the one by Damgård et al. using a reduced version of FHE, called somewhat homomorphic encryption, and the one by Nielsen et al. using oblivious transfer to generate binary triples. While the latter method is not entirely new, a novel way of amortizing OT allows the pre-processing phase to be greatly sped up. In contrast, the contribution of the paper by Ben-Sasson et al. is an improvement of MPC with unconditional security against active adversaries. It uses a clever secret sharing of per-party MAC keys that allows a secret sharing of the MACs to be computed locally.
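To illustrate why such triples solve the multiplication problem, here is a small self-contained Python sketch of the standard (Beaver-style) online step with plain additive secret sharing and no MACs: given shares of x, y and of a triple (a, b, ab), the parties open only the masked differences d = x - a and e = y - b, and can then locally derive shares of xy. This is just the textbook mechanism, not the actual protocol of any of the three papers; the field size and party count below are arbitrary.

import random

P = 2**61 - 1          # a prime field, chosen arbitrarily for the example
N = 3                  # number of parties

def share(v):
    """Additively share v among N parties."""
    parts = [random.randrange(P) for _ in range(N - 1)]
    parts.append((v - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

# Pre-processing (however it is produced): a random multiplication triple.
a, b = random.randrange(P), random.randrange(P)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)

# Online phase: multiply two shared secrets x and y.
x, y = 1234567, 7654321
x_sh, y_sh = share(x), share(y)

# Each party broadcasts its share of d = x - a and e = y - b; only these
# masked values are opened, so nothing about x or y leaks.
d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])

# Local computation of shares of x*y = c + d*b + e*a + d*e
# (the public constant d*e is added by party 0 only).
z_sh = [(ci + d * bi + e * ai + (d * e if i == 0 else 0)) % P
        for i, (ai, bi, ci) in enumerate(zip(a_sh, b_sh, c_sh))]

assert reconstruct(z_sh) == (x * y) % P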

Eventually, the day was concluded by the traditional beach barbecue.

Wednesday, August 22, 2012

CRYPTO 2012: DAY 2

In the second session today Stefano Tessaro presented joint work with Mihir Bellare and Alexander Vardy about semantic security for the wiretap channel. Introduced by Wyner in 1975 and then generalized by Csiszar and Korner in 1978, the wiretap channel is a setting where one aims to obtain information-theoretic privacy of communicated data based solely on the physical assumption that the communication channel from sender to receiver (ChR) is less noisy than the communication channel from sender to adversary (ChA). In more detail, we have a sender S that applies a randomized encryption function to a message M and sends the corresponding ciphertext C to a receiver R through a noisy channel ChR, so that R can decrypt C to recover the original message. The ciphertext C is also delivered to an adversary through another, noisier channel ChA. This physical assumption is the only assumption made: the encryption is keyless and the security is information-theoretic.
In the talk the channel is a randomized map Ch: {0,1} -> {0,1} that maps a bit to a bit and is extended to {0,1}^* via Ch(x_1...x_n) = Ch(x_1)...Ch(x_n). Two examples are the Clear Channel (Ch(b)=b) and the Binary Symmetric Channel with error probability p, BSC_p. In his talk Stefano considered the setting where ChR = BSC_p and ChA = BSC_q, with p < q <= 1/2, even though the results of their paper are more general and apply to a larger class of channels.
The rate R is defined as |M|/|C|, i.e. the ratio between the length of the message that is encrypted and the length of the resulting ciphertext.
The goal is to achieve privacy and correctness while keeping the rate of the scheme as high as possible.
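As a quick illustration of the noise model (nothing from the paper itself, just the standard definition), the Python snippet below simulates a binary symmetric channel that flips each transmitted bit independently with probability p; the receiver's channel BSC_p is "less noisy" than the adversary's BSC_q simply because p < q. The concrete values of p and q are picked arbitrarily.

import random

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

message = [random.getrandbits(1) for _ in range(10_000)]
received_by_R = bsc(message, p=0.05)   # receiver's channel ChR = BSC_p
received_by_A = bsc(message, p=0.25)   # adversary's channel ChA = BSC_q, q > p

errors = lambda a, b: sum(x != y for x, y in zip(a, b))
print("errors on ChR:", errors(message, received_by_R))
print("errors on ChA:", errors(message, received_by_A))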

There is a lot of work in this area, and it has been done almost exclusively within the information theory and coding (I&C) community rather than within the cryptographic community. If we look at this work from a cryptographic perspective, we find two major drawbacks:
1) improper privacy notions that are insufficient to provide security for applications and that only consider security for messages chosen uniformly at random;
2) no polynomial-time schemes with optimal rate: the results are mostly non-constructive.

Stefano then described how their paper bridges the gap between previous results on the wiretap channel and modern cryptography. First, they provide new security notions for defining privacy in the wiretap channel model: on the one hand they adapt security notions used in the context of encryption, such as semantic security (SS) and distinguishing security (DS), to the wiretap channel; on the other hand they adapt an existing notion from information theory, which they call mutual information security (MIS), to take into account arbitrary message distributions. They prove that these notions are equivalent, connecting two different ways of defining privacy. The second result is a concrete construction: the first polynomial-time scheme achieving these security notions with optimal rate.


Monday, August 20, 2012

CRYPTO 2012 : Day One

The annual return of cryptographers to sunny Santa Barbara started today. The morning was dominated by an excellent invited talk by Jonathan Zittrain entitled "The End of Crypto".

The talk gave an interesting initial presentation of the history of computers and networks. In terms of computers, before the advent of the PC there were essentially two models: reprogrammable computers, which were essentially mainframes and kept out of the hands of the public; and non-programmable computers, which the public could use but which had limited functionality (think of the old Pong video consoles). With the advent of the Apple II people could produce their own software, and one such piece of software (VisiCalc) was so successful that (apparently) initially Apple had no clue why the Apple II devices were flying off the shelves. The key point is that the manufacturer could not predict the ways in which the computer was going to be used. Essentially the PC revolution was successful because the public had access to a device which had the accessibility of non-programmable machines with the flexibility of reprogrammable computers.

The development of networks followed a similar path. Looking at an old CompuServe system from the days before the Internet, we see a system which was closed and had no ability for user-determined functionality. The user could only use it in the ways that the company had initially designed into the product. However, the Internet allows anyone to communicate with any other endpoint. That the Internet actually works is quite amazing, and it was not immediately obvious from the start that it would. For example, in 1992 someone at IBM apparently said "You cannot build a corporate network out of TCP/IP".

How has this change affected cryptography? We have (via PGP, say) the ability for a user to encrypt data without an intermediary such as a manufacturer or network provider enabling it. The promise of crypto was to allow Alice to talk to Bob without people in the middle learning anything. But that promise has not materialised, due to the problem of malicious code. This problem is caused by the inherent success of the universal machine/network model discussed earlier: computers run loads of software, and we do not really know what any of it does. It could contain malicious code, for example code to delete one's hard drive.

The key question is "What's the point of end-to-end encryption if the endpoints aren't secure?".

How have we dealt with this problem? Any given system can be seen as either top-down (for example, an automobile from General Motors is given to us as a finished product from the top) or bottom-up (for example, with a PC anyone can generate ideas/usages). Also, a system can be hierarchical, with a few high-level points of control, or polyarchical, with multiple control points distributed amongst the users. It should be noted that a polyarchical network can turn into a hierarchical one: for example, by putting people on bad-lists for spam, the owner of the bad-lists eventually becomes the control point of a hierarchical network.

A modern trend is to move to a sandbox, where you can only run programs which are trusted. Here the iPhone is the classic example; at the start of its evolution it looked very similar to the UI of the old CompuServe system. Eventually new apps were developed for the iPhone. But then the issue is that Apple controls what is allowed on users' phones (and the users do not). This is not just Apple; for example, one can see Facebook as a gatekeeper.

Despite the success of open systems, we now have a movement towards enclosure. This makes things more secure, but brings new problems. The key problem is that if one of these gatekeepers is offended by company X, they can essentially put the company out of business. Here crypto is essentially used to ensure you are locked into a relationship with a given provider. Thus instead of freeing the user by providing security and privacy, cryptography is used to bind users and withdraw their privacy.

The basic tenet of the talk was that cryptography is not making us safer or more secure; it is now producing problems, because the crypto is used to tie down hardware. An interesting example is how it could be used by companies, governments, criminals etc. to force your hardware to do things which violate your privacy. For example, an agency could use cryptography to send a signal to a phone app to turn its microphone on and hence snoop on all your conversations (including ones not on the phone).

So the endpoints are not secure, in the sense that it is non-trusted corporations and entities which control the endpoints and not the users. So "the end of crypto" is about the aim of crypto, i.e. "the ends" of cryptography. The speaker ended by saying that cryptography should be about protection of the freedom to securely communicate, but then there are issues about how exceptions to this right are handled. And finally it is about protection of the freedom to share discriminately; i.e. I may want to share a photo with some people and not others.

Overall the talk presented a large number of interesting philosophical points on how our subject is developing and how technology in general is affecting our lives in ways which we did not envisage.

Thursday, August 2, 2012

PODC 2012

PODC is the flagship event for the principles of distributed computing, so most of the talks at PODC 2012 were related to "core" distributed computing: transactional memory, shared memory, distributed algorithms, failure detectors, etc. However, there were a few interesting talks related to Byzantine fault-tolerant distributed computing protocols.

There were two talks related to improving the resilience of the Byzantine agreement problem, one of the fundamental problems in distributed computing. In a nutshell, a Byzantine agreement (BA) protocol involving n parties allows the parties to agree on a common bit, even if t out of the n parties behave maliciously in an arbitrary manner to prevent the honest parties from agreeing on a common bit. There are well-known bounds on when agreement is possible; for example, n > 3t is known to be a necessary and sufficient condition for the existence of any BA protocol. A very natural question is whether we can get rid of this bound at the expense of additional primitives/assumptions; the two related talks considered this problem. Before going into their details, it is important to note that the main reason for the bound n > 3t is the ability of a corrupted party to "equivocate": a corrupted party can send different messages to two different honest parties and make them disagree, even when it is supposed to send the same message to both. If we can somehow get rid of this problem and not allow the corrupted parties to equivocate, then intuitively we can increase the number of faults that can be tolerated. The two talks looked into two different approaches for enforcing non-equivocation.

The first proposal, by Allen Clement, Flavio Junqueira, Aniket Kate and Rodrigo Rodrigues, considered a setting where, apart from the traditional point-to-point channels between the parties, a very small trusted hardware component is also embedded inside each processor (party), forcing the parties not to equivocate. This idea was proposed a few years back in the security community, and it was believed that doing so would certainly improve the resilience of distributed fault-tolerant primitives. The major contribution of the paper is to show that non-equivocation alone is not sufficient to improve the bound n > 3t; in addition, one also requires transferable authentication, such as digital signatures. Given the non-equivocation primitive plus digital signatures, it is possible to design a BA protocol with n > 2t.
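To give a flavour of what such a trusted non-equivocation primitive provides (this is my own toy illustration, not the construction from the paper), the Python sketch below models a tiny trusted module that attests (sequence number, message) pairs with a key only it knows, and refuses to attest two different messages under the same sequence number; a party receiving an attested message therefore knows that no conflicting message can have been attested for that slot. The HMAC key here is just a stand-in for the hardware's attestation mechanism; as the paper shows, transferable authentication is still needed on top of this to actually beat n > 3t.

import hmac, hashlib

class TrustedLog:
    """Toy stand-in for the small trusted hardware: it never attests two
    different messages for the same sequence number."""
    def __init__(self, key: bytes):
        self._key = key
        self._bound = {}                      # seq -> message already attested

    def attest(self, seq: int, msg: str) -> str:
        previous = self._bound.setdefault(seq, msg)
        if previous != msg:
            raise RuntimeError("equivocation refused: slot already bound")
        return hmac.new(self._key, f"{seq}|{msg}".encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, seq: int, msg: str, tag: str) -> bool:
    expected = hmac.new(key, f"{seq}|{msg}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

module = TrustedLog(b"device-internal-key")    # hypothetical attestation key
tag = module.attest(1, "vote: value A")
assert verify(b"device-internal-key", 1, "vote: value A", tag)
try:
    module.attest(1, "vote: value B")          # a second, conflicting message
except RuntimeError as e:
    print("equivocation blocked:", e)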

The second proposal, by Alexander Jaffe, Thomas Moscibroda and Siddhartha Sen, considered a setting where, apart from the point-to-point channels, we also have "several" 3-cast channels between every 3 parties; a 3-cast channel between 3 parties is like a hyperedge or a multicast channel among the three parties, which allows any one of them to send a message identically to the remaining two (who are connected through the channel). It is well known that if a common broadcast channel is provided to all the n parties then BA is possible with n > 2t, while in the absence of a common broadcast channel n > 3t is required for BA. Unfortunately, nothing was known about the possibility of BA protocols in the interval between n = 2t+1 and n = 3t+1. The major contribution of the paper is to show the possibility of BA for n = 2t + h, for any 1 <= h <= t+1, provided a "sufficient" number of 3-cast channels is available; the sufficient number of 3-cast channels is actually a function of h. The authors presented tight bounds on the number of 3-cast channels, as a function of h, required for solving the BA problem with n = 2t + h parties. An obvious and interesting research direction is to see whether we can use the above two approaches to improve the fault tolerance of general secure multi-party computation (MPC) protocols.

Another interesting paper, by Guanfeng Liang and Nitin Vaidya, was again related to the BA problem. Traditionally, while designing BA (and general MPC) protocols, we do not consider the capacity/bandwidth of a channel. That is, we assume that every channel has the same capacity and that "any" amount of information can be exchanged between two parties who are pairwise connected. However, this is far from reality, because in real life every channel may not have the same capacity, and so the amount of communication allowed over a specific channel may be limited/bounded by the channel capacity. The contribution of the paper is to design BA protocols over networks where different channels may have different capacities. In such a setting, an obvious choice for measuring the overall output is the number of bits agreed upon per unit time, leading to notions of capacity and throughput of BA protocols. Though the authors could propose a bound on the achievable throughput of BA protocols, they could not meet this bound, and the problem of designing a BA protocol with optimal throughput is left as an open problem. Again, a natural question would be to look into MPC protocols in the same spirit, where we may now define constraints on the individual channel capacities and then try to design MPC protocols.

The last interesting paper (for me) was by Varsha Dani, Valerie King, Mahnush Movahedi and Jared Saia. Currently, in all the information theoretically secure MPC protocols (secure against a computationally unbounded adversaries), to evaluate a functionality of size m (consisticing of m gates), an n-party protocol requires a communication complexity of O(nm), thus making the communication complexity to be dependent on the size of the circuit. This is in contrast to the computational setting, where by using FHE, we can design MPC protocols, where the total communication complexity will be independent of the circuit size. This paper claims to present an information theoretically secure MPC protocol, where the communication complexity is independent of the circuit size in the amortized sense. The key idea used here is to somehow eliminate the need for each party to communicate with every other party during the evaluation of a gate. Instead, the protocol ensures that during the evaluation of a gate, a party has to communicate only to a quorum of parties of size log n, such that at most 1/3 rd of the parties in the quorum can be corrupted. The quorum construction in turn is based on the seminal result of King et al of PODC 2010, where they designed BA protocol, breaking the traditional O(n^2) communication complexity barrier for reaching agreement. The proposed MPC protocol is in the synchronous settings and it would be interesting to see the asynchronous version of the protocol.