tag:blogger.com,1999:blog-27520587001544672262017-02-21T21:21:32.511+00:00Bristol Cryptography BlogA blog for the cryptography group of the University of Bristol, to enable discussion on cryptography and other matters related to our research.Nigel Smarthttp://www.blogger.com/profile/17681184541012804026noreply@blogger.comBlogger442125tag:blogger.com,1999:blog-2752058700154467226.post-3985643442700164482017-02-21T18:36:00.000+00:002017-02-21T18:46:54.697+00:00Homomorphic Encryption API Software Library<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;">The <i>Homomorphic Encryption Application Programming Interface</i> (HE-API) software library is an open source library being developed as part of the <a href="http://heat-project.eu/">Homomorphic Encryption Applications and Technology</a> (HEAT) project, and is available <a href="http://github.com/bristolcrypto/HEAT">here</a>. The main purpose of this software library is to provide a common, easy-to-use interface for various existing Somewhat Homomorphic Encryption (SHE) libraries. Limited support for fixed-point arithmetic is also provided by this library. Note that the HE-API library is still a work in progress.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows meaningful manipulation of ciphertexts. In spite of several recent advances, FHE remains out of practical reach. Hence a reasonable restriction to make is to limit the set of evaluated circuits to a specified subclass, usually determined by the multiplicative depth of the circuit. Such encryption schemes are called SHE schemes. 
Various libraries that implement SHE schemes are already available, such as <a href="http://github.com/shaih/HElib">HElib</a>, <a href="http://sealcrypto.codeplex.com/">SEAL</a>, <a href="http://github.com/CryptoExperts/FV-NFLlib">FV-NFLlib</a> and <a href="http://github.com/tricosset/HElib-MP">HElib-MP</a>.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The purpose of this HE-API software library is to provide a common, generic, easy-to-use interface for various existing libraries that implement SHE schemes. The SHE libraries that are currently integrated in the HE-API library are HElib and FV-NFLlib. It may be noted that the FV-NFLlib library is itself an outcome of the HEAT project. At a high level, the HE-API software library abstracts out the technicalities present in the underlying SHE libraries. For instance, the HElib library implements the <a href="http://eprint.iacr.org/2011/277">BGV SHE scheme</a>, while FV-NFLlib implements the <a href="http://eprint.iacr.org/2012/144">FV SHE scheme</a>. Needless to say, the syntax for the various classes and routines in the individual libraries will be different, though the underlying semantics are very similar. The HE-API library integrates the underlying SHE libraries under a single interface, thereby shielding the user from syntactic differences. Another feature of the HE-API library is that it contains a minimal, yet complete, set of routines to perform homomorphic computations. The design of this library is motivated by ease of use for non-experts.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><b>Supported Data Types</b></div><div style="text-align: justify;"></div><div style="text-align: justify;">The following application data types are supported by the HE-API software library. 
</div><div style="text-align: justify;"><ul><li>Boolean</li><li><span style="font-family: inherit;">Unsigned long integers</span></li><li><a href="http://gmplib.org/">GMP</a>'s arbitrary precision integers class: <span style="font-family: "courier new" , "courier" , monospace;">mpz_class</span></li><li>Polynomials with coefficients of type: <span style="font-family: inherit;">unsigned long integers</span> or <span style="font-family: "courier new" , "courier" , monospace;">mpz_class</span></li><li>Vectors of: <span style="font-family: inherit;">unsigned long integers</span> or <span style="font-family: "courier new" , "courier" , monospace;">mpz_class</span></li><li>Fixed-point numbers</li></ul>Note that all the data types and routines described above may <i>not</i> be currently supported by every underlying SHE library.</div><div style="text-align: justify;"><br /></div></div>Srinivas Vivekhttps://plus.google.com/110838877883478858848noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-65848202892037199432017-01-13T16:56:00.001+00:002017-01-13T16:56:05.930+00:00RWC 2017 - Secure MPC at Google<div style="text-align: justify;">This talk was given by Ben Kreuter and its focus was on the apparent disparity between what we research in academia and what is required in the real world, specifically in the field of multi-party computation (MPC). MPC is the idea of allowing multiple parties to compute some function on their combined input without any party revealing anything about their input to the other parties (other than what can be learnt from the output alone).<br /><br />While significant work has been done on making MPC efficient in practice (for example, the work of Yehuda Lindell et al. 
on high-throughput MPC which was presented by Lindell in the preceding talk), the focus tends to be on generic protocols (e.g. general logic circuits) with strong security guarantees (e.g. malicious security), which invariably leads to large computational overhead. In practice, we usually require only specific protocols, which can therefore be optimised, and comparatively weak security guarantees.<br /><br />In the real world, network cost is the salient factor, rather than the speed of the protocol, since the parties who are involved in a computation often have to use networks (such as the Internet) which are being used by many other people at the same time and cannot make the best use of the network's full capabilities. MPC at Google is about computation amongst, for example, mobile phones, laptops and servers; this introduces issues like battery constraints and the possibility of the computation not completing; these considerations, firmly grounded in the real world, are important when developing MPC techniques in research.</div><h4><br /></h4><h4>Business applications</h4><div style="text-align: justify;">A large portion of Google's revenue is generated by advertising: the tech giant, well-known for its aptitude for accurately determining users' desired search results even when queries are expressed ineloquently, specialises in creating personalised adverts for its wide spectrum of users. The efficacy of an advert is generally measured by the proportion of its viewers who later become customers. Clearly this can be done by businesses comparing their database of customers' transactions with Google's databases of who has been shown which adverts. 
This, however, would be an invasion of privacy: instead, Google and the business can do MPC - more specifically, a private set intersection protocol.<br /><br />In a private set intersection protocol, the parties involved compute how large the intersection is amongst the sets input by each party, or even some function of the elements in the intersection. So if the business and Google run a private set intersection protocol on their data, they can determine how well the advertising went.<br /><br />Roughly speaking, the MPC Google does in the real world is as follows: Google has a set $\{g_1,g_2,...,g_n\}$ of field elements which encodes a set of people who have been shown an advert for a certain product, and a business has a set $\{b_1,b_2,...,b_m\}$ of field elements which encodes a set of people who have been sold the product in question; Google raises each of its elements to a power $G$ and sends the set $\{g_1^G,g_2^G,...,g_n^G\}$ to the business. The business does the same with its elements for some exponent $B$ to get $\{b_1^B,b_2^B,...,b_m^B\}$, encrypts a set of binary vectors under Paillier encryption (which is additively homomorphic), one corresponding to each element in its set, encoding some other property of the sales (like the amount paid), and also computes the set $\{g_1^{GB},g_2^{GB},...,g_n^{GB}\}$. The business sends Google the set of pairs $\{(b_1^B,P(v_1)),(b_2^B,P(v_2)),...,(b_m^B,P(v_m))\}$ along with $\{g_1^{GB},g_2^{GB},...,g_n^{GB}\}$, and Google computes $\{b_1^{GB},b_2^{GB},...,b_m^{GB}\}$ and adds together all encrypted vectors $P(v_i)$ for which there exists some $j$ such that $b_i^{GB}=g_j^{GB}$. It sends this ciphertext back to the business, which decrypts and interprets the result.<br /><br />This protocol is very simple, and it is only passively secure (in which players are assumed to execute the protocol faithfully but will possibly try to learn things by inspecting their communication transcripts). 
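The commutative blinding that makes this work is easy to prototype. Below is a toy Python sketch of the set-intersection step (my own reconstruction, not Google's code, and all function names are mine): it uses exponentiation modulo a large prime as a stand-in for a proper prime-order group, and computes only the intersection cardinality, leaving out the Paillier tally.

```python
import hashlib
import secrets

# Toy group: integers modulo a large prime, used multiplicatively. A real
# deployment would hash into a prime-order elliptic-curve group instead.
P = 2**255 - 19

def hash_to_group(item: str) -> int:
    """Hash an identifier to a group element (a stand-in for hash-to-curve)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(elements, exponent):
    return [pow(e, exponent, P) for e in elements]

def psi_cardinality(google_ids, business_ids):
    G = secrets.randbelow(P - 3) + 2  # Google's secret exponent
    B = secrets.randbelow(P - 3) + 2  # the business's secret exponent

    # Round 1: each side blinds its own hashed set with its own exponent.
    g_blinded = blind([hash_to_group(x) for x in google_ids], G)
    b_blinded = blind([hash_to_group(x) for x in business_ids], B)

    # Round 2: each side raises the other's blinded set to its own exponent,
    # so common elements collide under the combined exponent G*B.
    g_double = set(blind(g_blinded, B))
    b_double = set(blind(b_blinded, G))

    # Only the size of the intersection is revealed (the Paillier-encrypted
    # tally of sale amounts described above is omitted in this sketch).
    return len(g_double & b_double)

shown_ad = ["alice", "bob", "carol", "dave"]
bought = ["carol", "dave", "erin"]
print(psi_cardinality(shown_ad, bought))  # 2
```

The key fact is that $(g^G)^B = (g^B)^G$, so elements common to both sets collide after double blinding, while each singly-blinded value looks random to the other party.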
An interesting concern, perhaps somewhat orthogonal to how we approach research from an academic point of view, is that it is important that we can convey the security and efficiency of our protocols to the lawyers, managers and software engineers who will eventually be sanctioning, authorising or implementing them. "The lawyers are interesting because you can show them a proof, and two plus two equals four is a negotiable statement here... managers usually trust your expertise...and software engineers are the worst because they already assume [the protocol] is impossible."<br /><br />An alternative solution using garbled circuits was explored in the recent past, but it turned out that using them required some subtle assumptions regarding the computation and communication which would have made the protocol impractical.<br /><br />Future work would involve getting a (not too much more expensive) maliciously secure protocol and developing the use of homomorphic encryption to allow different functions to be computed on the data in the intersection.<br /><br /><h4>Consumer applications</h4>The Android keyboard app by Google, Gboard, logs what a user types so that it can guess words for auto-completion in the future. This data could be used for training machine learning models, and merging results from many local models would enable the formation of guessing algorithms that work well for everyone. However, to do this, the server would need to receive a large dataset of words typed by the user from each phone so that this processing could be done. Clearly there is an issue of privacy here; moreover, there is also potentially a differential privacy issue.<br /><br />This is clearly a good situation in which to use MPC. 
Each party masks their data using a basic additive secret-sharing scheme: if each party has a vector to input then, for every coordinate, every pair of parties agrees on some random field element; one subtracts and one adds this to that coordinate of their vector. When the parties send their masked vectors to Google, the masks will therefore cancel when the vectors are added together.<br /><br />In practice, they use a PRG and perform a key exchange (in which one key is agreed between each possible pair of parties) at the beginning to achieve the same effect but with much smaller communication overhead. They also have a trick for dealing with device failures (which is important given the application).<br /><br /><br />This talk provided helpful and relevant insight into the importance of matching what we research with what we require in the real world, which is, after all, one of the main reasons for having conferences such as Real World Crypto. Many of the talks are available to watch online <a href="https://www.realworldcrypto.com/rwc2017" target="_blank">here</a>, and I would highly recommend doing so if interested.</div>Tim Woodnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-85111859976511475372017-01-12T12:50:00.000+00:002017-01-12T12:50:27.629+00:00RWC 2017 - Is Password Insecurity Inevitable?Fresh back from an enlightening trip across the pond, I wanted to write about one of my favourite talks, all about password (in)security, from this year's Real World Cryptography conference.<br /><br />As we know: <br /><ol><li>Passwords protect <i>everything.</i></li><li>Passwords are <i>terrible.</i></li></ol>But happily, Hugo Krawczyk from IBM Research spoke about some great new work to resolve these two seemingly incompatible statements. There were a lot of details in the talk that I'll have to miss out here (slides are available <a href="http://webee.technion.ac.il/~hugo/rwc17.pdf">online</a>). 
In particular, I'm going to focus on 'Part I: Take the burden of choosing and memorising passwords off humans'.<br /><br />The basic idea - this isn't new - is to have the user memorise a single master password that they use to access a password store. Then the password store derives unique pseudorandom passwords for each service the user wants to access (Google, Facebook, etc.). The problem with this solution is that the password store becomes a single point of failure: if it is compromised, then an offline dictionary attack to find the master password will compromise all of the user's accounts at once.<br /><br />Krawczyk et al. suggest an improvement: SPHINX, which amusingly stands for "a password Store that Perfectly Hides from Itself (No eXaggeration)". The first idea is for the password store not to keep hold of (even a hash of) the master password - instead it has an independent secret key $k$, and any time the user wants to log in to a service $S$, they send the master password $pwd$ to the store, the store computes the PRF value $PRF(k, pwd | S)$ and this is sent to $S$ as the user's password for $S$. This means that if the store is compromised, the master password and the account passwords can't be learned unless the user communicates with the store. So this works well if the store is in local, offline hardware, where the user is unlikely to use the store after it is compromised by an attacker.<br /><br />However, the authors go further and replace the PRF with an <i>oblivious </i>PRF. This means the store computes an "encrypted" version of $PRF(k, pwd | S)$ from an "encrypted" $pwd|S$, so it doesn't learn the plaintext values of the master password or the service password. In practice this can be achieved by the user (i.e. the user's machine) hashing the string $pwd | S$ into an element $g$ of a Diffie-Hellman group, then computing $h = g^r$, where $r$ is a fresh, random exponent, and sending $h$ to the password store. 
The store's secret key is an exponent $a$, so it computes $h^a$ and sends this back to the user. The user removes the blinding exponent $r$ (i.e. computes $(h^a)^{r^{-1}} = g^a$) and the result is the unique password for $S$. Now even when the password store is compromised, and even if the user communicates with the store, the master password and the account passwords can't be learned.<br /><br />In principle an attacker could recover all the account passwords by compromising both the password store <i>and </i>a service $S$, learning the secret key $a$ and the service password $g^a$, computing $g = H(pwd|S)$ and performing an offline dictionary attack to find $pwd|S$. Then for any other service $S'$, the password can be computed via $H(pwd|S')^a$. But as long as $S$ follows good practice and only stores a hash $H'(g^a)$ of the service password, this attack fails: an offline dictionary attack to recover $g^a$ is infeasible as it's essentially a random group element. <br /><br />There are no particularly expensive computations involved in using SPHINX; the communication between the user and SPHINX does not need to be secure (so it could be online somewhere) and the store will work regardless of what password protocol is used by the service, so it's extremely flexible. SPHINX therefore strikes me as both useful and practical, which is surely the definition of Real World Cryptography.Ryan Stanley-Oakeshttp://www.blogger.com/profile/06198224903467720264noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-86472394985453419042017-01-09T22:22:00.002+00:002017-01-09T22:22:46.285+00:00RWC 2017 - Post-quantum cryptography in the real-world<div class="MsoNormal" style="text-align: justify;">A new year takes off and, along with it, thousands of resolutions are formulated. 
Although I am not the right person to talk about them (my diet will begin next Monday), I wish to discuss a resolution that the cryptographic community as a whole has set for itself for 2017. Because that's what people do at Real World Crypto (RWC): they talk about new trends, topics that could be worth exploring during the new year, and directions for their research and interests. This year, for the first time at RWC, post-quantum cryptography (PQC) was given an entire session, a clear sign that times are changing and the moment has come to bring the discussion to the real world. The message is clear: even if quantum computers are not popping up in tomorrow's newspapers, we can't postpone any longer.</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;">A very simple reason for this was given by Rene Peralta, of the NIST PQC team, during the overture of the session: standardisation takes time, up to seven years if we start right now, and full transition takes even longer. I found Rene's presentation to be neat and direct: our public-key cryptography fails against quantum computers and our symmetric one needs some (non-drastic) modifications. The resolution is to "start thinking about it this year, possibly by November 30<sup>th</sup>, 2017". However, a question arises quite naturally: are we ready?</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;">The other three talks of the session tried to answer in the affirmative. 
Among the several PQC proposals around in theoretical papers, two made their way into RWC: the well-established lattice-based cryptography and the new-born isogeny-based cryptography, which nevertheless carries the pride and sympathy of ECC.</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;"><u>Lattices and funny names: NewHope and Frodo and Crystals</u></div><div class="MsoNormal" style="text-align: justify;">Lattice-based cryptography has three representatives in the run for PQC schemes. Valeria Nikolaenko showed two: the first is called NewHope and is a key agreement protocol based on the hardness of Ring-LWE. The latter is a problem very favourable to applications because it combines sound theoretical security (a worst-case to average-case reduction) with fast implementations, thanks to specific choices of parameters which allow for speed-ups in the computations: NewHope turns out to be even faster than ECC and RSA, but at the price of larger communication. However, there are some concerns about the security of LWE when the ring structure is added. Thus, Frodo ("take off the ring") is designed to achieve the same goal using only standard LWE. The drawback is a degradation in performance, since the tricks hinted at above cannot be used anymore and keys are generally bigger.</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;">The third lattice-based scheme was presented by Tancrede Lepoint and is a suite called Crystals. This is based on yet another kind of lattice: module lattices, for which a worst-case to average-case reduction is also known. 
These are less structured lattices (hence possibly calming down the detractors of the ring structure) in which similar implementation speed-ups are possible: the timing is indeed comparable to NewHope's, while the communication is improved.</div><div class="MsoNormal" style="text-align: justify;"><br /></div><div class="MsoNormal" style="text-align: justify;"><u>"Make elliptic curves great again"</u></div>Michael Naehrig presented a new proposal for PQC: do you remember curves with plenty of small subgroups in which the discrete logarithm problem is easy to solve? Now they come in handy again: all the subgroups (of order 2 and 3) are considered to be nodes of a graph, whose edges are the isogenies (a.k.a. bijective homomorphisms between curves). In this new context, given two curves in the graph, it is difficult to come up with the isogeny linking the two. However, such a new approach doesn't yet really stand up against other solutions: keys are small, but performance is not a pro (so to speak).Marco Martinolihttps://plus.google.com/106563453646172877995noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-50781950769338482882017-01-09T12:26:00.001+00:002017-01-09T12:26:43.893+00:00RWC 2017 - Erasing Secrets from RAM<span style="font-family: Arial,Helvetica,sans-serif;">One of my favourite talks from the <a href="https://www.realworldcrypto.com/rwc2017" target="_blank">Real World Crypto 2017</a> conference was given by Laurent Simon, on Erasing Secrets from RAM.</span><br /><span style="font-family: Arial,Helvetica,sans-serif;"><br /></span><span style="font-family: Arial,Helvetica,sans-serif;">In short, it was found that in practice, many non-malicious programs handling keys and other sensitive data do not erase the RAM correctly. 
This would allow an attacker (that has access to all of a system's volatile memory and CPU state) access to any unerased sensitive data.</span><br /><br /><span style="font-family: Arial,Helvetica,sans-serif;">It was thought that compiler optimisation played a part in the lack of erasure. Take the code below:</span><br /><br /><span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;">void sensitive_function(...) {</span></span><br /><br /><span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;"> u8 sensitive_buffer[KEY_MAX] = "\0";</span></span><br /><span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;"> ...</span></span><br /><span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;"> <span style="color: red;">zeromem</span>(sensitive_buffer, KEY_MAX);</span></span><br /><br /><span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;">} </span></span><br /><br /><span style="font-family: Arial,Helvetica,sans-serif;">The compiler may choose to remove the</span> <span style="font-family: "Courier New", Courier, monospace;">zeromem <span style="font-family: Arial, Helvetica, sans-serif;">line, as the <span style="font-family: "Courier New",Courier,monospace;">sensitive_buffer</span> is going out of scope anyway. This would leave sensitive data in RAM, unbeknownst to the programmer.</span></span><br /><br /><span style="font-family: Arial, Helvetica, sans-serif;">So, the paper presents a tool that allows developers to mark sensitive variables in their code, and then see (post-compilation) any potential leakage of sensitive data.</span><br /><br /><span style="font-family: Arial, Helvetica, sans-serif;">They call this tool <a href="https://github.com/lmrs2/secretgrind" target="_blank">Secretgrind</a>, based on the popular <a href="http://valgrind.org/" target="_blank">Valgrind. 
</a></span><br /><br /><span style="font-family: Arial, Helvetica, sans-serif;">Anyway, as it turns out, the compiler optimisation problem mentioned above wasn't actually a problem in practice - they didn't once encounter it in all their testing. Instead, the majority of sensitive leaks were down to developers' mistakes; they had forgotten to erase sensitive variables on both the stack and the heap.</span><br /><br /><span style="font-family: Arial, Helvetica, sans-serif;">There were a few issues with IO APIs' caching optimisations, though - such as when you read a line from a PEM file using <span style="font-family: "Courier New",Courier,monospace;">mmap</span>, it often loads the whole file into memory to save time. However, this is not immediately obvious, and when you go to delete the line from RAM, the rest of the file is still in memory!</span><br /><span style="font-family: Arial, Helvetica, sans-serif;"><br /></span><span style="font-family: Arial, Helvetica, sans-serif;">Laurent concluded the talk by saying Secretgrind was still in development, and although referring to it as a 'hack' (due to its fragility), wishes for it to be used to "help you guys check your code".</span>Joey Greenhttps://plus.google.com/110336179333524481287noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-56099936354959799052017-01-05T22:06:00.001+00:002017-01-05T22:06:15.419+00:00RWC 2017 - A Formal Security Analysis of the Signal Messaging ProtocolReal World Crypto 2017 kicked off yesterday in New York City. This afternoon, Luke Garratt presented his and his colleagues' work, <a href="https://eprint.iacr.org/2016/1013.pdf">A Formal Security Analysis of the Signal Messaging Protocol</a>. The Signal protocol is used by Facebook Messenger, WhatsApp and the Signal app (to name but a few). It is therefore surprising that there had been no formal security analysis of it, and their work addresses this issue. 
The paper is motivated by the questions<br /><br /><div style="text-align: center;">What should Signal achieve?</div><div style="text-align: center;"><br /></div>and<br /><br /><div style="text-align: center;">Does it?</div><div style="text-align: center;"><br /></div><div style="text-align: left;">Or, put in more modern language (spoiler alert),</div><div class="separator" style="clear: both; text-align: center;"><span style="font-size: x-large;">Why what</span><a href="https://1.bp.blogspot.com/-VlgkDvxm58c/WG7B4LjZwII/AAAAAAAAAV4/g-ToTrEQIC8DIpQNiJACr3ZHkdIW72sXQCLcB/s1600/whatsapp-logo-PNG-Transparent.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-VlgkDvxm58c/WG7B4LjZwII/AAAAAAAAAV4/g-ToTrEQIC8DIpQNiJACr3ZHkdIW72sXQCLcB/s1600/whatsapp-logo-PNG-Transparent.png" /></a> <span style="font-size: x-large;">does is</span><span style="font-size: large;"> </span><a href="https://4.bp.blogspot.com/-F9fGHEipBdU/WG7CBgAGGBI/AAAAAAAAAV8/kk8kVWki-JM4oafhitvEKsbrD7W9C4XhwCLcB/s1600/like_icon.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-F9fGHEipBdU/WG7CBgAGGBI/AAAAAAAAAV8/kk8kVWki-JM4oafhitvEKsbrD7W9C4XhwCLcB/s1600/like_icon.png" /></a></div><div style="text-align: left;"><br /></div><div style="text-align: left;">Let's first look at what kind of security we can hope for. We speak of Forward Secrecy, which ensures that past messages are not compromised even if the communication is compromised at some point in the future. This is to prevent an adversary storing all the conversations and waiting until an attack is successful to then recover all the communication. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Post-compromise security pushes this even further. If we have post-compromise security, not only are past conversations not compromised, but also future ones. 
The Signal protocol achieves this using a technique called ratcheting, which involves session keys being updated with each message sent. Why is this useful? Well, it makes the adversary's life much harder. In order to attack a system with post-compromise security, an adversary must obtain long-term keys and immediately attack, and continue attacking, if they want to compromise future sessions. This is in contrast to forward secrecy, where an adversary would obtain a long-term key and wait for an interesting target before launching a man-in-the-middle attack (e.g. TLS-DHE), or to no forward secrecy, where they would just store all ciphertext traffic until they obtain a long-term key and decrypt everything (e.g. TLS-RSA). Times are hard. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Their security model captures: </div><div style="text-align: left;"></div><ul><li>Full network control by the adversary;</li><li>Perfect forward secrecy;</li><li>Post-compromise security;</li><li>Key compromise impersonation attacks;</li><li>Compromise of some (but not all) random numbers.</li></ul><div>Their proof is too long to be featured in this blog post, but Luke promises it is tedious rather than complex. Their conclusion? So far, so good. </div><div><br /></div><div>In terms of limitations, they note that they have not analysed any implementation of the protocol, so this is a theoretical analysis only. They also assume honest key distribution and have not (yet) considered multiple devices. 
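The flavour of the symmetric part of ratcheting can be seen in a toy KDF chain - this is a sketch of the general idea only, since Signal's actual ratchet also mixes fresh Diffie-Hellman outputs into the key derivation, which is the ingredient that provides post-compromise security; the class and key names below are mine.

```python
import hashlib

def kdf(key: bytes, label: bytes) -> bytes:
    return hashlib.sha256(key + label).digest()

class SymmetricRatchet:
    """Toy KDF chain: each message gets a fresh key, and the chain key is
    overwritten at every step, so a compromise of the current state does
    not reveal the keys of messages already sent (forward secrecy)."""

    def __init__(self, root_key: bytes):
        self.chain_key = root_key

    def next_message_key(self) -> bytes:
        message_key = kdf(self.chain_key, b"message")
        self.chain_key = kdf(self.chain_key, b"chain")  # ratchet forward
        return message_key

alice = SymmetricRatchet(b"shared-session-secret")
bob = SymmetricRatchet(b"shared-session-secret")

# Both sides step their chains in lockstep and derive identical keys.
assert alice.next_message_key() == bob.next_message_key()
assert alice.next_message_key() == bob.next_message_key()
```

Because the old chain key is destroyed at each step, recovering earlier message keys from the current state would require inverting the hash function.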
</div>Anamaria Costachehttps://plus.google.com/111580579835398184759noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-88335968751588178132016-11-10T11:10:00.000+00:002016-11-10T13:52:31.389+00:00Study Group - All Your Queries Are Belong to Us: The Power of File-Injection Attacks on Searchable EncryptionPublished at USENIX 2016 and written by Yupeng Zhang, Jonathan Katz, and Charalampos Papamanthou from the University of Maryland, this paper proves again how difficult it is to get proper security in searchable encryption.<br /><br />Some of you may wonder why I chose this paper. The reason is that I wanted to get a grasp of what research looks like outside the MPC area at a top security conference. And what better conference to pick than this year's USENIX?<br /><br />After I finished reading this paper I realised a cool thing: you need little to no a priori knowledge about Searchable Encryption (SE) [1]. Fortunately I didn't find myself diving into the literature on SE in order to fully understand the ideas presented there - how many crypto papers can you say that about once you're done with them?<br /><br />Let's first introduce the notion of SE. The setup consists of two parties, a client and an encrypted mail server. The client wants to obtain all e-mails which have 'alice' and 'work' as tags. 
The client then computes a token in a <b>deterministic</b> manner, and the server replies with the e-mails corresponding to the query:<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-4I1SUJWnsP8/WCHPVav5cxI/AAAAAAAAA5Y/CTeo9DohhfklocCiMJ54DefQbecli2WkgCLcB/s1600/Searchable-Encryption%25282%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="207" src="https://3.bp.blogspot.com/-4I1SUJWnsP8/WCHPVav5cxI/AAAAAAAAA5Y/CTeo9DohhfklocCiMJ54DefQbecli2WkgCLcB/s320/Searchable-Encryption%25282%2529.png" width="320" /></a></div><br />To ease the presentation, we can assume that instead of e-mails the server responds with <b>identifiers</b> (URLs).<br /><br />An adversary wins if he breaks token privacy, namely if he recovers keywords from tokens.<br /><br />A file injection attack is one in which an attacker has encrypted e-mails of his choice on the server and can see the tokens which are queried by the client. Put more simply, the server behaves in an honest-but-curious way and stores encrypted e-mails of his choice by spamming the client.<br /><br />At first sight this seems harmless. But by combining it with some leaked e-mails (there are lots of them these days), the authors managed to achieve a 70% recovery rate for a keyword from a token with only 20% of files leaked. This was tested using an Enron email dataset of ~30,000 emails [2].<br /><br />To understand how this works it's important to look at a simpler attack:<br /><br /><b>Binary search attack.</b><br /><br />Suppose an adversary has a keyword universe $K$, whose size $|K|$ is a power of $2$. He can inject $\log_2{|K|}$ files and then recover keywords from every token with $100\%$ success.<br /><br />Each file $F_i$, $i \in [1, \log_2{|K|}]$, contains every keyword $k_j$ for which the $i$'th most significant bit of $j$ is equal to $1$. 
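The injection and recovery steps can be sketched in a few lines of Python - a toy reconstruction of the attack (function names are mine), assuming, as in the paper's threat model, that the server observes which of its injected files come back for each queried token:

```python
import math

def build_injected_files(keywords):
    """File i contains every keyword whose (0-based) index has the
    i'th most significant bit set."""
    n = int(math.log2(len(keywords)))  # number of files to inject
    return [
        {kw for j, kw in enumerate(keywords) if (j >> (n - 1 - i)) & 1}
        for i in range(n)
    ]

def recover_keyword(keywords, matches):
    """Read the keyword index off bit by bit from the match pattern
    (matches[i] = did injected file i appear in the search result?)."""
    j = 0
    for hit in matches:
        j = (j << 1) | int(hit)
    return keywords[j]

keywords = [f"k{j}" for j in range(8)]     # |K| = 8, so only 3 injected files
files = build_injected_files(keywords)
matches = ["k5" in f for f in files]       # the server sees [True, False, True]
print(recover_keyword(keywords, matches))  # k5
```

This agrees with the paper's $|K| = 8$ example: the match pattern $101$ identifies $k_5$.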
To see why this works let's look at an example inspired by the paper. <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-_Cp2A52vlYM/WCH23xAGtgI/AAAAAAAAA5s/Ar02MU_Bm-Ys8hmdHRj_jvbYwBsvsZjsACLcB/s1600/Searchable-Encryption-Result.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="166" src="https://3.bp.blogspot.com/-_Cp2A52vlYM/WCH23xAGtgI/AAAAAAAAA5s/Ar02MU_Bm-Ys8hmdHRj_jvbYwBsvsZjsACLcB/s320/Searchable-Encryption-Result.png" width="320" /></a></div><br />In this case we have $|K| = 8$. Black cells are the keywords inserted in file $i$. Since the search result is $F_1$, $F_3$ ($101$), we conclude that the token was derived from the keyword $k_5$.<br />A server which inserts one email per day can execute the attack in $2$ weeks for a $10,000$-keyword universe.<br /><br /><b>Countermeasure.</b><br /><br />One crucial observation is that we insert $|K|/2$ keywords per file. So one can think that limiting the keywords per file to some threshold $T \ll |K|/2$ will fix the problem.<br /><br />The threshold countermeasure turns out to be ineffective. The authors proved this with an attack which works almost like the first one after splitting the keywords into chunks of size $2T$ (denoted the hierarchical search attack).<br /><br />For $|K| = 5,000$ and $T=200$ an adversary need inject only $131$ files in order to recover keywords for every token.<br /><br /><b>Attacks with partial leakage.</b><br /><br />Sometimes e-mails leak. And when that happens...SE is almost useless against adaptive injection attacks, as the authors show. We say adaptive because the attack requires a forward-insecure SE scheme.<br /><br />What if we want to recover keywords from multiple tokens?<br /><br />The idea is to compute the distribution for each keyword $f^*(k)$ from the leaked files. 
Then the assumption is made that $f^*(k)$ is close to the queried-token distribution $f(t)$ - which turns out to be true later.<br />Next, a small candidate set of keywords is chosen (also called ground truth) and files are injected using the binary search attack. The rest of the tokens are recovered by taking joint distributions with previous tokens.<br /><br />For more attacks and countermeasures which fail I strongly recommend reading the article [3].<br /><br />At the end of the article one can wonder if building secure SE schemes would really help against these access pattern attacks... The authors suggest that an interesting direction is to look into lower bounds on these attacks, but this seems a far from trivial task.<br /><br />[1] <a href="http://crypto.stanford.edu/~dabo/pubs/papers/encsearch.pdf">http://crypto.stanford.edu/~dabo/pubs/papers/encsearch.pdf</a><br />[2] <a href="https://www.cs.cmu.edu/~./enron/">https://www.cs.cmu.edu/~./enron/</a><br />[3] <a href="https://eprint.iacr.org/2016/172">https://eprint.iacr.org/2016/172</a>Dragos Alin Rotaruhttps://plus.google.com/104882118410125117532noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-24106409228362950332016-11-04T13:24:00.000+00:002016-11-14T09:11:47.803+00:00What is SPDZ? Part 3: SPDZ specificsThis blog post is the final in a series of three in which we describe what SPDZ is. The first part can be found <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-1-mpc-circuit.html" target="_blank">here</a>, and the second <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-2-circuit-evaluation.html" target="_blank">here</a>.<br /><br />In this part, we consider what it is that makes SPDZ SPDZ. <br /><h2></h2><br /><h3 style="text-align: justify;">SPDZ MPC</h3><div style="text-align: justify;">Using the term SPDZ is perhaps a little misleading. 
There are actually a few protocols which we call SPDZ because they are really just improvements on the original. As the online phase of SPDZ is already ‘pretty good’, optimisations to the protocol normally involve improving the offline phase.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">In what follows, we talk about the methods used in SPDZ for preprocessing (generating the triples) and then explain some improvements that have been made.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;"><span style="font-size: small;">Original SPDZ</span></h4><div style="text-align: justify;">Here we will outline some of the methods in the original SPDZ protocol, as in [5].</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><i>Preprocessing</i></div><div style="text-align: justify;">In the first SPDZ paper, the authors show how to use somewhat homomorphic encryption (SHE) to generate the triples. This is a (pretty expensive) public key cryptosystem in which the encryption operation is ring homomorphic - that is, \[\textsf{Enc}_{\textsf{pk}}(a)\boxplus\textsf{Enc}_{\textsf{pk}}(b)=\textsf{Enc}_{\textsf{pk}}(a+b)\] and \[\textsf{Enc}_{\textsf{pk}}(a) \boxdot \textsf{Enc}_{\textsf{pk}}(b) = \textsf{Enc}_{\textsf{pk}}(a \cdot b)\] </div><div style="text-align: justify;">(For those who know about FHE: we can do MPC with FHE, as you’d hope; however, at the time of writing no FHE scheme which is competitive with current secret-sharing MPC exists. 
SPDZ is what the authors call the ‘ideal use of FHE in MPC’: raw material is computed locally so that the online computations are minimal and communication overhead is small.)</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">During the offline phase, we also generate lots of random values which are used by the parties to provide input to the circuit.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><i>Sacrificing</i></div><div style="text-align: justify;">In order to check that the adversary has not introduced errors into the triples, for each triple we want to use we have to ‘sacrifice’ a triple by taking another triple from the preprocessing and checking that the third element is the product of the first two using Beaver’s method outlined in the <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-2-circuit-evaluation.html" target="_blank">previous post</a>. Fortunately the method by which triples are generated is very efficient.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><i>ZKPoPKs</i></div><div style="text-align: justify;">Zero-Knowledge Proofs of Plaintext Knowledge (ZKPoPKs) are a way of ensuring parties encrypt as they should: when encrypting in the SHE scheme, there are conditions on the plaintext and noise that need to be met to ensure the ciphertext is well-formed.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><i>MACs</i></div><div style="text-align: justify;">Before any preprocessing computation takes place, the parties agree on some MAC key $\alpha$ with which they MAC all their data and which they secret-share amongst themselves so that no individual party knows the MAC key.<br /><br />This MAC key is used to MAC all the data in the circuit. 
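An authenticated sharing of this kind, and the check performed on an opened value, can be sketched as follows (the modulus and values are purely illustrative, not SPDZ's actual parameters):

```python
import random

P = 2**31 - 1        # illustrative prime modulus

def share(x, n):
    """Additively secret-share x among n parties modulo P."""
    s = [random.randrange(P) for _ in range(n - 1)]
    s.append((x - sum(s)) % P)
    return s

n = 3
alpha = random.randrange(P)              # global MAC key...
alpha_sh = share(alpha, n)               # ...known to nobody, only shared

x = 42
x_sh = share(x, n)                       # shared value
mac_sh = share(alpha * x % P, n)         # shared MAC alpha*x on that value

# Checking an opened value: each party reveals sigma_i = mac_i - alpha_i*x;
# the sigmas sum to zero iff the MAC is consistent with the opened value.
opened = sum(x_sh) % P
sigmas = [(m - a * opened) % P for m, a in zip(mac_sh, alpha_sh)]
assert opened == 42 and sum(sigmas) % P == 0
```

No single party's shares reveal anything about $x$, $\alpha$ or the MAC; only the sum across all parties does.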
For each secret-shared <i>value</i>, there is also a secret-shared MAC on that value.<br /><br />After the circuit has been evaluated, the parties open the secret corresponding to the circuit output and also the MAC on it, and check that the MAC is correct. If an actively corrupt party cheats at any point in the circuit evaluation, the final MAC check reveals this has happened. Note that this means the parties could have been computing on invalid data up to that point.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">In the original paper, each party also MACked the global MAC key with a personal MAC key. This was not needed in subsequent works.</div><br /><h4 style="text-align: justify;">Updated SPDZ: SPDZ2</h4><div style="text-align: justify;">While the online phase of SPDZ was (almost) the same, in SPDZ2 [4] the authors made significant changes to the offline phase of the original protocol. We outline the major changes here.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">They solved an open problem left by the first paper in giving a secure protocol which generated a shared SHE decryption key.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">They also offered various performance enhancements, including the introduction of ‘squares’ and ‘bits’ in the preprocessing, which allowed faster online computations of specific operations in the circuit.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">A major improvement was allowing MACs to be checked on data before the end of the computation. This involved checking the MAC of a public value that depended on the underlying data. 
The big advantage of this is that the preprocessed data MACked under the same key can still be used after this MAC check; this was not possible in the original protocol, since the check there revealed the MAC key.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The new protocol had covert and malicious variants, the former having weaker parameter requirements. A covertly secure protocol ensures that a cheating adversary escapes detection with probability at most $1/n$ for some $n \in \mathbb{N}$. The reason for offering this variant was that it was believed to match the real-world situation more closely.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">New SPDZ: MASCOT</h4><div style="text-align: justify;">The most recent improvement to the SPDZ protocol is a work known as MASCOT [7]. As with SPDZ2, the online phase of SPDZ was left as it was; this paper improves on the offline phase, showing how to use oblivious transfer (OT) to generate preprocessed data much faster than by using SHE. 
It is authored by Bristol’s own Marcel, Emmanuela and Peter, and appeared at <a href="https://www.sigsac.org/ccs/CCS2016/" target="_blank">CCS</a> last week.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">(Oblivious transfer is a process in which one party inputs two strings and a second party chooses exactly one; this is done in such a way that the first party doesn’t know which string the second chose, and the second does not learn anything about the string it didn’t choose.)</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The authors of MASCOT note that the heavy machinery of public-key cryptography required in the original SPDZ paper to generate the preprocessing (namely, the use of SHE) prevents the communication from being a bottleneck since the local computation takes so long.<br /><br />The protocol is actively secure and performs significantly faster than previous SPDZ2 implementations, with much lower communication overhead.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">The Future of SPDZ</h4><div style="text-align: justify;">MASCOT has only just been released, and there are already whispers of a new paper. Watch this space...</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"><br /><h4>References</h4></div><div style="text-align: justify;">[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. <a href="http://www.eng.tau.ac.il/%7Ebennyap/pubs/AGCfull.pdf" target="_blank">How to garble arithmetic circuits</a>. 52nd FOCS, pp120–129. IEEE Computer Society Press, 2011</div><div style="text-align: justify;">[2] D. Beaver. <a href="https://link.springer.com/chapter/10.1007%2F3-540-46766-1_34" target="_blank">Efficient Multiparty Protocols using Circuit Randomisation</a>. In J. 
Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.</div><div style="text-align: justify;">[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. <a href="https://eprint.iacr.org/2010/514" target="_blank">Semi-homomorphic encryption and multiparty computation</a>. In EUROCRYPT, pp169-188, 2011.</div><div style="text-align: justify;">[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. <a href="https://eprint.iacr.org/2012/642" target="_blank">Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits</a>. In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol. 8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.</div><div style="text-align: justify;">[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. <a href="https://eprint.iacr.org/2011/535" target="_blank">Multiparty computation from somewhat homomorphic encryption</a>. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.</div><div style="text-align: justify;">[6] I. Damgard and S. Zakarias. <a href="https://eprint.iacr.org/2012/512" target="_blank">Constant-overhead secure computation of</a></div><div style="text-align: justify;"><a href="https://eprint.iacr.org/2012/512" target="_blank">boolean circuits using preprocessing</a>. In TCC, pp621-641, 2013.</div><div style="text-align: justify;">[7] M. Keller and E. Orsini and P. Scholl. <a href="https://eprint.iacr.org/2016/505" target="_blank">MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer</a>. Cryptology ePrint Archive, Report 2016/505, 2016.</div><div style="text-align: justify;">[8] J. Buus Nielsen, P. Nordholt, C. Orlandi, and S. Burra. <a href="https://eprint.iacr.org/2011/091" target="_blank">A new approach to practical active-secure two-party computation</a>. 
In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in Cryptology CRYPTO 2012, volume 7417 of Lecture Notes in Computer Science, pp681-700. Springer Berlin Heidelberg, 2012.</div><div style="text-align: justify;">[9] A. Shamir. <a href="https://dl.acm.org/citation.cfm?id=359176" target="_blank">How to Share a Secret</a>. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.</div><div style="text-align: justify;">[10] A. Yao. <a href="https://dl.acm.org/citation.cfm?id=1382944" target="_blank">How to generate and exchange secrets</a>. In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.</div><div style="text-align: justify;"><br /></div>Tim Woodnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-63522857802012812242016-11-03T12:27:00.000+00:002016-11-03T12:27:15.193+00:00Study Group - Dedup est MachinaThe title of the paper is '<a href="http://www.ieee-security.org/TC/SP2016/papers/0824a987.pdf" target="_blank">Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector</a>'. <br /><br />I saw the paper being presented at the IEEE Security and Privacy Symposium, and it was one of the more enjoyable talks due to the nature of the exploit, and the enthusiasm of Erik Bosman presenting it. It also won a 'PWNIE' award for the 'most innovative research on hacking' at Black Hat USA this year.<br /><br />The idea of the paper is simple - to combine the Memory Deduplication exploit with a Rowhammer attack. <br /><br />They use this to craft a JavaScript-based attack against the Microsoft Edge browser, to gain arbitrary memory read and write access in the browser.<br /><br /><h3><b>Memory Deduplication</b></h3><br />The idea is to reduce the total memory footprint of a running system by sharing memory pages with identical content across independent processes. <br /><br />A good example of this is the page cache in modern operating systems. 
The page cache stores a single cached copy of file system contents in memory, and shares the copy across different processes. <br /><br />In running processes, two or more pages with the same content in run-time memory are always deduplicated, even if the pages are completely unrelated, and their equivalence is fortuitous. <br /><br />So, in order to keep a single copy of a number of identical pages, the memory deduplication system needs to perform three tasks:<br /><ol><li><i>Detect</i>. The system needs to detect memory pages with the same content, usually done at regular and predetermined intervals during normal system operations</li><li><i>Only Keep One</i>. We only want to keep one physical copy, and return the others to the memory allocator; to do this, the deduplication system updates the page-table entries of the owning processes, so that the virtual addresses now point to a single shared copy; these entries are also marked as read-only to support copy-on-write semantics</li><li><i>Create Private Copy on Write</i>. Specifically, when one of the owning processes writes to the read-only page, a copy-on-write page fault occurs; at this point, the memory deduplication system can create a private copy of the page, and map it into the corresponding page table entries of the faulting process</li></ol>From Windows 8.1 onwards, it's known as memory combining.<br /><br /><h3><b>Memory Deduplication as a Side Channel</b></h3><br />So you may have spotted how you could exploit this as a side channel. Writing to a shared page from any of the owning processes results in a page fault and a subsequent page copy.<br /><br />Due to these additional expensive operations, a write to a shared page takes significantly longer compared to a write to a regular page (by significantly longer, we're talking up to one order of magnitude). 
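The cost asymmetry can be captured in a toy model (the "costs" below are entirely illustrative stand-ins for real timings, not measurements from the paper):

```python
# Toy model: a write to a shared, copy-on-write page pays for a page fault
# plus a page copy on top of the write itself.
WRITE_COST, COPY_COST = 1, 10

class Page:
    def __init__(self, content, shared=False):
        self.content, self.shared = content, shared

    def write(self, new_content):
        cost = WRITE_COST + (COPY_COST if self.shared else 0)
        self.shared = False          # the faulting process now owns a private copy
        self.content = new_content
        return cost

probe = Page(b"guess of the target page", shared=True)   # was deduplicated
plain = Page(b"unrelated content")
assert probe.write(b"x") > plain.write(b"x")              # the measurable gap
```
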
This timing difference provides an attacker with a side channel to detect whether a given page exists in the system.<br /><br />To do this, the attacker has four easy steps:<br /><ol><li><i>Craft</i> a page with the exact same content as the target page</li><li><i>Wait</i> for the memory deduplication system to do its thing</li><li><i>Write</i> to the crafted page</li><li><i>Time</i> how long it takes for the write operation to finish</li></ol>If the write takes longer than a write to a non-deduplicated page, the attacker can conclude that a page with the same content exists.<br /><br />So just using this, an attacker could detect whether a user was visiting a particular web page, or running a particular program.<br /><br />Of course, false positives are possible due to noise etc, so the attacker can use redundancy (repeated attempts) to disclose the intended information in a reliable way.<br /><br />At the moment, the memory deduplication side channel is looking a little bit pathetic, as it's a slow single-bit side channel that can only be used for leaking a limited number of bits from a victim process.<br /><br />However, we can abuse memory deduplication to get some stronger primitives.<br /><br /><h3><b>Primitives</b></h3><h4><i>Primitive #1: Alignment Probing</i> </h4><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-qLKN98zXp1M/WBsp-6PYplI/AAAAAAAAAc8/ONgSw9DmHFs17MVcZX2frCzhLXDfilP3ACLcB/s1600/dedup_ap.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="143" src="https://2.bp.blogspot.com/-qLKN98zXp1M/WBsp-6PYplI/AAAAAAAAAc8/ONgSw9DmHFs17MVcZX2frCzhLXDfilP3ACLcB/s320/dedup_ap.png" width="320" /></a></div><br />First up, we have a primitive that allows a byte-by-byte probing of secret information by controlling its alignment in memory. 
So, this primitive is applicable when the secret data has <i>weak alignment properties.</i> <br /><br />Imagine in this case that the secret is a password stored in memory immediately after a blob of attacker-provided input bytes. What the attacker can do is provide some more input bytes, which effectively push the second part of the secret out of the target page. Now the attacker can brute force the first part of the secret (e.g. one byte) using deduplication. <br /><br />All the attacker needs to do now is provide fewer input bytes to bring another portion of the secret back into the page. Just like before, the attacker uses deduplication to get the rest of the secret. For larger secrets, you just keep repeating these steps until all of the secret is known.<br /><br />This alignment probing primitive is apparently very effective, and is used to disclose code pointers in Microsoft Edge, and a password hash in nginx.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><br /><h4><i>Primitive #2: Partial Reuse</i></h4><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-fkxYEREb5tQ/WBsp-3Tq1JI/AAAAAAAAAc4/naGcKgoUU3sYFL85jTA0Yc-Ltw_YxeHAQCEw/s1600/dedup_pr.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="153" src="https://4.bp.blogspot.com/-fkxYEREb5tQ/WBsp-3Tq1JI/AAAAAAAAAc4/naGcKgoUU3sYFL85jTA0Yc-Ltw_YxeHAQCEw/s320/dedup_pr.png" width="320" /></a></div><br />When the secret has strong memory alignment properties, the first primitive fails.<br /><br />Up next is a primitive that can be used when attacker-controlled input can partially overwrite stale secret data with <i>predictable reuse properties</i>. User-space memory allocators encourage memory reuse and do not normally zero out deallocated buffers for performance reasons.<br /><br />This means that a target application 
often reuses the same memory page and selectively overwrites the content of a reused page with new data.<br /><br />So, same set up as before. This time, when the attacker gives a larger input, it overwrites the first part of the secret. Now the attacker can brute force the second part using memory deduplication.<br /><br />Once this is known, the attacker can brute force the first part of the secret, by deduplicating against a page without an overwritten secret.<br /><br />This primitive is apparently fairly common in practice, and is used in the paper to break heap ASLR (Address Space Layout Randomisation) in nginx.<br /><br /><h4><i>Primitive #3: Birthday Heap Spray</i></h4><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-m5F623-WfCA/WBsp-5BAi8I/AAAAAAAAAdE/QOkHyNAexjQJHXTqOibol3kNifLW6gZ9wCEw/s1600/dedup_bhs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="82" src="https://4.bp.blogspot.com/-m5F623-WfCA/WBsp-5BAi8I/AAAAAAAAAdE/QOkHyNAexjQJHXTqOibol3kNifLW6gZ9wCEw/s400/dedup_bhs.png" width="400" /></a></div>Last one. This primitive can leak a secret even when the attacker has no control over memory alignment or reuse. It relies on a generative approach that revolves around the birthday paradox, which we all know and love.<br /><br />This primitive is applicable when the attacker can force the application to controllably<i> spray</i> target secrets over memory.<br />Up until now we have only assumed there is one secret we want to leak. <br /><br />If a partially masked secret has $P$ possible values, we use memory deduplication to perform $1 \times P$ comparisons between the $P$ probe pages and the single target page - essentially brute forcing the secret.<br /><br />For large $P$, a very large amount of memory is required, for the attack, and for the redundancy. 
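A toy calculation makes the gap concrete: a single-secret brute force needs probe pages covering essentially the whole value space, whereas with $S$ sprayed secrets far fewer probes already give a good chance of a hit (the numbers below are illustrative, not from the paper):

```python
import math

def hit_probability(secrets, probes, space):
    """Chance that at least one of `probes` guesses collides with at least
    one of `secrets` sprayed values, all uniform in a space of given size."""
    return 1 - math.exp(-secrets * probes / space)

N = 2**20                     # toy space of possible secret values
S = 2**10                     # sprayed secret pages
P_pages = 2**10               # probe pages
assert hit_probability(1, P_pages, N) < 0.01      # single secret: hopeless
assert hit_probability(S, P_pages, N) > 0.6       # spray: 1 - 1/e ~ 0.63
```
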
However, if the attacker can cause the target application to generate many secrets, memory deduplication provides a much stronger primitive than brute forcing.<br /><br />For instance, an attacker can generate a large number of secret heap pointers by creating a large number of objects from JavaScript, each referencing another object.<br /><br />So, assume the attacker causes the application to generate $S$ such pages, each with a different secret pointer. The attacker now also creates $P$ probe pages, with $P$ being roughly the same size as $S$. <br /><br />Each probe page uses the same encoding as the secret pages, except that, not knowing the secret pointers, the attacker needs to 'guess' their values. Each probe page contains a different guessed value. The idea is to find at least one of the probe pages matching <i>any</i> of the secret pages. So we have a classic Birthday Problem, where the secret and probe values are the birthdays.<br /><br />Since memory deduplication compares any page with any other page in each deduplication pass, it automatically tests all $P$ probe pages against the $S$ target secret pages.<br /><br />A hit on any of the $P$ possible values immediately exposes a target secret. This birthday primitive reduces the memory requirements of the attack by a factor of $S$. It's especially useful when leaking the location of randomised pointers. For exploitation purposes, it's not important which pointer the attacker hits, as long as at least one of them is hit.<br /><br />This primitive is used to leak a randomised heap pointer in Microsoft Edge, and also used to craft a reliable Rowhammer exploit.<br /><br /><b>Rowhammer Exploit</b><br /><br />Anyway, now we get to the good bit; the Rowhammer exploit. The Rowhammer bug was first publicly disclosed in June 2014 by Kim et al. 
However, it was not until March 2015 that someone published a working Linux kernel privilege escalation exploit using Rowhammer.<br /><br />Essentially, Rowhammer is a DRAM vulnerability that allows an attacker to flip bits in a (victim) memory page by repeatedly reading from other (aggressor) memory pages. These bit flips are deterministic, meaning that once a vulnerable memory location is identified, it is possible to reproduce the bit flip patterns by reading the same set of aggressor pages again.<br /><br />There are two variations of the Rowhammer attack. Single-sided Rowhammer repeatedly activates a single row to corrupt its neighbouring rows' cells. Double-sided Rowhammer targets a single row by repeatedly activating both its neighbouring rows. Although double-sided is more effective than single-sided in practice, it requires you to know not only the location of what you want to target, but also the two adjacent memory pages. Therefore, the authors of this paper choose to use the single-sided Rowhammer attack.<br /><br />So, if you want to use the Rowhammer attack to do something useful, rather than to just corrupt random memory pages, then first you need to find a vulnerable memory location. To do this, you can allocate a large array filled with doubles. If you do it right, you can end up with an encoded double value such that all bits are set to 1.<br /><br />Then, you find some eviction sets so you can bypass the cache, and you hammer 32 pages at a time (though it doesn't say where they get the number 32 from). By hammer, I mean you read from each page two million times before moving to the next page. Once you've gone through all 32 pages, you check the entire array for bit flips. After scanning a sufficient number of pages, you will know which bits can be flipped at which offsets.<br /><br />Next, you want to determine what to place in these vulnerable memory locations to craft the exploit. 
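The "check the array for bit flips" step just described can be sketched as toy scanning logic over a buffer initialised to all-ones bytes (the page size and layout here are illustrative):

```python
PAGE = 4096

def find_flips(buf):
    """Report (page, byte offset, bit) for every bit that is no longer 1
    in a buffer that was initialised to all-0xFF bytes before hammering."""
    flips = []
    for off, byte in enumerate(buf):
        diff, bit = byte ^ 0xFF, 0
        while diff:
            if diff & 1:
                flips.append((off // PAGE, off % PAGE, bit))
            diff >>= 1
            bit += 1
    return flips

buf = bytearray(b"\xff" * 2 * PAGE)
buf[PAGE + 4] ^= 0b100            # simulate a flip: page 1, offset 4, bit 2
assert find_flips(buf) == [(1, 4, 2)]
```
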
The idea behind the Rowhammer attack in this paper is to place some data in the array which, after a bit flip, can yield a reference to a controlled counterfeit object. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-spwxsML9RFg/WBsp_APA4WI/AAAAAAAAAdE/JW2k698qMbk5m5-xlT5ORKGLcUxOvwJHACEw/s1600/dedup_rh.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="183" src="https://3.bp.blogspot.com/-spwxsML9RFg/WBsp_APA4WI/AAAAAAAAAdE/JW2k698qMbk5m5-xlT5ORKGLcUxOvwJHACEw/s320/dedup_rh.png" width="320" /></a></div><br /><br />So once we find out which bit is flipped, we can store a valid object reference at the vulnerable memory location and pivot to a counterfeit object with Rowhammer.<br /><br />So that's the attack - using memory deduplication to leak the code and heap pointers, and Rowhammer to pivot them to a counterfeit object. In practice, the paper says it takes about 30 minutes for the two deduplication passes and 508MB of memory. It also takes 1GB of memory to find bit flips, and 32MB to find the cache eviction sets. The timing of the Rowhammer depends on the vulnerable DRAM chips, so they say it varies from seconds to hours in practice.<br /><br /><br /><b>Mitigation </b><br /><br />So finally, the last part of the paper talks about how one would prevent attacks like this. A fairly obvious observation would be to just not deduplicate, as that prevents all exploits of this kind. However, deduplication contributes heavily to using physical memory efficiently, so we would like to keep it if possible.<br /><br />This paper suggests that you only deduplicate <i>zero pages</i>. A zero page here is a page whose content is all zeros - by far the most common page content on a running system - rather than a page at address zero. 
The paper doesn't go into much detail about why, but claims they've done experiments to show that it's nearly as efficient as full deduplication, while entirely eliminating the possibility of any attacks targeting memory deduplication.<br /><br /><b>Fin</b><br /><br />So that's the end of the paper; they finish by saying memory deduplication is dangerous in the wrong hands, and they've been working with their contacts at Microsoft to find an immediate solution.<br /><br />I had a look round on the internet but no-one has published anything further since the paper, so who knows?Joey Greenhttps://plus.google.com/110336179333524481287noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-28191143903540878682016-10-28T17:16:00.000+01:002016-10-28T17:16:02.776+01:00Study Group: Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems<html><head><title>MathJax TeX Test Page</title><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script><script async="" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script></head> <body> <div align="justify">The first study group on side-channel analysis in the academic year 2016/2017 is "Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems" by Stefan Dziembowski, Sebastian Faust, Gottfried Herold, Anthony Journault, Daniel Masny and Francois-Xavier Standaert, published at Crypto this year. There are two main reasons why I chose to present it: it is somewhat close to my personal research and, generally speaking, I like to read about different areas influencing each other. In this particular case, learning problems which are usually associated with post-quantum cryptography (more specifically with lattice-based cryptography) have been used to prove a security result in leakage-resilient cryptography. 
</div> <br> <div align="justify">In one sentence and very loosely speaking, the content of the paper which has been covered in the study group can be summarised as: the authors construct a fresh re-keying scheme and show it is secure under the LPN assumption. The remainder of this blog post is structured as a number of questions and answers on the previous statement, which will hopefully clarify and detail its meaning. </div> <br> <div align="justify"><b>Q: what is a fresh re-keying scheme? </b></div><div align="justify"><i>A:</i> intuitively, a fresh re-keying scheme is a set of cryptographic algorithms which allows the generation of session keys from a master secret key and some publicly known randomness. A session key is used to encrypt a message and then discarded. The following diagram nicely represents what is going on. <br><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-0UkMhGrN7s8/WBNsMoK0rdI/AAAAAAAACgI/lZbjeaaeQfAO_Lf70kI6SWQ59sHGWct3ACLcB/s1600/rk.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-0UkMhGrN7s8/WBNsMoK0rdI/AAAAAAAACgI/lZbjeaaeQfAO_Lf70kI6SWQ59sHGWct3ACLcB/s400/rk.jpg" width="400" height="316" /></a></div><br>Protecting block ciphers (or other cryptographic primitives) against DPA is an expensive procedure that introduces overhead in the design and whose security is often based on heuristic arguments. Such attacks exploit the dependence of multiple power traces on the same secret key to retrieve it. Hence, fresh re-keying schemes represent a solution that tries to structurally avoid the threat by using each session key only once. Both the receiver and the sender can compute the same session key by means of the shared master secret key and some randomness associated with the ciphertext. 
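Purely as a sketch of that data flow (SHA-256 below is a stand-in for the scheme's actual key-derivation map, which in the paper is an LPN-friendly function, not a hash):

```python
import hashlib, os

def gen_sk(msk, rand):
    """Toy GenSK: derive a one-time session key from the master secret key
    and the public randomness sent alongside the ciphertext."""
    return hashlib.sha256(msk + rand).digest()

msk = os.urandom(16)                     # shared master secret key
r = os.urandom(16)                       # fresh public randomness
assert gen_sk(msk, r) == gen_sk(msk, r)  # sender and receiver derive the same key
```
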
At that point, the underlying block cipher only needs to retain security against SPA, while the algorithm by which the session key is computed should resist DPA, to prevent the adversary from gaining information about the master secret key through leakage. The overall scheme is designed to be more efficient than one which applies DPA countermeasures directly to the block cipher, because the GenSK algorithm is built to be easier to protect. </div> <br> <div align="justify"><b>Q: what is the LPN assumption? </b></div><div align="justify"><i>A:</i> the Learning Parity with Noise (LPN) problem asks to find a vector $\mathbf{k}\in\mathbb{Z}_2^n$ given query access to an oracle that outputs $$ (\mathbf{r},\langle\mathbf{r},\mathbf{k}\rangle\oplus e) \in \mathbb{Z}_2^n\times\mathbb{Z}_2 $$ where $\mathbf{r}\in\mathbb{Z}_2^n$ is a uniformly random vector and $e\leftarrow \mathcal{B}_\tau$ is an error bit drawn from the Bernoulli distribution with a fixed parameter $0<\tau<1/2$, <i>i.e.</i> $\mathbb{P}(e=1)=\tau$. The decisional version, instead, asks to distinguish the above distribution from the uniform one on $\mathbb{Z}_2^n\times\mathbb{Z}_2$. The first step in the overall proof is to define a similar version of such a problem, which the authors call the Learning Parity with Leakage (LPL) problem, in which everything is the same bar the distribution of the noise, which is now taken to be the Gaussian distribution over the reals. Note that the second component of the LPL distribution is then a real number. It is shown that: $$ \text{LPN is hard} \Rightarrow \text{LPL is hard}. $$ </div> <br> <div align="justify"><b>Q: what does it mean for a fresh re-keying scheme to be secure? </b></div><div align="justify"><i>A:</i> since we deal with an environment where leakage exists, we should specify both what kind of security model the proof follows and what kind of leakage the adversary is allowed to query. 
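As a brief aside, the LPN oracle defined earlier is easy to simulate, which can help build intuition. The following minimal Python sketch (dimension and noise rate are arbitrary toy choices) samples noisy inner products exactly as in the definition:

```python
import random

def lpn_sample(k, tau):
    """One LPN sample (r, <r,k> XOR e) with e ~ Bernoulli(tau)."""
    n = len(k)
    r = [random.randrange(2) for _ in range(n)]          # uniform r in Z_2^n
    inner = sum(ri * ki for ri, ki in zip(r, k)) % 2     # <r, k> over Z_2
    e = 1 if random.random() < tau else 0                # Bernoulli(tau) error
    return r, inner ^ e

# A secret k and a handful of noisy samples with tau = 0.1.
k = [random.randrange(2) for _ in range(16)]
samples = [lpn_sample(k, 0.1) for _ in range(5)]
```

With $\tau=0$ the samples are noiseless linear equations and $\mathbf{k}$ is recoverable by Gaussian elimination; the Bernoulli noise is exactly what makes the problem (conjecturally) hard.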
<br><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-9_kW4t6EDlQ/WBNtk2D14RI/AAAAAAAACgM/SN_5sXeoYyMaIsbsyTHfAZRmZ0LMRldawCLcB/s1600/sec.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-9_kW4t6EDlQ/WBNtk2D14RI/AAAAAAAACgM/SN_5sXeoYyMaIsbsyTHfAZRmZ0LMRldawCLcB/s400/sec.jpg" width="400" height="171" /></a></div><br>Security can be stated as the indistinguishability game in the above picture. In the real game on the left, the adversary plays with and queries a real oracle, that is to say an oracle that generates all keys and randomness according to the scheme to be secured. Hence the leakage is also computed on the sensitive variables. Instead, the right-hand side depicts the random game: an ideal oracle generates keys at random, which are then passed to a simulator which computes leakage on them. For the security claim to be valid, the (computational) adversary shouldn't be able to distinguish which oracle she is playing with. <br>The leakage model is the $t$-noisy probing model. If you imagine a circuit, such a model can be depicted as the adversary being allowed to put probes on up to $t$ wires and to read a noisy version of the values being carried on them. In the specific case of their construction, since there isn't a circuit from which the adversary can get leakage, the authors specify a set of variables on which noisy bit-selector functions are applied. </div> <br> <div align="justify"><b>Q: what is their fresh re-keying scheme? </b></div><div align="justify"><i>A:</i> although the scheme, called $\Pi_{noisy}$ in the paper, is composed of a set of algorithms, I only specify the core of GenSK, which is responsible for generating the session key. I assume the inputs are some randomness $R$ and a set of shares of the master secret key $\{msk_i\}_{i\leq d}$, which are needed in order to secure the algorithm against DPA. 
$$ \begin{align*} 1.\ & u_i \leftarrow R\cdot msk_i \\ 2.\ & u \leftarrow \sum_{i\leq d} u_i \\ 3.\ & sk \leftarrow H(u) \\ \end{align*} $$ The sum in the second line is computed iteratively as $((\dots (u_1 +u_2)+u_3)+\dots )+u_d$ for security reasons, while the $H$ in the third line is a hash function modelled as a random oracle. In the game, the adversary is given $sk$ and $R$ and can query leakage on a certain number of bits of: $msk_i$, $R\cdot msk_i$ and $\sum u_i$. A further step not specified in the previous list is the application of a refreshing algorithm to the shares of the master secret key, in such a way that they look like new shares when the next session key is generated. Finally, the authors prove the following: $$ \text{LPL is hard} \Rightarrow \Pi_{noisy} \text{ is secure}. $$ </div> <br> <div align="justify">For many other details, the paper is available on <a href="https://eprint.iacr.org/2016/573" target="_blank">ePrint</a>. </div> </body></html>Marco Martinolihttps://plus.google.com/106563453646172877995noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-85883375209169490492016-10-28T13:24:00.000+01:002016-11-14T09:13:13.699+00:00What is SPDZ? Part 2: Circuit Evaluation<div style="text-align: justify;">This blog post is the second in a series of three in which we describe what SPDZ is. 
The first part can be found <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-1-mpc-circuit.html" target="_blank">here</a> and the third <a href="https://bristolcrypto.blogspot.co.uk/2016/11/what-is-spdz-part-3-spdz-specifics.html" target="_blank">here</a>.</div><div style="text-align: justify;"><br /></div>In this part we discuss how we perform the additions and multiplications of shared secret values as in the SPDZ protocol.<br /><br /><h3>The Preprocessing Model </h3>The preprocessing model is a framework in which we assume the existence of a 'trusted dealer' who distributes 'raw material' to the parties before any circuit evaluation takes place. The reason for doing this is that if we have the 'raw material' already in place, the circuit can be evaluated very efficiently. <br /><br />We realise this by splitting the protocol into two phases: a 'preprocessing' (or 'offline') phase where we generate the preprocessed data, and an 'online' phase where we evaluate the circuit. (The term 'offline' is a little misleading, since the parties still communicate.)<br /><br /><h3>Evaluating the Circuit</h3>We now suppose that the parties have agreed on some arithmetic circuit they want to evaluate jointly. <br /><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">Providing input</h4><div style="text-align: justify;">The first step of the circuit evaluation is to read in values from (some of) the parties. In SPDZ, this means getting each party $P_i$ with input $x^i$ (the superscript $i$ is an index to avoid confusion with $x_i$, which denotes a share of $x$) to secret-share its input $x^i$ with the other parties.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">To do this, we need to use a share of a random value: each party $P_j$ holds some $r_j$, and the value $r = \sum_j r_j$ is uniformly random and not known by any party. First, every party sends their share $r_j$ of $r$ to party $P_i$. 
This is called a 'partial opening' because $P_i$ now discovers the value of $r$ (by summing the shares).<br /><br />Party $P_i$ then broadcasts the value $x^i - r$. Party $P_1$ sets its share of $x^i$ as $x_1^i=r_1 + x^i - r$, and $P_j$ for $j > 1$ sets its share of $x^i$ as $x_j^i=r_j$. (In practice, it is not always $P_1$ who does a different computation.)<br /><br />Thus we have turned $x^i$ into $\langle x^i \rangle$.<br /><br /></div><h4 style="text-align: justify;">Addition</h4><div style="text-align: justify;">Suppose we want to compute some arithmetic circuit on our data; that is, some series of additions and multiplications. We have the following very useful properties of an additive sharing scheme:</div><ul style="text-align: justify;"><li>If we have some secret-shared field elements $\langle a \rangle$ and $\langle b \rangle$, so party $P_i$ has $a_i$ and $b_i$, each party can locally compute $a_i + b_i$, and hence, since $\sum_i a_i + \sum_i b_i = \sum_i (a_i+b_i)$, they obtain a secret sharing $\langle a + b \rangle$ of the value $a+b$.</li><li>If each party multiplies its share by some public value $\alpha$, then since $\sum_i \alpha a_i = \alpha \sum_i a_i = \alpha a$ the parties obtain a secret sharing $\langle \alpha a \rangle$ of the value $\alpha a$.</li></ul><div style="text-align: justify;"><br /></div><div style="text-align: justify;">In other words, the secret-sharing is linear; this is good because it means there is no communication cost for doing these operations. 
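Both the input-providing step and these linearity properties can be checked with a short Python sketch; the field modulus and party count here are arbitrary toy choices for illustration only.

```python
import random

P = 101  # toy prime field F_P, for illustration only

def share(x, n):
    """Additively share x in F_P among n parties."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def open_value(shares):
    """Reconstruct a value from all its additive shares."""
    return sum(shares) % P

n = 3
# Preprocessed random value r: each party P_j holds r_j.
r_shares = share(random.randrange(P), n)
r = open_value(r_shares)           # partially opened to the input party

x = 42                             # the input party's secret x^i
delta = (x - r) % P                # broadcast value x^i - r
# P_1 adds the broadcast value to its share of r; all others keep r_j.
x_shares = [(r_shares[0] + delta) % P] + r_shares[1:]
assert open_value(x_shares) == x

# Linearity: adding shares locally yields a sharing of the sum.
y_shares = share(7, n)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
assert open_value(sum_shares) == (42 + 7) % P
```

Note that no communication beyond the single broadcast is needed: the additions happen entirely locally, exactly as described above.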
Unfortunately, we have to do a bit more work to multiply secret-shared elements.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">Multiplication</h4><div style="text-align: justify;">In 1991, Donald Beaver [2] observed the following: suppose we want to compute $\langle x y \rangle$ given some $\langle x \rangle$ and $\langle y \rangle$, and we already have three secret-shared values, called a ‘triple’, $\langle a \rangle$, $\langle b \rangle$ and $\langle c \rangle$ such that $c = a b$. Then note that if each party broadcasts $x_i-a_i$ and $y_i - b_i$, then each party $P_i$ can compute $x-a$ and $y-b$ (so these values are publicly known), and hence compute \[ z_i=c_i + (x-a) b_i + (y-b) a_i \] with one designated party (say $P_1$) additionally adding the public value $(x-a)(y-b)$ to its share, so that it is counted exactly once. Now observe that if we add all of these up, we get \[\sum_i z_i = c + (x-a)b + (y-b)a + (x-a)(y-b) = xy\] and so we have a secret sharing $\langle z \rangle$ of $xy$.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The upshot is that if we have lots of triples, then we can perform many multiplications on secret-shared data and hence we can compute any arithmetic circuit. (Note that a triple cannot be reused, because $x-a$ and $y-b$ are made public in the process above, so a second use would reveal information about the secrets we are trying to multiply.)</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Importantly, observe that not only do these triples not depend on the input secret-shared values they are used to multiply, they are also completely independent of the circuit to be evaluated. This means that we can generate these triples at any point prior to evaluating the circuit. 
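Beaver's trick is short enough to verify end-to-end in Python. In this toy sketch the triple is generated in the clear purely for illustration; in an actual protocol it would come from the preprocessing phase, and the field modulus and party count are arbitrary choices.

```python
import random

P = 101  # toy prime field, for illustration only

def share(x, n):
    """Additively share x in F_P among n parties."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def open_value(shares):
    return sum(shares) % P

n = 3
x, y = 12, 34
x_sh, y_sh = share(x, n), share(y, n)

# Preprocessed Beaver triple with c = a*b (generated in the clear here).
a, b = random.randrange(P), random.randrange(P)
a_sh, b_sh, c_sh = share(a, n), share(b, n), share(a * b % P, n)

# Each party broadcasts x_i - a_i and y_i - b_i, so everyone learns:
eps = sum(x_sh[i] - a_sh[i] for i in range(n)) % P   # x - a, public
rho = sum(y_sh[i] - b_sh[i] for i in range(n)) % P   # y - b, public

# Local computation of the product shares; the public term eps*rho is
# added by a single party so that it appears exactly once in the sum.
z_sh = [(c_sh[i] + eps * b_sh[i] + rho * a_sh[i]) % P for i in range(n)]
z_sh[0] = (z_sh[0] + eps * rho) % P

assert open_value(z_sh) == (x * y) % P
```

Only two public values per multiplication are broadcast, which is what makes the online phase so cheap.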
The values of $a$, $b$ and $c$ are not known by any party when generated - each party only knows a share of each of 'some values' for which they are told this relation holds.</div><div style="text-align: justify;"><br /></div>Moreover, if these triples are generated in the offline phase at some point before the circuit evaluation, since addition, scalar multiplication and field multiplication are inexpensive in terms of communication and computation, the online phase is both highly efficient and information-theoretically secure.<br /><br />Combining the above, we have now established roughly how SPDZ evaluates an arithmetic circuit.<br /><br /><i><b>Next time</b></i>: In <a href="https://bristolcrypto.blogspot.co.uk/2016/11/what-is-spdz-part-3-spdz-specifics.html" target="_blank">the final part</a>, we will look at what makes SPDZ different from other MPC protocols which achieve the same thing.<br /><br /><h4>References </h4><div style="text-align: justify;">[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. <a href="http://www.eng.tau.ac.il/%7Ebennyap/pubs/AGCfull.pdf" target="_blank">How to garble arithmetic circuits</a>. 52nd FOCS, pp120–129. IEEE Computer Society Press, 2011</div><div style="text-align: justify;">[2] D. Beaver. <a href="https://link.springer.com/chapter/10.1007%2F3-540-46766-1_34" target="_blank">Efficient Multiparty Protocols using Circuit Randomisation</a>. In J. Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.</div><div style="text-align: justify;">[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. <a href="https://eprint.iacr.org/2010/514" target="_blank">Semi-homomorphic encryption and multiparty computation</a>. In EUROCRYPT, pp169-188, 2011.</div><div style="text-align: justify;">[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. 
<a href="https://eprint.iacr.org/2012/642" target="_blank">Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits</a>. In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol. 8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.</div><div style="text-align: justify;">[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. <a href="https://eprint.iacr.org/2011/535" target="_blank">Multiparty computation from somewhat homomorphic encryption</a>. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.</div><div style="text-align: justify;">[6] I. Damgard and S. Zakarias. <a href="https://eprint.iacr.org/2012/512" target="_blank">Constant-overhead secure computation of boolean circuits using preprocessing</a>. In TCC, pp621-641, 2013.</div><div style="text-align: justify;">[7] M. Keller and E. Orsini and P. Scholl. <a href="https://eprint.iacr.org/2016/505" target="_blank">MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer</a>. Cryptology ePrint Archive, Report 2016/505, 2016.</div><div style="text-align: justify;">[8] J. Buus Nielsen, P. Nordholt, C. Orlandi, and S. Burra. <a href="https://eprint.iacr.org/2011/091" target="_blank">A new approach to practical active-secure two-party computation</a>. In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in Cryptology – CRYPTO 2012, volume 7417 of Lecture Notes in Computer Science, pp681-700. Springer Berlin Heidelberg, 2012.</div><div style="text-align: justify;">[9] A. Shamir. <a href="https://dl.acm.org/citation.cfm?id=359176" target="_blank">How to Share a Secret</a>. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.</div><div style="text-align: justify;">[10] A. Yao. <a href="https://dl.acm.org/citation.cfm?id=1382944" target="_blank">How to generate and exchange secrets</a>. 
In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.</div><div style="text-align: justify;"><br /></div>Tim Woodnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-2857246783304155242016-10-21T13:29:00.002+01:002016-11-14T09:09:58.856+00:00What is SPDZ? Part 1: MPC Circuit Evaluation Overview<div style="text-align: justify;">This blog post is the first in a series of three in which we look at what MPC circuit evaluation is, outline how MPC protocols in the so-called 'preprocessing model' work, and finally cover the specifics of SPDZ. They will come in weekly instalments. </div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">In this part, we will introduce the idea of MPC circuit evaluation. </div><div style="text-align: justify;"><br /></div><h2 style="text-align: justify;"></h2><div style="text-align: justify;"></div><h3 style="text-align: justify;">Introduction</h3><div style="text-align: justify;">If you do research in the field of cryptography, at some point you’ve quite possibly come across the curiously named SPDZ ('speedz'). The aim of this blog post is to explain what it is and why it’s used. 
In order to keep this post as short and accessible as possible, lots of the details are omitted, and where new concepts are introduced, they are kept superficial.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">We start by defining secure multi-party computation (MPC): MPC is a way by which multiple parties can compute some function of their combined secret input without any party revealing anything more to the other parties about their input other than what can be learnt from the output.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Let’s make this more concrete: suppose there are two millionaires who want to know which of them has more money without revealing exactly how much money they have. How can they do this? Clearly we can do it with MPC, provided it exists.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Thankfully, MPC does exist. It is used in many different contexts and has various applications, ranging from the 'simple' and specific such as oblivious transfer (more on this later), to the relatively general-purpose functionality of joint circuit computation. SPDZ is an MPC protocol allowing joint computation of arithmetic circuits.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">Circuit Garbling vs Secret Sharing</h4><div style="text-align: justify;">There are two main constructions of MPC protocols for circuit evaluation: circuit garbling and secret sharing.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The answer to the so-called millionaire’s problem was first found in the 1980s with Yao’s garbled circuits [10]. 
As circuit garbling is quite different from the MPC model we work with in SPDZ, we will not discuss it here.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Contrasting this, the SPDZ protocol is a secret-sharing-based MPC protocol.</div><div style="text-align: justify;"><br /></div><h3 style="text-align: justify;">Secret-Sharing-Based MPC</h3><div style="text-align: justify;">Whereas circuit garbling involves encrypting and decrypting keys in a specific order to emulate a circuit evaluation (originally a Boolean circuit, but now arithmetic circuits too [1]), SPDZ instead ‘secret shares’ inputs amongst all parties and uses these shares to evaluate a circuit.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">SPDZ is neither the first nor the only secret-sharing-based MPC protocol. Other well known constructions include BDOZ [3], TinyOT [8] and MiniMAC [6]. MASCOT [7] can be seen as an oblivious-transfer-based version of SPDZ. This will be discussed in a little more detail later on.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">What is secret sharing?</h4><div style="text-align: justify;">Suppose I have some field element $a \in \mathbb{F}$, which I split up ‘at random’ (uniformly) into two pieces, $a = a_1 + a_2$, giving party $P_1$ the value $a_1$ and $P_2$ the value $a_2$. Neither party knows the value $a$, but together they can recover it. We will write $\langle a \rangle$ to mean that the value $a$ is secret-shared between all parties (i.e. for each i, party $P_i$ has $a_i$, where $\sum_i a_i = a$).</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Of course, there are different ways of secret sharing data (e.g. 
the analogous multiplicative sharing $a = a_1 \cdot a_2$, and also more complicated schemes like Shamir’s [9]), but it turns out that the additive scheme is particularly useful for MPC applications, as we shall see.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The basic overview of secret-sharing MPC of arithmetic circuits (SSMPCoAC?) is the following: </div><ol><li>The parties first secret-share their inputs; i.e. input $x^i$ is shared so that $\sum_j x_j^i = x^i$ and party $P_j$ holds $x_j^i$ (and $P_i$ which provides input is included in this sharing, even though it knows the sum).</li><li>The parties perform additions and multiplications on these secret values by local computations and communication of certain values (in methods specified in the next part). By construction, the result of performing an operation is automatically shared amongst the parties (i.e. with no further communication or computation).</li><li>Finally, the parties 'open' the result of the circuit evaluation. This last step involves each party sending their 'final' share to every other party (and also performing a check that no errors were introduced by the adversary along the way).</li></ol>These are the steps we follow in a few different MPC circuit evaluation protocols, as we have discussed. The way we compute the circuit differs (slightly) with the protocol.<br /><br /><i><b>Next time</b></i>: In the <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-2-circuit-evaluation.html" target="_blank">next part</a> in this series, we will see how to use these secret-shared values to evaluate an arithmetic circuit as in the SPDZ protocol.<br /><div style="text-align: justify;"><h4><b> </b></h4><h4><b>References</b> </h4>[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. <a href="http://www.eng.tau.ac.il/%7Ebennyap/pubs/AGCfull.pdf" target="_blank">How to garble arithmetic circuits</a>. 52nd FOCS, pp120–129. 
IEEE Computer Society Press, 2011</div><div style="text-align: justify;">[2] D. Beaver. <a href="https://link.springer.com/chapter/10.1007%2F3-540-46766-1_34" target="_blank">Efficient Multiparty Protocols using Circuit Randomisation</a>. In J. Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.</div><div style="text-align: justify;">[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. <a href="https://eprint.iacr.org/2010/514" target="_blank">Semi-homomorphic encryption and multiparty computation</a>. In EUROCRYPT, pp169-188, 2011.</div><div style="text-align: justify;">[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. <a href="https://eprint.iacr.org/2012/642" target="_blank">Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits</a>. In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol. 8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.</div><div style="text-align: justify;">[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. <a href="https://eprint.iacr.org/2011/535" target="_blank">Multiparty computation from somewhat homomorphic encryption</a>. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.</div><div style="text-align: justify;">[6] I. Damgard and S. Zakarias. <a href="https://eprint.iacr.org/2012/512" target="_blank">Constant-overhead secure computation of boolean circuits using preprocessing</a>. In TCC, pp621-641, 2013.</div><div style="text-align: justify;">[7] M. Keller and E. Orsini and P. Scholl. <a href="https://eprint.iacr.org/2016/505" target="_blank">MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer</a>. Cryptology ePrint Archive, Report 2016/505, 2016.</div><div style="text-align: justify;">[8] J. Buus Nielsen, P. Nordholt, C. 
Orlandi, and S. Burra. <a href="https://eprint.iacr.org/2011/091" target="_blank">A new approach to practical active-secure two-party computation</a>. In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in Cryptology – CRYPTO 2012, volume 7417 of Lecture Notes in Computer Science, pp681-700. Springer Berlin Heidelberg, 2012.</div><div style="text-align: justify;">[9] A. Shamir. <a href="https://dl.acm.org/citation.cfm?id=359176" target="_blank">How to Share a Secret</a>. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.</div><div style="text-align: justify;">[10] A. Yao. <a href="https://dl.acm.org/citation.cfm?id=1382944" target="_blank">How to generate and exchange secrets</a>. In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.</div><div style="text-align: justify;"><br /></div><br /><ol></ol>Tim Woodnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-65645785826782814092016-10-06T11:48:00.001+01:002016-10-06T11:48:48.030+01:00Study Group: Crying Wolf: An Empirical Study of SSL Warning Effectiveness<div style="text-align: justify;">Today's study group was on the now somewhat dated 2009 paper 'Crying Wolf: An Empirical Study of SSL Warning Effectiveness' [1], which was published at USENIX. In cryptography research, it is easy to overlook implementation and usability and instead focus on theory. As is succinctly explained in <a href="https://xkcd.com/538/" target="_blank">Randall Munroe's well-known comic</a>, the weaknesses in our cryptographic solutions are seldom in the constructions themselves, but in their real-world application.<br /><br />This paper explores the use and design of warnings which modern (!) browsers present to a user when SSL certificates cannot be verified, and in particular the user's reaction to them. 
There is little point in a cryptographically secure system of authentication if the end user ignores warnings and proceeds past them whenever they appear. The authors suggest that when browsers '<a href="https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf" target="_blank">cry wolf</a>' upon encountering SSL errors, users become desensitised over time, learn to ignore these warnings, and thus become susceptible to having their data stolen.</div><br /><h4 style="text-align: justify;">What is SSL?</h4><div style="text-align: justify;">(The initiated can skip this.)<br />SSL stands for Secure Sockets Layer, and is a method by which a client can access a web server securely. The SSL Handshake protocol uses a so-called SSL certificate to verify a server's authenticity to a client. An SSL certificate specifies whom the certificate was issued to, whom it was issued by, the period of validity and the server's public key. (Old SSL protocols have been superseded by <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security" target="_blank">TLS</a>, but the principles involved are essentially the same.) At a very high level, the protocol proceeds as follows:</div><ol style="text-align: justify;"><li> The client sends a 'hello' message to the server, requesting content. </li><li> The server sends the client its certificate, which contains its public key.</li><li> The client checks that the certificate is valid.</li><li> If the check passes, the client generates a session key, encrypts it using the server's public key, and sends this to the server. 
If the check fails, the client aborts.</li><li> The server decrypts the session key using its secret key.</li></ol><div style="text-align: justify;">The client and the server can now encrypt all data sent between them using the (symmetric) session key.</div><div style="text-align: justify;"><br /></div><h4 style="text-align: justify;">What can go wrong?</h4><div style="text-align: justify;">If the certificate is invalid, the client aborts. The problems this study considers are: </div><ul style="text-align: justify;"><li> Expired certificate: the certificate is no longer valid.</li><li> Unknown certificate authority: the issuing authority is not known.</li><li> Domain mismatch: the domain of the web server and the certificate's listed domain do not match.</li></ul><div style="text-align: justify;">If one of the above occurs, the web browser will alert the user. The purpose of the study was to assess the effectiveness of the browser in conveying the severity of the problem to the user: strong warnings in situations where the risks are small cause people to assume that high-risk situations which trigger the same warning are just as innocuous.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;"></div><h3 style="text-align: justify;">The Studies</h3><div style="text-align: justify;"></div><h4 style="text-align: justify;">Survey</h4><div style="text-align: justify;">Using a survey, the authors gathered data from 409 web users on their reactions to SSL warnings and their overall comprehension of the risks involved in ignoring them.<br /><br />They found that context (i.e. the type of website visited) made little difference to whether or not a user would heed the warnings.<br /><br />According to the data, respondents who understood 'Domain mismatch' and 'Unknown certificate authority' warnings were less likely to proceed than those who did not, whereas those who understood certificate expiry errors were more likely to proceed. 
In fact, the experimenters found that users consistently rated the risk of an expired certificate lower than the other two errors.<br /><br />The authors additionally report some wonderful responses from users, including:<br /><ul><li> 'I use a Mac, so nothing bad would happen'</li><li> 'Since I use FreeBSD, rather than Windows, not much [risk]'</li><li> 'On my Linux box, nothing significantly bad would happen'</li></ul><br /></div><h4 style="text-align: justify;">Laboratory Experiment</h4><div style="text-align: justify;">A set of 100 participants were asked to use four websites to complete different tasks. One website was a banking website with an invalid certificate, one a library website with an invalid certificate, and two were other sites used as dummies.<br /><br />The participants were shown the warnings of either Internet Explorer 7 (IE7), Firefox 2 (FF2) or Firefox 3 (FF3), or one of two newly-designed SSL warnings. The <a href="https://www.sslshopper.com/assets/images/ie7-certificate-not-trusted.png" target="_blank">IE7 warning</a> is a whole page but requires just one click to ignore. The <a href="https://www.sslshopper.com/assets/images/Firefox2-certificate-not-trusted.png" target="_blank">FF2 warning</a> is a pop-up window but also only requires one click to ignore. The first version of the <a href="https://www.sslshopper.com/assets/images/ff3_self_signed_error.png" target="_blank">FF3 warning</a> needed 11 steps. 'They made the original version of the warning so difficult for users to override, that only an expert could be likely to figure out how to do it.' The first new design was multi-page and asked users to specify the nature of the website they were visiting, presenting severe warnings for websites requiring a high level of security and milder warnings otherwise. The second new design was similar to the FF3 warning but 'looked more severe'. 
Images can be found in the paper.</div><br /><div style="text-align: justify;">For the library website, the IE7, FF2 and multi-page warnings were less effective than the FF3 warning at preventing people from proceeding, and the single-page warning was similar to the previous warnings.<br /><br />For the banking website, the two new warnings did prevent people from accessing the website, but no more than the FF3 warning. The new warnings and the FF3 warning outperformed the IE7 and FF2 warnings in preventing people from accessing the website.<br /><br /><h4>Conclusions</h4>In conclusion, the authors say that the average user does not understand the dangers of SSL warnings, and as such the decision of whether or not to proceed should essentially be made for them by the browser in most cases.<br /><br />More recently, Chrome redesigned its SSL warnings due to the large proportion of users who simply ignored all SSL warnings [2].<br /><br />To see different SSL warnings in your current browser, visit <a href="http://badssl.com/">badssl.com</a>. <br /><br /><h3>References </h3>[1] Crying Wolf: An Empirical Study of SSL Warning Effectiveness by Joshua Sunshine, Serge Egelman, Hazim Almuhimedi, Naha Atri and Lorrie Faith Cranor. In Proceedings of the 18th Conference on USENIX Security Symposium, 2009; <a href="http://dl.acm.org/citation.cfm?id=1855793" target="_blank">link</a>.<br />[2] Improving SSL Warnings: Comprehension and Adherence by Adrienne Porter Felt, Alex Ainslie, Robert W. Reeder, Sunny Consolvo, Somas Thyagaraja, Alan Bettes, Helen Harris and Jeff Grimes. In CHI 2015; <a href="http://dl.acm.org/citation.cfm?id=2702442" target="_blank">link</a>.</div>Tim Woodnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-62652047678763982352016-09-29T11:42:00.000+01:002016-09-29T11:42:41.002+01:00Study Group: On the Impossibility of Tight Cryptographic ReductionsToday I kicked off the study groups for 2016/17. 
I spoke about <i><a href="https://eprint.iacr.org/2015/374">On the Impossibility of Tight Cryptographic Reductions</a>, </i>from this year's Eurocrypt. Keen readers of the blog might recall that this paper is a <a href="http://bristolcrypto.blogspot.co.uk/2016/05/some-reductions-will-always-be-lossy.html">particular favourite of mine</a>.<br /><br />Since I've written about it before, I won't go into much detail about the paper. Instead I'll say why I (still) like it, and a bit about how it's shaped my own work this year.<br /><br />So, why choose this paper again? First and foremost, it's just <b>really good</b>. It's well written and the result - that certain reductions are necessarily lossy - has big implications for the way we do things in provable security. There is an increasing drive for theoreticians to produce work that has a meaningful impact on the real world. Choosing security parameters is an important part of that picture, but this paper shows that the traditional tools of provable security can sometimes be inadequate in this regard - especially in a setting like the internet, with billions of users of cryptography.<br /><br />Is it that our methods need changing? Or should practitioners ignore the theory and 'go with their gut' when choosing parameters? Do we need to actively avoid using those cryptographic primitives for which reductions are always lossy, like rerandomisable signatures and encryption schemes where each public key has a unique secret key? These are profound questions for the community.<br /><br />Another reason I chose to talk about this paper is that it's nicely self-contained. This is not an incremental result about something obscure. Almost everyone here at Bristol has encountered reductions, and after recalling the standard EUF-CMA definition for signatures it was easy to build up to the main theorem of the paper (or at least the particular case of signatures in the main theorem).
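For reference, the standard EUF-CMA experiment for a signature scheme (KGen, Sign, Vrfy) can be sketched as follows (the notation here is mine, not the paper's):

```latex
% Existential unforgeability under chosen-message attack (sketch).
\begin{align*}
\mathsf{Exp}^{\mathrm{euf\text{-}cma}}_{\Sigma,\mathcal{A}}(\lambda):\quad
  & (pk, sk) \leftarrow \mathsf{KGen}(1^{\lambda}) \\
  & (m^{*}, \sigma^{*}) \leftarrow \mathcal{A}^{\mathsf{Sign}(sk,\cdot)}(pk) \\
  & \text{return } 1 \text{ iff } \mathsf{Vrfy}(pk, m^{*}, \sigma^{*}) = 1
    \text{ and } m^{*} \text{ was never queried to } \mathsf{Sign}(sk,\cdot).
\end{align*}
```

A reduction to a hard problem is tight when the reduction's running time and success probability are close to the adversary's; the paper's result is that for certain schemes this closeness cannot be achieved.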
If any new PhD students are looking for some theory to get their teeth into, this paper would be a great starting point.<br /><br />Finally, I cheated a bit by giving my presentation about a paper that I've become very familiar with in the last few months, as I'm trying to extend it. At the moment, the result only applies to certain public-key primitives; I'd like to say something about multi-key to single-key reductions for <i>symmetric </i>encryption (which is of particular relevance to my PhD project, on Key Management APIs). I hope to have more to say on this in the not-too-distant future.Ryan Stanley-Oakeshttp://www.blogger.com/profile/06198224903467720264noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-55669075533904324712016-08-28T02:59:00.001+01:002016-09-07T18:52:38.505+01:00Crypto 2016: Breaking the Circuit Size Barrier for Secure Computation Under DDH<div dir="ltr" style="text-align: left;" trbidi="on">The CRYPTO 2016 Best Paper Award went to Boyle et al [1]. The paper provides several new protocols based on a DDH assumption with applications to 2PC (2 party-computation), private information retrieval as well as function secret sharing.<br /><br />Even more interesting, the authors present a protocol where 2PC for branching programs is realized in a way that communication complexity depends only on the input size and the computation is linear in circuit size.<br /><br />The central idea develops around building efficient evaluation of RMS (restricted multiplication straight line) programs. The special feature of RMS programs is that they allow multiplications only with memory and input values; additions between memory values come for free. Although this class seems quite restrictive, it covers the class of branching programs (logarithmic depth boolean circuits with polynomial size and bounded input).<br /><br />In the 2PC evaluation of RMS, suppose there is a linear shared memory value $[y]$ between the parties $P_1$ and $P_2$.
When $P_1$ wants to share an input value $x$ with $P_2$ it sends an ElGamal encryption of $x$, $g^{xc}$ where $c$ is a symmetric ElGamal key. Clearly, the encryption is homomorphic with respect to multiplication, but how can we perform operations between a linear SS (secret shared) value and an ElGamal encryption?<br /><br />This is solved by introducing a distributive DLog procedure which converts the ElGamal ciphertexts into linear SS values. The method uses a truncated PRF which counts the number of steps until the PRF evaluated on the ElGamal encryption equals $0$. Unfortunately, this algorithm outputs an incorrect result with some probability, but this can be fixed by evaluating multiple instances of the same protocol in parallel and then using an MPC protocol to select the majority result.<br /><br />Of course, there are some caveats at the beginning of the scheme, such as converting the key generation procedure to a public-key one and removing circular key assumptions. These are gradually presented by the authors to ease the reader's understanding of the ideas.<br /><br />What I find neat is that at the end of the paper we can easily see how to reduce the communication for general 'dense' arithmetic circuits by splitting them in multiple reduced-depth chunks and then applying the RMS programs for each gate (because an addition or multiplication gate can be represented as a branching program).<br /><br />Of course we can spot some open problems left as future work, such as:<br /><ol><li>Extend the protocols for larger classes other than branching programs.</li><li>The protocol only works for $2$ parties. Can we find something with constant communication for multiple parties without using FHE?</li><li>Can we convert the protocol for malicious parties in some other way rather than a generic compiler as in [2]?</li></ol><br />[1]: Boyle, Elette, Niv Gilboa, and Yuval Ishai.
"Breaking the Circuit Size Barrier for Secure Computation Under DDH."<br />[2]: Ishai, Yuval, et al. "Cryptography with constant computational overhead." <i>Proceedings of the fortieth annual ACM symposium on Theory of computing</i>. ACM, 2008.</div>Dragos Alin Rotaruhttps://plus.google.com/104882118410125117532noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-88075245012603708012016-08-26T14:30:00.001+01:002016-08-26T14:30:10.052+01:00CHES 2016: On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking<div dir="ltr" style="text-align: left;" trbidi="on"><div style="text-align: justify;">During the morning session on the final day of CHES 2016, Dahmun Goudarzi presented his <a href="https://eprint.iacr.org/2016/557">paper</a>, co-authored with Matthieu Rivain, on bit-sliced higher-order masking.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">Bit-sliced higher-order masking of S-boxes is an alternative to higher-order masking schemes where an S-box is represented by a polynomial over a binary finite field. The basic idea is to <i>bit-slice </i>Boolean circuits of all the S-boxes used in a cipher round. Securing a Boolean AND operation, needed in the bit-sliced approach, is significantly faster than securing a multiplication over a binary finite field, needed in polynomial-based masking schemes. On the other hand, the number of AND operations required is significantly higher in the former case than the number of field multiplications required in the latter. However, the use of bit-slicing with relatively large registers (for instance, 32-bit registers) previously led the same authors to demonstrate significant improvements over polynomial-based masking schemes for specific block ciphers such as AES and PRESENT [<i>GR16</i>].
However, no generic method to apply bit-sliced higher-order masking to arbitrary S-boxes was previously known, and proposing such a method is one of the main contributions of the current work.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The running time and the randomness requirement of the bit-sliced masking technique mainly depend on the <i>multiplicative complexity</i>, i.e., the number of AND gates in the masked circuit. Indeed, a more precise measure is the <i>parallel </i>multiplicative complexity<i>. </i>While from previous works it is already known how to obtain optimal circuits (w.r.t. multiplicative complexity) for small S-boxes by using SAT solvers, solving the same problem for 6-bit or larger S-boxes had remained an open problem. In the current work, the authors propose a new heuristic method to obtain boolean circuits of low multiplicative complexity for arbitrary S-boxes. The proposed method follows the same approach as a previous work [<i>CRV14</i>] that computes efficient polynomial representations of S-boxes over binary finite fields. The authors provide a heuristic analysis of the multiplicative complexity of their proposed method that is quite close to the experimental results for S-box sizes of practical relevance. Finally, an implementation of the bit-sliced masking technique evaluating sixteen 4-bit S-boxes in parallel and another implementation evaluating sixteen 8-bit S-boxes in parallel on a 32-bit ARM architecture is performed.
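To give a feel for the operation whose count matters here, the following is a minimal sketch (mine, not the authors' ARM implementation) of the standard ISW-style masked AND over bitsliced 32-bit words, where each of the 32 bit positions behaves as an independent masked AND gate:

```python
import secrets
from functools import reduce

def share(x, n):
    # Split a 32-bit word into n Boolean (XOR) shares.
    shares = [secrets.randbits(32) for _ in range(n - 1)]
    shares.append(reduce(lambda a, b: a ^ b, shares, x))
    return shares

def unshare(shares):
    # Recombine by XORing all shares together.
    return reduce(lambda a, b: a ^ b, shares)

def secure_and(a, b):
    # ISW multiplication: bitwise AND on two Boolean-shared words.
    # The cross terms a_i & b_j are masked with fresh randomness r.
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(32)
            c[i] ^= r
            c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return c

x, y = 0xDEADBEEF, 0x01234567
cx = secure_and(share(x, 3), share(y, 3))
assert unshare(cx) == x & y
```

Each masked AND consumes n(n-1)/2 fresh random words for n shares, which is why the total AND-gate count (the multiplicative complexity) drives both running time and randomness cost.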
The timing results seem to indicate that the bit-sliced masking method performs significantly better than the polynomial-based masking methods when the number of shares is greater than a certain bound.</div><div style="text-align: justify;"> </div><div style="text-align: justify;"><b>References:</b> </div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">[<i>CRV14</i>] Jean-Sébastien Coron, Arnab Roy, <i>Srinivas Vivek</i>: <a href="http://dx.doi.org/10.1007/978-3-662-44709-3_10" target="_blank">Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures</a>. CHES 2014 & JCEN 2015.</div><div style="text-align: justify;"> </div><div style="text-align: justify;">[<i>GR16</i>] Dahmun Goudarzi and Matthieu Rivain. <a href="https://eprint.iacr.org/2016/264">How Fast Can Higher-Order Masking Be in Software?</a> Cryptology ePrint Archive, 2016.</div></div>Srinivas Vivekhttps://plus.google.com/110838877883478858848noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-67826431683719614382016-08-23T20:55:00.002+01:002016-08-23T21:01:51.273+01:00CRYPTO 2016 – Backdoors, big keys and reverse firewalls on compromised systemsThe morning of the second day at CRYPTO 2016 started with a track on “Compromised Systems”, consisting of three talks covering different scenarios and attacks usually disregarded in the vast majority of the cryptographic literature. All three shared a concern about mass surveillance.<br /><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have worked hard on this scenario, developing a whole range of tools, with different notions of security and setup assumptions, among which one of the most common axioms is that Alice has access to a trusted computer with a proper implementation of the cryptographic protocol she wants to run.
The harsh truth is that this is a naïve assumption. The Snowden revelations show us that powerful adversaries can and will corrupt users' machines via extraordinary means, such as subverting cryptographic standards, intercepting and tampering with hardware on its way to users or using Tailored Access Operation units.</div><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">Nevertheless, the relevance of these talks was not just a matter of a “trending topic”, or distrust of the authoritarian and unaccountable practices of intelligence agencies. More frequently than we would like, presumably accidental vulnerabilities (such as Poodle, Heartbleed, etc.) are found in popular cryptographic software, leaving the final user unprotected even when using honest implementations. In the meantime, as Paul Kocher recalled in his invited talk the day after, for most of our community it passes without notice that, when we design our primitives and protocols, we blindly rely on a mathematical model of reality that sometimes has little to do with it. </div><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">In the same way as people from the CHES community have become more aware –mainly also the hard way– that relying on the wrong assumptions leads to a false confidence in the security of the deployed systems and devices, I think those of us not that close to hardware should also try to step back and look at how realistic our assumptions are. This includes, as these talks addressed in different ways, starting to assume that some standards might –and most of the systems will— be compromised at some point, and that we understand what can still be done in those cases. </div><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">What would a cryptography that worries not only about prevention, but also about the whole security cycle, look like?
How can the cryptography and information security communities come closer?</div><h3 style="margin: 2pt 0cm 0pt;">Message Transmission with Reverse Firewalls— Secure Communication on Corrupted Machines</h3><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">The reverse firewalls framework was recently introduced by Mironov and Stephens-Davidowitz, in a paper that has already been discussed in our group’s seminars and this same blog. A secure reverse firewall is a third party that “sits between Alice and the outside world” and modifies her sent and received messages so that even if her machine has been corrupted, Alice’s security is still guaranteed.</div><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">Their elegant construction does not require the users to place any additional trust in the firewall, and relies on the underlying cryptographic schemes being rerandomizable. With this threat model and rerandomization capabilities, they describe impossibility results as well as concrete constructions. </div><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">For example, in the context of semantically secure public-key encryption, in order to provide reverse firewalls for Bob, the scheme must allow a third party to rerandomize a public key and map ciphertexts under the rerandomized public key to ciphertexts under the original public key.
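The simpler requirement, rerandomizing a ciphertext without changing the plaintext, can be sketched with textbook ElGamal (toy parameters of my own; this is not the paper's construction):

```python
import random

# Toy ElGamal in the order-q subgroup of Z_p^* (illustrative sizes only).
p, q = 1019, 509             # p prime, q prime, q | p - 1
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m, pk):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerand(c, pk):
    # Multiply by a fresh encryption of 1: same plaintext, fresh randomness.
    s = random.randrange(1, q)
    c1, c2 = c
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def dec(c, sk):
    c1, c2 = c
    return c2 * pow(c1, q - sk, p) % p  # c1^{-sk}, since c1 has order dividing q

m = pow(g, 123, p)  # message encoded as a subgroup element
c = enc(m, pk)
assert dec(rerand(c, pk), sk) == m  # Dec(Rerand(Enc(m))) = m
```

A firewall performing this transformation needs only the public key, which is exactly why no extra trust has to be placed in it.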
In the same context, Alice’s reverse firewall must be able to rerandomize the ciphertext she sends to Bob, in such a way that Dec(Rerand(Enc(m)))=m.</div><br /><h3 style="margin: 2pt 0cm 0pt;">Big-Key Symmetric Encryption: Resisting Key Exfiltration</h3><br /><span lang="EN-US" style="font-family: "cmr10"; font-size: 10.0pt;"></span><div style="line-height: normal; margin: 0cm 0cm 0pt; text-align: justify;"><span lang="EN-US" style="font-family: "cmr10"; font-size: 10.0pt;"><span lang="EN-US" style="mso-ansi-language: EN-US;"><span style="font-family: "calibri"; font-size: small;">The threat addressed in Bellare’s talk is that of malware that aims to exfiltrate a user's key, likely using her system’s network connection. In their work, they design schemes that aim to protect against this kind of </span><i style="mso-bidi-font-style: normal;"><span style="font-family: "calibri"; font-size: small;">Advanced Persistent Threat</span></i><span style="font-family: "calibri"; font-size: small;"> by making secret keys so big that their undetected exfiltration by the adversary is difficult, yet making the user’s overhead almost exclusively in terms of storage instead of speed.</span></span></span></div><br /><span lang="EN-US" style="font-family: "cmr10"; font-size: 10.0pt;"> <div style="line-height: normal; margin: 0cm 0cm 0pt; text-align: justify;"></div><div style="margin: 0cm 0cm 8pt;"><span lang="EN-US" style="mso-ansi-language: EN-US;"><span style="font-family: "calibri"; font-size: small;">Their main result is a </span><i style="mso-bidi-font-style: normal;"><span style="font-family: "calibri"; font-size: small;">subkey prediction lemma</span></i><span style="font-family: "calibri"; font-size: small;">, which gives a nice bound on an adversary’s ability to guess a modest-length subkey, derived by randomly selecting bits of a big-key from which partial information has already been leaked.
This approach, known as the Bounded Retrieval Model, has been –in the words of the authors—largely a theoretical area of research, whereas they show a fully concrete security analysis with good numerical bounds, constants considered.</span></span></div><div style="line-height: normal; margin: 0cm 0cm 0pt; text-align: justify;"><span lang="EN-US" style="mso-ansi-language: EN-US;"><span style="font-family: "calibri"; font-size: small;">Other highlighted aspects of their paper were the concrete improvements over [ADW09] and the key encapsulation technique carefully based on different security assumptions (random oracle, standard model).</span></span></div></span><br /><h3 style="margin: 2pt 0cm 0pt;">Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results</h3><br /><div style="margin: 0cm 0cm 8pt; text-align: justify;">The last talk of the session focused on the concrete problem of backdoored Pseudorandom Number Generators (PRGs) and PRNGs with input, which are fundamental building blocks in cryptographic protocols that have already been successfully compromised, as we learnt when the DUAL_EC_DRBG scandal came to light.</div>In their paper, the authors revisit a previous abstraction of backdoored PRGs [DGA+15], which modeled the adversary (Big Brother) with weaker powers than it could actually have. By giving concrete “Backdoored PRG” constructions, they show how that model fails. Moreover, they also study robust PRNGs with input, for which they show that Big Brother is still able to predict the values of the PRNG state backwards, as well as giving bounds on the number of these previous phases that it can compromise, depending on the state-size of the generator.<br /><br /><div style="line-height: normal; margin: 0cm 0cm 0pt;"><br /></div>[ADW09] J. Alwen, Y. Dodis, and D. Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In S. Halevi, editor, CRYPTO 2009, volume 5677 of LNCS, pages 36–54.
Springer, Heidelberg, Aug. 2009.<br /><br />[DGA+15] Yevgeniy Dodis, Chaya Ganesh, Alexander Golovnev, Ari Juels, and Thomas Ristenpart. A formal treatment of backdoored pseudorandom generators. In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101–126, Sofia, Bulgaria, April 26–30, 2015. Springer, Heidelberg, Germany.Eduardo Soria-Vázquezhttp://www.blogger.com/profile/16749852289271603693noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-8126710024516559222016-08-23T20:32:00.000+01:002016-08-23T20:32:23.250+01:00CHES 2016: Flush, Gauss, and Reload – A Cache Attack on the BLISS Lattice-Based Signature Scheme<html><head><title>MathJax TeX Test Page</title><script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script><script async="" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script></head> <body> <div align="justify">Leon Groot Bruinderink presented at CHES a cache attack against the signature scheme BLISS, joint work with Andreas Hulsing, Tanja Lange and Yuval Yarom. </div> <div align="justify">The speaker first gave a brief introduction to BLISS (Bimodal Lattice Signature Scheme), a signature scheme whose security is based on lattice problems over NTRU lattices. Since such problems are believed to be hard even in the presence of quantum computers, BLISS is a candidate for being a cryptographic primitive for the post-quantum world. In addition, its original authors proposed implementations, making BLISS a notable example of a post-quantum algorithm deployable in real use-cases.
</div> <div align="justify">Informally speaking, a message $\mu$ is encoded in a challenge polynomial $\mathbf{c}$, which is then multiplied by the secret key $\mathbf{s}$ according to the following formula: $$ \mathbf{z} = \mathbf{y} + (-1)^b ( \mathbf{s} \cdot \mathbf{c} ) $$ where the bit $b$ and the noise polynomial $\mathbf{y}$ are unknown to the attacker. It is easy to see that if the attacker gains information about the noise polynomial, some linear algebra would lead her to the secret key. The coordinates of $\mathbf{y}$ are independently sampled from a discrete Gaussian distribution, which can be implemented in several ways. The ones targeted in the paper are CDT and rejection sampling. In particular, the first method was also covered during the talk, so I am focusing only on that in this blog post. </div> <div align="justify">The idea behind CDT sampling is to precompute a table according to the cumulative distribution function of the discrete Gaussian, draw a random element and consider it as an index into the table. The element in the cell indexed by the random number is returned. In the end, elements returned by such a procedure are distributed statistically close to a discrete Gaussian. Although fast, this approach has the drawback of needing to store a large table, which is known to make it vulnerable to cache attacks. </div> <div align="justify">The peculiarity of the attack carried out by Bruinderink <i>et al.</i> is that, since the cache attack does not reveal the exact positions accessed in the sampling table, but only the cache lines, the equations learned are correct only up to a small error, say $\pm 1$. The authors managed to translate such an issue into a shortest vector problem over lattices. Then, they run the LLL algorithm to solve the problem and retrieve correct equations.
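A minimal model of the CDT procedure just described (the sigma and tail cut are illustrative, not the actual BLISS parameters):

```python
import bisect
import math
import random

# Toy CDT sampler for a centred discrete Gaussian.
sigma, tail = 2.0, 10  # illustrative values, not BLISS parameters

support = list(range(-tail, tail + 1))
weights = [math.exp(-x * x / (2 * sigma * sigma)) for x in support]
total = sum(weights)

# Precompute the cumulative distribution table.
cdf, acc = [], 0.0
for w in weights:
    acc += w / total
    cdf.append(acc)

def sample():
    # Draw a uniform value and locate it in the table. This table lookup is
    # the secret-dependent memory access that a cache attack can observe.
    u = random.random()
    return support[min(bisect.bisect_left(cdf, u), len(cdf) - 1)]

xs = [sample() for _ in range(10000)]
```

In a real implementation the table is large enough to span many cache lines, so observing which line is touched narrows the sampled value down to a few candidates, which is exactly the $\pm 1$ uncertainty mentioned above.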
</div> </body></html>Marco Martinolihttps://plus.google.com/106563453646172877995noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-33893740973017536452016-08-18T01:01:00.001+01:002016-08-18T01:01:10.736+01:00Crypto & CHES 2016: 20 years of leakage, leakage, leakagePaul Kocher gave an invited presentation at Crypto and CHES, certainly deserved on the 20th anniversary of his paper that has more than 3500 citations on Google Scholar. The content of his talk ranged from tales of his work, through philosophical considerations on security, to an outlook on the future.<br /><br />It was interesting to see how Kocher, a biologist by training, got into cryptography and managed to break implementations via side channels with rather cheap equipment, including parts from a toy electronics kit. He claimed that they could break every smart card at the time, which was of course disputed by the vendors.<br /><br />In the philosophy part of the talk, the speaker brought up various analogies that I found interesting even though they did not provide direct advice. For example, he compared security to fractals that provide ever more detail the closer you look. More practically, Kocher mentioned building codes and the aviation industry. Both minimize risks by best practices that include safety margins even though these incur extra costs. However, I could not help thinking that aviation killed a fair number of people before the standards improved.<br /><br />On the outlook, Kocher did not seem particularly optimistic. The world is already complex and full of devices that regularly exhibit security problems, but it will get worse with the Internet of Things, where there will be more devices with longer life spans produced by vendors with less security knowledge, while the impact of vulnerabilities will be even higher.
He predicted that security breaches will get worse for at least 3-5 years.<br /><br />In terms of constructive ideas, he suggested moving security into chips, where it cannot be undermined by the layers below. There has already been a significant move in that direction with Intel's SGX, but there are of course other approaches.Marcel Kellerhttp://www.blogger.com/profile/11453335175588774943noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-54213284384755549672016-08-16T22:28:00.002+01:002016-08-16T22:28:25.372+01:00Crypto 2016: Network Oblivious Transfer<div style="text-align: justify;">On the first day of CRYPTO 2016, Adam Sealfon presented his work with Ranjit Kumaresan and Srinivasan Raghurama on <a href="https://eprint.iacr.org/2016/584.pdf" target="_blank">Network Oblivious Transfer</a>. Oblivious transfer (OT) is a two-party protocol in which party $A$ inputs two strings and party $B$ a bit $b$: $B$ receives exactly one of the strings according to his bit and finds out nothing about the other string, while $A$ does not find out which of the two strings $B$ chose. If two parties are able to engage in an OT protocol, we say that there is an OT channel between them. OT channels are a good thing to study because they are:</div><ul style="text-align: justify;"><li>Useful: OT has been called MPC (multi-party computation) complete, and the Atom of MPC, since many MPC protocols can be realised using OT;</li><li>Achievable: e.g. trapdoor permutations can be used to realise them.</li></ul><div style="text-align: justify;">Suppose we have a network in which all parties have secure connections to all other parties, and some of the parties also have OT channels between them. What can we say about the ability of the network to allow computation of OT-based MPC? In 2007, Harnik et al.
asked <a href="https://www.iacr.org/archive/crypto2007/46220279/46220279.pdf" target="_blank">How Many Oblivious Transfers are Needed for Secure Multiparty Computation?</a> and gave a lower bound on the number of OT channels a network must have. The paper presented here gives an upper bound which matches the lower bound of the aforementioned paper, and hence allows a complete characterisation of the networks in which OT channels can be established to enable secure MPC.<br /><br />For some intuition as to what this all means, consider the following three graphs. Nodes represent parties in the network, and edges represent OT channels. All parties are assumed to have secure connections to all other parties and we want to have an OT channel between $A$ and $B$.<br /><br /><a href="https://2.bp.blogspot.com/-eF4GzIQd0S4/V7NYp6roxtI/AAAAAAAAAWQ/QxD_C30lnuwzOCCSpUwfoIEnC2QUqz1CwCLcB/s1600/graphs.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="151" src="https://2.bp.blogspot.com/-eF4GzIQd0S4/V7NYp6roxtI/AAAAAAAAAWQ/QxD_C30lnuwzOCCSpUwfoIEnC2QUqz1CwCLcB/s400/graphs.PNG" width="400" /></a><br /><br />In Figure 1, $A$ and $B$ have an OT channel between them, so we're done. In Figure 2, it turns out that the connections in place already suffice to provide $A$ and $B$ with an OT channel. However, in Figure 3, we cannot form an OT channel between $A$ and $B$.<br /><br />The reason some graphs admit OT channels between certain parties and some do not concerns a property known as splittability. A graph $G$ is called $k$-unsplittable (for $k<n/2$) if for any two disjoint sets of $k$ vertices, there is an edge from a vertex in one set to a vertex in the other; $G$ is called $k$-splittable if this does not hold.
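For small networks, the splittability condition can be checked directly by brute force; a hypothetical sketch of my own, with a graph given as a list of undirected OT edges:

```python
from itertools import combinations

def is_k_unsplittable(vertices, edges, k):
    # A graph is k-unsplittable if every pair of disjoint k-subsets of
    # vertices has at least one edge crossing between them.
    es = {frozenset(e) for e in edges}
    for s1 in combinations(vertices, k):
        rest = [v for v in vertices if v not in s1]
        for s2 in combinations(rest, k):
            if not any(frozenset((u, v)) in es for u in s1 for v in s2):
                return False  # found a k-split with no crossing edge
    return True

# Four parties; OT edges forming a 4-cycle: every 2-split has a crossing edge.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert is_k_unsplittable(range(4), cycle, 2)

# A perfect matching can be split into two pairs with no edge between them.
matching = [(0, 1), (2, 3)]
assert not is_k_unsplittable(range(4), matching, 2)
```

The brute-force search is exponential in $k$, which is fine for intuition-building examples like the figures above but not for large networks.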
The main theorem of the paper states that, assuming a semi-honest adaptive adversary controlling no more than $t$ parties, two parties, $A$ and $B$, in the network can establish an OT channel if and only if</div><ol style="text-align: justify;"><li>$t<n/2$, or</li><li>$t \ge n/2$ and the graph is $(n-t)$-splittable. </li></ol><div style="text-align: justify;">Adding the edge $(A,B)$ to Figures 2 and 3 suggests this says the right thing, since doing so in Figure 3 means every 2-split of the graph has an edge between the two partitions.<br /><br />In proving this theorem, the paper provides a good description of the limits of OT-based MPC.</div>Tim Woodnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-12616858323451330972016-08-16T01:19:00.001+01:002016-08-16T01:19:55.951+01:00Crypto 2016: A subfield lattice attack on overstretched NTRU assumptions<br />This year's Crypto kicked off this morning in sunny Santa Barbara. The early afternoon session in track A covered asymmetric cryptography and cryptanalysis. Shi Bai presented <b><a href="https://eprint.iacr.org/2016/127.pdf">A subfield lattice attack on overstretched NTRU assumptions: Cryptanalysis of some FHE and Graded Encoding Schemes</a></b>, which is joint work with Martin Albrecht and Leo Ducas. The talk consisted of three main parts: an introduction, a presentation of the subfield attack and a discussion of its implications.<br /><br /><div style="text-align: left;"><b>Introduction</b></div><br />The set-up of the problem is the usual one. Let $\Phi_m$ be a cyclotomic power-of-two polynomial and let $R$ be the ring $R = \mathbb{Z}[x]/\Phi_m$. We let $\lambda$ be the security parameter, $n=\phi(m)=poly(\lambda)$, $q=q(\lambda)$ and $\sigma = poly(\lambda)$.
The <a href="http://grouper.ieee.org/groups/1363/lattPK/submissions/ntru.pdf">NTRU </a>problem is the following.<br /><br /><b>NTRU Problem</b>: We are given a ring $R$ of rank $n$, a modulus $q$, a distribution $D$ and a target norm $\tau$. Given an element $h = [gf^{-1}]_q$ (subject to $f$'s invertibility modulo $q$) for $f, g \leftarrow D$, the <b>NTRU</b>$(R,q,D,\tau)$ problem is to find a vector $(x,y)\neq (0,0) \in R^2 \mod q$ of Euclidean norm smaller than $\tau\sqrt{2n}$ in the lattice<br /><br /><div style="text-align: center;">$\Lambda_h^q = \{ (x,y)\in R^2 : hx-y = 0 \mod q \}$.</div><div style="text-align: center;"><br /></div>We call the above the NTRU lattice.<br /><br />What the authors mean by an overstretched NTRU assumption is the use of a super-polynomial modulus $q$, as utilised in the context of NTRUEncrypt, signature schemes, Fully Homomorphic Encryption schemes and some candidate multilinear maps.<br /><br />The starting point of the attack is that whenever<br /><div style="text-align: center;"><br /></div><div style="text-align: center;">$ |f| \approx |g| \approx \sqrt{n}\sigma \ll \sqrt{nq}$,</div><br />then the NTRU lattice has an unusually short vector. We also note that, for some target norm, recovering a short enough vector is sufficient to carry out the attack. In particular, finding a vector of length $o(q)$ would break applications such as encryption. We note however that in practice, parameters can indeed be set so as to avoid this attack.<br /><br /><b>The attack</b><br /><br />Let $K$ be the cyclotomic field $\mathbb{Q}(x)/\Phi_m$ and $L = \mathbb{Q}(x)/\Phi_{m'}$ a subfield, where we have that $m'|m$ and we let $\zeta_m$ and $\zeta_{m'}$ be the $m^{th}$, respectively $m'^{th}$, roots of unity.
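To make the subfield concrete in the power-of-two case, here is a toy computation (my own illustrative example, not from the paper) of the relative norm from $K = \mathbb{Q}[x]/(x^4+1)$ down to the index-2 subfield generated by $x^2$, where the norm of $f$ is $f(x)\cdot f(-x)$:

```python
n = 4  # working modulo x^n + 1 with n a power of two

def mul_mod(a, b):
    # Multiply coefficient lists (lowest degree first) modulo x^n + 1.
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] += ai * bj
            else:
                res[i + j - n] -= ai * bj  # reduce using x^n = -1
    return res

def conj(a):
    # The automorphism x -> -x fixing the subfield: negate odd coefficients.
    return [-c if i % 2 else c for i, c in enumerate(a)]

def rel_norm(a):
    return mul_mod(a, conj(a))

f = [1, -2, 0, 3]  # f = 1 - 2x + 3x^3
nf = rel_norm(f)
# The norm lands in the subfield: only even powers of x survive.
assert all(nf[i] == 0 for i in range(1, n, 2))
```

The point of the attack is that if $f$ and $g$ are short, their norms $f'$ and $g'$ remain comparatively short in a lattice of half the dimension, where lattice reduction is easier.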
The authors here work with power-of-two cyclotomics, but we note that such a subfield can always be found; indeed we can take the maximal real subfield.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-sCjNuHzWM4A/V7JZ3yO9WCI/AAAAAAAAAKQ/LWUUD3FTZVo67urCA42PRVr2OURfJambwCLcB/s1600/blog.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-sCjNuHzWM4A/V7JZ3yO9WCI/AAAAAAAAAKQ/LWUUD3FTZVo67urCA42PRVr2OURfJambwCLcB/s1600/blog.png" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>The strategy is as follows. We use the fact that $L$ is a subfield of $K$ to use the norm map<br /><br /><div style="text-align: center;">$N_{K/L}: K \rightarrow L$</div><br />to map down NTRU instances to the subfield, assuming we are working with an overstretched (large) modulus $q$. We then apply lattice reduction (e.g. BKZ) in the subfield, solving a potentially easier problem.<br /><br />For an NTRU instance $(h,f,g)$ in the full field, we norm it down to an instance $(h',f',g')$ of the subfield. Now the vector $(f',g')$ is in the subfield NTRU lattice $\Lambda_{h'}^q$ and depending on the parameters, it may be unusually short. The attack then proceeds by running a lattice reduction algorithm on the subfield, which produces a vector $(x',y')$. Then, if that vector is short enough, it is in fact an $\mathcal{O}_K$-multiple of $(f',g')$ and we have $(x',y')=v(f',g')$. This allows us to lift $(x',y')$ to the full NTRU lattice $\Lambda_{h}^q$ and thus potentially recover non-trivial information on $f$ and $g$.<br /><br /><b>Consequences</b><br /><br />This produces a sub-exponential attack on bootstrappable <a href="https://eprint.iacr.org/2013/075.pdf">YASHE</a>. The work also implies an attack on the latest GGH construction without an encoding of zero. Depending on the multilinear degree, this can even go down to a polynomial attack.
Compared to the prior state of the art, this is currently the best known attack.<br /><br />In terms of limitations, if the normed-down vector $(f',g')$ is not unusually short, then this attack fails. Equally, NTRU-743, NTRU-401 and <a href="http://eprint.iacr.org/2013/383.pdf">BLISS</a> are essentially immune. The conclusion of this talk was that in an NTRU assumption set-up, the combination of a subfield, a large modulus and a small $\sigma$ should be considered insecure.Anamaria Costachehttps://plus.google.com/111580579835398184759noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-40454250629423841872016-08-15T23:34:00.001+01:002016-08-15T23:41:17.297+01:00Crypto 2016: Provable Security for Symmetric CryptographyOn the morning that the CAESAR competition entered its <a href="https://groups.google.com/forum/#!forum/crypto-competitions">third round</a>, track A of CRYPTO 2016 began with a session on provable security for symmetric cryptography. It contained five talks, all of which were very well presented. In each case the results were given in context, along with a sketch of the key techniques behind their proofs, and very clear diagrams.<br /><br />First up was Viet Tung Hoang, presenting joint work with Stefano Tessaro on the multi-user security of key-alternating ciphers. Key-alternating ciphers can be seen as a generalisation of the Even-Mansour construction, and are a natural idealisation of the AES design. Often work is done in the single-user setting, leaving multi-user security to be reached via a hybrid argument. However, this leads to a security loss linear in the number of users.<br /><br />The speaker explained two ways in which their work improves upon the previous techniques for applying the H-coefficient technique to bound adversarial advantages using the statistical distance between possible sets of transcripts, allowing them to achieve tighter bounds than would have been possible previously. 
They termed the first of these the "Expectation Method", where they replace an upper bound with an expected-value bound to significantly improve the tightness of one of the internal bounds (specifically, when one is measuring the total probability of an adversary being able to distinguish the systems from a good transcript), while the second is a tightening of the hybrid argument (by pushing the hybridisation step back to the transcript stage rather than waiting until the final bound has been collected). These are both very neat observations, and it will be interesting to see how easily they can be applied to other related problems.<br /><br />Next, Yannick Seurin gave the first of his two talks, on the Counter-in-Tweak (CTRT) mode for bootstrapping AE from a TBC, based on joint work with Thomas Peyrin. In this work, the authors set out to construct an AE scheme that was:<br /><ul><li>Beyond-birthday-bound secure in the nonce-respecting case</li><li>Birthday-bound secure in the nonce-abusing case</li></ul>They do so using a generic-composition-style approach, demonstrating that a slight variant of SIV mode can be used to combine an encryption and an authentication mechanism that each meet these security requirements such that their composition inherits this security. For their result, an encryption routine is required that takes both a random IV and a nonce. To get this, Yannick explained how one can use a Tweakable Block Cipher to improve upon the classic counter mode by instead putting the counter into the tweak. Thus their scheme uses a counter (in the tweak) that is initialised with a random IV to encrypt the nonce, the security of which is proven using a neat little balls-and-bins game.<br /><div><br /><br />After a short break, Bart Mennink introduced the XPX construction. 
His construction generalises most single-round tweakable Even-Mansour constructions by considering them all as instances of the TBC</div><div><br /></div><div>\[ \begin{array}{cccccccc} & t_{11}K \oplus t_{12}P(K) & & t_{21}K \oplus t_{22}P(K) \\ & \downarrow & & \downarrow \\ m & \to \oplus \to & P & \to \oplus \to & c \\ \end{array} \] </div><div><br /> under certain sets of tweaks $(t_{11},t_{12},t_{21},t_{22}) \in \mathcal{T}$ (apologies for the terrible diagram!). After describing conditions for such tweak sets to be weak (i.e., totally insecure), he explained that all other sets are in fact reasonably secure. Developing this further, the work then investigates certain forms of related-key security, and the conditions one must impose on the tweak set to achieve these. Bart then explained how these results apply to some pre-existing schemes, recovering the security of the CAESAR candidates MinAlpha and Prost-COPA (for which the work also demonstrates a certain degree of related-key security). Finally, he showed how these results can be applied to the Chaskey MAC algorithm, and suggested a possible modification that would (at the cost of slightly more expensive key rotation) provide some related-key security, a method that might also be applicable to sponge-based schemes.<br /><br />The penultimate talk was on "Indifferentiability of 8-Round Feistel Networks" by Yuanxi Dai, describing his work with John Steinberger. It is next in a long line of papers seeking to best describe the extent to which one can substitute a Feistel network in for a random permutation, even when the adversary has access to the internal functions. 
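To fix ideas, a balanced Feistel network can be sketched in a few lines of Python; the SHA-256-based round functions below are only a hypothetical stand-in for the public random round functions of the indifferentiability setting, not the construction analysed in the paper:

```python
# Toy balanced 8-round Feistel network over 16-byte blocks. The hash-based
# round functions stand in for ideal random round functions; this is
# illustrative only, not a secure cipher.
import hashlib

N = 8        # half-block size in bytes
ROUNDS = 8   # the paper's setting: an 8-round network

def F(i, half):
    """Round function i, modelled as a hash of the round index and input."""
    return hashlib.sha256(bytes([i]) + half).digest()[:N]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block):
    L, R = block[:N], block[N:]
    for i in range(ROUNDS):
        L, R = R, xor(L, F(i, R))    # one round: (L, R) -> (R, L xor F_i(R))
    return L + R

def feistel_decrypt(block):
    L, R = block[:N], block[N:]
    for i in reversed(range(ROUNDS)):
        L, R = xor(R, F(i, L)), L    # invert one round
    return L + R

m = bytes(range(16))
assert feistel_decrypt(feistel_encrypt(m)) == m  # the network is a permutation
```

Note that decryption simply applies the rounds in reverse order; this built-in invertibility is what lets a Feistel network turn (random) functions into a permutation, which is exactly the object the indifferentiability results compare against.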
The presentation was well delivered and described the overall intuition behind the proof and the design of their simulator, but the details of such results are generally very complicated indeed.</div><div><br /></div><div>Finally, Yannick Seurin returned to describe "EWCDM", a block-cipher-based MAC construction that one could use to more efficiently instantiate the CTRT mode described previously, based on joint research with Benoît Cogliati, which looks something like:</div><div>\[ \begin{array}{cccccccc} & & & & N & \to & \downarrow \\ & & & & \downarrow & & \downarrow \\ & & & & E_{k_1} & & \downarrow \\ & & & & \downarrow & & \downarrow \\ M&\to&\text{Hash} & \to & \oplus & \leftarrow & \leftarrow \\ & & & & \downarrow & & \\ & & & & E_{k_2} & & \\ & & & & \downarrow & & \\ & & & & T & & \\ \end{array} \] </div><div> That is, the tag is computed as $T = E_{k_2}\big(E_{k_1}(N) \oplus N \oplus \text{Hash}(M)\big)$. It is secure up to ~$2^{n/2}$ queries under nonce-reuse, and achieves security for $2^{2n/3}$ queries in the nonce-respecting setting. Moreover, for the nonce-respecting case the actual security level might be even better, since the best known attack currently sits at around $2^{n}$ queries, leaving scope for further research.</div><div><br /></div>Guy Barwellhttps://plus.google.com/103638935619413162676noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-35556325017594718162016-08-11T18:08:00.000+01:002016-08-11T18:08:16.793+01:00USENIX 2016: How to Scrutinize "Password1"On the first day of USENIX, there was one <a href="https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/wheeler">talk</a> that particularly caught my attention. Daniel Lowe Wheeler from Dropbox talked about password strength estimation, and he started with the USENIX online account registration, which rates "password" as a fair password, "Password" as good, and "Password1" as strong, while "zjwca" is rated as weak. 
He argued that, while password guessing has improved over the last 35 years, password policy has not evolved much since 1979. Moreover, password policies are inconsistent and not always helpful. Two previous studies found 142 distinct policies on 150 sites and 50 distinct policies on 50 sites, respectively.<br /><br />To put an end to this, the authors propose a client-side piece of JavaScript that takes 3 ms to run and gives accurate estimates for online attacks by the best available algorithms. The core estimator takes the minimum rank of the input over lists such as top passwords ("password", "123456", etc.), top surnames (Li, Khan, etc.), and user-specific information (user name, etc.). It also considers word transformations such as 1337, caps, and reversing, as well as keyboard patterns and sequence patterns. All this information is combined into an estimate of how many guesses a sophisticated algorithm would need to find the password.<br /><br />To evaluate the estimates, the authors used a large data set consisting of leaked passwords as well as other sources. On this data set, other password strength estimators perform quite badly, overestimating the number of attempts for a lot of passwords that would be found in less than $10^5$ tries. A particular offender is NIST entropy, which is completely oblivious to real-world choices such as "password". 
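The minimum-rank idea behind the core estimator can be sketched as follows; the tiny word lists, the transformations undone, and the crude brute-force fallback are illustrative stand-ins for zxcvbn's real ranked dictionaries and scoring, not its actual code:

```python
# Sketch of the minimum-rank idea: a password's guess estimate is its
# lowest rank over frequency-ordered lists, after undoing simple
# transformations. Lists and fallback are toy stand-ins, not zxcvbn data.
TOP_PASSWORDS = ["password", "123456", "12345678", "qwerty", "abc123"]
TOP_SURNAMES = ["li", "khan", "smith", "garcia", "wang"]

LEET = str.maketrans("013457", "oleast")  # 0->o, 1->l, 3->e, 4->a, 5->s, 7->t

def guess_estimate(pw):
    """Minimum rank of pw over all lists, or a crude brute-force count."""
    low = pw.lower()
    candidates = {low, low.rstrip("0123456789"), low.translate(LEET)}
    ranks = [lst.index(c) + 1
             for lst in (TOP_PASSWORDS, TOP_SURNAMES)
             for c in candidates if c in lst]
    return min(ranks) if ranks else 26 ** len(pw)

assert guess_estimate("Password1") == 1   # caps and trailing digit undone
assert guess_estimate("zjwca") > 10 ** 6  # no match: brute-force estimate
```

This already reproduces the anecdote from the talk: "Password1" is essentially the most common password in disguise, while the short but unpredictable "zjwca" needs a full brute-force search.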
By comparison, zxcvbn overestimates for very few passwords.<br /><br />The software is available at <a href="https://github.com/dropbox/zxcvbn">https://github.com/dropbox/zxcvbn</a>, and it is already used by a number of companies, most notably WordPress.Marcel Kellerhttp://www.blogger.com/profile/11453335175588774943noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-4354824590867766992016-07-16T16:09:00.003+01:002016-07-20T11:55:24.520+01:00ECRYPT-NET Workshop on Crypto Design for IoT: talk on physical security<div style="text-align: justify;">IoT stands for "Internet of Things": thousands of interconnected devices sharing (sensitive) information and personal data. As many of them are small and embedded (though not all: during a summary talk, Florian Böhl pointed out the existence of connected Caterpillars, for instance...<a href="http://www.industrytap.com/wp-content/uploads/2014/11/960x550xCaterpillar_Mining_Truck.jpg.pagespeed.ic.zgMnTNvd9i.jpg">these</a>, not <a href="https://upload.wikimedia.org/wikipedia/commons/f/f3/Chenille_de_Grand_porte_queue_(macaon).jpg">these</a>), this directly translates to the need for a careful evaluation of threats due to side-channel attacks.<br /><br />Benedikt Gierlichs gave a talk about this crucial aspect of IoT deployment. He introduced the subject by means of possible applications and use-cases (many of which were the main focus of other talks) and by explaining common issues in securing IoT nodes. In particular, he gave a nice "equation" that succinctly describes them.<br /><br /><div style="text-align: center;"><i>IoT device = embedded device + network</i></div><br />Although quite a simplistic representation of nodes, the equation suggests a very interesting peculiarity of IoT devices within the security framework: the possible points of failure are more numerous and more dangerous than in usual non-connected devices. 
As Ingrid Verbauwhede also remarked during the discussion phase, many of those devices are secure by themselves; it is the fact of being part of a network that raises security issues. Indeed, network security adds a non-trivial challenge to the already tough work of securing an embedded device. Since such a discussion is prohibitively broad for a workshop talk (in fact it spanned the whole workshop), Benedikt outlined three essential components in IoT security. The nodes need:<br /><br /><ol><li><u>good cryptography</u>: self-explanatory;</li><li><u>secure interfaces</u>: nodes need to communicate among each other and with hubs, the cloud, servers, smartphones. Each of these exchanges of information must happen in a formatted and standardised way, using protocols for instance. In this regard, Johan Stokking, co-founder of The Things Network, said in his talk that many devices can't even support the IP protocol because it's too complicated. On top of this, at some point all the data should reach final users, for which secure GUIs and access points are needed;</li><li><u>secure implementations</u>.</li></ol><br />Taking the first two points for granted, the remainder of the talk focused on the third by providing introductory notions of side-channel analysis, in particular an overview of possible attacks (active/passive, invasive/non-invasive). The speaker remarked on the number of things that can possibly go wrong even when good crypto and secure interfaces are deployed. If such an assumption is dropped, the scenario is even scarier. In the end the moral was that, within the framework of IoT, "protecting against physical attacks is perhaps not the most pressing issue". 
Arguably the most pressing issue is depicted by the following picture.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-cb1xb88baLs/V4pNWqYoNiI/AAAAAAAAAv0/QEI6flLNC8Q829ANFymLnTL9e43wM-PrwCLcB/s1600/pic1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="211" src="https://3.bp.blogspot.com/-cb1xb88baLs/V4pNWqYoNiI/AAAAAAAAAv0/QEI6flLNC8Q829ANFymLnTL9e43wM-PrwCLcB/s320/pic1.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>The graph is not based on real data, making it somewhat informal (and accordingly, it's been drawn with an informal graphic tool). The x-axis represents the number of IoT devices and the y-axis carries an extremely informal notion of "percentage of security". The story told is that the majority of devices come with almost no security, and a very small part delivers very strong security. A lot of effort has been put into targeting the left-most part of the graph: developing really secure protocols and algorithms to make the latter, perhaps already reasonably robust, even better. 
What should be done (more) in order to ship secure products into every house is to push the overall "percentage of security" up in (almost) all devices.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-ztEfndQTASw/V4pNWkL4G5I/AAAAAAAAAvw/2COqZmNPU78ND1-FwdRVS3hC1G4w83jiQCLcB/s1600/pic2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="210" src="https://1.bp.blogspot.com/-ztEfndQTASw/V4pNWkL4G5I/AAAAAAAAAvw/2COqZmNPU78ND1-FwdRVS3hC1G4w83jiQCLcB/s320/pic2.png" width="320" /></a></div></div>Marco Martinolihttps://plus.google.com/106563453646172877995noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-59245671830652075782016-07-07T11:24:00.001+01:002016-07-07T11:24:56.509+01:00WHEAT 2016: The Pre-history of Lattice-based Cryptanalysis<div dir="ltr" style="text-align: left;" trbidi="on"><div><div style="text-align: justify;">On the second day of the <a href="https://wheat2016.lip6.fr/">Workshop HEAT</a>, Antoine Joux gave an invited talk on the history of cryptanalytic applications of <i>lattice-basis reduction</i> algorithms.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">A lattice is a discrete subgroup of $\mathbb{R}^n$. Equivalently, it is the set of integer-linear combinations of $\mathbb{R}$-linearly independent vectors in $\mathbb{R}^n$. Finding a short basis in $\mathbb{R}^2$ is relatively easy, and <i>Gauss' lattice-basis reduction</i> algorithm is effective for this purpose. As the speaker emphasised, things become more complicated when we move to higher dimensions. He then went on to discuss the details of the famous <i>LLL</i> algorithm, invented by Lenstra, Lenstra and Lovász in 1982, for lattice-basis reduction in arbitrary dimensions. LLL is a polynomial-time algorithm but returns a short vector within an exponential approximation factor of the length of the shortest vector in the lattice. 
This algorithm, which performs well in practice, may be viewed as a combination of the Gauss lattice reduction and the <i>Gram-Schmidt orthogonalisation</i> methods.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">An early application of lattice-basis reduction is the cryptanalysis of <i>knapsack-based</i> cryptosystems. The knapsack problem, a.k.a. the <i>subset-sum</i> problem, is: given integers $a_1, \ldots, a_n$ and $S$, find $\epsilon_1,\ldots,\epsilon_n \in \{0,1\}$ such that </div><div style="text-align: justify;">\[S = \overset{n}{\underset{i=1}{\sum}} \epsilon_i \cdot a_i.\] </div><div style="text-align: justify;">This problem is NP-hard though some cases are easy, and it served as a basis for cryptosystems such as the <i>Merkle-Hellman</i> cryptosystem. The basic idea is to hide an easy knapsack in a hard-looking one. However, at CRYPTO 1982, Shamir broke this cryptosystem using lattice-basis reduction. This attack works for <i>super-increasing</i> knapsacks, where we have $a_i > \sum_{j=1}^{i-1} a_j$. 
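The super-increasing case is the "easy knapsack" referred to above: because each weight exceeds the sum of all smaller ones, the solution can be read off greedily from the largest weight down. A quick sketch (with made-up toy weights):

```python
# Greedy recovery for a super-increasing knapsack: since
# a_i > a_1 + ... + a_{i-1}, weight a_i is in the solution
# exactly when the remaining target S is at least a_i.
def solve_superincreasing(a, S):
    """Return bits (eps_1, ..., eps_n) with sum(eps_i * a_i) == S, or None."""
    eps = [0] * len(a)
    for i in range(len(a) - 1, -1, -1):   # largest weight first
        if S >= a[i]:
            eps[i] = 1
            S -= a[i]
    return eps if S == 0 else None

a = [2, 3, 7, 15, 31, 63, 127]            # super-increasing weights
bits = [1, 0, 1, 1, 0, 0, 1]
S = sum(e * w for e, w in zip(bits, a))   # 2 + 7 + 15 + 127 = 151
assert solve_superincreasing(a, S) == bits
```

Merkle-Hellman disguises such an easy knapsack behind a modular multiplication; the cryptanalysis recovers enough of that hidden structure to reduce decryption to this greedy step.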
Other works, starting with that of <i>Lagarias-Odlyzko</i> in 1985, led to improved attacks on <i>low-density</i> knapsacks.</div><div style="text-align: justify;"><br /></div><div style="text-align: justify;">The speaker also briefly spoke about the application of lattice-basis reduction to the problems of</div><div style="text-align: justify;"><br /></div><ul style="text-align: justify;"><li><i>reconstructing small linear dependencies</i>: given a sequence of real numbers $(x_1,\ldots,x_n)$, find small coefficients $v_i$ such that $\sum v_i \cdot x_i = 0$,</li><li><i>constructing polynomials</i> for use in the <i>Number Field Sieve</i> for solving the <i>Discrete Logarithm Problem</i>, and</li><li><i>finding small roots</i> of univariate and bivariate integer polynomials <i>modulo</i> composite numbers: <i>Coppersmith's attack</i>.</li></ul></div></div>Srinivas Vivekhttps://plus.google.com/110838877883478858848noreply@blogger.com0