Bristol Cryptography Blog - A blog for the cryptography group of the University of Bristol, to enable discussion on cryptography and other matters related to our research.<br /><br /><b>CHES 2017 Taipei, Taiwan</b> (28 September 2017)<br /><br /><a href="https://ches.iacr.org/" target="_blank">CHES 2017</a> was held September 25th - 28th in Taipei, Taiwan. This being my first trip to CHES, I was glad to see a mix of academics and people from industry who had ties to cryptographic hardware and embedded systems.<br /><br />Although I have a limited frame of reference, I feel the standard of the conference was quite high: the presenters all knew their material in great detail, and were able to accurately describe the contributions they had made to their respective fields.<br /><br />My favourite talks were in the 'Side-Channel Analysis' and the 'Emerging Attacks' sessions, as the talks in these two sessions in particular were engaging and close to the work I have been doing during my PhD.<br /><br />However, my obligatory post-conference blog post will be on '<a href="https://eprint.iacr.org/2017/627.pdf" target="_blank">Sliding right into disaster: Left-to-right sliding windows leak</a>', a joint work by Daniel J.
Bernstein, Joachim Breitner, Daniel Genkin, Leon Groot Bruinderink, Nadia Heninger, Tanja Lange, Christine van Vredendaal, and Yuval Yarom (I wasn't aware so many people could work on one paper at the same time!).<br /><br />The contribution of the paper is to show that although the Right-to-Left sliding window doesn't leak a great deal of information, the Left-to-Right sliding window leaks just enough to recover the full key (in some cases).<br /><br />For a brief recap, RSA uses modular exponentiation, and in many implementations the 'sliding window' method is used for efficiency. This can be done either Left-to-Right or Right-to-Left, and although the two are very similar, they have slight differences: the Right-to-Left method tends to be easier to program and uses the same number of multiplications as Left-to-Right, but requires a bit more storage. Both are used in practice: the paper shows that the Libgcrypt crypto library uses the Left-to-Right method (and hence the authors provide an attack against this implementation).<br /><br />One way to think about it is that if you want to compute <i>x</i>^25, you convert the exponent 25 into binary, recode this bitstring in some way (depending on whether you are going Left-to-Right or Right-to-Left, and also on the size of your window <i>w</i>), and then parse the bitstring: perform a squaring for every bit, and an additional multiplication by a precomputed power for every non-zero window digit (or something to that effect).<br /><br />In the bitstring produced by the Right-to-Left method, due to the way it is created, we are guaranteed to have at least <i>w</i> - 1 zero bits after every non-zero digit. From a leakage point of view, this doesn't provide much information.<br /><br />However, in the Left-to-Right method, two non-zero digits can be as close as adjacent.
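To make the recap concrete, here is a toy left-to-right sliding-window modular exponentiation in Python (my own sketch of the textbook algorithm, not Libgcrypt's code); the sequence of squarings and multiplications it performs is exactly what the side channel observes:

```python
def lr_sliding_window_pow(x, e, m, w=4):
    """Left-to-right sliding-window modular exponentiation (textbook sketch)."""
    # Precompute the odd powers x^1, x^3, ..., x^(2^w - 1) mod m.
    table = {1: x % m}
    x2 = (x * x) % m
    for i in range(3, 1 << w, 2):
        table[i] = (table[i - 2] * x2) % m

    bits = bin(e)[2:]
    result, i = 1, 0
    while i < len(bits):
        if bits[i] == '0':
            result = (result * result) % m            # square on a zero bit
            i += 1
        else:
            # Take the longest window of at most w bits that ends in a 1.
            j = min(i + w, len(bits))
            while bits[j - 1] == '0':
                j -= 1
            for _ in range(j - i):
                result = (result * result) % m        # one square per window bit
            result = (result * table[int(bits[i:j], 2)]) % m   # one multiply
            i = j
    return result
```

Tracing the square/multiply pattern for a few exponents already shows how, scanning left to right, multiplications can land arbitrarily close together.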
This adjacency allows us to infer certain details about the bitstring by applying simple rules to what we know (the leakage), and in some cases to work out the value of the key.<br /><br />If we are able to recover >50% of the key bits this way, we can mount an efficient <a href="https://link.springer.com/chapter/10.1007%2F978-3-642-03356-8_1" target="_blank">Heninger-Shacham attack</a> to recover the remaining bits.<br /><br />The paper was presented by Leon Groot Bruinderink, and he explained it in such a way that I found it easy to understand how the attack works and how one would prevent this kind of attack (not using Left-to-Right would be a start). The authors also contacted Libgcrypt with details of the attack, and it has been <a href="http://seclists.org/oss-sec/2017/q3/67" target="_blank">fixed in version 1.7.8</a>.<br /><br />Aside from the papers, CHES has been pretty amazing: the venue was a <a href="http://www.regenthotels.com/regent-taipei" target="_blank">5-star hotel</a> in the centre of Taipei, the food was absolutely incredible (even the banquet, which introduced me to the wonders of sea cucumber), and the excursion to the <a href="https://www.npm.gov.tw/en/" target="_blank">National Palace Museum</a> was exceptionally educational (which, as we all know, is the best kind of fun).<br /><br />I would definitely recommend CHES to anyone interested in the more practical side of cryptography, although if it ever takes place in Taiwan again, I strongly suggest you YouTube how to use chopsticks.
Unfortunately I never learnt, and after a humiliating trip to the Shilin Night Market, am now featured on several locals' phones in a video named 'The Tourist who couldn't eat Beef Noodle Soup'.<br /><br /><b>Crypto 2017 - How Microsoft Wants to Fix the Internet (Spoiler: Without Blockchains)</b> (5 September 2017)<br /><br />In the second invited talk at Crypto, <a href="https://www.microsoft.com/en-us/research/people/fournet/">Cédric Fournet</a> from Microsoft Research presented the recent efforts of <a href="https://www.microsoft.com/en-us/research/project/project-everest-verified-secure-implementations-https-ecosystem/">Project Everest</a> (Everest VERified
End-to-end Secure Transport), which seems to be an attempt to fix the implementation of TLS once and for all. Appropriately for such a gigantic task, more than a dozen researchers on three continents (and the UK) are working on making it verifiable and efficient at the same time.<br />
<br />
As with every self-respecting talk in the area, it started with the disaster porn that is the history of TLS (our lives depend on it, but it's complicated, there have been 20 years of failures, insert your favourite attack here). However, the Crypto audience hardly needs preaching to (just a reminder that the work isn't done with a proof sketch; performance and error handling also matter), so the talk swiftly moved on to the proposed solutions.<br />
<br />
The story starts in 2013 with miTLS, which was the first verified standard-compliant implementation. However, it still involved hand-written proofs and was more of an experimental platform. Enter Everest: They want to tick all the boxes by providing verified drop-in replacements for the HTTPS ecosystem. It covers the whole range from security definitions to code with good performance and side-channel protection.<br />
<br />
As an example, Cédric presented Poly1305, a MAC that uses arithmetic modulo $2^{130}-5$ and forms part of the upcoming TLS 1.3 specification. Unsurprisingly, bugs have already been found in OpenSSL's implementation. Project Everest has implemented Poly1305 in ~3,000 lines of Low*, a subset of F* (a functional language) that allows both C-style programming (with pointer arithmetic) and verification. Compiling this code with KreMLin (another output of Project Everest) results in machine code that is as fast as hand-written C implementations. The same holds for ChaCha20 and Curve25519.<br />
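For a sense of scale, the Poly1305 computation itself fits in a few lines. The following is a plain, unverified Python sketch following RFC 8439, with none of the guarantees the Low* version provides:

```python
def poly1305_mac(msg: bytes, key: bytes) -> bytes:
    """Poly1305 (RFC 8439): evaluate a polynomial in the clamped key part r
    modulo 2^130 - 5, then add s and truncate to 128 bits."""
    p = (1 << 130) - 5
    r = int.from_bytes(key[:16], "little") & 0x0ffffffc0ffffffc0ffffffc0fffffff
    s = int.from_bytes(key[16:32], "little")
    acc = 0
    for i in range(0, len(msg), 16):
        # Each (up to 16-byte) block gets a high 0x01 byte appended.
        block = int.from_bytes(msg[i:i + 16] + b"\x01", "little")
        acc = (acc + block) * r % p
    return ((acc + s) & ((1 << 128) - 1)).to_bytes(16, "little")

# RFC 8439 Section 2.5.2 test vector:
key = bytes.fromhex("85d6be7857556d337f4452fe42d506a8"
                    "0103808afb0db2fd4abff6af4149f51b")
tag = poly1305_mac(b"Cryptographic Forum Research Group", key)
assert tag.hex() == "a8061dc1305136c6c22b8baf0c0127a9"
```

The OpenSSL bugs mentioned above were in exactly this kind of arithmetic, only carried out on multi-limb integers in C, which is where carry-propagation mistakes hide.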
<br />
However, hand-written assembly is still faster. The project aims to catch up with Vale, which was published at USENIX Security this year. Vale promises extensible, automated assembly verification and side-channel protection.<br />
<br />
So what is the way forward? TLS 1.3 is on the horizon, bringing various improvements at the cost of a considerable re-design. This requires new implementations, and the project hopes to be first to market with an efficient and verifiable one.<br />
<br />
For the rest of the talk, Cédric gave more details on how F* naturally lends itself to security games, and how they proved concrete security bounds for the TLS 1.2 and 1.3 record ciphersuites.<br />
<br />
All in all, I think it was a great example of an invited talk at Crypto, putting cryptography in a bigger context and bringing work that isn't necessarily on a cryptographer's radar to our attention.<br /><br /><b>Crypto 2017 - LPN Decoded</b> (22 August 2017)<br /><br />Crypto 2017 kicked off this morning in Santa Barbara. After a quick eclipse-watching break, the lattice track ended with the presentation of <a href="https://eprint.iacr.org/2017/078.pdf">LPN Decoded</a> by Andre Esser, Robert Kübler, and Alexander May.<br />
<br />
Learning Parity with Noise (LPN) is used as the underlying hardness problem in many cryptographic protocols. It is equivalent to decoding random linear codes, and thus offers strong security guarantees. The authors of this paper propose a memory-efficient algorithm for the LPN problem, as well as the first quantum algorithm for LPN.<br />
<br />
Let's first recall the definition of the LPN problem. Let $s \in \mathbb{F}_2^k$ be a secret, and let $(\mathbf{a}_i, b_i)$ be samples, where $b_i = \langle \mathbf{a}_i, s \rangle + e_i$ for errors $e_i \in \{0,1\}$ with $\Pr(e_i = 1) = \tau < 1/2$. From the samples $(\mathbf{a}_i, b_i)$, recover $s$. We see that the two LPN parameters are $k$ and $\tau$. Notice that LPN can be seen as a special case of the Learning with Errors (LWE) problem; in fact, <a href="http://www.cims.nyu.edu/~regev/papers/qcrypto.pdf">LWE originated as a generalisation of LPN</a>.<br />
<div>
<br /></div>
<div>
If $\tau$ is zero, we can draw $k$ linearly independent samples and solve for $s$ by Gaussian elimination. This can be extended to an algorithm for $\tau < 1/2$ by repeatedly computing a guess $s'$ from $k$ samples (hoping they are error-free) and testing whether $s'=s$. This works well for small $\tau$, for example $\tau = 1/\sqrt{k}$, as used in some public-key encryption schemes. Call this approach Gauss.<br />
<br />
For larger constant $\tau$, the best current algorithm is <a href="http://dl.acm.org/citation.cfm?id=792543">BKW</a>. However, although BKW has the best running time, it cannot be implemented for even medium size LPN parameters because of its memory consumption. Further, BKW has a bad running time dependency on $\tau$. Both algorithms also require many LPN oracle calls. </div>
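The noiseless core of Gauss is just linear algebra over $\mathbb{F}_2$; a toy sketch (my own helper names, not the authors' code) makes this concrete:

```python
import random

def gauss_solve_f2(samples, k):
    """Recover s from noiseless LPN samples (a_i, <a_i, s>) by Gaussian
    elimination over F_2; returns None if the a_i don't have full rank."""
    rows = [a[:] + [b] for a, b in samples]   # augmented matrix [A | b]
    r = 0
    for col in range(k):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            return None                        # not full rank: need fresh samples
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        r += 1
    return [rows[i][k] for i in range(k)]      # read s off the reduced rows

# Toy usage with tau = 0, so every sample is error-free.
random.seed(0)
k = 16
s = [random.randint(0, 1) for _ in range(k)]
recovered = None
while recovered is None:                       # redraw until full rank
    samples = []
    for _ in range(k + 8):
        a = [random.randint(0, 1) for _ in range(k)]
        samples.append((a, sum(x * y for x, y in zip(a, s)) % 2))
    recovered = gauss_solve_f2(samples, k)
# recovered == s
```

For $\tau > 0$ one would instead guess an error-free subset of samples, run this routine, and test the candidate, which is where the running time blows up.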
<div>
<br /></div>
<div>
The authors take these two as a starting point. They describe a Pooled Gauss algorithm, which requires only a polynomial number of samples. From those, it looks for error-free subsets of samples, similarly to Gauss. The resulting algorithm has the same space and running time complexity, but requires significantly fewer oracle calls. It has the additional advantage of giving rise to a quantum version, where Grover search can be applied to save a square-root factor in the running time. </div>
<div>
<br /></div>
<div>
They then describe a second, hybrid algorithm, in which a dimension reduction step is added. Thus Well-Pooled Gauss proceeds in two steps. First, reduce the dimension $k$ to $k'$ (for example, using methods such as BKW). Then, decode the smaller instance via Gaussian elimination. The latter step is improved upon by using the <a href="http://www5.rz.rub.de:8032/imperia/md/content/isd.pdf">MMT</a> algorithm.<br />
<br />
For full results, see the original paper. Their conclusion is that decoding remains the best strategy for small $\tau$: it admits quantum speed-ups and is memory-efficient. The hybrid approach in fact has no advantage over it for small values of $\tau$. For larger values, however, they manage to solve for the first time an LPN instance of what they call medium parameters - $k = 243$, $\tau = 1/8$ - in 15 days.</div>
<b>Eurocrypt 2017 - Parallel Implementations of Masking Schemes and the Bounded Moment Leakage Model</b> (2 May 2017)<html>
<head>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}});
</script>
<script async="" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript">
</script>
</head>
<body>
<div style="text-align: justify;">
Side-channel analysis made its way into Eurocrypt this year thanks to two talks, the first of which was given by François-Xavier Standaert on a new model in which to prove the security of implementations. When talking about provable security in the context of side-channel analysis, one prominent model comes to mind: the <i>d</i>-probing model, in which the adversary is allowed to probe <i>d</i> internal variables (roughly corresponding to <i>d</i> wires inside an actual implementation) and learn their values. Another very famous model, introduced ten years later, is the noisy leakage model, in which the adversary may probe all intermediate variables (or wires), but the learnt values are affected by errors due to noise. To complete the picture, it was proved that security in the probing model implies security in the noisy leakage one.
</div>
<br />
<div style="text-align: justify;">
The work of Barthe, Dupressoir, Faust, Grégoire, Standaert and Strub is motivated precisely by how these two models deal with parallel implementations of cryptosystems. On the one hand, the probing model admits very simple and elegant descriptions and proof techniques, but it is inherently oriented towards serial implementations; on the other hand, the noisy leakage model naturally includes parallel implementations in its scope but, since it admits the existence of noise in leakage functions, it lacks simplicity. The latter point is particularly important when circuits are analysed with automated tools and formal methods, because these can rarely deal with errors.
</div>
<br />
<div style="text-align: justify;">
The contribution of the paper can then be summarised as the definition of a new model that aims to combine the pros of both previous models: the Bounded Moment leakage Model (BMM). The authors show how it relates to the probing model and give constructions that are secure in their model. In particular, they prove that the BMM is strictly weaker than the probing model: security in the latter implies security in the former, and they give a counterexample showing that the opposite does not hold. The informal definition of the model given during the talk is the following:
<blockquote class="tr_bq" style="text-align: justify;">
<i>An implementation is secure at order o in the BMM if all mixed statistical moments of order up to o of its leakage vectors are independent of any sensitive variable manipulated.</i>
</blockquote>
</div>
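As a toy illustration of this definition (my own sketch, not an example from the paper), take first-order Boolean masking with two shares leaked in parallel: every first-order moment of the leakage is independent of the secret bit, while a mixed second-order moment is not, so the gadget is secure at order 1 but not at order 2 in the BMM:

```python
import random
import statistics

random.seed(42)

def leakage_moments(secret_bit, trials=20000):
    """Leak the two Boolean shares of secret_bit in parallel and estimate
    E[l1], E[l2] (first-order moments) and E[l1*l2] (mixed second-order)."""
    l1, l2, prod = [], [], []
    for _ in range(trials):
        mask = random.randint(0, 1)
        share1, share2 = mask, secret_bit ^ mask   # secret = share1 XOR share2
        l1.append(share1)
        l2.append(share2)
        prod.append(share1 * share2)
    return statistics.mean(l1), statistics.mean(l2), statistics.mean(prod)

m0 = leakage_moments(0)
m1 = leakage_moments(1)
# First-order moments are ~0.5 regardless of the secret bit,
# but E[l1*l2] is ~0.5 for secret 0 and exactly 0 for secret 1.
```

An adversary who can estimate second-order moments of the parallel leakage vector therefore distinguishes the two secrets, matching the order-1 bound.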
<br />
<div style="text-align: justify;">
A parallel multiplication algorithm and a parallel refreshing algorithm are the examples used to show practical cases where the reduction between the models holds; its statement is the following:
<blockquote class="tr_bq" style="text-align: justify;">
<i>A parallel implementation is secure at order o in the BMM if its serialisation is secure at order o in the probing model.</i>
</blockquote>
The falsity of the converse is shown in a slightly different setting, namely that of continuous leakage: the adversary does not just learn the values carried by some wires by probing them once; the operation can be repeated as many times as desired, and the probes can be moved adaptively. Clearly this is a much stronger adversary, as it can accumulate knowledge over multiple probing sessions. This is used to show that security in the continuous BMM does not imply security in the continuous probing model: the refreshing scheme mentioned above can easily be broken in the latter after a number of iterations linear in the number of shares, but not in the former, as adapting the position of the probes does not help; an adversary in the BMM can already ask for leakage on a bounded function of all the shares.
</div>
<br />
<div style="text-align: justify;">
Both <a href="https://eurocrypt2017.di.ens.fr/slides/A01-parallel-implementations.pdf" target="new">slides</a> and <a href="https://eprint.iacr.org/2016/912.pdf" target="new">paper</a> are already available.
</div>
</body>
</html>
<b>Eurocrypt 2017: On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL</b> (2 May 2017)<br /><br />This morning, Martin gave a great talk on lattice attacks and parameter choices for Learning With Errors (LWE) with small and sparse secrets. The work presents new attacks on LWE instances, yielding revised security estimates. This leads to a revised exponent of the dual lattice attack by a factor of <b>2L/(2L+1)</b>, for <b>log q = Θ(L*log n)</b>. The paper exploits the fact that most lattice-based FHE schemes use short and sparse secrets. We will write <b>q</b> to denote the LWE modulus throughout.<br />
<br />
Let's first have a look at the set-up. Remember that LWE consists of distinguishing between pairs <b>(A, As+e)</b> and <b>(A, b)</b>. In the first instance, <b>A</b> is selected uniformly at random and the error <b>e</b> is selected from a special (usually Gaussian) distribution. In the second, both <b>A</b> and <b>b</b> are uniformly random. Selecting <b>s</b>, as this work shows, is perhaps trickier than previously thought. Theory says that, in order to preserve security, selecting a short and sparse secret <b>s</b> means the dimension must be increased to <b>n*log_2(q)</b>. Practice says to just ignore that and pick a small secret anyway. More concretely, HElib typically picks a secret <b>s</b> such that exactly <b>h=64</b> entries are in <b>{-1,1}</b> and all the rest are 0. SEAL picks secrets with entries uniformly random in <b>{-1,0,1}</b>.<br />
<br />
We also recall that the dual lattice attack consists of finding a short vector <b>w</b> such that <b>wA = 0 (mod q)</b>, then checking whether<br />
<div style="text-align: center;">
<b><w, As+e> = <w, e></b></div>
is short. If we are given an LWE sample, <b>e</b> is short, so the inner product is short. Short*short = short, as any good cryptographer can tell you.<br />
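This distinguishing step can be demonstrated numerically with a planted short dual vector (in the real attack, finding a short <b>w</b> is the expensive lattice-reduction part; everything below is my own toy construction):

```python
import random

random.seed(0)
q, n, m = 12289, 32, 33

# Build A so that w = (1, ..., 1) satisfies wA = 0 (mod q):
# the last sample row is minus the sum of the others.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m - 1)]
A.append([(-sum(col)) % q for col in zip(*A)])
w = [1] * m

s = [random.randrange(q) for _ in range(n)]
e = [random.choice([-2, -1, 0, 1, 2]) for _ in range(m)]
b_lwe = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
b_uni = [random.randrange(q) for _ in range(m)]

def centred(x):
    """Map x mod q to its representative in [-q/2, q/2]."""
    return ((x + q // 2) % q) - q // 2

ip_lwe = centred(sum(wi * bi for wi, bi in zip(w, b_lwe)) % q)
ip_uni = centred(sum(wi * bi for wi, bi in zip(w, b_uni)) % q)
# <w, b> collapses to <w, e> for the LWE sample, so |ip_lwe| <= 2*m,
# while the uniform case is (almost surely) of size comparable to q.
```

The whole art of the attack is obtaining <b>w</b> short enough that this gap survives the noise amplification.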
<br />
The improvements presented in this paper rely on three main observations. Firstly, a revised dual lattice attack is presented. This step adapts BKW-style algorithms to increase efficiency, and applies in general, i.e. it depends on neither the shortness nor the sparseness of the secret. It is achieved by applying BKZ to the target basis, then re-randomising the result and applying BKZ again with a different block size.<br />
<br />
The second optimisation exploits the fact that we have small secrets. We observe that we can relax the condition on <b>w</b> somewhat. Indeed, if <b>s</b> is short, then finding <b>w</b> such that <b>Aw</b> is short instead of 0 is good enough. Therefore, we look for vectors <b>(v,w)</b> in the lattice<br />
<br />
<div style="text-align: center;">
<b>L = {(y,x): yA = x (mod q)}</b>.<br />
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Now in small-secret LWE instances, <b>||s||<||e||</b>, and so we may allow <b>||w||>||v||</b> such that</div>
</div>
<b>||<w,s>|| ≈ ||<v,e>||</b>.<br />
<br />
Finally, the sparsity of the small secret is exploited. This essentially relies on the following observation: when <b>s</b> is very sparse, most of the columns of <b>A</b> become irrelevant, so we can just ignore them. <br />
<br />
The final algorithm, SILKE, combines the three steps above:<br />
<ul>
<li>Perform BKZ twice with different block sizes to produce many short vectors</li>
<li>Scale the normal form of the dual lattice</li>
<li>If sparse, ignore the presumed zero columns, correct for mistakes by checking shifted distribution</li>
</ul>
<br />
As usual, Martin wins the Best Slides Award for including kittens.<br /><br /><b>Is Your Banking App Secure?</b> (12 April 2017)<br /><br />Last week I was in Malta for <a href="http://fc17.ifca.ai/">Financial Cryptography and Data Security 2017</a> to present my <a href="https://eprint.iacr.org/2017/134">recent work</a> on securing the PKCS#11 cryptographic API.<br />
<br />
One talk that stood out for me was by researchers from the University of Birmingham, who looked for vulnerabilities in the mobile apps provided by major UK banks.<br />
<br />
<b>Sadly, they found major weaknesses in apps from 5 of the 15 banks they investigated.</b><br />
<br />
Several apps use <i>certificate pinning</i>, where the app hard-codes a certificate from a trusted CA and only accepts public keys that are signed by the pinned certificate.<br />
This is good practice: even if an attacker adds their own rogue certificate to the phone's trust store, it won't be accepted by the app.<br />
However, two Android apps (for Natwest and Co-op) accepted <i>any</i> public key signed by the pinned certificate, without checking the domain name!<br />
So the attack works as follows:<br />
<br />
<ol>
<li>Purchase a certificate for a domain you own from the trusted CA</li>
<li>The app will accept your public key with this certificate</li>
<li>Man-in-the-middle all the encrypted traffic between the user and their bank.</li>
</ol>
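The bug class is easy to state in code. The sketch below is entirely hypothetical (toy verifier names, not the banks' actual code): checking only the issuing chain and never the name means any certificate from the pinned CA is accepted:

```python
def verify_pinned_vulnerable(cert_issuer, pinned_issuer, hostname, cert_names):
    """Accepts any cert from the pinned CA: the domain is never checked."""
    return cert_issuer == pinned_issuer

def verify_pinned_fixed(cert_issuer, pinned_issuer, hostname, cert_names):
    """Checks the chain AND that the cert was issued for this hostname."""
    return cert_issuer == pinned_issuer and hostname in cert_names

# An attacker's certificate: signed by the pinned CA, but for their own domain.
assert verify_pinned_vulnerable("TrustedCA", "TrustedCA",
                                "bank.example", {"attacker.example"})
assert not verify_pinned_fixed("TrustedCA", "TrustedCA",
                               "bank.example", {"attacker.example"})
```

In real clients the name check is hostname matching against the certificate's subjectAltName entries, which TLS libraries already provide (Python's ssl module, for example, enables it via SSLContext.check_hostname).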
<br />
Curiously, the authors note: "Co-op [...] hired two penetration testing companies to test their apps, both of which had missed this vulnerability". It seems odd that such an obvious mistake wasn't picked up in testing.<br />
<br />
The group also found that several banks - Santander, First Trust and Allied Irish - served adverts to their app users over unencrypted HTTP, meaning an attacker could spoof these ads and mount a phishing scam, perhaps by displaying a fake 'security warning' and directing users to re-enter their account details on a malicious page. It was pointed out in the talk that we're much more likely to 'feel safe' within an app (and hence trust all the content we see) than, say, visiting a webpage using a laptop, so this kind of in-app phishing scam could be very effective.<br />
<br />
There are even more exploits described in the <a href="http://fc17.ifca.ai/preproceedings/paper_83-2.pdf">paper</a>.<br />
<br />
It was refreshing to hear that the vulnerable banks responded well to the disclosures made by the Birmingham group and patched their apps as a result. But I'm a little baffled that these basic errors were ever made in such security-critical applications.<br /><br /><b>PKC 2017: Kenny Paterson accepting bets on breaking TLS 1.3</b> (29 March 2017)<div dir="ltr" style="text-align: left;" trbidi="on">
Kenny, a member of the TLS 1.3 working group, is willing to bet a beer that the 0-RTT handshake of TLS 1.3 will be broken within its first two years.<br />
<br />
In his invited talk, Kenny managed to fill a whole hour with the history of SSL/TLS without mentioning symmetric cryptography beyond keywords, thus staying within the topic of the conference. Despite all versions of SSL being broken to at least some degree, the standardised TLS became the most important security protocol on the Internet.<br />
<br />
The core part of TLS is the handshake protocol, which establishes the choice of ciphers and the session key. Kenny highlighted the high complexity stemming from the many choices (e.g., using a dedicated key exchange protocol or not) and the possible interaction with other protocols in TLS. Together with further weaknesses of the specification, this created the space for the many attacks we have seen. On the upside, these attacks reflect increased attention from academics, which comes together with increased attention from society as a whole. Both have laid the ground for improvements in both the deployment and future versions of TLS. For example, the support of forward secrecy has increased from 12 percent to 86 percent, according to SSL Pulse.<br />
<br />
Turning to concrete attacks, the most important in the area of PKC is the Bleichenbacher attack, published as early as Crypto 1998 (a human born then would be considered a full adult at the conference venue now). Essentially, it exploits the fact that RSA with the padding used in TLS is not CCA-secure, and it recovers the session key after roughly $2^{20}$ interactions with a server. Nevertheless, the TLS 1.0 specification published shortly after Bleichenbacher's publication incorporates the problematic padding (recommending mitigation measures), and later versions retain it for compatibility. The DROWN attack shows the danger of this by exploiting the fact that many servers still offer SSLv2 (about 8% of the Alexa top 200k) and that it is common to use the same key for several protocol versions. An attacker can recover the session key of a TLS session by replaying a part of it in an SSLv2 session that uses the same key.<br />
<br />
On a more positive note, Kenny presented the upcoming TLS 1.3, which has been under development since 2014. It addresses a lot of the weaknesses of previous versions by involving academics from an early stage and doing away with a lot of the complexity (including reducing options and removing ciphers). It furthermore aims to decrease the latency of the handshake by allowing the parties to send encrypted data as early as possible, reducing the round trip time to one in many cases. The goal of low latency has also led to the inclusion of a QUIC-style zero round trip mode, that is, the client can send data already in the first message when resuming a session. However, this 0-RTT data is not fully forward-secure and is therefore confined to a separate API. Nevertheless, Kenny predicts that its mere availability will be too tempting for developers, hence the bet.<br />
<br />
Concluding, he sees three major shifts in TLS thus far: from RSA to elliptic-curve Diffie-Hellman, then to Curve25519, and away from SHA-1 in certificates. A fourth shift might happen with the introduction of post-quantum algorithms such as Google's deployment of New Hope. Less optimistically, he expects that implementation vulnerabilities will continue to come up.<br />
<br />
<i>Update:</i> An earlier version of this post mentioned the non-existing Curve255199 instead of Curve25519, and it attributed New Hope to Google.</div>
<b>Tools for proofs</b> (27 March 2017)<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br />Security proof for even simple cryptographic systems are dangerous and ugly beasts. Luckily, they are only rarely seen: they are usually safely kept in the confines of ``future full-versions'' of papers, or only appear in cartoon-ish form, generically labelled as ... ``proof sketch". </span></span><br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span>
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;">The following two quotes frame the problem in less metaphorical terms. </span></span><br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span>
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;">``<i>In our opinion, many proofs in cryptography have become </i></span></span><span style="background-color: white; color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif; font-size: 15.36px;"><i>essentially unverifiable. Our field may be approaching a </i></span><span style="background-color: white; color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif; font-size: 15.36px;"><i>crisis of rigor</i>". <br /><br /> Bellare and Rogaway (cca 2004)</span><br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span>
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span><br />
<div style="text-align: justify;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><span style="font-size: 15.36px;">``</span><i style="font-size: 15.36px;">Do we have a problem with cryptographic proofs? Yes, we</i></span></span></div>
<br />
<div style="text-align: justify;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><i>do [...] We generate more proofs than we carefully verify</i></span></span></div>
<div style="text-align: left;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><i>(and as a consequence some of our published proofs are</i></span></span></div>
<div style="text-align: justify;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><i>incorrect)</i>". </span></span></div>
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"> </span></span><br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"> Halevi (cca 2005)</span></span><br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span><span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span>
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;">Solutions developed by cryptographers e.g. </span></span><span style="background-color: white; color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif; font-size: 15.36px;">compositional reasoning and </span><span style="background-color: white; color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif; font-size: 15.36px;">the game-hopping technique, help to structure proofs and reduce their complexity and therefore alleviate to some extent the pain of having to develop rigorous proofs. Yet, more often than not proofs are still sketchy and shady.</span><br />
<br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;">There is help that comes from the programming languages community which has a long experience with developing tools for proving that programs work correctly and...c</span></span><span style="background-color: white; color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif; font-size: 15.36px;">ryptographic systems are just programs. Recent progress, e.g. automated verification of parts of TLS, fully verified security proofs of implementation masking schemes to defeat leakage, </span><span style="background-color: white; color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif; font-size: 15.36px;">is impressive and exciting. More work is under way. </span><br />
<br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;">If you want to learn more about <i>how </i></span></span><i style="color: #444444; font-family: 'lucida sans unicode', verdana, sans-serif; font-size: 15.36px;">you</i><i style="color: #444444; font-family: 'lucida sans unicode', verdana, sans-serif; font-size: 15.36px;"> can get someone else to do the proofs for you</i><span style="background-color: white; color: #444444; font-family: 'lucida sans unicode', verdana, sans-serif; font-size: 15.36px;"> or, more realistically, learn about what existing tools can currently do, what they could do in the future, and discuss what is needed and which way to go, then you should attend the </span><br />
<div style="text-align: center;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><b><a href="https://www.cs.bris.ac.uk/Research/CryptographySecurity/MTP/" style="font-size: 15.36px;" target="_blank">Workshop on Models and Tools for Security Analysis and Proofs</a></b><span style="font-size: 15.36px; text-align: left;"><br /> -- Paris, 29th of April; co-located with EuroSnP and Eurocrypt --</span></span></div>
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><br />which the Bristol Crypto group helps organize. The workshop features as speakers some of the most prominent researchers who are contributing to this direction. You can register for the workshop <a href="https://secure.iacr.org/conferences/eurocrypt2017/register/" target="_blank">HERE</a>. Early registration ends March 31st!<br /><br /><br />But wait...there is more. If you want to explore this area beyond what a one-day workshop allows, then you should consider attending the</span><br />
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span>
<br />
<div style="text-align: center;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><b><a href="https://www.cs.bris.ac.uk/proofschool/" target="_blank">Summer School on Models and Tools for Proofs</a> </b></span></span></div>
<div style="text-align: center;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><b><br />-- </b>Nancy, France, July 10th - 13th --</span></span></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span></div>
<div style="text-align: left;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;">See you all in Paris and/or Nancy!</span></span></div>
<div style="text-align: left;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span></div>
<div style="text-align: left;">
<span style="color: #444444; font-family: "lucida sans unicode" , "verdana" , sans-serif;"><span style="background-color: white; font-size: 15.36px;"><br /></span></span></div>
</div>
Bogdanhttp://www.blogger.com/profile/13266116282208635140noreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-3985643442700164482017-02-21T18:36:00.000+00:002017-02-21T18:46:54.697+00:00Homomorphic Encryption API Software Library<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
The <i>Homomorphic Encryption Application Programming Interface</i> (HE-API) software library is an open source software library being developed as part of the <a href="http://heat-project.eu/">Homomorphic Encryption Applications and Technology</a> (HEAT) project, and is available <a href="http://github.com/bristolcrypto/HEAT">here</a>. The main purpose of this software library is to provide a common easy-to-use interface for various existing Somewhat Homomorphic Encryption (SHE) libraries. Limited support for fixed-point arithmetic is also provided by this library. Note that the HE-API library is still a work in progress.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows meaningful manipulation of ciphertexts. In spite of several recent advances, FHE remains out of practical reach. Hence a reasonable restriction to make is to limit the set of evaluated circuits to a specified subclass, usually determined by the multiplicative depth of the circuit. Such encryption schemes are called SHE schemes. Various libraries that implement SHE schemes, such as <a href="http://github.com/shaih/HElib">HElib</a>, <a href="http://sealcrypto.codeplex.com/">SEAL</a>, <a href="http://github.com/CryptoExperts/FV-NFLlib">FV-NFLlib</a> and <a href="http://github.com/tricosset/HElib-MP">HElib-MP</a>, are already available.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The purpose of this HE-API software library is to provide a common, generic, easy-to-use interface for various existing libraries that implement SHE schemes. The SHE libraries that are currently integrated in the HE-API library are HElib and FV-NFLlib. It may be noted that the FV-NFLlib library is itself an outcome of the HEAT project. At a high level, the HE-API software library abstracts out the technicalities present in the underlying SHE libraries. For instance, the HElib library implements the <a href="http://eprint.iacr.org/2011/277">BGV SHE scheme</a>, while FV-NFLlib implements the <a href="http://eprint.iacr.org/2012/144">FV SHE scheme</a>. Needless to say, the syntax for various classes and routines in the individual libraries will be different, though the underlying semantics are very similar. The HE-API library integrates the underlying SHE libraries under a single interface, thereby shielding the user from syntactic differences. Another feature of the HE-API library is that it contains a minimal, yet complete, set of routines to perform homomorphic computations. The design of this library is motivated by ease of use for non-experts.</div>
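To make the "common interface" idea concrete, here is a hypothetical sketch of what such an abstraction might look like: one front end, pluggable back ends, with a plaintext mock standing in for a real back end such as HElib or FV-NFLlib and the multiplicative-depth budget of an SHE scheme enforced explicitly. The class and method names here are invented for illustration; the actual HE-API classes differ.

```python
# Hypothetical sketch of a common SHE interface. The "ciphertexts" are
# plaintext mocks that only track multiplicative depth, mimicking the
# depth budget a real SHE scheme imposes. Illustration only.
class MockSHEBackend:
    """Stand-in for a real back end such as HElib or FV-NFLlib."""
    def __init__(self, max_mult_depth):
        self.max_mult_depth = max_mult_depth

    def encrypt(self, value):
        return {"value": value, "depth": 0}   # mock "ciphertext"

    def decrypt(self, ct):
        return ct["value"]

    def add(self, ct1, ct2):
        # Additions are cheap: the depth does not grow.
        return {"value": ct1["value"] + ct2["value"],
                "depth": max(ct1["depth"], ct2["depth"])}

    def mul(self, ct1, ct2):
        # Multiplications consume the depth budget.
        depth = max(ct1["depth"], ct2["depth"]) + 1
        if depth > self.max_mult_depth:
            raise ValueError("multiplicative depth budget exceeded")
        return {"value": ct1["value"] * ct2["value"], "depth": depth}

he = MockSHEBackend(max_mult_depth=2)
x, y = he.encrypt(3), he.encrypt(4)
print(he.decrypt(he.add(he.mul(x, y), he.encrypt(5))))   # 17
```

Swapping the mock for a real back end would leave application code written against `encrypt`/`decrypt`/`add`/`mul` unchanged, which is the point of such a library.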
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<b>Supported Data Types</b></div>
<div style="text-align: justify;">
</div>
<div style="text-align: justify;">
The following application data types are supported by the HE-API software library. </div>
<div style="text-align: justify;">
<ul>
<li>Boolean</li>
<li><span style="font-family: inherit;">Unsigned long integers</span></li>
<li><a href="http://gmplib.org/">GMP</a>'s arbitrary precision integers class: <span style="font-family: "courier new" , "courier" , monospace;">mpz_class</span></li>
<li>Polynomials with coefficients of type: <span style="font-family: inherit;">unsigned long integers</span> or <span style="font-family: "courier new" , "courier" , monospace;">mpz_class</span></li>
<li><span style="font-family: "courier new" , "courier" , monospace;"></span>Vectors of : <span style="font-family: inherit;">unsigned long integers</span> or <span style="font-family: "courier new" , "courier" , monospace;">mpz_class</span><span style="font-family: "courier new" , "courier" , monospace;"></span></li>
<li>Fixed-point numbers</li>
</ul>
Note that all the data types and routines described above may <i>not</i> be currently supported by every underlying SHE library.</div>
<div style="text-align: justify;">
<br /></div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-65848202892037199432017-01-13T16:56:00.001+00:002017-01-13T16:56:05.930+00:00RWC 2017 - Secure MPC at Google<div style="text-align: justify;">
This talk was given by Ben Kreuter and its focus was on the apparent disparity between what we research in academia and what is required in the real world, specifically in the field of multi-party computation (MPC). MPC is the idea of allowing multiple parties to compute some function on their combined input without any party revealing anything about their input to the other parties (other than what can be learnt from the output alone).<br />
<br />
While significant work has been done on making MPC efficient in practice (for example, the work of Yehuda Lindell et al. on high-throughput MPC which was presented by Lindell in the preceding talk), the focus tends to be on generic protocols (e.g. general logic circuits) with strong security guarantees (e.g. malicious security), which invariably leads to large computational overhead. In practice, we usually require only specific protocols, which can therefore be optimised, and comparatively weak security guarantees.<br />
<br />
In the real world, network cost is the salient factor, rather than the speed of the protocol, since the parties who are involved in a computation often have to use networks (such as the Internet) which are being used by many other people at the same time and cannot make the best use of the network's full capabilities. The MPC at Google is about computation amongst, for example, mobile phones, laptops and servers, which introduces issues like battery constraints and the possibility of the computation not completing. These considerations, firmly grounded in the real world, are important when developing MPC techniques in research.</div>
<h4>
<br /></h4>
<h4>
Business applications</h4>
<div style="text-align: justify;">
A large portion of Google's revenue is generated by advertising: the tech giant, well-known for its aptitude for accurately determining users' desired search results even when queries are expressed ineloquently, specialises in creating personalised adverts for its wide spectrum of users. The efficacy of an advert is generally measured by the proportion of its viewers who later become customers. Clearly this could be measured by businesses comparing their database of customers' transactions with Google's databases of who has been shown which adverts. This, however, would be an invasion of privacy: instead, Google and the business can do MPC: more specifically, a private set intersection protocol.<br />
<br />
In a private set intersection protocol, the parties involved compute how large the intersection is amongst the sets input by each party, or even some function on those elements in the intersection. So if the business and Google compute a private set intersection protocol on their data, they can determine how well the advertising went.<br />
<br />
Roughly speaking, the MPC Google does in the real world is as follows: Google has a set $\{g_1,g_2,...,g_n\}$ of field elements which encodes a set of people who have been shown an advert for a certain product, and a business has a set $\{b_1,b_2,...,b_m\}$ of field elements which encodes a set of people who have been sold the product in question; Google raises each of its elements to a power $G$ and sends the set $\{g_1^G,g_2^G,...,g_n^G\}$ to the business. The business does the same with its elements for some exponent $B$ to get $\{b_1^B,b_2^B,...,b_m^B\}$, encrypts a set of binary vectors under Paillier encryption (which is additively homomorphic), one corresponding to each element in its set, encoding some other property of the sales (like the amount paid), and also computes the set $\{g_1^{GB},g_2^{GB},...,g_n^{GB}\}$. The business sends Google the set of pairs $\{(b_1^B,P(v_1)),(b_2^B,P(v_2)),...,(b_m^B,P(v_m))\}$ along with $\{g_1^{GB},g_2^{GB},...,g_n^{GB}\}$, and Google computes $\{b_1^{GB},b_2^{GB},...,b_m^{GB}\}$ and adds together all encrypted vectors $P(v_j)$ for which there exists some $i$ such that $g_i^{GB}=b_j^{GB}$. It sends this ciphertext back to the business, which decrypts and interprets the result.<br />
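The commutative-masking core of this protocol can be sketched in a few lines. This is a toy illustration only: the group is exponentiation modulo a small prime (nowhere near secure parameters), and the Paillier ciphertexts $P(v_j)$ are replaced by plaintext integers so that the matching-and-summing step is easy to see.

```python
# Toy sketch of the DH-style private set intersection-sum described above.
# Illustration only: the prime is far too small to be secure, and the
# additively homomorphic Paillier ciphertexts are replaced by plaintexts.
import hashlib
import secrets

P = 2**127 - 1   # toy prime modulus

def h(elem):
    """Deterministically hash a set element into [2, P-1]."""
    d = hashlib.sha256(elem.encode()).digest()
    return int.from_bytes(d, "big") % (P - 2) + 2

def psi_sum(google_set, business_sales):
    G = secrets.randbelow(P - 2) + 2        # Google's secret exponent
    B = secrets.randbelow(P - 2) + 2        # business's secret exponent

    # Google -> business: {g_i^G}; business returns {g_i^{GB}}.
    g_double = {pow(pow(h(g), G, P), B, P) for g in google_set}
    # Business -> Google: pairs (b_j^B, value_j).
    b_pairs = [(pow(h(b), B, P), v) for b, v in business_sales.items()]

    # Google raises each b_j^B to G and sums the values of the matches,
    # i.e. the j's for which some i gives g_i^{GB} = b_j^{GB}.
    return sum(v for bB, v in b_pairs if pow(bB, G, P) in g_double)

ads_shown = ["alice", "bob", "carol"]          # people shown the advert
sales = {"bob": 10, "carol": 25, "dave": 99}   # person -> amount spent
print(psi_sum(ads_shown, sales))               # 35: bob's 10 + carol's 25
```

In the real protocol Google would only learn the homomorphically encrypted sum, not which individuals matched; here both are visible purely to make the flow clear.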
<br />
This protocol is very simple, and it is only passively secure (players are assumed to execute the protocol faithfully but may try to learn things by inspecting their communication transcripts). An interesting, perhaps somewhat orthogonal, concern to how we approach research from an academic point of view is that we must be able to convey the security and efficiency of our protocols to the lawyers, managers and software engineers who will eventually be sanctioning, authorising or implementing them. "The lawyers are interesting because you can show them a proof, and two plus two equals four is a negotiable statement here... managers usually trust your expertise...and software engineers are the worst because they already assume [the protocol] is impossible."<br />
<br />
An alternative solution using garbled circuits was explored in the recent past, but it turned out that their use required some subtle assumptions regarding the computation and communication which would have made the protocol impractical.<br />
<br />
Future work would involve getting a (not too much more expensive) maliciously secure protocol and developing the use of the homomorphic encryption to allow different functions to be computed on the data in the intersection.<br />
<br />
<h4>
Consumer applications</h4>
The Android keyboard app by Google, Gboard, logs what a user types so that it can guess words for auto-completing in the future. This data could be used for training machine learning models, and merging results from many local models would enable the formation of guessing algorithms that work well for everyone. However, to do this, the server would need to receive a large dataset of words typed by a user from each phone so that this processing could be done. Clearly there is an issue of privacy here; moreover, there is also potentially a differential privacy issue.<br />
<br />
This is clearly a good situation in which to use MPC. Each party masks their data using a basic additive secret-sharing scheme: if each party has a vector to input, then for every coordinate, every pair of parties agrees on some random field element; one adds it to, and the other subtracts it from, that coordinate of their vector. When the parties send this to Google, the masks will therefore cancel when added together.<br />
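A minimal sketch of this pairwise masking trick, simulating all the parties in one process (in reality each pair would derive its shared mask from a key exchange, as described below):

```python
# Minimal sketch of pairwise additive masking: every pair of parties
# agrees on a random mask; one adds it, the other subtracts it, so all
# masks cancel in the server's sum. All parties simulated in-process.
import secrets

Q = 2**16          # toy modulus for each vector coordinate

def mask_and_sum(inputs):
    n = len(inputs)
    dim = len(inputs[0])
    masked = [list(v) for v in inputs]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(dim):
                r = secrets.randbelow(Q)     # shared randomness of pair (i, j)
                masked[i][k] = (masked[i][k] + r) % Q
                masked[j][k] = (masked[j][k] - r) % Q
    # The server only ever sees `masked`, yet the sum is exact.
    return [sum(col) % Q for col in zip(*masked)]

vectors = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(mask_and_sum(vectors))   # [12, 15, 18], same as the unmasked sum
```

Each individual masked vector is uniformly random on its own; only the sum over all parties reveals anything.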
<br />
In practice, they use a PRG and perform a key exchange (in which one key is given to each pair of parties, for every possible pair) at the beginning to achieve the same effect with much smaller communication overhead. They also have a trick for dealing with device failures (which is important given the application).<br />
<br />
<br />
This talk provided helpful and relevant insight into the importance of matching what we research with what we require in the real world, which is, after all, one of the main reasons for having conferences such as Real World Crypto. Many of the talks are available to watch online <a href="https://www.realworldcrypto.com/rwc2017" target="_blank">here</a>, and I would highly recommend doing so if interested.</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-85111859976511475372017-01-12T12:50:00.000+00:002017-01-12T12:50:27.629+00:00RWC 2017 - Is Password Insecurity Inevitable?Fresh back from an enlightening trip across the pond, I wanted to write about one of my favourite talks, all about password (in)security, from this year's Real World Cryptography conference.<br />
<br />
As we know: <br />
<ol>
<li>Passwords protect <i>everything.</i></li>
<li>Passwords are <i>terrible.</i></li>
</ol>
But happily, Hugo Krawczyk from IBM Research spoke about some great new work to resolve these two seemingly incompatible statements. There were a lot of details in the talk that I'll have to miss out here (slides are available <a href="http://webee.technion.ac.il/~hugo/rwc17.pdf">online</a>). In particular, I'm going to focus on 'Part I: Take the burden of choosing and memorising passwords off humans'.<br />
<br />
The basic idea - this isn't new - is to have the user memorise a single master password that they use to access a password store. Then the password store derives unique pseudorandom passwords for each service the user wants to access (Google, Facebook, etc.) The problem with this solution is that the password store becomes a single point of failure: if it is compromised, then an offline dictionary attack to find the master password will compromise all of the user's accounts at once.<br />
<br />
Krawczyk et al. suggest an improvement: SPHINX, which amusingly stands for "a password Store that Perfectly Hides from Itself (No eXaggeration)". The first idea is for the password store to not keep hold of (even a hash of) the master password - instead it has an independent secret key $k$, and any time the user wants to log in to a service $S$, they send the master password $pwd$ to the store, the store computes a PRF $PRF(k, pwd | S)$ and this will be sent to $S$ as the user's password for $S$. This means that if the store is compromised, the master password and the account passwords can't be learned unless the user communicates with the store. So this works well if the store is in local, offline hardware, where the user is unlikely to use the store after it is compromised by an attacker.<br />
<br />
However, the authors go further and replace the PRF with an <i>oblivious </i>PRF. This means the store computes an "encrypted" version of $PRF(k, pwd | S)$ from an "encrypted" $pwd|S$, so doesn't learn the plaintext values of the master password or the service password. In practice this can be achieved by the user (i.e. the user's machine) hashing the string $pwd | S$ into an element $g$ of a Diffie-Hellman group, then computing $h = g^r$, where $r$ is a fresh, random exponent, and sending $h$ to the password store. The store's secret key is an exponent $a$, so it computes $h^a$ and sends this back to the user. The user removes the blinding exponent $r$ (i.e. computes $(h^a)^{r^{-1}} = g^a$) and the result is the unique password for $S$. Now even when the password store is compromised and even if the user communicates with the store, the master password and the account passwords can't be learned.<br />
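The blinded exchange can be sketched with modular arithmetic standing in for the Diffie-Hellman group. Toy illustration only: it uses the subgroup of squares modulo a tiny safe prime, whereas a real deployment would use a proper elliptic-curve group with full-size parameters.

```python
# Toy sketch of SPHINX's blinded (oblivious) PRF evaluation, using the
# subgroup of squares modulo a tiny safe prime in place of a real
# elliptic-curve group. Illustration only -- not secure parameters.
import hashlib
import secrets

P = 2039                     # toy safe prime: P = 2*Q + 1
Q = 1019                     # prime order of the subgroup of squares mod P

def hash_to_group(s):
    v = int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % (P - 2) + 2
    return pow(v, 2, P)      # squaring lands in the order-Q subgroup

def user_blind(master_pwd, service):
    g = hash_to_group(master_pwd + "|" + service)
    r = secrets.randbelow(Q - 1) + 1       # fresh blinding exponent
    return pow(g, r, P), r

def store_eval(h, a):
    return pow(h, a, P)      # the store only ever sees the blinded value h

def user_unblind(h_a, r):
    r_inv = pow(r, -1, Q)    # modular inverse (Python 3.8+); exponents mod Q
    return pow(h_a, r_inv, P)

a = secrets.randbelow(Q - 1) + 1           # store's long-term secret key
h, r = user_blind("correct horse battery", "example.com")
service_pwd = user_unblind(store_eval(h, a), r)

# The unblinded result equals a direct PRF evaluation g^a, although the
# store never saw the master password and the user never saw the key a.
g = hash_to_group("correct horse battery|example.com")
print(service_pwd == pow(g, a, P))         # True
```

Note that because the blinding exponent $r$ is fresh each time, the store cannot even link two queries for the same master password.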
<br />
In principle an attacker could recover all the account passwords by compromising both the password store <i>and </i>a service $S$, learning the secret key $a$ and the service password $g^a$, computing $g = H(pwd|S)$ and performing an offline dictionary attack to find $pwd|S$. Then for any other service $S'$, the password can be computed via $H(pwd|S')^a$. But as long as $S$ follows good practice and only stores a hash $H'(g^a)$ of the service password, this attack fails: an offline dictionary attack to recover $g^a$ is unfeasible as it's essentially a random group element. <br />
<br />
There are no particularly expensive computations involved in using SPHINX, the communication between the user and SPHINX does not need to be secure (so it could be online somewhere) and the store will work regardless of what password protocol is used by the service, so it's extremely flexible. SPHINX therefore strikes me as both useful and practical, which is surely the definition of Real World Cryptography.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-86472394985453419042017-01-09T22:22:00.002+00:002017-01-09T22:22:46.285+00:00RWC 2017 - Post-quantum cryptography in the real-world<div class="MsoNormal" style="text-align: justify;">
A new year takes off and, along with it, thousands of resolutions are formulated. Although I am not the right person to talk about them (my diet will begin next Monday), I wish to discuss a resolution that the cryptographic community as a whole has set for itself in 2017. Because that's what people do at Real World Crypto (RWC): they talk about new trends, topics that could be worth exploring during the new year, and directions for their research and interests. This year, for the first time at RWC, post-quantum cryptography (PQC) was given an entire session, a clear sign that times are changing and the moment has come to bring the discussion to the real world. The message is clear: even if quantum computers are not popping up in tomorrow's newspapers, we can't postpone any longer.<o:p></o:p></div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
A very simple reason for this was
given by Rene Peralta, of the NIST PQC team, during the overture of the
session: standardisation takes time, up to seven years if we start right now,
and full transition takes even longer. I found Rene's presentation to be neat
and direct: our public-key cryptography fails against quantum computers and our
symmetric one needs some (non-drastic) modifications. The resolution is to "start thinking about it this year, possibly by November 30<sup>th</sup>, 2017".
However, a question arises quite naturally: are we ready?<o:p></o:p></div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
The other three talks of the session tried to answer in the affirmative. Among the several PQC proposals that are around in theoretical papers, two made their way into RWC: the well-established lattice-based cryptography and the new-born isogeny-based cryptography, which nevertheless carries the pride and sympathy of ECC.<o:p></o:p></div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
<u>Lattices and funny names: NewHope
and Frodo and Crystals<o:p></o:p></u></div>
<div class="MsoNormal" style="text-align: justify;">
Lattice-based cryptography has
three representatives in the run for PQC schemes. Valeria Nikolaenko showed
two: the first one is called NewHope and is a key agreement protocol based
on the hardness of Ring-LWE. The latter is a problem very favourable to applications
because it combines sound theoretical security (a worst-case to average-case reduction) with fast implementations, thanks to specific choices of parameters
which allow for speed-ups in the computations: NewHope turns out to be even
faster than ECC and RSA, but at the price of a larger communication.
However, there are some concerns about the security of LWE when the ring structure is added. Thus, Frodo ("take off the ring") is designed to achieve the same
goal using only standard LWE. The drawback is a degradation in performance,
since the tricks hinted above cannot be used anymore and keys are generally
bigger.<o:p></o:p></div>
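To give a flavour of the assumption underlying these schemes, here is a toy LWE-style encryption of a single bit: recovering the secret from noisy inner products is the hard problem. Absurdly small parameters, purely illustrative; NewHope and Frodo are key agreement schemes built on much more refined machinery.

```python
# Tiny LWE-style encryption of one bit, illustrating the hardness
# assumption behind schemes like Frodo. Toy parameters -- not secure.
import secrets

Q = 3329        # modulus
N = 16          # secret dimension (far too small for real security)

def small_noise():
    return secrets.randbelow(5) - 2          # noise in {-2, ..., 2}

def keygen():
    return [secrets.randbelow(Q) for _ in range(N)]

def encrypt(s, bit):
    a = [secrets.randbelow(Q) for _ in range(N)]
    # b = <a, s> + noise + bit * Q/2: close to <a, s> or offset by Q/2.
    b = (sum(ai * si for ai, si in zip(a, s)) + small_noise()
         + bit * (Q // 2)) % Q
    return a, b

def decrypt(s, ct):
    a, b = ct
    v = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    # v is near 0 (bit 0) or near Q/2 (bit 1), up to the small noise.
    return 1 if Q // 4 < v < 3 * Q // 4 else 0

s = keygen()
print([decrypt(s, encrypt(s, b)) for b in (0, 1, 1, 0)])   # [0, 1, 1, 0]
```

Decryption works because the noise stays far below $Q/4$; without the secret $s$, the pairs $(a, b)$ look like uniformly random noise.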
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
The third lattice-based scheme
was presented by Tancrede Lepoint and is a suite called Crystals. This is based on yet another kind of lattice: module lattices, for which a worst-case to average-case reduction is also known. These are less structured lattices (hence possibly calming down the detractors of ring structure) in which similar implementation speed-ups are possible: the timing is indeed comparable to NewHope's, while the communication is improved.<o:p></o:p></div>
<div class="MsoNormal" style="text-align: justify;">
<br /></div>
<div class="MsoNormal" style="text-align: justify;">
<u>"Make elliptic curves great again"</u></div>
Michael Naehrig presented a new proposal for PQC: do you remember curves with plenty of small subgroups, in which the discrete logarithm problem is easy to solve? Now they come in handy again: the curves are considered to be nodes of a graph, whose edges are the isogenies of small degree (2 and 3) determined by those subgroups (isogenies being structure-preserving maps between curves). In this new context, given two curves in the graph, it is difficult to come up with the isogeny linking the two. However, such a new approach doesn't really stand out against the other solutions: keys are small but performance is not a pro (so to speak).Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-50781950769338482882017-01-09T12:26:00.001+00:002017-01-09T12:26:43.893+00:00RWC 2017 - Erasing Secrets from RAM<span style="font-family: Arial,Helvetica,sans-serif;">One of my favourite talks from the <a href="https://www.realworldcrypto.com/rwc2017" target="_blank">Real World Crypto 2017</a> conference was given by Laurent Simon, on Erasing Secrets from RAM.</span><br />
<span style="font-family: Arial,Helvetica,sans-serif;"><br /></span>
<span style="font-family: Arial,Helvetica,sans-serif;">In short, it was found that in practice, many non-malicious programs handling keys and other sensitive data do not erase the RAM correctly. This would allow an attacker (that has access to all of a system's volatile memory and CPU state) access to any unerased sensitive data.</span><br />
<br />
<span style="font-family: Arial,Helvetica,sans-serif;">It was thought that compiler optimisation played a part in the lack of erasion. Take the code below:</span><br />
<br />
<span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;">void sensitive_function(...) {</span></span><br />
<br />
<span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;"> u8 sensitive_buffer[KEY_MAX] = "\0";</span></span><br />
<span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;"> ...</span></span><br />
<span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;"> <span style="color: red;">zeromem</span>(sensitive_buffer, KEY_MAX);</span></span><br />
<br />
<span style="color: blue;"><span style="font-family: "Courier New",Courier,monospace;">} </span></span><br />
<br />
<span style="font-family: Arial,Helvetica,sans-serif;">The compiler may choose to remove the</span> <span style="font-family: "Courier New", Courier, monospace;">zeromem <span style="font-family: Arial, Helvetica, sans-serif;">line, as the <span style="font-family: "Courier New",Courier,monospace;">sensitive_buffer</span> is going out of scope anyway. This would leave sensitive data on the RAM, unbeknownst to the programmer.</span></span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">So, the paper presents a tool that allows developers to mark sensitive variables in their code, and then see (post-compilation) any potential leakage of sensitive data.</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">They call this tool <a href="https://github.com/lmrs2/secretgrind" target="_blank">Secretgrind</a>, based off the popular <a href="http://valgrind.org/" target="_blank">Valgrind. </a></span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">Anyway, as it turns out, the compiler optimisation problem mentioned above wasn't actually a problem in practice - they didn't once encounter this problem in all their testing. Instead, the majority of sensitive leaks were down to developers' mistakes; they had forgotten to erase sensitive variables on both the stack and the heap.</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">There were a few issues with IO API's caching optimisations, though - such as when you read a line from a PEM file using <span style="font-family: "Courier New",Courier,monospace;">mmap</span>, it often loads the whole file into memory to save you the time. However, this is not immediately obvious, and when you go to delete the line from RAM, the rest of the file is still in memory!</span><br />
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span>
<span style="font-family: Arial, Helvetica, sans-serif;">Laurent concluded the talks saying Secretgrind was still in development, and although referring to it as a 'hack' (due to it's fragility), wishes for it to be used to "help you guys check your code".</span>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-56099936354959799052017-01-05T22:06:00.001+00:002017-01-05T22:06:15.419+00:00RWC 2017 - A Formal Security Analysis of the Signal Messaging ProtocolReal World Crypto 2017 kicked off yesterday in New York City. This afternoon, Luke Garratt presented his and his colleagues' work, <a href="https://eprint.iacr.org/2016/1013.pdf">A Formal Security Analysis of the Signal Messaging Protocol</a>. The signal protocol is used by the Facebook messenger app, WhatsApp and the Signal app (to name but a few). It is therefore surprising that there had been no formal security analysis, and their work addresses this issue. The paper is motivated by the questions<br />
<br />
<div style="text-align: center;">
What should Signal achieve?</div>
<div style="text-align: center;">
<br /></div>
and<br />
<br />
<div style="text-align: center;">
Does it?</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
Or, put in more modern language (spoiler alert),</div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: x-large;">Why what</span><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjONCsUhUpHyK8T9nWm8BBIySBJKu9Nfow57Xl1Op0dyAPnmZq5sVZ_X4snB6Z22WzuY-D2ioLzTE9fewrb7igx4enYqqm2lAxpQXvPncUfQipxkV3jvthJ72UFtDQD0NGDg7GUKs-VDiRT/s1600/whatsapp-logo-PNG-Transparent.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjONCsUhUpHyK8T9nWm8BBIySBJKu9Nfow57Xl1Op0dyAPnmZq5sVZ_X4snB6Z22WzuY-D2ioLzTE9fewrb7igx4enYqqm2lAxpQXvPncUfQipxkV3jvthJ72UFtDQD0NGDg7GUKs-VDiRT/s1600/whatsapp-logo-PNG-Transparent.png" /></a> <span style="font-size: x-large;">does is</span><span style="font-size: large;"> </span><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVbiIfoT-DYIQsuMbrSHA2ncs6CVI1fTtgMLnIYF3JfR9jIQl0a4YQ8dvctEexdHMYgJbBblyfj8z3yYzhJtjRbgNmAX9EzQ5tHF4cP2qG0aYw9A2ZKKxzJAuR8NbI7UE7Z1qB9DeUgoyH/s1600/like_icon.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVbiIfoT-DYIQsuMbrSHA2ncs6CVI1fTtgMLnIYF3JfR9jIQl0a4YQ8dvctEexdHMYgJbBblyfj8z3yYzhJtjRbgNmAX9EzQ5tHF4cP2qG0aYw9A2ZKKxzJAuR8NbI7UE7Z1qB9DeUgoyH/s1600/like_icon.png" /></a></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Let's first look at what kind of security we can hope for. We speak of Forward Secrecy, which ensures that past messages are not compromised even if the long-term keys are compromised at some point in the future. This is to prevent an adversary storing all the conversations and waiting until an attack is successful to then recover all the communication. </div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Post-compromise security pushes this even further. If we have post-compromise security, not only are past conversations protected, but so are future ones. The Signal protocol achieves this using a technique called ratcheting, which involves session keys being updated with each message sent. Why is this useful? Well, it makes the adversary's life much harder. In order to attack a system with post-compromise security, an adversary must obtain long-term keys and immediately attack, and continue attacking, if they want to compromise future sessions. Compare this to forward secrecy alone, where an adversary would obtain a long-term key and wait for an interesting target to launch a man-in-the-middle attack (e.g. TLS-DHE), or to no forward secrecy, where they would just store all ciphertext traffic until they obtain a long-term key and decrypt everything (e.g. TLS-RSA). Times are hard. </div>
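The ratcheting idea can be illustrated with a minimal symmetric hash ratchet. Signal's actual Double Ratchet combines such KDF chains with a Diffie-Hellman ratchet (which is what buys post-compromise security); this sketch shows only the forward-secrecy half, where each message key is derived and the chain key immediately replaced.

```python
# Minimal symmetric "hash ratchet" sketch: each message key comes from a
# chain key that is immediately ratcheted forward, so compromising the
# current state reveals nothing about past message keys. Signal's real
# Double Ratchet adds a DH ratchet on top for post-compromise security.
import hashlib
import hmac

def ratchet(chain_key):
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain

ck = hashlib.sha256(b"shared session secret").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)          # old chain key is discarded after use
    keys.append(mk)

print(len(set(keys)) == 3)        # True: three distinct one-time keys
```

Because the hash is one-way, an attacker who steals the current chain key cannot run the chain backwards to recover earlier message keys.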
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Their security model captures: </div>
<div style="text-align: left;">
</div>
<ul>
<li>Full network control by the adversary;</li>
<li>Perfect forward secrecy;</li>
<li>Post-compromise security;</li>
<li>Key compromise impersonation attacks;</li>
<li>Compromise of some (but not all) random numbers.</li>
</ul>
<div>
Their proof is too long to be featured in this blog post, but Luke promises it is tedious rather than complex. Their conclusion? So far, so good. </div>
<div>
<br /></div>
<div>
In terms of limitations, they note that they have not analysed any implementation of the protocol, so this is a theoretical analysis only. They also assume an honest key distribution and have not (yet) considered multiple devices. </div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-88335968751588178132016-11-10T11:10:00.000+00:002016-11-10T13:52:31.389+00:00Study Group - All Your Queries Are Belong to Us: The Power of File-Injection Attacks on Searchable EncryptionPublished at USENIX 2016 and written by Yupeng Zhang, Jonathan Katz, and Charalampos Papamanthou from the University of Maryland, this paper proves again how difficult it is to get proper security in searchable encryption.<br />
<br />
Some of you may wonder why I chose this paper. The reason is that I wanted to get a grasp of how research looks outside of the MPC area at a top security conference. And what better conference to pick than this year's USENIX?<br />
<br />
After I finished reading this paper I realised a cool thing: you need little to no a priori knowledge about Searchable Encryption (SE) [1]. Fortunately I didn't find myself diving into the literature of SE in order to fully understand the ideas presented there - how many crypto papers have you read and still been able to say this once you were done with them?<br />
<br />
Let's first introduce the notion of SE. The setup consists of two parties, a client and an encrypted mail server. The client wants to obtain all e-mails which have 'alice' and 'work' as tags. The client computes a token in a <b>deterministic</b> manner, and the server replies with the e-mails corresponding to the query:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhIjTM90CjOYNzilXf9K7sWSlc_qiLgx2-rbSXWtJnQAfujqPOK9Y3Z6Dmidawm0DAWJqYojp7goHGm-IRykeyYVb52AkciMk29P9ocVJRoc50Ng6w5cCfi1kbCL64jIXIHnaP0eYTUek/s1600/Searchable-Encryption%25282%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="207" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhIjTM90CjOYNzilXf9K7sWSlc_qiLgx2-rbSXWtJnQAfujqPOK9Y3Z6Dmidawm0DAWJqYojp7goHGm-IRykeyYVb52AkciMk29P9ocVJRoc50Ng6w5cCfi1kbCL64jIXIHnaP0eYTUek/s320/Searchable-Encryption%25282%2529.png" width="320" /></a></div>
<br />
To ease the presentation we can assume that instead of e-mails the server will respond with <b>identifiers</b> (urls).<br />
<br />
An adversary wins if he breaks the token privacy, namely it recovers keywords from tokens.<br />
<br />
A file injection attack is one in which an attacker has encrypted e-mails of his choice stored on the server and can see the tokens queried by the client. More simply, the server behaves in an honest-but-curious way and stores encrypted e-mails of the attacker's choice, obtained by spamming the client.<br />
<br />
At first sight this seems harmless. But combined with some leaked e-mails (lots of them these days), the authors achieved a 70% rate of recovering a keyword from a token with only 20% of files leaked. This was tested using the Enron email dataset of ~30,000 emails [2].<br />
<br />
To understand how this works it's important to look at a simpler attack:<br />
<br />
<b>Binary search attack.</b><br />
<br />
Suppose an adversary has a keyword universe $K$, with size $|K|$ a power of $2$. He can inject $\log_2{|K|}$ files and then recover the keyword from every token with $100\%$ success.<br />
<br />
Each file $F_i$, $i \in [1, \log_2{|K|}]$, contains exactly those keywords $k_j$ for which the $i$'th most significant bit of $j$ equals $1$. To see why this works, let's look at an example inspired by the paper. <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDF_AwjBT4v8ylXWRTGOicFodK2wqg2PpfHdz1fzM03kF0Y8TOYMA993AHWBA-6Y5-iKvJNyXvvOvWukUpUPZWJ5PQ6UhEITSOvFzTL5ebE9Z4Yk5LSY0bfb6JG4TqX_nKZGVgACfHZ0M/s1600/Searchable-Encryption-Result.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDF_AwjBT4v8ylXWRTGOicFodK2wqg2PpfHdz1fzM03kF0Y8TOYMA993AHWBA-6Y5-iKvJNyXvvOvWukUpUPZWJ5PQ6UhEITSOvFzTL5ebE9Z4Yk5LSY0bfb6JG4TqX_nKZGVgACfHZ0M/s320/Searchable-Encryption-Result.png" width="320" /></a></div>
<br />
In this case we have $|K| = 8$. Black cells are the keywords inserted in file $i$. Since the search result is $F_1$, $F_3$ (i.e. $101$), we conclude that the token was derived from the keyword $k_5$.<br />
A server which injects one email per day can execute the attack in about $2$ weeks for a $10,000$-keyword universe.<br />
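To make the example above concrete, here is a small Python simulation of the binary search attack. Indexing here runs from the least significant bit (the paper's figure numbers files from the most significant bit), and the "oracle" that tells the attacker which injected files match a token is idealised; all names are illustrative.

```python
# Simulation of the binary search file-injection attack for |K| = 8,
# so log2(8) = 3 injected files suffice.
from math import log2

def inject_files(keywords):
    """File i contains every keyword whose index has bit i set."""
    n = int(log2(len(keywords)))
    return [{k for j, k in enumerate(keywords) if (j >> i) & 1}
            for i in range(n)]

def recover_keyword(keywords, matching_files):
    """The indices of the matching files are exactly the set bits of j."""
    return keywords[sum(1 << i for i in matching_files)]

keywords = [f"k{j}" for j in range(8)]
files = inject_files(keywords)

# Idealised side channel: the attacker observes which injected files match
# the client's token. Here the token was derived from keyword k5 (binary 101).
matches = [i for i, f in enumerate(files) if "k5" in f]
print(recover_keyword(keywords, matches))    # prints k5
```

Each observed search result is just the binary expansion of the keyword's index, which is why $\log_2{|K|}$ injections are always enough.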
<br />
<b>Countermeasure.</b><br />
<br />
One crucial observation is that $|K|/2$ keywords are inserted per file. So one might think that limiting the number of keywords per file to some threshold $T \ll |K|/2$ would fix the problem.<br />
<br />
The threshold countermeasure turns out to be ineffective. The authors prove this with an attack (denoted the hierarchical search attack) which works much like the first one after splitting the keywords into chunks of size $2T$.<br />
<br />
For $|K| = 5,000$ and $T=200$, an adversary need inject only $131$ files in order to recover the keyword for every token.<br />
<br />
<b>Attacks with partial leakage.</b><br />
<br />
Sometimes e-mails leak. And when that happens... SE is almost useless against adaptive injection attacks, as the authors show. We say adaptive because the attack requires a forward-insecure SE scheme.<br />
<br />
What if we want to recover the keywords from multiple tokens?<br />
<br />
The idea is to estimate each keyword's frequency distribution $f^*(k)$ from the leaked files. The assumption is then made that $f^*(k)$ is close to the distribution $f(t)$ of the queried tokens - an assumption which later turns out to hold.<br />
Next, a small candidate set of keywords (also called the ground truth) is chosen and files are injected using the binary search attack. The remaining tokens are recovered by taking joint distributions with previously recovered tokens.<br />
<br />
For more attacks, and countermeasures which fail, I strongly recommend reading the article [3].<br />
<b><br /></b>
At the end of the article one can wonder whether building secure SE schemes would really help against these access-pattern attacks... The authors suggest that an interesting direction is to look into lower bounds for such attacks, but this seems a far from trivial task.<br />
<br />
[1] <a href="http://crypto.stanford.edu/~dabo/pubs/papers/encsearch.pdf">http://crypto.stanford.edu/~dabo/pubs/papers/encsearch.pdf</a><br />
[2] <a href="https://www.cs.cmu.edu/~./enron/">https://www.cs.cmu.edu/~./enron/</a><br />
[3] <a href="https://eprint.iacr.org/2016/172">https://eprint.iacr.org/2016/172</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-24106409228362950332016-11-04T13:24:00.000+00:002016-11-14T09:11:47.803+00:00What is SPDZ? Part 3: SPDZ specificsThis blog post is the final in a series of three in which we describe what SPDZ is. The first part can be found <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-1-mpc-circuit.html" target="_blank">here</a>, and the second <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-2-circuit-evaluation.html" target="_blank">here</a>.<br />
<br />
In this part, we consider what it is that makes SPDZ SPDZ. <br />
<br />
<h3 style="text-align: justify;">
SPDZ MPC</h3>
<div style="text-align: justify;">
Using
the term SPDZ is perhaps a little misleading. There are actually a few
protocols which we call SPDZ because they are really just improvements
on the original. As the online phase of SPDZ is already ‘pretty good’,
optimisations to the protocol normally involve improving the offline
phase.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In
what follows, we talk about the methods used in SPDZ for preprocessing
(generating the triples) and then explain some improvements that have
been made.</div>
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
<span style="font-size: small;">
Original SPDZ</span></h4>
<div style="text-align: justify;">
Here we will outline some of the methods in the original SPDZ protocol, as in [5].</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<i>Preprocessing</i></div>
<div style="text-align: justify;">
In
the first SPDZ paper, the authors show how to use somewhat homomorphic
encryption (SHE) to generate the triples. This is a (pretty expensive)
public key cryptosystem in which the encryption operation is ring
homomorphic - that is,
\[\textsf{Enc}_{\textsf{pk}}(a)\boxplus\textsf{Enc}_{\textsf{pk}}(b)=\textsf{Enc}_{\textsf{pk}}(a+b)\]
and
\[\textsf{Enc}_{\textsf{pk}}(a)
\boxdot \textsf{Enc}_{\textsf{pk}}(b) = \textsf{Enc}_{\textsf{pk}}(a
\cdot b)\]
</div>
<div style="text-align: justify;">
(For those who know about FHE: we can do MPC with FHE, as you’d hope; however, at the time of writing no FHE scheme exists which is competitive with current secret-sharing-based MPC. SPDZ is what the authors call the ‘ideal use of FHE in MPC’: raw material is computed locally so that the online computations are minimal and the communication overhead is small.)</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
During the offline phase, we also generate lots of random values which are used by the parties to provide input to the circuit.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<i>Sacrificing</i></div>
<div style="text-align: justify;">
In
order to check that the adversary has not introduced errors into the
triples, for each triple we want to use we have to ‘sacrifice’ a triple
by taking another triple from the preprocessing and checking that the
third element is the product of the first two using Beaver’s method
outlined in the <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-2-circuit-evaluation.html" target="_blank">previous post</a>. Fortunately the method by which triples are generated is
very efficient.</div>
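As a sketch, the consistency check behind sacrificing can be written out in the clear over $\mathbb{Z}_p$. This is illustrative only: in the real protocol these values are secret-shared, and only $\rho$, $\sigma$ and the final result are ever opened.

```python
# The 'sacrifice' check in the clear over Z_p. A random public challenge t
# prevents an adversary from cooking up two matching bad triples.
import random

p = 2**31 - 1                        # illustrative prime modulus

def triple():
    a, b = random.randrange(p), random.randrange(p)
    return a, b, a * b % p

a, b, c = triple()                   # the triple we want to keep and use
f, g, h = triple()                   # the triple we sacrifice
t = random.randrange(1, p)           # public random challenge

rho = (t * a - f) % p                # opened
sigma = (b - g) % p                  # opened
check = (t * c - h - sigma * f - rho * g - sigma * rho) % p
print(check == 0)                    # True when both triples are well-formed
```

Expanding the check symbolically with $c = ab$ and $h = fg$ shows every term cancels; if instead $c \neq ab$, the expression equals a nonzero value times the random $t$, so it vanishes only with negligible probability.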
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<i>ZKPoPKs</i></div>
<div style="text-align: justify;">
Zero-Knowledge
Proofs of Plaintext Knowledge (ZKPoPKs) are a way of ensuring parties
encrypt as they should: when encrypting in the SHE scheme, there are
conditions on the plaintext and noise that need to be met to ensure the
ciphertext is well-formed.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<i>MACs</i></div>
<div style="text-align: justify;">
Before
any preprocessing computation takes place, the parties agree on some
MAC key $\alpha$ with which they MAC all their data and which they
secret-share amongst themselves so that no individual party knows the
MAC key.<br />
<br />
This MAC key is used to MAC all the data in the circuit. For each secret-shared <i>value</i>, there is also a secret-shared MAC on that value.<br />
<br />
After
the circuit has been evaluated, the parties open the secret
corresponding to the circuit output and also the MAC on it, and check
that the MAC is correct. If an actively corrupt party cheats at any
point in the circuit
evaluation, the final MAC check reveals this has happened. Note that
this means the parties could be computing on invalid data.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In
the original paper, each party also MACked the global MAC key with a
personal MAC key. This was not needed in subsequent works.</div>
<br />
<h4 style="text-align: justify;">
Updated SPDZ: SPDZ2</h4>
<div style="text-align: justify;">
While
the online phase of SPDZ was (almost) the same, in SPDZ2 [4] the
authors made significant changes to the offline phase of the original
protocol. We outline the major changes here.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
They solved an open problem left by the first paper in giving a secure protocol which generated a shared SHE decryption key.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
They
also offered various performance enhancements, including the
introduction of ‘squares’ and ‘bits’ in the preprocessing, which allowed
faster online computations of specific operations in the circuit.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
A major improvement was allowing MACs to be checked on data before the end of the computation. This involved checking the MAC of a public value that depends on the underlying data. The big advantage is that preprocessed data MACked under the same key can still be used after such a MAC check; this was not possible in the original protocol, since there the check revealed the MAC key.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The new protocol had covert and malicious variants, the former having weaker parameter requirements. A covertly secure protocol allows the adversary to attempt to cheat, but ensures that cheating goes undetected with probability at most $1/n$ for some $n \in \mathbb{N}$. The covert variant was included because this notion was believed to match the real-world situation more closely.</div>
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
New SPDZ: MASCOT</h4>
<div style="text-align: justify;">
The most recent improvement to the SPDZ protocol is a work known as MASCOT [7]. As with SPDZ2, the online phase of SPDZ was left as it was; this paper improves on the offline phase, showing how to use oblivious transfer (OT) to generate the preprocessed data much faster than with SHE. It is authored by Bristol’s own Marcel, Emmanuela and Peter, and appeared at <a href="https://www.sigsac.org/ccs/CCS2016/" target="_blank">CCS</a> last week.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
(Oblivious
transfer is a process in which one party inputs two strings and a
second party chooses exactly one; this is done in such a way that the
first party doesn’t know which string the second chose, and the second
does not learn anything about the string it didn’t choose.)</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The
authors of MASCOT note that the heavy machinery of public-key
cryptography required in the original SPDZ paper to generate the
preprocessing (namely, the use of SHE) prevents the communication from
being a bottleneck since the local computation takes so long.<br />
<br />
The
protocol is actively secure and performs significantly faster than
previous SPDZ2 implementations, with much lower communication overhead.</div>
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
The Future of SPDZ</h4>
<div style="text-align: justify;">
MASCOT has only just been released, and there are already whispers of a new paper. Watch this space...</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
<br />
<h4>
References</h4>
</div>
<div style="text-align: justify;">
[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. <a href="http://www.eng.tau.ac.il/%7Ebennyap/pubs/AGCfull.pdf" target="_blank">How to garble arithmetic circuits</a>. 52nd FOCS, pp120–129. IEEE Computer Society Press, 2011</div>
<div style="text-align: justify;">
[2] D. Beaver. <a href="https://link.springer.com/chapter/10.1007%2F3-540-46766-1_34" target="_blank">Efficient Multiparty Protocols using Circuit Randomisation</a>. In J. Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.</div>
<div style="text-align: justify;">
[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. <a href="https://eprint.iacr.org/2010/514" target="_blank">Semi-homomorphic encryption and multiparty computation</a>. In EUROCRYPT, pp169-188, 2011.</div>
<div style="text-align: justify;">
[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. <a href="https://eprint.iacr.org/2012/642" target="_blank">Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits</a>.
In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol.
8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.</div>
<div style="text-align: justify;">
[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. <a href="https://eprint.iacr.org/2011/535" target="_blank">Multiparty computation from somewhat homomorphic encryption</a>. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.</div>
<div style="text-align: justify;">
[6] I. Damgard and S. Zakarias. <a href="https://eprint.iacr.org/2012/512" target="_blank">Constant-overhead secure computation of</a></div>
<div style="text-align: justify;">
<a href="https://eprint.iacr.org/2012/512" target="_blank">boolean circuits using preprocessing</a>. In TCC, pp621-641, 2013.</div>
<div style="text-align: justify;">
[7] M. Keller and E. Orsini and P. Scholl. <a href="https://eprint.iacr.org/2016/505" target="_blank">MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer</a>. Cryptology ePrint Archive, Report 2016/505, 2016.</div>
<div style="text-align: justify;">
[8] J. Buus Nielsen, P. Nordholt, C. Orlandi, and S. Burra. <a href="https://eprint.iacr.org/2011/091" target="_blank">A new approach to practical active-secure two-party computation</a>.
In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in
Cryptology CRYPTO 2012, volume 7417 of Lecture Notes in Computer
Science, pp681-700. Springer Berlin Heidelberg, 2012.</div>
<div style="text-align: justify;">
[9] A. Shamir. <a href="https://dl.acm.org/citation.cfm?id=359176" target="_blank">How to Share a Secret</a>. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.</div>
<div style="text-align: justify;">
[10] A. Yao. <a href="https://dl.acm.org/citation.cfm?id=1382944" target="_blank">How to generate and exchange secrets</a>. In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.</div>
<div style="text-align: justify;">
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-63522857802012812242016-11-03T12:27:00.000+00:002016-11-03T12:27:15.193+00:00Study Group - Dedup est MachinaThe title of the paper is '<a href="http://www.ieee-security.org/TC/SP2016/papers/0824a987.pdf" target="_blank">Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector</a>'. <br />
<br />
I saw the paper being presented at the IEEE Security and Privacy Symposium, and it was one of the more enjoyable talks due to the nature of the exploit, and the enthusiasm of Erik Bosman presenting it. It also won a 'PWNIE' award for the 'most innovative research on hacking' at Black Hat USA this year.<br />
<br />
The idea of the paper is simple - to combine the Memory Deduplication exploit with a Rowhammer attack. <br />
<br />
They use this to craft a JavaScript-based attack against the Microsoft Edge browser, to gain arbitrary memory read and write access in the browser.<br />
<br />
<h3>
<b>Memory Deduplication</b></h3>
<br />
The idea is to reduce the total memory footprint of a running system by sharing memory pages with identical content across independent processes. <br />
<br />
A good example of this is the page cache in modern operating systems. The page cache stores a single cached copy of file system contents in memory, and shares the copy across different processes. <br />
<br />
In running processes, two or more pages with the same content in run-time memory are always deduplicated, even if the pages are completely unrelated and their equivalence is fortuitous. <br />
<br />
So, in order to keep a single copy of a number of identical pages, the memory deduplication system needs to perform three tasks:<br />
<ol>
<li><i>Detect</i>. The system needs to detect memory pages with the same content, usually done at regular and predetermined intervals during normal system operations</li>
<li><i>Only Keep One</i>. We only want to keep one physical copy, and return the others to the memory allocator; to do this, the deduplication system updates the page-table entries of the owning processes, so that the virtual addresses now point to a single shared copy; these entries are also marked as read-only to support copy-on-write semantics</li>
<li><i>Create Private Copy on Write</i>. Specifically, when one of the owning processes writes to the read only page, a copy-on-write page fault occurs; at this point, the memory deduplication system can create a private copy of the page, and map it into the corresponding page table entries of the faulting process</li>
</ol>
On Windows 8.1 onwards, it's known as memory combining.<br />
<br />
<h3>
<b>Memory Deduplication as a Side Channel</b></h3>
<br />
So you may have spotted how you could exploit this as a side channel. Writing to a shared page from any of the owning processes results in a page fault and a subsequent page copy.<br />
<br />
Due to these additional expensive operations, a write to a shared page takes significantly longer compared to a write to a regular page (by significantly longer, we're talking up to one order of magnitude). This timing difference provides an attacker with a side channel to detect whether a given page exists in the system.<br />
<br />
To do this, the attacker has four easy steps:<br />
<ol>
<li><i>Craft</i> a page with the exact same content as the target page</li>
<li><i>Wait</i> for the memory deduplication system to do its thing</li>
<li><i>Write</i> to the crafted page</li>
<li><i>Time</i> how long it takes for the write operation to finish</li>
</ol>
If the write takes longer than a write to a non-deduplicated page, the attacker can conclude that a page with the same content exists.<br />
<br />
So just using this, an attacker could detect whether a user was visiting a particular web page, or running a particular program.<br />
<br />
Of course, false positives are possible due to noise etc, so the attacker can use redundancy (repeated attempts) to disclose the intended information in a reliable way.<br />
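In pseudo-Python, the attacker's decision procedure might look like the following simulation. The order-of-magnitude slowdown of a copy-on-write fault is simply assumed as a constant here, and all names and numbers are illustrative rather than taken from the paper.

```python
# Simulation of the four-step timing test. Assumption: a write to a shared,
# copy-on-write page is roughly 10x slower than a write to a regular page.
import random

SHARED_COST, NORMAL_COST = 10.0, 1.0      # illustrative relative write times

def timed_write(page_is_shared: bool) -> float:
    noise = random.uniform(0.0, 0.5)      # measurement noise
    return (SHARED_COST if page_is_shared else NORMAL_COST) + noise

def target_page_exists(page_is_shared: bool,
                       threshold: float = 5.0, repeats: int = 8) -> bool:
    # Redundancy: average repeated measurements to smooth out the noise.
    avg = sum(timed_write(page_is_shared) for _ in range(repeats)) / repeats
    return avg > threshold

print(target_page_exists(True), target_page_exists(False))   # True False
```

The classifier simply thresholds the averaged write time; in a real attack the threshold would be calibrated per machine rather than fixed.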
<br />
At the moment, the memory deduplication side channel is looking a little bit pathetic, as it's a slow single-bit side channel that can only be used for leaking a limited number of bits from a victim process.<br />
<br />
However, we can abuse memory deduplication to get some stronger primitives.<br />
<br />
<h3>
<b>Primitives</b></h3>
<h4>
<i>Primitive #1: Alignment Probing</i> </h4>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPRITTyYnVa2Ey4Xep1ciooIdjG0jQ13AWtu6olVhOm3D5nh_0VJxvqIjfPMV8q9IEYphGxobhU_KTQSQJsPLE7OVxkuPTnLlWRT_SYDelv4Cg7pW5LNyntS_zpPkC59VEGB7d99Vr0Fk/s1600/dedup_ap.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPRITTyYnVa2Ey4Xep1ciooIdjG0jQ13AWtu6olVhOm3D5nh_0VJxvqIjfPMV8q9IEYphGxobhU_KTQSQJsPLE7OVxkuPTnLlWRT_SYDelv4Cg7pW5LNyntS_zpPkC59VEGB7d99Vr0Fk/s320/dedup_ap.png" width="320" /></a></div>
<br />
First up, we have a primitive that allows a byte-by-byte probing of secret information by controlling its alignment in memory. So, this primitive is applicable when the secret data has <i>weak alignment properties.</i> <br />
<br />
Imagine in this case, the secret is a password stored in memory immediately after a blob of attacker-provided input bytes. What the attacker can do is provide some more input bytes, which effectively push the second part of the secret out of the target page. Now the attacker can brute force the first part of the secret (e.g. one byte) using deduplication. <br />
<br />
All the attacker needs to do now is provide fewer input bytes to bring another portion of the secret back into the page. Just like before, the attacker uses deduplication to get the rest of the secret. For larger secrets, you just keep repeating these steps until all of the secret is known.<br />
<br />
This alignment probing primitive is apparently very effective, and is used to disclose code pointers in Microsoft Edge, and a password hash in nginx.<br />
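The byte-by-byte recovery loop can be modelled in a few lines. The assumptions are loud ones: tiny 4-byte "pages", an idealised dedup oracle in place of the timing channel, and attacker-controlled filler sitting directly before the secret; all names are illustrative.

```python
# Toy model of alignment probing: shrink the filler so that exactly one
# unknown secret byte sits in the page, brute-force it, repeat.
def page_content(attacker_input: bytes, secret: bytes, page: int) -> bytes:
    return (attacker_input + secret)[:page]

def dedup_oracle(guess: bytes, attacker_input: bytes,
                 secret: bytes, page: int) -> bool:
    # Stand-in for the timing side channel: "a page with this content exists".
    return guess == page_content(attacker_input, secret, page)

PAGE = 4
secret = b"pw"                       # unknown to the attacker
recovered = b""
for known in range(len(secret)):
    filler = b"A" * (PAGE - known - 1)   # leave exactly one unknown byte in
    for byte in range(256):
        guess = filler + recovered + bytes([byte])
        if dedup_oracle(guess, filler, secret, PAGE):
            recovered += bytes([byte])
            break
print(recovered)                     # b'pw'
```

Each alignment exposes one new byte, so the cost is at most 256 probes per secret byte instead of 256^n for the whole secret.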
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<h4>
<i>Primitive #2: Partial Reuse</i></h4>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgLKey12DwZtlUlFqhjRAJNpSbOIBdiUHYuU4GfhU9vqp8WLuWyNPXXHLFKvD58UVitGOUt3JPc9mRBJZAU_DMcfXlahyI5nXHDXUnWRJNBNbV8DyEEOaN7SgcePA2MCnMEtiOInaJOvg/s1600/dedup_pr.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgLKey12DwZtlUlFqhjRAJNpSbOIBdiUHYuU4GfhU9vqp8WLuWyNPXXHLFKvD58UVitGOUt3JPc9mRBJZAU_DMcfXlahyI5nXHDXUnWRJNBNbV8DyEEOaN7SgcePA2MCnMEtiOInaJOvg/s320/dedup_pr.png" width="320" /></a></div>
<br />
When the secret has strong memory alignment properties, the first primitive cannot be applied.<br />
<br />
Up next is a primitive that can be used when attacker-controlled input can partially overwrite stale secret data with<i> predictable reuse properties.</i> User-space memory allocators encourage memory reuse and do not normally zero out deallocated buffers for performance reasons.<br />
<br />
This means that a target application often reuses the same memory page and selectively overwrites the content of a reused page with new data.<br />
<br />
So, same set up as before. This time, when the attacker gives a larger input, it overwrites the first part of the secret. Now the attacker can brute force the second part using memory deduplication.<br />
<br />
Once this is known, the attacker can brute force the first part of the secret, by deduplicating against a page without an overwritten secret.<br />
<br />
This primitive is apparently fairly common in practice, and is used in the paper to break heap ASLR (Address Space Layout Randomisation) in nginx.<br />
<br />
<h4>
<i>Primitive #3: Birthday Heap Spray</i></h4>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJGi3r8-31zRbwVFAovwrqr7s0cdy8CUcXoZyd4KsATUWw8Rt3N8hzHTFCjIgQfChc2CmPpPmg06qQivRjL9rpFy6ICK8fMWRnxNH_RdONIcI937VRUWxXvkBMPqR-kBQkSs1_FmsCPzw/s1600/dedup_bhs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="82" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJGi3r8-31zRbwVFAovwrqr7s0cdy8CUcXoZyd4KsATUWw8Rt3N8hzHTFCjIgQfChc2CmPpPmg06qQivRjL9rpFy6ICK8fMWRnxNH_RdONIcI937VRUWxXvkBMPqR-kBQkSs1_FmsCPzw/s400/dedup_bhs.png" width="400" /></a></div>
Last one. This primitive can leak a secret even when the attacker has no control over memory alignment or reuse. It relies on a generative approach that revolves around the birthday paradox, which we all know and love.<br />
<br />
This primitive is applicable when the attacker can force the application to controllably<i> spray</i> target secrets over memory.<br />
Up until now we have assumed there is only one secret we want to leak. <br />
<br />
If a partially masked secret has $P$ possible values, we use memory deduplication to perform $1 \times P$ comparisons between the $P$ probe pages and the single target page - essentially brute forcing the secret.<br />
<br />
For large $P$, a very large amount of memory is required for the attack and for the redundancy. However, if the attacker can cause the target application to generate many secrets, memory deduplication provides a much stronger primitive than brute forcing.<br />
<br />
For instance, an attacker can generate a large number of secret heap pointers by creating a large number of objects from JavaScript, each referencing another object.<br />
<br />
So, assume the attacker causes the application to generate $S$ such pages, each with a different secret pointer. The attacker now also creates $P$ probe pages, with $P$ being roughly the same size as $S$. <br />
<br />
Each probe page uses the same encoding as the secret pages, except that, not knowing the secret pointers, the attacker needs to 'guess' their values. Each probe page contains a different guessed value. The idea is to find at least one of the probe pages matching <i>any</i> of the secret pages. So we have a classic Birthday Problem, where the secret and probe values are the birthdays.<br />
<br />
Since memory deduplication compares any page with any other page in each deduplication pass, it automatically tests all $P$ probe pages against the $S$ target secret pages.<br />
<br />
A hit on any of the $P$ possible values immediately exposes a target secret. This birthday primitive reduces the memory requirements of the attack by a factor of $S$. It's especially useful when leaking the location of randomised pointers. For exploitation purposes, it's not important which pointer the attacker hits, as long as at least one of them is hit.<br />
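A back-of-the-envelope estimate of the hit probability makes the birthday effect visible. The assumption (mine, not the paper's) is that secrets and guesses are uniform over $N$ possible values, and the numbers below are purely illustrative.

```python
# Chance that at least one of P probe pages matches at least one of S
# sprayed secret pages, with values uniform over N possibilities.
def hit_probability(N: int, S: int, P: int) -> float:
    return 1 - (1 - 1 / N) ** (S * P)

# With 2^20 possible pointer values, ~1000 sprayed secrets and ~1000 probe
# pages already give a better-than-even chance of a hit, whereas a single
# secret would need ~half a million probes for the same odds:
print(hit_probability(2**20, 1000, 1000))
```

This is exactly the factor-of-$S$ saving described above: the $S \times P$ cross-comparisons come for free from the deduplication pass.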
<br />
This primitive is used to leak a randomised heap pointer in Microsoft Edge, and also used to craft a reliable Rowhammer exploit.<br />
<br />
<b>Rowhammer Exploit</b><br />
<br />
Anyway, now we get to the good bit; the Rowhammer exploit. The Rowhammer bug was first publicly disclosed in June 2014 by Kim et al. However, it was not until March 2015 that someone published a working Linux kernel privilege escalation exploit using Rowhammer.<br />
<br />
Essentially, Rowhammer is a DRAM vulnerability that allows an attacker to flip bits in a (victim) memory page by repeatedly reading from other (aggressor) memory pages. These bit flips are deterministic, meaning that once a vulnerable memory location is identified, it is possible to reproduce the bit-flip patterns by reading the same set of aggressor pages again.<br />
<br />
There are two variations of the Rowhammer attack. Single-sided Rowhammer repeatedly activates a single row to corrupt its neighbouring rows' cells. Double-sided Rowhammer targets a single row by repeatedly activating both of its neighbouring rows. Although double-sided is more effective than single-sided in practice, it requires knowing not only the location of the target, but also of the two adjacent memory pages. Therefore, the authors of this paper chose to use the single-sided Rowhammer attack.<br />
<br />
So, if you want to use the Rowhammer attack to do something useful, rather than to just corrupt random memory pages, then first you need to find a vulnerable memory location. To do this, you can allocate a large array filled with doubles. If you do it right, you can end up with an encoded double value such that all bits are set to 1.<br />
<br />
Then, you find some eviction sets so you can bypass the cache, and you hammer 32 pages at a time (though the paper doesn't say where the number 32 comes from). By hammer, I mean you read from each page two million times before moving to the next page. Once you've gone through all 32 pages, you check the entire array for bit flips. After scanning a sufficient number of pages, you will know which bits can be flipped at which offsets.<br />
<br />
Next, you want to determine what to place in these vulnerable memory locations to craft the exploit. The idea behind the Rowhammer attack in this paper is to place some data in the array which, after a bit flip, can yield a reference to a controlled counterfeit object.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9fbxmUtP97RqUXOJo5bsw541WBCl6L3xnyjiEk3txVRVLrP5M31H0KMwzeZ4-SpkHb3N9BTzT-O2QJLFclMnKUZjVakyuYdPRgw2_svRXTq91YWtskCTMUPwuV9IruFkPpsQGlp2u6gM/s1600/dedup_rh.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="183" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9fbxmUtP97RqUXOJo5bsw541WBCl6L3xnyjiEk3txVRVLrP5M31H0KMwzeZ4-SpkHb3N9BTzT-O2QJLFclMnKUZjVakyuYdPRgw2_svRXTq91YWtskCTMUPwuV9IruFkPpsQGlp2u6gM/s320/dedup_rh.png" width="320" /></a></div>
<br />
<br />
So once we know which bit will be flipped, we can store a valid object reference at the vulnerable memory location and use Rowhammer to pivot it to a counterfeit object.<br />
<br />
So that's the attack: memory deduplication is used to leak the code and heap pointers, and Rowhammer to pivot them to a counterfeit object. In practice, the paper says the two deduplication passes take about 30 minutes and 508MB of memory; finding bit flips takes a further 1GB of memory, and finding the cache eviction sets 32MB. The running time of the Rowhammer step depends on the vulnerable DRAM chips, so in practice it varies from seconds to hours.<br />
<br />
<br />
<b>Mitigation </b><br />
<br />
So finally, the last part of the paper talks about how one would prevent attacks like this. A fairly obvious observation is to simply not deduplicate at all, as that prevents every exploit of this kind. However, deduplication contributes heavily to using physical memory efficiently, so we would like to keep it if possible.<br />
<br />
This paper suggests that you only deduplicate<i> zero pages</i>, that is, pages whose contents are entirely zero bytes. The paper doesn't go into much detail about why, but claims experiments show this is nearly as efficient as full deduplication while entirely eliminating the possibility of attacks targeting memory deduplication.<br />
<br />
<b>Fin</b><br />
<br />
So that's the end of the paper. The authors finish by saying memory deduplication is dangerous in the wrong hands, and that they've been working with their contacts at Microsoft to find an immediate solution.<br />
<br />
I had a look around on the internet, but no-one has published anything further since the paper, so who knows?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-28191143903540878682016-10-28T17:16:00.000+01:002016-10-28T17:16:02.776+01:00Study Group: Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems<html>
<head>
<title>MathJax TeX Test Page</title>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}});
</script>
<script async="" src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript">
</script>
</head>
<body>
<div align="justify">
The first study group on side-channel analysis in the academic year 2016/2017 is "Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems" by Stefan Dziembowski, Sebastian Faust, Gottfried Herold, Anthony Journault, Daniel Masny and Francois-Xavier Standaert, recently published at Crypto this year. There are two main reasons why I chose to present it: it is somehow close to my personal research and, generally speaking, I like to see different areas influencing each other. In this particular case, learning problems which are usually associated with post-quantum cryptography (more specifically with lattice-based cryptography) have been used to prove a security result in leakage-resilient cryptography.
</div>
<br>
<div align="justify">
In one sentence and very loosely speaking, the content of the paper covered in the study group can be summarised as follows: the authors construct a fresh re-keying scheme and show it is secure under the LPN assumption. The remainder of this blog post is structured as a series of questions and answers on the previous statement that will hopefully clarify and detail its meaning.
</div>
<br>
<div align="justify"><b>
Q: what is a fresh re-keying scheme?
</b></div>
<div align="justify">
<i>A:</i> intuitively, a fresh re-keying scheme is a set of cryptographic algorithms which allows the generation of session keys from a master secret key and some publicly known randomness. A session key is used to encrypt a message and then discarded. The following diagram nicely represents what is going on.
<br>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKtUnxUCpcApbatvKrPUQWaCbMCB9JSBJ-2GOzKv07Oh8InKWK31hnjkuJYQsh7MHfvsBQZR7lQfZ8VfwNHrWU-kl_pYfBGqzMO_eP3oU1cHM5Bm3rsMluLKDQYArTq46Cuuh_sSl7ka4/s1600/rk.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKtUnxUCpcApbatvKrPUQWaCbMCB9JSBJ-2GOzKv07Oh8InKWK31hnjkuJYQsh7MHfvsBQZR7lQfZ8VfwNHrWU-kl_pYfBGqzMO_eP3oU1cHM5Bm3rsMluLKDQYArTq46Cuuh_sSl7ka4/s400/rk.jpg" width="400" height="316" /></a></div>
<br>
Protecting block ciphers (or other cryptographic primitives) against DPA is an expensive procedure that introduces overhead in the design and whose security is often based on heuristic arguments. Such attacks exploit the dependence of multiple power traces on the same secret key to retrieve it. Hence, fresh re-keying schemes represent a solution that tries to structurally avoid the threat by using each session key only once. Both the receiver and the sender can compute the same session key by means of the shared master secret key and some randomness associated with the ciphertext. At that point, the underlying block cipher need only retain security against SPA, while the algorithm by which the session key is computed should resist DPA, to prevent the adversary from gaining information about the master secret key through leakage. The overall scheme is designed to be more efficient than one which applies DPA countermeasures directly to the block cipher, because the GenSK algorithm is built to be easier to protect.
</div>
<br>
<div align="justify"><b>
Q: what is the LPN assumption?
</b></div>
<div align="justify">
<i>A:</i> the Learning Parity with Noise (LPN) problem asks to find a vector $\mathbf{k}\in\mathbb{Z}_2^n$ given query access to an oracle that outputs
$$
(\mathbf{r},\langle\mathbf{r},\mathbf{k}\rangle\oplus e) \in \mathbb{Z}_2^n\times\mathbb{Z}_2
$$
where $\mathbf{r}\in\mathbb{Z}_2^n$ is a uniformly random vector and $e\leftarrow \mathcal{B}_\tau$ is an error bit drawn from the Bernoulli distribution of a fixed parameter $0 < \tau < 1/2$, <i>i.e.</i> $\mathbb{P}(e=1)=\tau$. The decisional version, instead, asks to distinguish the above distribution from the uniform one on $\mathbb{Z}_2^n\times\mathbb{Z}_2$. The first step in the overall proof is to define a similar version of this problem, which the authors call the Learning Parity with Leakage (LPL) problem, in which everything is the same bar the distribution of the noise, which is now taken to be a Gaussian distribution over the reals. Note that the second component of the LPL distribution is then a real number. It is shown that:
$$
\text{LPN is hard} \Rightarrow \text{LPL is hard}.
$$
</div>
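To make the two distributions concrete, here is a small illustrative Python sketch of the LPN oracle and its LPL variant. This is not code from the paper; the names `lpn_sample` and `lpl_sample` and the use of `sigma` as the Gaussian parameter are my own choices for the sketch.

```python
import random

def lpn_sample(k, tau, rng=random):
    """One LPN sample: (r, <r,k> XOR e), where r is uniform in Z_2^n,
    k is the secret bit-vector, and e ~ Bernoulli(tau)."""
    r = [rng.randrange(2) for _ in range(len(k))]
    inner = sum(ri * ki for ri, ki in zip(r, k)) % 2
    e = 1 if rng.random() < tau else 0
    return r, inner ^ e

def lpl_sample(k, sigma, rng=random):
    """LPL variant: same inner product over Z_2, but the noise is
    real-valued Gaussian, so the second component is a real number."""
    r = [rng.randrange(2) for _ in range(len(k))]
    inner = sum(ri * ki for ri, ki in zip(r, k)) % 2
    return r, inner + rng.gauss(0.0, sigma)
```

With `tau = 0` (respectively `sigma = 0`) the noise vanishes and the second component is exactly the inner product, which is a handy sanity check.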
<br>
<div align="justify"><b>
Q: what does it mean for a fresh re-keying scheme to be secure?
</b></div>
<div align="justify">
<i>A:</i> since we deal with an environment where leakage exists, we should specify both what kind of security model the proof follows and what kind of leakage the adversary is allowed to query.
<br>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW7ZlmL6cgarZ-XyxGNm-yrDTxpDTWLhNbFzqDOxA4Pj0oRzL8af4a-5ZQg_g_a2vNsLOs9Z23RwB8NmuFmswYB_rm1moF6j9XaViW_7dXecYnkF9ZQvB18K8iT-SWTb-h9Bk9P_8eCQE/s1600/sec.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW7ZlmL6cgarZ-XyxGNm-yrDTxpDTWLhNbFzqDOxA4Pj0oRzL8af4a-5ZQg_g_a2vNsLOs9Z23RwB8NmuFmswYB_rm1moF6j9XaViW_7dXecYnkF9ZQvB18K8iT-SWTb-h9Bk9P_8eCQE/s400/sec.jpg" width="400" height="171" /></a></div>
<br>
Security is stated as the indistinguishability game in the above picture. In the real game on the left, the adversary plays with and queries a real oracle, that is to say an oracle that generates all keys and randomness according to the scheme to be secured; hence the leakage is also computed on the sensitive variables. The right-hand side depicts the random game: an ideal oracle generates keys at random, which are then passed to a simulator that computes leakage on them. For the security claim to hold, no (computational) adversary should be able to distinguish which oracle she is playing with.
<br>
The leakage model is the $t$-noisy probing model. If you imagine a circuit, this model can be depicted as the adversary being allowed to put probes on up to $t$ wires and to read a noisy version of the values carried on them. In the specific case of their construction, since there isn't a circuit from which the adversary can get leakage, the authors specify a set of variables on which noisy bit-selector functions are applied.
</div>
<br>
<div align="justify"><b>
Q: which is their fresh re-keying scheme?
</b></div>
<div align="justify">
<i>A:</i> although the scheme, called $\Pi_{noisy}$ in the paper, is composed of a set of algorithms, I only specify the core of GenSK, which is responsible for generating the session key. I assume the inputs are some randomness $R$ and a set of shares of the master secret key $\{msk_i\}_{i\leq d}$, which are needed in order to secure the algorithm against DPA.
$$
\begin{align*}
1.\ & u_i \leftarrow R\cdot msk_i \\
2.\ & u \leftarrow \sum_{i\leq d} u_i \\
3.\ & sk \leftarrow H(u) \\
\end{align*}
$$
The sum in the second line is computed iteratively as $((\dots (u_1 +u_2)+u_3)+\dots )+u_d$ for security reasons, while the $H$ in the third line is a hash function modelled as a random oracle. In the game, the adversary is given $sk$ and $R$ and can query leakage on a certain number of bits of: $msk_i$, $R\cdot msk_i$ and $\sum u_i$. A further step not specified in the previous list is the application of a refreshing algorithm to the shares of the master secret key, so that they look like new shares when the next session key is generated. Finally, the authors prove the following:
$$
\text{LPL is hard} \Rightarrow \Pi_{noisy} \text{ is secure}.
$$
</div>
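The three lines of GenSK can be sketched in Python. This is an illustrative toy only, under assumptions I'm making for readability: the shares are additive over $\mathbb{Z}_2$, $R$ is a binary matrix, and SHA-256 stands in for the random-oracle hash $H$; the paper's actual parameters and algebra differ in details.

```python
import hashlib

def gensk(R, msk_shares):
    """Toy sketch of the GenSK core: u_i = R * msk_i over GF(2),
    u = sum of the u_i (computed strictly left to right, as the paper
    requires for security), sk = H(u).  R is an m x n binary matrix and
    each share is a length-n bit vector."""
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

    u_parts = [matvec(R, share) for share in msk_shares]
    u = u_parts[0]
    for part in u_parts[1:]:              # ((u1 + u2) + u3) + ... + ud
        u = [(a + b) % 2 for a, b in zip(u, part)]
    return hashlib.sha256(bytes(u)).digest()   # H modelled as a random oracle
```

Because $R\cdot(\cdot)$ is linear over GF(2), running GenSK on the shares gives the same session key as running it on the unshared master key, which is exactly why the sharing is transparent to the receiver.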
<br>
<div align="justify">
For many other details, the paper is available on <a href="https://eprint.iacr.org/2016/573" target="_blank">ePrint</a>.
</div>
</body>
</html>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-85883375209169490492016-10-28T13:24:00.000+01:002018-01-19T15:20:11.431+00:00What is SPDZ? Part 2: Circuit Evaluation<div style="text-align: justify;">
This blog post is the second in a series of three in which we describe what SPDZ is. The first part can be found <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-1-mpc-circuit.html" target="_blank">here</a> and the third <a href="https://bristolcrypto.blogspot.co.uk/2016/11/what-is-spdz-part-3-spdz-specifics.html" target="_blank">here</a>.</div>
<div style="text-align: justify;">
<br /></div>
In this part we discuss how we perform the
additions and multiplications of shared secret values as in the SPDZ protocol.<br />
<br />
<h3>
The Preprocessing Model </h3>
The preprocessing model is a framework
in which we assume the existence of a 'trusted dealer' who distributes
'raw material' to the parties before any circuit evaluation takes place.
The reason for doing this is that if we have the 'raw material' already
in place, the circuit can be evaluated very efficiently. <br />
<br />
We
realise this by splitting the protocol into two phases: a
'preprocessing' (or 'offline') phase where we generate the preprocessed
data, and an 'online' phase where we evaluate the circuit. (The term
'offline' is a little misleading, since the parties still communicate.)<br />
<br />
<h3>
Evaluating the Circuit</h3>
We now suppose that the parties have agreed on some arithmetic circuit they want to evaluate jointly. <br />
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
Providing input</h4>
<div style="text-align: justify;">
The first step of the circuit evaluation is to read in values from (some of) the parties. In SPDZ, this means getting each party with input, say $P_i$ with input $x^i$ (the superscript $i$ is an index to avoid confusion with $x_i$, which denotes a share of $x$), to secret-share $x^i$ among the parties.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
To do this, we need to use a sharing of a random value: each party $P_j$ holds some $r_j$ such that the value $r = \sum_j r_j$ is uniformly random and not known by any party. First, every party sends its share $r_j$ of $r$ to party $P_i$. This is called a 'partial opening', because $P_i$ now discovers the value of $r$ (by summing the shares).<br />
<br />
Party
$P_i$ then broadcasts the value $x^i - r$. Party $P_1$ sets its share
of $x^i$ as $x_1^i=r_1 + x^i - r$, and $P_j$ for $j > 1$ sets its
share of $x^i$ as $x^i_j=r_j$. (In practice, it is not always $P_1$ who
does a different computation.)<br />
<br />
Thus we have turned $x^i$ into $\langle x^i \rangle$.<br />
<br /></div>
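The input step can be sketched in Python. This is an illustrative simulation that runs all parties in one process; the prime `P` and the function name `share_input` are my own choices, not from the protocol specification, and the preprocessed random value is generated on the fly rather than taken from the offline phase.

```python
import secrets

P = 2**61 - 1  # a prime field, chosen arbitrarily for this sketch

def share_input(x, n_parties):
    """Sketch of the SPDZ input step: the parties hold shares r_j of a
    preprocessed random r; the inputting party learns r via the partial
    opening, broadcasts x - r, and one designated party adds that public
    offset to its share of r."""
    r_shares = [secrets.randbelow(P) for _ in range(n_parties)]
    r = sum(r_shares) % P          # P_i recovers r from the opened shares
    delta = (x - r) % P            # the broadcast value x^i - r
    x_shares = [(r_shares[0] + delta) % P] + r_shares[1:]  # P_1 adds delta
    return x_shares
```

Summing the returned shares recovers $x$, and to any strict subset of parties the shares look uniformly random.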
<h4 style="text-align: justify;">
Addition</h4>
<div style="text-align: justify;">
Suppose
we want to compute some arithmetic circuit on our data; that is, some
series of additions and multiplications. We have the following very
useful properties of an additive sharing scheme:</div>
<ul style="text-align: justify;">
<li>If we have some secret-shared field elements $\langle a \rangle$ and
$\langle b \rangle$, so party $P_i$ has $a_i$ and $b_i$, each party can
locally compute $a_i + b_i$, and hence, since $\sum_i a_i + \sum_i b_i =
\sum_i (a_i+b_i)$, they obtain a secret sharing $\langle a + b \rangle$
of the value $a+b$.</li>
<li>If each party multiplies its share by some public value $\alpha$,
then since $\sum_i \alpha a_i = \alpha \sum_i a_i = \alpha a$ the
parties obtain a secret sharing $\langle \alpha a \rangle$ of the value
$\alpha a$.</li>
</ul>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In
other words, the secret-sharing is linear; this is good because it
means there is no communication cost for doing these operations.
Unfortunately, we have to do a bit more work to multiply secret-shared
elements.</div>
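The two linearity properties above can be shown in a few lines of Python. This is a sketch over a small prime field of my choosing; each list element plays the role of one party's share, and each operation is purely local to that party.

```python
P = 101  # a small prime field, for illustration only

def add_shares(a_sh, b_sh):
    """<a> + <b>: each party adds its own two shares locally,
    with no communication."""
    return [(ai + bi) % P for ai, bi in zip(a_sh, b_sh)]

def scalar_mul(alpha, a_sh):
    """alpha * <a> for a public alpha: each party scales its share."""
    return [(alpha * ai) % P for ai in a_sh]
```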
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
Multiplication</h4>
<div style="text-align: justify;">
In
1991, Donald Beaver [2] observed the following: suppose we want to
compute $\langle x y \rangle$ given some $\langle x \rangle$ and
$\langle y \rangle$, and we already have three secret-shared values,
called a ‘triple’, $\langle a \rangle$, $\langle b \rangle$ and $\langle
c \rangle$ such that $c = a b$. Then note that if each party broadcasts
$x_i-a_i$ and $y_i - b_i$, then each party $P_i$ can compute $x-a$ and
$y-b$ (so these values are publicly known), and hence compute
\[ z_i=c_i + (x-a) b_i + (y-b) a_i\]
Additionally, one party (chosen arbitrarily) adds on the public value $(x-a)(y-b)$ to their share so that summing all the shares up, the parties get
\[\sum_i z_i = c + (x-a)b + (y-b)a + (x-a)(y-b) = xy\]
and so they have a secret sharing $\langle z \rangle$ of $xy$.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The
upshot is that if we have lots of triples, then we can perform many
multiplications on secret-shared data and hence we can compute any
arithmetic circuit. (Note that a triple cannot be reused because this
would reveal information about the secrets we are trying to multiply -
i.e. since $x-a$ and $y-b$ are made public in the process above)</div>
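Beaver's trick can be sketched in Python. This is an illustrative single-process simulation under my own naming and field choice; in the real protocol the openings of $x-a$ and $y-b$ are broadcasts between parties, and the triple comes from the offline phase.

```python
import secrets

P = 2**61 - 1  # a prime field, chosen arbitrarily for this sketch

def share(v, n):
    """Additively share v mod P among n parties."""
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    return s + [(v - sum(s)) % P]

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply <x> and <y> using a triple (<a>, <b>, <c>) with c = a*b.
    Only x - a and y - b are opened (made public); everything else is
    local computation on shares."""
    d = sum(xi - ai for xi, ai in zip(x_sh, a_sh)) % P   # open x - a
    e = sum(yi - bi for yi, bi in zip(y_sh, b_sh)) % P   # open y - b
    z_sh = [(ci + d * bi + e * ai) % P
            for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P  # one party adds the public (x-a)(y-b)
    return z_sh
```

Since $a$ and $b$ are uniformly random and unknown, the opened values $x-a$ and $y-b$ reveal nothing about $x$ and $y$, which is also why the triple must not be reused.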
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Importantly,
observe that not only do these triples not depend on the input
secret-shared values they are used to multiply, they are also completely
independent of the circuit to be evaluated. This means that we can
generate these triples at any point prior to evaluating the circuit. The
values of $a$, $b$ and $c$ are not known by any parties when generated -
each party only knows a share of each of 'some values' for which they
are told this relation holds.</div>
<div style="text-align: justify;">
<br /></div>
Moreover,
if these triples are generated in the offline phase at some point
before the circuit evaluation, since addition, scalar multiplication and
field multiplication are inexpensive in terms of communication and
computation, the online phase is both highly efficient and
information-theoretically secure.<br />
<br />
Combining the above, we have now established roughly how SPDZ evaluates an arithmetic circuit.<br />
<br />
<i><b>Next time</b></i>: In <a href="https://bristolcrypto.blogspot.co.uk/2016/11/what-is-spdz-part-3-spdz-specifics.html" target="_blank">the final part</a>, we will look at what makes SPDZ different from other MPC protocols which achieve the same thing.<br />
<br />
<h4>
References </h4>
<div style="text-align: justify;">
[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. <a href="http://www.eng.tau.ac.il/%7Ebennyap/pubs/AGCfull.pdf" target="_blank">How to garble arithmetic circuits</a>. 52nd FOCS, pp120–129. IEEE Computer Society Press, 2011</div>
<div style="text-align: justify;">
[2] D. Beaver. <a href="https://link.springer.com/chapter/10.1007%2F3-540-46766-1_34" target="_blank">Efficient Multiparty Protocols using Circuit Randomisation</a>. In J. Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.</div>
<div style="text-align: justify;">
[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. <a href="https://eprint.iacr.org/2010/514" target="_blank">Semi-homomorphic encryption and multiparty computation</a>. In EUROCRYPT, pp169-188, 2011.</div>
<div style="text-align: justify;">
[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. <a href="https://eprint.iacr.org/2012/642" target="_blank">Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits</a>.
In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol.
8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.</div>
<div style="text-align: justify;">
[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. <a href="https://eprint.iacr.org/2011/535" target="_blank">Multiparty computation from somewhat homomorphic encryption</a>. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.</div>
<div style="text-align: justify;">
[6] I. Damgard and S. Zakarias. <a href="https://eprint.iacr.org/2012/512" target="_blank">Constant-overhead secure computation of boolean circuits using preprocessing</a>. In TCC, pp621-641, 2013.</div>
<div style="text-align: justify;">
[7] M. Keller and E. Orsini and P. Scholl. <a href="https://eprint.iacr.org/2016/505" target="_blank">MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer</a>. Cryptology ePrint Archive, Report 2016/505, 2016.</div>
<div style="text-align: justify;">
[8] J. Buus Nielsen, P. Nordholt, C. Orlandi, and S. Burra. <a href="https://eprint.iacr.org/2011/091" target="_blank">A new approach to practical active-secure two-party computation</a>.
In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in
Cryptology CRYPTO 2012, volume 7417 of Lecture Notes in Computer
Science, pp681-700. Springer Berlin Heidelberg, 2012.</div>
<div style="text-align: justify;">
[9] A. Shamir. <a href="https://dl.acm.org/citation.cfm?id=359176" target="_blank">How to Share a Secret</a>. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.</div>
<div style="text-align: justify;">
[10] A. Yao. <a href="https://dl.acm.org/citation.cfm?id=1382944" target="_blank">How to generate and exchange secrets</a>. In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.</div>
<div style="text-align: justify;">
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-2857246783304155242016-10-21T13:29:00.002+01:002016-11-14T09:09:58.856+00:00What is SPDZ? Part 1: MPC Circuit Evaluation Overview<div style="text-align: justify;">
This blog post is the first in a series of three in which we look at
what MPC circuit evaluation is, an outline of how MPC protocols in the so-called 'preprocessing model' work, and
finally the specifics of SPDZ. They will come in weekly instalments. </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
In this part, we will introduce the idea of MPC circuit evaluation. </div>
<div style="text-align: justify;">
<br />
</div>
<h3 style="text-align: justify;">
Introduction</h3>
<div style="text-align: justify;">
If you do research in the field of cryptography, at some point you’ve quite possibly come across the curiously named SPDZ ('speedz'). The aim of this blog post is to explain what it is and why it’s used. In order to keep this post as short and accessible as possible, lots of the details are omitted, and where new concepts are introduced, they are kept superficial.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
We start by defining secure multi-party computation (MPC): MPC is a way by which multiple parties can compute some function of their combined secret input without any party revealing anything more to the other parties about their input other than what can be learnt from the output.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Let’s make this more concrete: suppose there are two millionaires who want to know which of them has more money without revealing exactly how much money they have. How can they do this? Clearly we can do it with MPC, provided it exists.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Thankfully, MPC does exist. It is used in many different contexts and has various applications, ranging from the 'simple' and specific such as oblivious transfer (more on this later), to the relatively general-purpose functionality of joint circuit computation. SPDZ is an MPC protocol allowing joint computation of arithmetic circuits.</div>
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
Circuit Garbling vs Secret Sharing</h4>
<div style="text-align: justify;">
There are two main constructions of MPC protocols for circuit evaluation: circuit garbling and secret sharing.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The answer to the so-called millionaire’s problem was first found in the 1980s with Yao’s garbled circuits [10]. As circuit garbling is a largely parallel line of work to the secret-sharing model we use in SPDZ, we will not discuss it here.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Contrasting this, the SPDZ protocol is a secret-sharing-based MPC protocol.</div>
<div style="text-align: justify;">
<br /></div>
<h3 style="text-align: justify;">
Secret-Sharing-Based MPC</h3>
<div style="text-align: justify;">
Whereas circuit garbling involves encrypting and decrypting keys in a specific order to emulate a circuit evaluation (originally a Boolean circuit, but now arithmetic circuits too [1]), SPDZ instead ‘secret shares’ inputs amongst all parties and uses these shares to evaluate a circuit.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
SPDZ is neither the first nor the only secret-sharing-based MPC protocol. Other well known constructions include BDOZ [3], TinyOT [8] and MiniMAC [6]. MASCOT [7] can be seen as an oblivious-transfer-based version of SPDZ. This will be discussed in a little more detail later on.</div>
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
What is secret sharing?</h4>
<div style="text-align: justify;">
Suppose I have some field element $a \in \mathbb{F}$, split it up ‘at random’ (uniformly) into two pieces, $a = a_1 + a_2$, and give party $P_1$ the value $a_1$ and $P_2$ the value $a_2$. Neither party knows the value $a$, but together they can recover it. We will write $\langle a \rangle$ to mean that the value $a$ is secret-shared between all parties (i.e. for each i, party $P_i$ has $a_i$, where $\sum_i a_i = a$).</div>
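Additive secret sharing is short enough to write out in full. The following is an illustrative Python sketch over a small prime field of my choosing (the paragraph above describes the two-party case; this generalises to $n$ parties):

```python
import secrets

P = 101  # a small prime field F_p, for illustration only

def split(a, n):
    """Additively secret-share a in F_p: n-1 uniformly random shares,
    plus one correction share so the shares sum to a mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((a - sum(shares)) % P)
    return shares

def recover(shares):
    """All shares together recover the secret; any strict subset of
    them is uniformly distributed and reveals nothing."""
    return sum(shares) % P
```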
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Of course, there are different ways of secret sharing data (e.g. the analogous multiplicative sharing $a = a_1 \cdot a_2$, and also more complicated schemes like Shamir’s [9]), but it turns out that the additive scheme is particularly useful for MPC applications, as we shall see.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The basic overview of secret-sharing MPC of arithmetic circuits (SSMPCoAC?) is the following: </div>
<ol>
<li>The parties first secret-share their inputs; i.e. input $x^i$ is shared so that $\sum_j x_j^i = x^i$ and party $P_j$ holds $x_j^i$ (and $P_i$ which provides input is included in this sharing, even though it knows the sum).</li>
<li>The parties perform additions and multiplications on these secret values by local computations and communication of certain values (in methods specified below). By construction, the result of performing an operation is automatically shared amongst the parties (i.e. with no further communication or computation).</li>
<li>Finally, the parties 'open' the result of the circuit evaluation. This last step involves each party sending their 'final' share to every other party (and also performing a check that no errors were introduced by the adversary along the way).</li>
</ol>
These are the steps we follow in a few different MPC circuit evaluation protocols, as we have discussed. The way we compute the circuit differs (slightly) with the protocol.<br />
<br />
<i><b>Next time</b></i>: In the <a href="https://bristolcrypto.blogspot.co.uk/2016/10/what-is-spdz-part-2-circuit-evaluation.html" target="_blank">next part</a> in this series, we will see how to use these secret-shared values to evaluate an arithmetic circuit as in the SPDZ protocol.<br />
<div style="text-align: justify;">
<h4>
<b> </b></h4>
<h4>
<b>References</b> </h4>
[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. <a href="http://www.eng.tau.ac.il/%7Ebennyap/pubs/AGCfull.pdf" target="_blank">How to garble arithmetic circuits</a>. 52nd FOCS, pp120–129. IEEE Computer Society Press, 2011</div>
<div style="text-align: justify;">
[2] D. Beaver. <a href="https://link.springer.com/chapter/10.1007%2F3-540-46766-1_34" target="_blank">Efficient Multiparty Protocols using Circuit Randomisation</a>. In J. Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.</div>
<div style="text-align: justify;">
[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. <a href="https://eprint.iacr.org/2010/514" target="_blank">Semi-homomorphic encryption and multiparty computation</a>. In EUROCRYPT, pp169-188, 2011.</div>
<div style="text-align: justify;">
[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. <a href="https://eprint.iacr.org/2012/642" target="_blank">Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits</a>.
In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol.
8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.</div>
<div style="text-align: justify;">
[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. <a href="https://eprint.iacr.org/2011/535" target="_blank">Multiparty computation from somewhat homomorphic encryption</a>. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.</div>
<div style="text-align: justify;">
[6] I. Damgard and S. Zakarias. <a href="https://eprint.iacr.org/2012/512" target="_blank">Constant-overhead secure computation of boolean circuits using preprocessing</a>. In TCC, pp621-641, 2013.</div>
<div style="text-align: justify;">
[7] M. Keller and E. Orsini and P. Scholl. <a href="https://eprint.iacr.org/2016/505" target="_blank">MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer</a>. Cryptology ePrint Archive, Report 2016/505, 2016.</div>
<div style="text-align: justify;">
[8] J. Buus Nielsen, P. Nordholt, C. Orlandi, and S. Burra. <a href="https://eprint.iacr.org/2011/091" target="_blank">A new approach to practical active-secure two-party computation</a>.
In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in
Cryptology CRYPTO 2012, volume 7417 of Lecture Notes in Computer
Science, pp681-700. Springer Berlin Heidelberg, 2012.</div>
<div style="text-align: justify;">
[9] A. Shamir. <a href="https://dl.acm.org/citation.cfm?id=359176" target="_blank">How to Share a Secret</a>. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.</div>
<div style="text-align: justify;">
[10] A. Yao. <a href="https://dl.acm.org/citation.cfm?id=1382944" target="_blank">How to generate and exchange secrets</a>. In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.</div>
<div style="text-align: justify;">
<br /></div>
<br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-65645785826782814092016-10-06T11:48:00.001+01:002016-10-06T11:48:48.030+01:00Study Group: Crying Wolf: An Empirical Study of SSL Warning Effectiveness<div style="text-align: justify;">
Today's study group was on the now slightly dated 2009 paper 'Crying Wolf: An Empirical Study of SSL Warning Effectiveness' [1], which was published at USENIX. In cryptography research, it is easy to overlook implementation and usability and instead focus on theory. As is succinctly explained in <a href="https://xkcd.com/538/" target="_blank">Randall Munroe's well-known comic</a>, the weaknesses in our cryptographic solutions are seldom in the constructions themselves, but in their real-world application.<br />
<br />
This paper explores the use and design of the warnings which modern (!) browsers present to a user when SSL certificates cannot be verified, and in particular the user's reaction to them. There is little point in a cryptographically secure system of authentication if the end user ignores warnings and proceeds past them when presented with them. The authors suggest that when browsers '<a href="https://en.wikipedia.org/wiki/The_Boy_Who_Cried_Wolf" target="_blank">cry wolf</a>' upon encountering SSL errors, users become desensitised over time, learn to ignore these warnings, and thus become susceptible to having their data stolen.</div>
<br />
<h4 style="text-align: justify;">
What is SSL?</h4>
<div style="text-align: justify;">
(The initiated can skip this.)<br />
SSL stands for Secure Sockets Layer, and is a method by which a client can access a web server securely. The SSL Handshake protocol uses a so-called SSL certificate to verify a server's authenticity to a client. An SSL certificate specifies whom the certificate was issued to, whom it was issued by, the period of validity and the server's public key. (Old SSL protocols have been superseded by <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security" target="_blank">TLS</a>, but the principles involved are essentially the same.) At a very high level, the protocol proceeds as follows:</div>
<ol style="text-align: justify;">
<li> The client sends a 'hello' message to the server, requesting content. </li>
<li> The server sends the client its certificate, which contains its public key.</li>
<li> The client checks that the certificate is valid.</li>
<li> If the check passes, the client generates a session key, encrypts it under the server's public key, and sends this to the server. If the check fails, the client aborts.</li>
<li> The server decrypts the session key using its secret key.</li>
</ol>
<div style="text-align: justify;">
The client and the server can now encrypt all data sent between them using the (symmetric) session key.</div>
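To make the flow above concrete, here is a toy Python sketch of steps 2-5 (my own illustration, not real TLS): textbook RSA with tiny, wildly insecure demo numbers stands in for the server's certificate key pair, and XOR with a hash-based keystream stands in for the symmetric session cipher. Real TLS adds nonces, key derivation, authenticated encryption and much more.

```python
import hashlib
import os

# Tiny textbook-RSA key pair -- demo values only, NOT secure.
p, q = 61, 53
n = p * q            # public modulus, 3233
e = 17               # public exponent (part of the certificate)
d = 2753             # private exponent: e*d = 1 mod lcm(p-1, q-1)

def keystream(key: int, length: int) -> bytes:
    """Derive `length` pseudorandom bytes from the shared session key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

# Step 4: client picks a session key and encrypts it under the server's public key.
session_key = int.from_bytes(os.urandom(2), "big") % n
wrapped = pow(session_key, e, n)

# Step 5: server recovers the session key with its secret exponent.
unwrapped = pow(wrapped, d, n)

# Both sides can now encrypt traffic symmetrically under the session key.
msg = b"hello over the secured channel"
ct = bytes(a ^ b for a, b in zip(msg, keystream(session_key, len(msg))))
pt = bytes(a ^ b for a, b in zip(ct, keystream(unwrapped, len(ct))))
```

Note that nothing in this sketch authenticates the server: that is exactly the job of the certificate checks in step 3, and it is what the warnings below are about.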
<div style="text-align: justify;">
<br /></div>
<h4 style="text-align: justify;">
What can go wrong?</h4>
<div style="text-align: justify;">
If the certificate is invalid, the client aborts. The problems this study considers are: </div>
<ul style="text-align: justify;">
<li> Expired certificate: the certificate is no longer valid.</li>
<li> Unknown certificate authority: the issuing authority is not known.</li>
<li> Domain mismatch: the domain of the web server and the certificate's listed domain do not match.</li>
</ul>
<div style="text-align: justify;">
If one of the above occurs, the web browser will alert the user. The purpose of the study was to assess how effectively the browser conveys the severity of the problem to the user: if strong warnings appear where the risks are small, people learn to assume that high-risk situations carrying the same warning are just as innocuous.</div>
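As a toy illustration (my own simplified sketch, not code from the study or from any browser), the three checks can be expressed as a small classifier over a simplified certificate record:

```python
from datetime import datetime, timezone

def classify_certificate(cert: dict, hostname: str, trusted_cas: set,
                         now: datetime) -> list:
    """Toy classifier for the three warning causes in the study.
    `cert` is a simplified dict, not a real X.509 structure."""
    problems = []
    if cert["not_after"] < now:
        problems.append("expired certificate")
    if cert["issuer"] not in trusted_cas:
        problems.append("unknown certificate authority")
    if hostname != cert["subject"] and hostname not in cert.get("alt_names", []):
        problems.append("domain mismatch")
    return problems

# A hypothetical certificate exhibiting all three problems at once:
cert = {
    "subject": "www.example.com",
    "alt_names": ["example.com"],
    "issuer": "Example CA",
    "not_after": datetime(2009, 1, 1, tzinfo=timezone.utc),
}
warnings = classify_certificate(cert, "mail.example.com", {"Some Other CA"},
                                now=datetime(2009, 6, 1, tzinfo=timezone.utc))
```

The study's point is that browsers collapse these distinct failure modes, with very different risk profiles, into warnings that users perceive as interchangeable.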
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
</div>
<h3 style="text-align: justify;">
The Studies</h3>
<div style="text-align: justify;">
</div>
<h4 style="text-align: justify;">
Survey</h4>
<div style="text-align: justify;">
Using a survey, the authors gathered data from 409 web users on their reactions to SSL warnings and their overall comprehension of the risks involved in ignoring them.<br />
<br />
They found that context (i.e. the type of website visited) made little difference to whether or not a user would heed the warnings.<br />
<br />
According to the data, respondents who understood 'Domain mismatch' and 'Unknown certificate authority' warnings were less likely to proceed than those who did not, whereas those who understood certificate expiry errors were more likely to proceed. In fact, the experimenters found that users consistently rated the risk of an expired certificate lower than that of the other two errors.<br />
<br />
The authors additionally report some wonderful responses from users, including:<br />
<ul>
<li> 'I use a Mac, so nothing bad would happen'</li>
<li> 'Since I use FreeBSD, rather than Windows, not much [risk]'</li>
<li> 'On my Linux box, nothing significantly bad would happen'</li>
</ul>
<br /></div>
<h4 style="text-align: justify;">
Laboratory Experiment</h4>
<div style="text-align: justify;">
A set of 100 participants were asked to use four websites to complete different tasks. One website was a banking website with an invalid certificate, one a library website with an invalid certificate, and two were other sites used as dummies.<br />
<br />
The participants were shown either Internet Explorer 7 (IE7), Firefox 2 (FF2), Firefox 3 (FF3), or one of two newly-designed SSL warnings. The <a href="https://www.sslshopper.com/assets/images/ie7-certificate-not-trusted.png" target="_blank">IE7 warning</a> fills the whole page but requires just one click to ignore. The <a href="https://www.sslshopper.com/assets/images/Firefox2-certificate-not-trusted.png" target="_blank">FF2 warning</a> is a pop-up window and also only requires one click to ignore. The first version of the <a href="https://www.sslshopper.com/assets/images/ff3_self_signed_error.png" target="_blank">FF3 warning</a> needed 11 steps: 'They made the original version of the warning so difficult for users to override, that only an expert could be likely to figure out how to do it.' The first new design was multi-page and asked users to specify the nature of the website they were visiting, presenting severe warnings for websites requiring a high level of security and milder warnings otherwise. The second new design was similar to the FF3 warning but 'looked more severe'. Images can be found in the paper.</div>
<br />
<div style="text-align: justify;">
For the library website, the IE7, FF2 and multi-page warnings did not prevent people from proceeding compared to the FF3 warning, and the single-page warning was similar to the previous warnings.<br />
<br />
For the banking website, the two new warnings did prevent people from accessing the website, but no more than the FF3 warning. The new warnings and the FF3 warning outperformed the IE7 and FF2 warnings in preventing people from accessing the website.<br />
<br />
<h4>
Conclusions</h4>
In conclusion, the authors say that the average user does not understand the dangers that SSL warnings signal, and as such the decision of whether or not to proceed should essentially be made for them by the browser in most cases.<br />
<br />
More recently, Chrome redesigned its SSL warnings due to the large proportion of users who simply ignored them [2].<br />
<br />
To see different SSL warnings in your current browser, visit <a href="http://badssl.com/">badssl.com</a>.
<br />
<br />
<h3>
References </h3>
[1] Crying Wolf: An Empirical Study of SSL Warning Effectiveness by Joshua Sunshine, Serge Egelman, Hazim Almuhimedi, Neha Atri and Lorrie Faith Cranor. In Proceedings of the 18th Conference on USENIX Security Symposium, 2009; <a href="http://dl.acm.org/citation.cfm?id=1855793" target="_blank">link</a>.<br />
[2] Improving SSL Warnings: Comprehension and Adherence by Adrienne Porter Felt, Alex Ainslie, Robert W. Reeder, Sunny Consolvo, Somas Thyagaraja, Alan Bettes, Helen Harris and Jeff Grimes. In CHI 2015; <a href="http://dl.acm.org/citation.cfm?id=2702442" target="_blank">link</a>.</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-62652047678763982352016-09-29T11:42:00.000+01:002016-09-29T11:42:41.002+01:00Study Group: On the Impossibility of Tight Cryptographic ReductionsToday I kicked off the study groups for 2016/17. I spoke about <i><a href="https://eprint.iacr.org/2015/374">On the Impossibility of Tight Cryptographic Reductions</a>, </i>from this year's Eurocrypt. Keen readers of the blog might recall that this paper is a <a href="http://bristolcrypto.blogspot.co.uk/2016/05/some-reductions-will-always-be-lossy.html">particular favourite of mine</a>.<br />
<br />
Since I've written about it before, I won't go into much detail about the paper. Instead I'll say why I (still) like it, and a bit about how it's shaped my own work this year.<br />
<br />
So, why choose this paper again? First and foremost, it's just <b>really good</b>. It's well written and the result - that certain reductions are necessarily lossy - has big implications for the way we do things in provable security. There is an increasing drive for theoreticians to produce work that has a meaningful impact on the real world. Choosing security parameters is an important part of that picture, but this paper shows that the traditional tools of provable security can sometimes be inadequate in this regard - especially in a setting like the internet, with billions of users of cryptography.<br />
<br />
Is it that our methods need changing? Or should practitioners ignore the theory and 'go with their gut' when choosing parameters? Do we need to actively avoid those cryptographic primitives for which reductions are always lossy, like rerandomisable signatures and encryption schemes where each public key has a unique secret key? These are profound questions for the community.<br />
<br />
Another reason I chose to talk about this paper is that it's nicely self-contained. This is not an incremental result about something obscure. Almost everyone here at Bristol has encountered reductions, and after recalling the standard EUF-CMA definition for signatures it was easy to build up to the main theorem of the paper (or at least the particular case of signatures in the main theorem). If any new PhD students are looking for some theory to get their teeth into, this paper would be a great starting point.<br />
<br />
Finally, I cheated a bit by giving my presentation about a paper that I've become very familiar with in the last few months, as I'm trying to extend it. At the moment, the result only applies to certain public-key primitives; I'd like to say something about multi-key to single-key reductions for <i>symmetric </i>encryption (which is of particular relevance to my PhD project, on Key Management APIs).<i> </i>I hope to have more to say on this in the not-too-distant future.Anonymousnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-55669075533904324712016-08-28T02:59:00.001+01:002016-09-07T18:52:38.505+01:00Crypto 2016: Breaking the Circuit Size Barrier for Secure Computation Under DDH<div dir="ltr" style="text-align: left;" trbidi="on">
The CRYPTO 2016 Best Paper Award went to Boyle et al. [1]. The paper provides several new protocols based on the DDH assumption, with applications to 2PC (two-party computation), private information retrieval, and function secret sharing.<br />
<br />
Even more interesting, the authors present a protocol realising 2PC for branching programs in which the communication complexity depends only on the input size, while the computation remains linear in the circuit size.<br />
<br />
The central idea revolves around building efficient evaluation of RMS (restricted multiplication straight-line) programs. The special feature of RMS programs is that they allow multiplications only with memory and input values; additions between memory values come for free. Although this class seems quite restrictive, it covers the class of branching programs (logarithmic-depth Boolean circuits with polynomial size and bounded input).<br />
<br />
In the 2PC evaluation of RMS programs, suppose there is a linearly shared memory value $[y]$ between the parties $P_1$ and $P_2$. When $P_1$ wants to share an input value $x$ with $P_2$, it sends an ElGamal encryption of $x$, namely $g^{xc}$, where $c$ is a symmetric ElGamal key. Clearly, the encryption is homomorphic with respect to multiplication, but how can we perform operations between a linearly SS (secret-shared) value and an ElGamal encryption?<br />
<br />
This is solved by introducing a distributed DLog procedure which converts the ElGamal ciphertexts into linearly SS values. The method uses a truncated PRF and counts the number of steps until the PRF, evaluated on the ElGamal encryption, equals $0$. Unfortunately this algorithm has some probability of outputting an incorrect result, but this can be fixed by evaluating multiple instances of the same protocol in parallel and then using an MPC protocol to select the majority result.<br />
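The share-conversion step can be illustrated with a small numeric sketch. The following is my own toy rendering of the idea, not the paper's exact construction: two parties hold group elements that differ by a small known-form offset $g^z$, each walks forward multiplying by $g$ until it hits a pseudorandom "distinguished point", and the step counts form subtractive shares of $z$, except when a distinguished point falls between the two starting positions, which is what the majority vote over parallel repetitions repairs.

```python
import hashlib
import random
from collections import Counter

P = 2 ** 127 - 1   # a Mersenne prime; the toy group is Z_P^*
G = 3              # base for the walk (demo choice, not from the paper)

def distinguished(h: int) -> bool:
    """Pseudorandom predicate playing the role of 'PRF(h) = 0';
    roughly 1 in 256 group elements are distinguished."""
    return hashlib.sha256(str(h).encode()).digest()[0] == 0

def ddlog_share(h: int) -> int:
    """Count multiplications by G until a distinguished point is hit."""
    steps = 0
    while not distinguished(h):
        h = (h * G) % P
        steps += 1
    return steps

# Party 1 holds h1; party 0 holds h0 = h1 * G^z, i.e. z steps ahead.
# The difference of their step counts equals z unless a distinguished
# point separates the two starting positions (probability about z/256),
# so we repeat on fresh instances and take the majority.
random.seed(1)
z = 3
votes = []
for _ in range(50):
    h1 = random.randrange(2, P)
    h0 = (h1 * pow(G, z, P)) % P
    votes.append(ddlog_share(h1) - ddlog_share(h0))
recovered = Counter(votes).most_common(1)[0][0]
```

In the real protocol the parties never compare counts directly; the counts themselves are the new linear shares, and the majority selection is done under MPC.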
<br />
Of course, there are some caveats at the beginning of the scheme, such as converting the key generation procedure to a public-key one and removing circular-security assumptions on the key. These are presented gradually by the authors, easing the reader's understanding of the ideas.<br />
<br />
What I find neat is that at the end of the paper we can easily see how to reduce the communication for general 'dense' arithmetic circuits by splitting them into multiple reduced-depth chunks and then applying the RMS programs to each gate (because an addition or multiplication gate can be represented as a branching program).<br />
<br />
Of course we can spot some open problems left as future work such as:<br />
<ol>
<li>Extend the protocols for larger classes other than branching programs.</li>
<li>The protocol only works for $2$ parties. Can we find something with constant communication for multiple parties without using FHE?</li>
<li>Can we secure the protocol against malicious parties in some way other than a generic compiler as in [2]?</li>
</ol>
<br />
[1]: Boyle, Elette, Niv Gilboa, and Yuval Ishai. "Breaking the Circuit Size Barrier for Secure Computation Under DDH." CRYPTO 2016.<br />
[2]: Ishai, Yuval, et al. "Cryptography with constant computational overhead." <i>Proceedings of the fortieth annual ACM symposium on Theory of computing</i>. ACM, 2008.</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-88075245012603708012016-08-26T14:30:00.001+01:002016-08-26T14:30:10.052+01:00CHES 2016: On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: justify;">
During the morning session on the final day of CHES 2016, Dahmun Goudarzi presented his <a href="https://eprint.iacr.org/2016/557">paper</a>, co-authored with Matthieu Rivain, on bit-sliced higher-order masking.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Bit-sliced higher-order masking of S-boxes is an alternative to higher-order masking schemes where an S-box is represented by a polynomial over a binary finite field. The basic idea is to <i>bit-slice </i>the Boolean circuits of all the S-boxes used in a cipher round. Securing a Boolean AND operation, needed in the bit-sliced approach, is significantly faster than securing a multiplication over a binary finite field, needed in polynomial-based masking schemes. On the other hand, the number of AND operations required is significantly higher than the number of field multiplications in the polynomial approach. Still, the use of bit-slicing with relatively large registers (for instance, 32-bit registers) previously led the same authors to demonstrate significant improvements over polynomial-based masking schemes for specific block ciphers such as AES and PRESENT [<i>GR16</i>]. However, no generic method to apply bit-sliced higher-order masking to arbitrary S-boxes was previously known, and proposing such a method is one of the main contributions of the current work.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The running time and the randomness requirement of the bit-sliced masking technique mainly depend on the <i>multiplicative complexity</i>, i.e., the number of AND gates in the masked circuit. Indeed, a more precise measure is the <i>parallel </i>multiplicative complexity<i>. </i>While it was already known from previous works how to obtain optimal circuits (w.r.t. multiplicative complexity) for small S-boxes by using SAT solvers, the same problem for 6-bit or larger S-boxes remained open. In the current work, the authors propose a new heuristic method to obtain Boolean circuits of low multiplicative complexity for arbitrary S-boxes. The proposed method follows the same approach as a previous work [<i>CRV14</i>] that computes efficient polynomial representations of S-boxes over binary finite fields. The authors provide a heuristic analysis of the multiplicative complexity of their proposed method that is quite close to the experimental results for S-box sizes of practical relevance. Finally, they implement the bit-sliced masking technique evaluating sixteen 4-bit S-boxes in parallel, and again sixteen 8-bit S-boxes in parallel, on a 32-bit ARM architecture. The timing results indicate that the bit-sliced masking method performs significantly better than the polynomial-based masking methods once the number of shares exceeds a certain bound.</div>
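Since the cost centre of the bit-sliced approach is the masked AND, it may help to see what such a secure AND looks like. The following is a toy Python rendering of the classic ISW-style multiplication on bit-sliced XOR-shares (an illustration of the general technique, not the authors' optimised ARM implementation):

```python
import secrets

WIDTH = 32   # register width: each word carries 32 independent bit-slices

def share(x: int, n: int) -> list:
    """Split a WIDTH-bit word into n XOR-shares."""
    shares = [secrets.randbits(WIDTH) for _ in range(n - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares: list) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

def masked_and(a: list, b: list) -> list:
    """ISW-style secure AND on XOR-shared bit-sliced words: the XOR of
    the returned shares equals unshare(a) & unshare(b).  Each bitwise
    AND below acts on all WIDTH slices in parallel."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(WIDTH)   # fresh randomness per share pair
            c[i] ^= r
            c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return c
```

With $n$ shares, each masked AND consumes $n(n-1)/2$ fresh random words, which is why the number of AND gates (the multiplicative complexity) drives both the running time and the randomness cost.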
<div style="text-align: justify;">
</div>
<div style="text-align: justify;">
<b>References:</b> </div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
[<i>CRV14</i>] Jean-Sébastien Coron, Arnab Roy, <i>Srinivas Vivek</i>: <a href="http://dx.doi.org/10.1007/978-3-662-44709-3_10" target="_blank">Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures</a>. CHES 2014 & JCEN 2015.</div>
<div style="text-align: justify;">
</div>
<div style="text-align: justify;">
[<i>GR16</i>] Dahmun Goudarzi and Matthieu Rivain. <a href="https://eprint.iacr.org/2016/264">How Fast Can Higher-Order Masking Be in Software?</a> Cryptology ePrint Archive, 2016.</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2752058700154467226.post-67826431683719614382016-08-23T20:55:00.002+01:002016-08-23T21:01:51.273+01:00CRYPTO 2016 – Backdoors, big keys and reverse firewalls on compromised systemsThe morning of the second day of CRYPTO 2016 started with a track on "Compromised Systems": three talks covering scenarios and attacks usually disregarded in the vast majority of the cryptographic literature. All three also shared a concern about mass surveillance.<br />
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have worked hard on this scenario, developing a whole range of tools with different notions of security and setup assumptions, among which one of the most common is that Alice has access to a trusted computer with a proper implementation of the cryptographic protocol she wants to run. The harsh truth is that this is a naïve assumption. The Snowden revelations show us that powerful adversaries can and will corrupt users' machines via extraordinary means, such as subverting cryptographic standards, intercepting and tampering with hardware on its way to users, or using Tailored Access Operations units.</div>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
Nevertheless, the relevance of these talks was not just a matter of a "trending topic", or of distrust of the authoritarian and unaccountable practices of intelligence agencies. More frequently than we would like, presumably accidental vulnerabilities (such as POODLE, Heartbleed, etc.) are found in popular cryptographic software, leaving the final user unprotected even when using honest implementations. In the meantime, as Paul Kocher recalled in his invited talk the day after, most of our community scarcely notices that, when we design our primitives and protocols, we blindly rely on a mathematical model of reality that sometimes has little to do with it. </div>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
In the same way as people from the CHES community have become more aware, mainly also the hard way, that relying on the wrong assumptions leads to a false confidence in the security of deployed systems and devices, I think those of us not that close to hardware should also try to step back and look at how realistic our assumptions are. This includes, as these talks addressed in different ways, starting to assume that some standards might be compromised at some point, that most systems will be, and making sure we understand what can still be done in those cases. </div>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
What would a cryptography that worries not only about prevention, but also about the whole security cycle, look like? How can the cryptography and information security communities come closer together?</div>
<h3 style="margin: 2pt 0cm 0pt;">
Message Transmission with
Reverse Firewalls— Secure Communication on Corrupted Machines</h3>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
The reverse firewalls framework was recently introduced by Mironov and Stephens-Davidowitz, in a paper that has already been discussed in our group's seminars and on this same blog. A secure reverse firewall is a third party that "sits between Alice and the outside world" and modifies her sent and received messages so that, even if her machine has been corrupted, Alice's security is still guaranteed.</div>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
Their elegant construction does not require the users to place any additional trust in the firewall, and relies on the underlying cryptographic schemes being rerandomizable. Within this threat model, and given these rerandomization capabilities, they describe impossibility results as well as concrete constructions. </div>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
For example, in the context of semantically
secure public-key encryption, in order to provide reverse firewalls for Bob,
the scheme must allow a third party to rerandomize a public key and map
ciphertexts under the rerandomized public key to ciphertexts under the original
public key. In the same context, Alice’s reverse firewall must be able to
rerandomize the ciphertext she sends to Bob, in such a way that
Dec(Rerand(Enc(m)))=m.</div>
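For concreteness, here is a toy Python sketch of ciphertext rerandomization for ElGamal over a toy multiplicative group (my own illustrative parameters, not a construction from the paper), showing the property Dec(Rerand(Enc(m))) = m:

```python
import secrets

P = 2 ** 127 - 1   # prime modulus for a toy multiplicative group
G = 3              # demo base (illustrative choice, not a vetted generator)

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)            # (secret key, public key)

def enc(pk: int, m: int):
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def rerand(pk: int, ct):
    """The reverse firewall's job: refresh a ciphertext's randomness
    without knowing the message or the secret key."""
    c1, c2 = ct
    s = secrets.randbelow(P - 2) + 1
    return (c1 * pow(G, s, P)) % P, (c2 * pow(pk, s, P)) % P

def dec(sk: int, ct) -> int:
    c1, c2 = ct
    return (c2 * pow(c1, P - 1 - sk, P)) % P   # c1^{-sk} via Fermat
```

The firewall only multiplies in fresh randomness, so any hidden bias (or covert channel) a corrupted implementation smuggled into $r$ is washed out before the ciphertext reaches the network.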
<br />
<h3 style="margin: 2pt 0cm 0pt;">
Big-Key Symmetric
Encryption: Resisting Key Exfiltration</h3>
<br />
<div style="line-height: normal; margin: 0cm 0cm 0pt; text-align: justify;">
The threat addressed in Bellare's talk is that of malware that aims to exfiltrate a user's key, most likely over her system's network connection. The authors design schemes that protect against this kind of <i>Advanced Persistent Threat</i> by making secret keys so big that their undetected exfiltration by the adversary is difficult, while keeping the user's overhead almost exclusively in terms of storage rather than speed.</div>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
Their main result is a <i>subkey prediction lemma</i>, which gives a nice bound on an adversary's ability to guess a modest-length subkey, derived by randomly selecting bits of a big key from which partial information has already been leaked. This approach, known as the Bounded Retrieval Model, has largely been a theoretical area of research, whereas here the authors give a fully concrete security analysis with good numerical bounds, constants included.</div>
<div style="line-height: normal; margin: 0cm 0cm 0pt; text-align: justify;">
Other highlighted aspects of their paper were the concrete improvements over [ADW09] and the key encapsulation technique carefully based on different security assumptions (random oracle, standard model).</div>
<br />
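A minimal sketch of the big-key idea, with made-up parameters (the key size, probe count and hash-based extractor are all illustrative, not the paper's):

```python
import hashlib
import secrets

def extract_subkey(big_key: bytes, probes: list) -> bytes:
    """Derive a short subkey by probing random bit positions of the big
    key and hashing the probed bits.  Parameter choices are hypothetical,
    purely for illustration."""
    bits = []
    for pos in probes:
        byte, off = divmod(pos, 8)
        bits.append((big_key[byte] >> off) & 1)
    return hashlib.sha256(bytes(bits)).digest()[:16]

big_key = secrets.token_bytes(1 << 20)   # a 1 MiB big key kept on disk
probes = [secrets.randbelow(8 * len(big_key)) for _ in range(512)]
subkey = extract_subkey(big_key, probes)
```

The point of the subkey prediction lemma is that even after the malware has exfiltrated a large fraction of `big_key`, a freshly probed `subkey` like this one remains hard to predict, while the honest user pays mostly in storage, since each derivation touches only a few hundred bits.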
<h3 style="margin: 2pt 0cm 0pt;">
Backdoors in Pseudorandom
Number Generators: Possibility and Impossibility Results</h3>
<br />
<div style="margin: 0cm 0cm 8pt; text-align: justify;">
The last talk of the session focused on the concrete problem of backdoored Pseudorandom Number Generators (PRGs) and PRNGs with input: fundamental building blocks of cryptographic protocols that have already been successfully compromised in practice, as we learnt when the DUAL_EC_DRBG scandal came to light.</div>
In their paper, the authors revisit a previous abstraction of backdoored PRGs [DGA+15], which modeled the adversary (Big Brother) with weaker powers than it could actually have. By giving concrete backdoored PRG constructions, they show how that model fails. Moreover, they also study robust PRNGs with input, for which they show that Big Brother is still able to predict the values of the PRNG state backwards, and they give bounds on the number of previous phases it can compromise, depending on the state size of the generator.<br />
<br />
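To see why a backdoored PRG is so devastating, consider the following deliberately crude toy (my own hypothetical construction, not one from the paper or from DUAL_EC_DRBG): the generator's output is simply its state encrypted under a pad only the designer can recompute, so outputs pass casual inspection, yet Big Brother recovers the full state, and hence all future outputs, from a single output block.

```python
import hashlib

DESIGNER_KEY = b"known only to Big Brother"   # the planted backdoor

def _pad() -> bytes:
    # Fixed 16-byte pad derived from the designer's key (toy only:
    # reusing the pad makes the flaw visible even without the key).
    return hashlib.sha256(DESIGNER_KEY).digest()[:16]

def prg_next(state: bytes):
    """One step of the toy backdoored PRG: output the masked state,
    then update the state by hashing."""
    output = bytes(a ^ b for a, b in zip(state, _pad()))
    new_state = hashlib.sha256(state).digest()[:16]
    return output, new_state

def big_brother(output: bytes) -> bytes:
    """Recover the internal state from one output block; from there,
    every future output can be predicted."""
    return bytes(a ^ b for a, b in zip(output, _pad()))
```

A real backdoor, as in DUAL_EC_DRBG, hides the trapdoor inside a public-key relation between curve points, so the outputs are indistinguishable from random to anyone without the trapdoor; the toy above only illustrates the asymmetry between the designer and everyone else.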
<div style="line-height: normal; margin: 0cm 0cm 0pt;">
<br /></div>
[ADW09] J. Alwen, Y. Dodis, and D. Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In S. Halevi, editor, CRYPTO 2009, volume 5677 of LNCS, pages 36–54. Springer, Heidelberg, Aug. 2009.<br />
<br />
[DGA+15] Yevgeniy Dodis, Chaya Ganesh, Alexander Golovnev, Ari Juels, and Thomas Ristenpart. A formal treatment of backdoored pseudorandom generators. In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101–126, Sofia, Bulgaria, April 26–30, 2015. Springer, Heidelberg, Germany.Unknownnoreply@blogger.com0