Sunday, August 28, 2016

Crypto 2016: Breaking the Circuit Size Barrier for Secure Computation Under DDH

The CRYPTO 2016 Best Paper Award went to Boyle et al. [1]. The paper provides several new protocols based on the DDH assumption, with applications to 2PC (two-party computation), private information retrieval and function secret sharing.

Even more interestingly, the authors present a protocol that realises 2PC for branching programs in such a way that the communication complexity depends only on the input size, while the computation remains linear in the circuit size.

The central idea revolves around efficient evaluation of RMS (restricted multiplication straight-line) programs. The special feature of RMS programs is that they allow multiplications only between memory values and input values; additions between memory values come for free. Although this class seems quite restrictive, it covers the class of branching programs (logarithmic-depth Boolean circuits with polynomial size and bounded input).
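To make the restriction concrete, here is a minimal plaintext evaluator for a toy RMS-style program. The instruction names and format are my own and the paper's formalism differs in the details; the real protocol of course evaluates these instructions on encrypted and secret-shared values rather than in the clear.

```python
# Toy, in-the-clear evaluator for an RMS-style straight-line program:
# memory values may be added together, but products are only ever taken
# between an *input* value and a *memory* value.

def eval_rms(program, inputs):
    mem = {}
    for op in program:
        if op[0] == "load":          # mem[j] <- inputs[i]
            _, i, j = op
            mem[j] = inputs[i]
        elif op[0] == "add":         # mem[k] <- mem[i] + mem[j]
            _, i, j, k = op
            mem[k] = mem[i] + mem[j]
        elif op[0] == "mul_input":   # mem[k] <- inputs[i] * mem[j]  (the only product allowed)
            _, i, j, k = op
            mem[k] = inputs[i] * mem[j]
        elif op[0] == "output":      # return mem[i]
            return mem[op[1]]

# x0*(x1 + x2), written without ever multiplying two memory values together
prog = [("load", 1, 0), ("load", 2, 1), ("add", 0, 1, 2), ("mul_input", 0, 2, 3), ("output", 3)]
print(eval_rms(prog, [3, 4, 5]))     # 3 * (4 + 5) = 27
```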

In the 2PC evaluation of an RMS program, suppose the parties $P_1$ and $P_2$ hold a linearly secret-shared memory value $[y]$. When $P_1$ wants to share an input value $x$ with $P_2$, it sends an ElGamal encryption of $x$, namely $g^{xc}$ where $c$ is a symmetric ElGamal key. Clearly, the encryption is homomorphic with respect to multiplication, but how can we perform any operation between a linearly secret-shared (SS) value and an ElGamal encryption?

This is solved by introducing a distributed DLog procedure which converts the ElGamal ciphertexts into linearly secret-shared values. The method uses a truncated PRF and counts the number of steps until the PRF, evaluated on the ElGamal ciphertext, equals $0$. Unfortunately, this algorithm has some probability of outputting an incorrect result, but this can be fixed by evaluating multiple instances of the same protocol in parallel and then using an MPC protocol to select the majority result.
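The following toy sketch illustrates only the flavour of that counting idea, in a toy group and with made-up parameters (it is not the paper's construction): both parties hold group elements that differ by a factor $g^x$ for a small $x$, both walk forward until a pseudorandom "distinguished point" is hit, and the difference of their step counters gives $x$ except with small probability.

```python
# Toy illustration of the distributed DLog / share-conversion idea.
# Toy group Z_p^* and hash-based PRF; all parameters are illustrative only.
import hashlib

p = 2**61 - 1        # toy prime modulus
g = 3                # toy generator
T = 2**16            # the truncated PRF hits 0 roughly once every T steps

def prf_is_zero(elem: int) -> bool:
    """Truncated PRF: keep a few bits of a hash of the group element."""
    h = hashlib.sha256(str(elem).encode()).digest()
    return int.from_bytes(h[:4], 'big') % T == 0

def count_steps(w: int, max_steps: int) -> int:
    """Walk w, w*g, w*g^2, ... and count steps until the PRF outputs 0."""
    steps = 0
    while not prf_is_zero(w) and steps < max_steps:
        w = (w * g) % p
        steps += 1
    return steps

# Party 0 holds w0 = g^a, party 1 holds w1 = w0 * g^x for a small secret x.
a, x = 123456, 7
w0 = pow(g, a, p)
w1 = (w0 * pow(g, x, p)) % p
c0 = count_steps(w0, 10 * T)
c1 = count_steps(w1, 10 * T)
print(c0 - c1)       # equals x unless a distinguished point falls in between
                     # (the small failure probability handled by majority voting)
```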

Of course, there are some caveats in the basic scheme, such as converting the key generation procedure to a public-key one and removing the circular security assumption on the key. These are presented gradually by the authors, which eases the reader's understanding of the ideas.

What I find neat is that at the end of the paper we can easily see how to reduce the communication for general 'dense' arithmetic circuits: split them into multiple low-depth chunks and then apply RMS programs for each gate (since an addition or multiplication gate can be represented as a branching program).

Of course, we can spot some open problems left as future work, such as:
  1. Extending the protocols to larger classes of programs than branching programs.
  2. The protocol only works for $2$ parties. Can we achieve constant communication for multiple parties without using FHE?
  3. Can we make the protocol secure against malicious parties in some way other than a generic compiler as in [2]?

[1]: Boyle, Elette, Niv Gilboa, and Yuval Ishai. "Breaking the Circuit Size Barrier for Secure Computation Under DDH." CRYPTO 2016.
[2]: Ishai, Yuval, et al. "Cryptography with constant computational overhead." Proceedings of the fortieth annual ACM symposium on Theory of computing. ACM, 2008.

Friday, August 26, 2016

CHES 2016: On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking

During the morning session on the final day of CHES 2016, Dahmun Goudarzi presented his paper, co-authored with Matthieu Rivain, on bit-sliced higher-order masking.

Bit-sliced higher-order masking of S-boxes is an alternative to higher-order masking schemes in which an S-box is represented by a polynomial over a binary finite field. The basic idea is to bit-slice the Boolean circuits of all the S-boxes used in a cipher round. Securing a Boolean AND operation, needed in the bit-sliced approach, is significantly faster than securing a multiplication over a binary finite field, needed in polynomial-based masking schemes. On the other hand, the number of AND operations required in the former case is significantly higher than the number of field multiplications required in the latter. Even so, the use of bit-slicing with relatively large registers (for instance, 32-bit registers) previously led the same authors to demonstrate significant improvements over polynomial-based masking schemes for specific block ciphers such as AES and PRESENT [GR16]. However, no generic method to apply bit-sliced higher-order masking to arbitrary S-boxes was previously known, and proposing such a method is one of the main contributions of the current work.
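As an illustration of the kind of AND gadget being counted here, below is a minimal sketch of the standard ISW-style masked AND, written bitslice-wise on 32-bit words. The share count, word size and helper names are my own choices, not the paper's implementation.

```python
# Minimal sketch of an ISW-style masked AND over Boolean shares, operating
# bitslice-wise on 32-bit words. Illustrative only.
import secrets

def rand_word() -> int:
    return secrets.randbits(32)

def share(x: int, n: int):
    """Split a word into n XOR-shares."""
    s = [rand_word() for _ in range(n - 1)]
    last = x
    for si in s:
        last ^= si
    return s + [last]

def unshare(shares) -> int:
    out = 0
    for si in shares:
        out ^= si
    return out

def masked_and(a, b):
    """Given XOR-shares of words a and b, return XOR-shares of a & b."""
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = rand_word()                       # fresh randomness per share pair
            r_ji = (r_ij ^ (a[i] & b[j])) ^ (a[j] & b[i])
            c[i] ^= r_ij
            c[j] ^= r_ji
    return c

x, y = 0b10101010, 0b01101100
assert unshare(masked_and(share(x, 4), share(y, 4))) == (x & y)
```

Each call consumes a fresh random word per pair of shares, which is why the number of AND gates in the circuit directly drives both the running time and the randomness requirement discussed next.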

The running time and the randomness requirement of the bit-sliced masking technique depend mainly on the multiplicative complexity, i.e., the number of AND gates in the masked circuit. A more precise measure is in fact the parallel multiplicative complexity. While previous works showed how to obtain circuits that are optimal w.r.t. multiplicative complexity for small S-boxes by using SAT solvers, solving the same problem for 6-bit or larger S-boxes had remained an open problem. In the current work, the authors propose a new heuristic method to obtain Boolean circuits of low multiplicative complexity for arbitrary S-boxes. The proposed method follows the same approach as a previous work [CRV14] that computes efficient polynomial representations of S-boxes over binary finite fields. The authors provide a heuristic analysis of the multiplicative complexity of their method that is quite close to the experimental results for S-box sizes of practical relevance. Finally, they report implementations of the bit-sliced masking technique on a 32-bit ARM architecture, one evaluating sixteen 4-bit S-boxes in parallel and another evaluating sixteen 8-bit S-boxes in parallel. The timing results indicate that the bit-sliced masking method performs considerably better than polynomial-based masking methods once the number of shares exceeds a certain bound.
    
References:

[CRV14] Jean-Sébastien Coron, Arnab Roy, Srinivas Vivek: Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. CHES 2014 & JCEN 2015.
 
[GR16] Dahmun Goudarzi and Matthieu Rivain. How Fast Can Higher-Order Masking Be in Software? Cryptology ePrint Archive, 2016.

Tuesday, August 23, 2016

CRYPTO 2016 – Backdoors, big keys and reverse firewalls on compromised systems

The morning of the second day at CRYPTO 2016 started with a track on “Compromised Systems”, consisting of three talks covering different scenarios and attacks usually disregarded in the vast majority of the cryptographic literature. They also all shared a concern about mass surveillance.

Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have worked hard on this scenario, developing a whole range of tools with different notions of security and setup assumptions, one of the most common being that Alice has access to a trusted computer with a proper implementation of the cryptographic protocol she wants to run. The harsh truth is that this is a naïve assumption. The Snowden revelations showed us that powerful adversaries can and will corrupt users' machines via extraordinary means, such as subverting cryptographic standards, intercepting and tampering with hardware on its way to users, or using Tailored Access Operations units.

Nevertheless, the relevance of these talks was not just a matter of a “trending topic” or distrust of the authoritarian and unaccountable practices of intelligence agencies. More frequently than we would like, presumably accidental vulnerabilities (such as POODLE, Heartbleed, etc.) are found in popular cryptographic software, leaving the final user unprotected even when using honest implementations. In the meantime, as Paul Kocher recalled in his invited talk the following day, it goes unnoticed by most of our community that, when we design our primitives and protocols, we blindly rely on a mathematical model of reality that sometimes has little to do with it.

In the same way as people in the CHES community have become more aware (mainly also the hard way) that relying on the wrong assumptions leads to false confidence in the security of deployed systems and devices, I think those of us not that close to hardware should also step back and look at how realistic our assumptions are. This includes, as these talks addressed in different ways, starting to assume that some standards might be compromised at some point (and most systems will be), and understanding what can still be done in those cases.

What would a cryptography that worries not only about prevention but also about the whole security cycle look like? How can the cryptography and information security communities come closer?

Message Transmission with Reverse Firewalls— Secure Communication on Corrupted Machines


The reverse firewalls framework was recently introduced by Mironov and Stephens-Davidowitz, in a paper that has already been discussed in our group’s seminars and on this same blog. A secure reverse firewall is a third party that “sits between Alice and the outside world” and modifies her sent and received messages so that, even if her machine has been corrupted, Alice’s security is still guaranteed.

Their elegant construction does not require the users to place any additional trust in the firewall, and relies on the underlying cryptographic schemes being rerandomizable. With this threat model and these rerandomization capabilities, they describe impossibility results as well as concrete constructions.

For example, in the context of semantically secure public-key encryption, in order to provide a reverse firewall for Bob, the scheme must allow a third party to rerandomize a public key and to map ciphertexts under the rerandomized public key to ciphertexts under the original public key. In the same context, Alice’s reverse firewall must be able to rerandomize the ciphertext she sends to Bob, in such a way that $\mathsf{Dec}(\mathsf{Rerand}(\mathsf{Enc}(m)))=m$.
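As a concrete toy example of that rerandomization property for "textbook" ElGamal over a small prime-order subgroup (parameters chosen purely for illustration, not the construction from the paper):

```python
# Toy sketch of ciphertext rerandomization for "textbook" ElGamal, just to
# illustrate Dec(Rerand(Enc(m))) = m. Parameters are tiny and insecure.
import secrets

# small safe prime p = 2q + 1 and a generator g of the order-q subgroup
p, q, g = 23, 11, 4

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def enc(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), (m * pow(pk, r, p)) % p)

def rerand(pk, ct):
    """Multiply in a fresh encryption of 1: (c1*g^s, c2*pk^s)."""
    c1, c2 = ct
    s = secrets.randbelow(q - 1) + 1
    return ((c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p)

def dec(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 * c1^{-sk}

sk, pk = keygen()
m = 9                                          # toy message in the subgroup
assert dec(sk, rerand(pk, enc(pk, m))) == m
```

The firewall only needs the public key and fresh randomness: it never learns $m$ and never needs Alice's or Bob's secrets.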

Big-Key Symmetric Encryption: Resisting Key Exfiltration


The threat addressed in Bellare’s talk is that of malware that aims to exfiltrate a user's key, likely using her system’s network connection. In their work, they design schemes that aim to protect against this kind of Advanced Persistent Threat by making secret keys so big that their undetected exfiltration by the adversary is difficult, while keeping the user’s overhead almost exclusively in terms of storage rather than speed.

Their main result is a subkey prediction lemma, which gives a nice bound on an adversary’s ability to guess a modest-length subkey derived by randomly selecting bits of a big key from which partial information has already been leaked. This approach, known as the Bounded Retrieval Model, has been (in the words of the authors) a largely theoretical area of research, whereas they give a fully concrete security analysis with good numerical bounds, constants included.
Other highlighted aspects of their paper were the concrete improvements over [ADW09] and the key encapsulation technique, carefully based on different security assumptions (random oracle, standard model).

Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results


The last talk of the session focused on the concrete problem of backdoored Pseudorandom Number Generators (PRGs) and PRNGs with input, which are fundamental building blocks in cryptographic protocols and have already been successfully compromised, as we learnt when the DUAL_EC_DRBG scandal came to light.
In their paper, the authors revisit a previous abstraction of backdoored PRGs [DGA+15] which modelled the adversary (Big Brother) with weaker powers than it could actually have. By giving concrete backdoored PRG constructions, they show how that model fails. Moreover, they also study robust PRNGs with input, for which they show that Big Brother is still able to predict the values of the PRNG state backwards, and they give bounds on the number of these previous phases that it can compromise, depending on the state size of the generator.


[ADW09] J. Alwen, Y. Dodis, and D. Wichs. Leakage-resilient public-key cryptography in the bounded-retrieval model. In S. Halevi, editor, CRYPTO 2009, volume 5677 of LNCS, pages 36–54. Springer, Heidelberg, Aug. 2009.

[DGA+15] Yevgeniy Dodis, Chaya Ganesh, Alexander Golovnev, Ari Juels, and Thomas Ristenpart. A formal treatment of backdoored pseudorandom generators. In Elisabeth Oswald and Marc Fischlin, editors, EUROCRYPT 2015, Part I, volume 9056 of LNCS, pages 101–126, Sofia, Bulgaria, April 26–30, 2015. Springer, Heidelberg, Germany.

CHES 2016: Flush, Gauss, and Reload – A Cache Attack on the BLISS Lattice-Based Signature Scheme

Leon Groot Bruinderink presented at CHES a cache attack against the signature scheme BLISS, a joint work with Andreas Hülsing, Tanja Lange and Yuval Yarom.
The speaker first gave a brief introduction to BLISS (Bimodal Lattice Signature Scheme), a signature scheme whose security is based on lattice problems over NTRU lattices. Since such problems are believed to be hard even in the presence of quantum computers, BLISS is a candidate cryptographic primitive for the post-quantum world. In addition, its original authors proposed implementations, making BLISS a notable example of a post-quantum algorithm deployable in real use-cases.
Informally speaking, a message $\mu$ is encoded into a challenge polynomial $\mathbf{c}$, which is then multiplied by the secret key $\mathbf{s}$ according to the following formula: $$ \mathbf{z} = \mathbf{y} + (-1)^b ( \mathbf{s} \cdot \mathbf{c} ) $$ where the bit $b$ and the noise polynomial $\mathbf{y}$ are unknown to the attacker. It is easy to see that if the attacker gains information about the noise polynomial, some linear algebra leads her to the secret key. The coordinates of $\mathbf{y}$ are sampled independently from a discrete Gaussian distribution, which can be implemented in several ways. The ones targeted in the paper are CDT and rejection sampling. Only the first method was covered during the talk, so I focus on that one in this blog post.
The idea behind CDT sampling is to precompute a table according to the cumulative distribution function of the discrete Gaussian, draw a random element and use it as an index into the table; the element in the cell indexed by the random number is returned. Elements returned by this procedure are distributed statistically close to a discrete Gaussian. Although fast, this approach has the drawback of needing to store a large table, which is known to make it vulnerable to cache attacks.
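A rough sketch of what a CDT sampler looks like (toy parameters, nothing like the actual BLISS implementation) makes it clear where the secret-dependent table accesses come from:

```python
# Toy CDT sampler for a centred discrete Gaussian. The search over the
# precomputed table performs secret-dependent memory accesses, which is what
# a Flush+Reload attacker observes through the cache.
import math, random, bisect

sigma, tail = 2.0, 12                         # illustrative parameters
values = list(range(-tail, tail + 1))

weights = [math.exp(-v * v / (2 * sigma * sigma)) for v in values]
total = sum(weights)
cdt, acc = [], 0.0
for w in weights:
    acc += w / total
    cdt.append(acc)
cdt[-1] = 1.0                                 # guard against float rounding

def sample() -> int:
    u = random.random()                       # uniform in [0, 1)
    return values[bisect.bisect_right(cdt, u)]

print([sample() for _ in range(10)])
```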
The peculiarity of the attack carried out by Bruinderink et al. is that, since the cache measurements do not reveal the exact positions at which the sampling table is accessed, the equations learned are only correct up to a small error, say $\pm 1$. The authors managed to translate this issue into a shortest vector problem over lattices. They then ran the LLL algorithm to solve the problem and retrieve correct equations.

Thursday, August 18, 2016

Crypto & CHES 2016: 20 years of leakage, leakage, leakage

Paul Kocher gave an invited presentation at Crypto and CHES, which was certainly deserved on the 20th anniversary of his paper, which has more than 3500 citations on Google Scholar. The content of his talk ranged from tales of his work, over philosophical considerations on security, to an outlook on the future.

It was interesting to see how Kocher, a biologist by training, got into cryptography and managed to break implementations via side channels with rather cheap equipment, including parts from a toy electronics kit. He claimed that they could break every smart card at the time, which was of course disputed by the vendors.

In the philosophical part of the talk, the speaker brought up various analogies that I found interesting even though they did not provide direct advice. For example, he compared security to fractals that reveal ever more detail the closer you look. More practically, Kocher mentioned building codes and the aviation industry. Both minimize risks by best practices that include safety margins, even though these incur extra costs. However, I could not help thinking that aviation killed a fair number of people before the standards improved.

On the outlook, Kocher did not seem particularly optimistic. The world is already complex and full of devices that regularly exhibit security problems, and it will get worse with the Internet of Things, where there will be more devices with longer life spans produced by vendors with less security knowledge, while the impact of vulnerabilities will be even higher. He predicted that security breaches will get worse for at least 3-5 years.

In terms of constructive ideas, he suggested moving security into chips, because there it cannot be undermined by lower layers. There has already been a significant move in that direction with Intel's SGX, but there are of course other approaches.

Tuesday, August 16, 2016

Crypto 2016: Network Oblivious Transfer

On the first day of CRYPTO 2016, Adam Sealfon presented his work with Ranjit Kumaresan and Srinivasan Raghuraman on Network Oblivious Transfer. Oblivious transfer (OT) is a two-party protocol in which party $A$ inputs two strings and party $B$ a bit $b$: $B$ receives exactly one of the strings according to his bit and learns nothing about the other string, while $A$ does not find out which of the two strings $B$ chose. If two parties are able to engage in an OT protocol, we say that there is an OT channel between them. OT channels are a good thing to study because they are:
  • Useful: OT has been called MPC (multi-party computation) complete, and the Atom of MPC, since many MPC protocols can be realised using OT;
  • Achievable: e.g. trapdoor permutations can be used to realise them.
Suppose we have a network in which all parties have secure connections to all other parties, and some of the parties also have OT channels between them. What can we say about the ability of the network to allow computation of OT-based MPC? In 2007, Harnik et al. asked "How Many Oblivious Transfers are Needed for Secure Multiparty Computation?" and gave a lower bound on the number of OT channels a network must have. The paper presented here gives an upper bound matching that lower bound, and hence allows a complete characterisation of the networks in which OT channels can be established to enable secure MPC.

For some intuition as to what this all means, consider the following three graphs. Nodes represent parties in the network, and edges represent OT channels. All parties are assumed to have secure connections to all other parties and we want to have an OT channel between $A$ and $B$.



In Figure 1, $A$ and $B$ have an OT channel between them, so we're done. In Figure 2, it turns out that the connections in place already suffice to provide $A$ and $B$ with an OT channel. However, in Figure 3, we cannot form an OT channel between $A$ and $B$.

The reason some graphs admit OT channels between certain parties and some do not concerns a property known as splittability. A graph $G$ is called $k$-unsplittable (for $k<n/2$) if for any two disjoint sets of $k$ vertices, there is an edge from a vertex in one set to a vertex in the other; $G$ is called $k$-splittable if this does not hold. The main theorem of the paper states that, assuming a semi-honest adaptive adversary controlling no more than $t$ parties, two parties, $A$ and $B$, in the network can establish an OT channel if and only if
  1. $t<n/2$, or
  2. $t \ge n/2$ and the graph is $(n-t)$-unsplittable.
Adding the edge $(A,B)$ to Figures 2 and 3 suggests the theorem says the right thing: doing so in Figure 3 makes every 2-split of the graph have an edge between the two partitions, matching the fact that the augmented network trivially admits an OT channel between $A$ and $B$.
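For small examples one can check the splittability condition directly from the definition; here is a brute-force sketch (the graph and labels are made up, not the paper's figures):

```python
# Brute-force k-splittability check, straight from the definition above.
# Only sensible for tiny example graphs.
from itertools import combinations

def is_k_splittable(vertices, edges, k):
    """True iff there exist two disjoint k-sets of vertices with no edge between them."""
    edge_set = {frozenset(e) for e in edges}
    for s in combinations(vertices, k):
        rest = [v for v in vertices if v not in s]
        for t in combinations(rest, k):
            if not any(frozenset((u, v)) in edge_set for u in s for v in t):
                return True        # found a split with no OT edge across it
    return False

# Toy 4-party network with OT edges A-C and D-B only.
V = ['A', 'B', 'C', 'D']
E = [('A', 'C'), ('D', 'B')]
print(is_k_splittable(V, E, 2))    # True: {A, C} vs {B, D} has no crossing OT edge
```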

In proving this theorem, the paper provides a good description of the limits of OT-based MPC.

Crypto 2016: A subfield lattice attack on overstretched NTRU assumptions


This year's Crypto kicked off this morning in sunny Santa Barbara. The early afternoon session in track A covered asymmetric cryptography and cryptanalysis. Shi Bai presented "A subfield lattice attack on overstretched NTRU assumptions: Cryptanalysis of some FHE and Graded Encoding Schemes", which is joint work with Martin Albrecht and Léo Ducas. The talk consisted of three main parts: an introduction, a presentation of the subfield attack and a discussion of its implications.

Introduction

The set-up of the problem is the usual one. Let $\Phi_m$ be a power-of-two cyclotomic polynomial and let $R$ be the ring $R = \mathbb{Z}[x]/\Phi_m$. We let $\lambda$ be the security parameter, $n=\phi(m)=\mathrm{poly}(\lambda)$, $q=q(\lambda)$ and $\sigma = \mathrm{poly}(\lambda)$. The NTRU problem is the following.

NTRU Problem: We are given a ring $R$ of rank $n$, a modulus $q$, a distribution $D$ and a target norm $\tau$. Given an element $h = [gf^{-1}]_q$ (subject to $f$ being invertible modulo $q$) for $f, g \leftarrow D$, the NTRU$(R,q,D,\tau)$ problem is to find a vector $(x,y)\neq (0,0) \in R^2 \bmod q$ of Euclidean norm smaller than $\tau\sqrt{2n}$ in the lattice

$\Lambda_h^q = \{ (x,y)\in R^2 : hx-y = 0 \mod q \}$.

We call the above the NTRU lattice.
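For concreteness, one standard way to write down a basis of $\Lambda_h^q$ (a well-known presentation of NTRU lattices, with rows as lattice vectors and $\mathrm{Rot}(h)$ denoting the matrix of multiplication by $h$ in $R$) is

$B_h = \begin{pmatrix} I_n & \mathrm{Rot}(h) \\ 0 & q\,I_n \end{pmatrix}$,

so that $(f,g)$ is a lattice vector whenever $g \equiv hf \pmod q$, which is exactly what makes it the unusually short vector discussed below.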

What the authors mean by an overstretched NTRU assumption is the use of a super-polynomial modulus $q$, as utilised in the context of NTRUEncrypt, signature schemes, Fully Homomorphic Encryption schemes and some candidate multilinear maps.

The starting point of the attack is that whenever

$ |f| \approx |g| \approx \sqrt{n}\sigma \ll \sqrt{nq}$,

then the NTRU lattice has an unusually short vector. We also note that, for some target norms, recovering a short enough vector is sufficient to carry out the attack. In particular, finding a vector of length $o(q)$ would break applications such as encryption. We note, however, that in practice parameters can indeed be set so as to avoid this attack.

The attack

Let $K$ be the cyclotomic field $\mathbb{Q}(x)/\Phi_m$ and $L = \mathbb{Q}(x)/\Phi_{m'}$ a subfield, where $m'|m$, and let $\zeta_m$ and $\zeta_{m'}$ be the $m$-th and $m'$-th roots of unity, respectively. The authors here work with power-of-two cyclotomics, but we note that such a subfield can always be found; indeed, we can take the maximal real subfield.


The strategy is as follows. We use the fact that $L$ is a subfield of $K$ to use the norm map

$N_{K/L}: K \rightarrow L$

to map NTRU instances down to the subfield, assuming we are working with an overstretched (large) modulus $q$. We then apply lattice reduction (e.g. BKZ) in the subfield, solving a potentially easier problem.

For an NTRU instance $(h,f,g)$ in the full field, we norm it down to an instance $(h',f',g')$ in the subfield. Now the vector $(f',g')$ is in the subfield NTRU lattice $\Lambda_{h'}^q$ and, depending on the parameters, it may be unusually short. The attack then proceeds by running a lattice reduction algorithm in the subfield, which produces a vector $(x',y')$. If that vector is short enough, it is in fact an $\mathcal{O}_K$-multiple of $(f',g')$, i.e. $(x',y')=v(f',g')$. This allows us to lift $(x',y')$ to the full NTRU lattice $\Lambda_{h}^q$ and thus potentially recover non-trivial information about $f$ and $g$.
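The reason the normed-down triple is again an NTRU instance is simply that the field norm is multiplicative, so (under the invertibility condition) one has something like

$h' = N_{K/L}(h) \equiv N_{K/L}(g)\, N_{K/L}(f)^{-1} = g' f'^{-1} \pmod q$,

while for overstretched parameters $f'$ and $g'$ remain small enough for $(f',g')$ to still be an unusually short vector of $\Lambda_{h'}^q$.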

Consequences

This produces a sub-exponential attack on bootstrappable YASHE. The work also implies an attack on the latest GGH construction without an encoding of zero. Depending on the multilinear degree, this can even become a polynomial attack. Compared to the prior state of the art, this is the best known attack.

In terms of limitations, if the normed-down vector $(f',g')$ is not unusually short, then this attack fails. Equally, NTRU-743, NTRU-401 and BLISS are essentially immune. The conclusion of the talk was that, in an NTRU set-up, the combination of a subfield, a large modulus and a small $\sigma$ should be considered insecure.

Monday, August 15, 2016

Crypto 2016: Provable Security for Symmetric Cryptography

On the morning that the CAESAR competition entered its third round, track A of CRYPTO 2016 began with a session on provable security for symmetric cryptography. It contained 5 talks, all of which were very well presented. In each case the results were given in context, along with a sketch of the key techniques behind their proofs, and very clear diagrams.

First up was Viet Tung Hoang, presenting joint work with Stefano Tessaro on the multi-user security of key-alternating ciphers. Key-alternating ciphers can be seen as a generalisation of the Even-Mansour construction, and are a natural idealisation of the AES design. Often work is done in the single-user setting, leaving multi-user security to be reached via a hybrid argument. However, this leads to a reduction in security linear in the number of users.
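Schematically, and in my own notation, the naive hybrid argument gives a bound of the form

\[ \mathbf{Adv}^{\mathrm{mu}}_{E}(\mathcal{A}) \;\le\; u \cdot \mathbf{Adv}^{\mathrm{su}}_{E}(\mathcal{B}), \]

where $u$ is the number of users, so any single-user bound degrades by a factor of $u$; the point of this work is to avoid paying that factor.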

The speaker explained two ways in which their work improves upon previous applications of the H-coefficient technique, which bounds adversarial advantage using the statistical distance between the possible sets of transcripts, allowing them to achieve tighter bounds than would have been possible previously. They termed the first of these the "Expectation Method": replacing an upper bound with an expected-value bound significantly improves the tightness of one of the internal bounds (specifically, when one is measuring the total probability of an adversary being able to distinguish the systems from a good transcript). The second is a tightening of the hybrid argument, achieved by pushing the hybridisation step back to the transcript stage rather than waiting until the final bound has been collected. These are both very neat observations, and it will be interesting to see how easily they can be applied to other related problems.

Next, Yannick Seurin gave the first of his two talks, on the Counter-in-Tweak (CTRT) mode for bootstrapping AE from a TBC (tweakable block cipher), based on joint work with Thomas Peyrin. In this work, the authors set out to construct an AE scheme that is:
  • Beyond-Birthday-Bound Secure in the nonce-respecting case
  • Birthday-bound secure in the nonce-abusing case
They do so using a generic-composition-style approach, demonstrating that a slight variant of SIV mode can be used to combine an encryption scheme and an authentication mechanism that each meet these security requirements, such that their composition inherits the security. For their result, an encryption routine is required that takes both a random IV and a nonce. To get this, Yannick explained how one can use a tweakable block cipher to improve upon classic counter mode by putting the counter into the tweak. Thus their scheme uses a counter (in the tweak), initialised with a random IV, to encrypt the nonce; its security is proven using a neat little balls-and-bins game.
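Based on the description above (my rendering of the talk, not a verbatim statement of the scheme), the encryption part looks roughly like

\[ C_i \;=\; M_i \,\oplus\, \widetilde{E}_K^{\,\mathrm{IV}+i}(N), \]

i.e. the $i$-th keystream block is the TBC applied to the nonce $N$ under tweak $\mathrm{IV}+i$.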


After a short break, Bart Mennink introduced the XPX construction. His construction generalises most single-round tweakable Even-Mansour constructions by considering them all as instances of the TBC

\[ \begin{array}{cccccccc} & t_{11}K \oplus t_{12}P(K) & & t_{21}K \oplus t_{22}P(K) \\ & \downarrow & & \downarrow \\ m & \to \oplus \to & P & \to \oplus \to & c \\ \end{array} \]

 under certain sets of tweaks $(t_{11},t_{12},t_{21},t_{22}) \in \mathcal{T}$ (apologies for the terrible diagram!). After describing conditions for such tweak sets to be weak (i.e., totally insecure), he explained that all other sets are in fact reasonably secure. Developing this further, the work then investigates certain forms of related-key security, and the conditions one must impose on the tweak set to achieve them. Bart then explained how these results apply to some pre-existing schemes, recovering the security of the CAESAR candidates Minalpher and Prøst-COPA (for which the work also demonstrates a certain degree of related-key security). Finally, he showed how these results can be applied to the Chaskey MAC algorithm, and suggested a possible modification that would (at the cost of slightly more expensive key rotation) provide some related-key security, a method that might also be applicable to sponge-based schemes.
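Written out as a formula, my reading of the XPX diagram above is

\[ \widetilde{E}_K\big((t_{11},t_{12},t_{21},t_{22}),\, m\big) \;=\; P\big(m \oplus t_{11}K \oplus t_{12}P(K)\big) \,\oplus\, t_{21}K \oplus t_{22}P(K), \]

i.e. a single public permutation call with key-dependent, tweak-parameterised masks applied before and after it.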

The penultimate talk was on "Indifferentiability of 8-Round Feistel Networks", given by Yuanxi Dai describing his work with John Steinberger. It is the next in a long line of papers seeking to describe the extent to which one can substitute a Feistel network for a random permutation, even when the adversary has access to the internal functions. The presentation was well delivered and described the overall intuition behind the proof and the design of their simulator, but the details of such results are generally very complicated indeed.

Finally, Yannick Seurin returned to describe EWCDM, a block-cipher-based MAC construction that one could use to instantiate the CTRT mode described previously more efficiently, based on joint research with Benoît Cogliati. It looks something like:
\[ \begin{array}{cccccccc} & & & & N & \to & \downarrow \\ & & & & \downarrow & & \downarrow \\ & & & & E_{k_1} & & \downarrow \\ & & & & \downarrow & & \downarrow \\ M&\to&\text{Hash} & \to & \oplus & \leftarrow & \leftarrow \\ & & & & \downarrow & & \\ & & & & E_{k_2} & & \\ & & & & \downarrow & & \\ & & & & T & & \\ \end{array} \]
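In formula form, my reading of the diagram is that the tag is computed as

\[ T \;=\; E_{k_2}\big( E_{k_1}(N) \,\oplus\, N \,\oplus\, \mathrm{Hash}(M) \big), \]

a Wegman-Carter-style MAC whose output is encrypted once more under an independent key.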
It is secure up to roughly $2^{n/2}$ queries under nonce reuse, and achieves security up to $2^{2n/3}$ queries in the nonce-respecting setting. Moreover, in the nonce-respecting case the actual security level might be even better, since the best known attack currently sits at around $2^{n}$ queries, leaving scope for further research.

Thursday, August 11, 2016

USENIX 2016: How to Scrutinize "Password1"

On the first day of USENIX, one talk particularly caught my attention. Daniel Lowe Wheeler from Dropbox talked about password strength estimation, and he started with the USENIX online account registration form, which rates "password" as a fair password, "Password" as good, and "Password1" as strong, while "zjwca" is rated as weak. He argued that, while password guessing has improved over the last 35 years, password policy has not evolved much since 1979. Moreover, password policies are inconsistent and not always helpful. Two previous studies found 142 distinct policies on 150 sites and 50 distinct policies on 50 sites, respectively.

To put an end to this, the authors propose a client-side piece of JavaScript that takes 3 ms to run and gives accurate estimates for online attacks by the best available algorithms. The core estimator takes the minimum rank of the input over lists such as top passwords ("password", "123456", etc.), top surnames (Li, Khan, etc.), and user-specific information (user name, etc.). It also considers word transformations such as l33t substitutions, capitalisation, and reversing, as well as keyboard patterns and sequence patterns. All this information is combined into an estimate of how many guesses a sophisticated algorithm would need to find the password.
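A heavily simplified sketch of the "minimum rank over lists" idea looks as follows (the real zxcvbn estimator is much more elaborate; the lists, transformations and brute-force fallback below are purely illustrative):

```python
# Toy version of the core idea: the estimated guess count is the minimum
# rank of any normalised form of the password over a set of ranked lists.
COMMON_PASSWORDS = ["password", "123456", "qwerty", "letmein"]
COMMON_SURNAMES  = ["li", "khan", "smith", "garcia"]
LEET = str.maketrans("4301!5$7", "aeolisst")

def candidate_forms(pw: str):
    """A few simple normalisations: lowercase, un-l33t, reversed."""
    base = pw.lower()
    yield base
    yield base.translate(LEET)
    yield base[::-1]

def guess_estimate(pw: str, user_inputs=()):
    """Minimum rank of any form over any list; fall back to brute force."""
    lists = [COMMON_PASSWORDS, COMMON_SURNAMES, [u.lower() for u in user_inputs]]
    best = None
    for form in candidate_forms(pw):
        for lst in lists:
            if form in lst:
                rank = lst.index(form) + 1
                best = rank if best is None else min(best, rank)
    return best if best is not None else max(len(set(pw)), 2) ** len(pw)

print(guess_estimate("P4ssword"))   # un-l33t + lowercase hits "password": rank 1
print(guess_estimate("zjwca"))      # no list match: falls back to a brute-force estimate
```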

To evaluate the estimates, the authors used a large data set consisting of leaked passwords as well as other sources. On this data set, other password strength estimators perform quite badly, overestimating the number of attempts for a lot of passwords that would be found in fewer than $10^5$ tries. A particular offender is NIST entropy, which is completely oblivious to real-world choices such as "password". In comparison, zxcvbn overestimates for very few passwords.

The software is available at https://github.com/dropbox/zxcvbn, and it is already used by a number of companies, most notably WordPress.