Thursday, November 29, 2012

Kick off to CARDIS 2012


Today was the first day of the CARDIS 2012 conference, which focuses on both research and applications of smart card devices. The program for today covered Java Card security and smart card protocols. The pre-proceedings for the conference can be found here: 


This article is just a brief overview of the talks and presentations given during the session.

The first paper presents the problem of static verification with respect to Java Card applets and how the current standards do not efficiently detect fault injection attacks. The authors propose the use of type and bounds protection to achieve efficient run-time verification. The paper presents an attack which would normally compromise Java Card but is thwarted by the countermeasures proposed. The authors simulate and benchmark the countermeasures to show that a hardware implementation of the countermeasures will result in an additional overall overhead of 10%, compared with the software implementation at 115%. Further work is being carried out to develop a toolchain which will facilitate compatibility and transparency for Java Card developers. The follow-up Q&A session highlighted that there has not yet been any work on a hardware implementation and therefore the cost of the additional logic is not yet known.

The second paper focuses on the performance impact that software run-time verifiers have on smart cards. The paper describes a method to implement dynamic security levels on a smart card, which allows the detection of an attack and the activation of additional security countermeasures. A trade-off is made to increase the day-to-day performance of the card at the cost of allowing an adversary a one-shot attack before the security level is raised. Several points were raised during the Q&A session about both the overall practicality of the scheme and the assumption that an attacker will not be able to influence the trigger for the additional security levels.

The third and final paper on Java Card security presented a fault injection attack which focuses on low-precision, localised targets. The author describes a combination of both logical and hardware attacks to maximise the probability of successful fault injection. The results presented are from a simulation model of a fault injection set-up, and it is therefore difficult to determine the feasibility of the attack. The theoretical outcome of the paper is that an attacker need not have a high degree of precision when injecting a fault to achieve a usable result; such precision is often difficult to achieve in the real world.

The first paper on smart card protocols describes an improvement to the cryptoGPS public-key authentication protocol which is also practical for real-world application. The original protocol requires the use of both a PRF and a hash function in order to generate a valid response. The proposed modifications allow for the use of a block cipher in place of both these functions. The net result offers an asymmetric authentication protocol which lowers the area requirements of the tag and improves the performance of the protocol. The Q&A session offered further insight into the practicality of the scheme: the limited memory of smart cards means that regular updates to the tag are needed in order to maintain a set of valid commitments.

The final paper for the day focused on unlinkability and revocation of smart cards. The authors highlight the leakage of personal data from a smart card device during a legitimate transaction with a trusted entity. That is to say, even if an entity is trusted, a user will not necessarily want to share all the details available on the smart card. Schemes exist to achieve this property, but none which also allow for practical revocation and de-anonymisation of a corrupt user. The performance overhead was found to be fairly high in the majority of instances, and further work is therefore being carried out to improve the scheme.

Wednesday, November 28, 2012

Multi-Instance Security and its Application to Password-Based Cryptography

Gareth and Marcel discussed the paper on Multi-Instance Security and its Application to Password-Based Cryptography from this year's CRYPTO in the latest study group. In this paper, Bellare, Tessaro and Ristenpart introduce the theory of multi-instance security to deal with settings where one or more instances can feasibly be compromised, but it should remain infeasible to compromise all of some large number m of instances. Here, infeasible means that the effort of compromising all m instances should at least increase linearly with m. They define security metrics and games to model security in such a setting and examine the relations between their security notions. Finally, they analyse the password-based Key Derivation Functions (KDFs) from the PKCS#5 standard to show that they meet their new multi-instance security definition.

Consider the setting of password-based encryption, where a message M is encrypted under a key derived from a password. Passwords are often chosen poorly in practice, which means that the set D of likely passwords (known as a dictionary) is limited and can be exhausted. Thus, an attacker can apply brute force to retrieve the password using cN hashes, where N is the size of the dictionary D and c is the number of hashes performed by the KDF. Now, increasing c would make it harder for the attacker to break a password, but it would also decrease the efficiency of the system. Furthermore, in the multi-user setting, the effort of the attacker to break the passwords of m users is still roughly cN, because he only needs to exhaust the dictionary. To protect against such attacks it is possible to pick a random bitstring, called the salt, and hash it together with the password when deriving the key. This salt is sent together with the message, so only the password is needed to derive the key. Now, the effort of the attacker is increased to mcN if the salt is large enough, because the attacker would need to create a dictionary for each distinct salt value that is used.
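As a rough sketch (with illustrative parameter values), the salted, iterated derivation described above can be written using Python's standard library, which implements PBKDF2 from PKCS#5:

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes, c: int = 100_000) -> bytes:
    # Each password guess costs the attacker c iterated hashes, and the
    # random salt forces a separate dictionary computation per user.
    return hashlib.pbkdf2_hmac("sha256", password, salt, c, dklen=32)

salt = os.urandom(16)               # stored in the clear alongside the ciphertext
key = derive_key(b"hunter2", salt)  # 32-byte key for the encryption scheme
```

With a large enough salt, two users with the same password still get independent keys, which is exactly what pushes the attacker's effort from cN to mcN.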

For their security metric, the authors say that the adversary wins if he breaks all m instances of the system and no fewer. They define security games based on the Left-or-Right (LOR) security definition, where an adversary has access to an encryption oracle that takes messages M0 and M1, and returns an encryption of Mb for some bit b that is fixed in the oracle. The adversary has to guess whether the oracle has the fixed bit b=0 (left oracle) or b=1 (right oracle). To extend this to the multi-instance setting, they define LORX-security, where the X stands for XOR. In this security game, the oracle has a fixed bit bi for each of the instances 1 ≤ i ≤ m. The adversary can query the oracle on a message pair and an index (M0,M1,i) and get the encryption of Mbi under the key associated with instance i. In the end, the adversary must guess the XOR of the bits bi. Another variation of this game is the AND metric, where instead of having to guess the XOR of the bits bi, the adversary has to guess the vector of all bits bi. The reason for using XOR is that even if the adversary figures out m-1 of the challenge bits, it still cannot guess the XOR of all challenge bits with high advantage unless it figures out the last one as well.
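A toy simulation (not from the paper) illustrates the last point: an adversary who has fully broken m-1 instances, but knows nothing about the last challenge bit, still has essentially zero advantage in guessing the XOR of all bits:

```python
import random

def lorx_advantage(m: int, trials: int = 20_000, seed: int = 42) -> float:
    # Model: the adversary learns b_1, ..., b_{m-1} exactly, must output a
    # guess for b_1 XOR ... XOR b_m, and can only guess b_m at random.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        bits = [rng.randrange(2) for _ in range(m)]
        target = 0
        for b in bits:
            target ^= b
        known = target ^ bits[-1]         # XOR of the m-1 broken instances
        guess = known ^ rng.randrange(2)  # blind guess for the final bit
        wins += (guess == target)
    return wins / trials - 0.5            # advantage over coin-flipping
```

Under the AND metric the same adversary would already know most of the answer vector, which is why XOR is the more demanding (and hence preferred) combiner.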

The authors also extend the Real-or-Random (ROR) security definition to the multi-instance setting, calling it RORX. In the RORX security game, the adversary has access to an oracle (that has some fixed bits bi) that takes a message M1 and an index i. The oracle then generates a random message M0 of the same length as M1 and encrypts Mbi under the key associated to instance i. In the single-instance setting, the adversary has to decide whether he is dealing with an oracle with fixed bit b = 1 (encrypting real messages) or b = 0 (encrypting random messages). In the RORX security game, the adversary must guess the XOR of the bits bi. Finally, they define the Universal Key-Unrecoverability (UKU) security notion, in which the adversary gets access to an encryption oracle and must recover the key. It should be noted that all these definitions include corruptions of instances, which means the adversary also has access to a corruption oracle. When given an index i, the corruption oracle returns the key associated with instance i and if appropriate the bit bi as well.

Having defined all these security notions and games, the authors show that LORX tightly reduces to RORX, whereas the reverse incurs a loss in tightness of 2^m. Similarly, LORX tightly reduces to UKU, whereas the reduction from RORX to UKU loses a factor 2^m as well. Similarly to the single-instance setting, UKU does not imply LORX, because the encryption scheme can ignore the key and just output messages in the clear, thus being UKU secure (as encryptions contain no information on the key) but not LORX secure. They also show that LORX implies AND, under mild conditions that are "usually met". According to the authors, the loss of tightness in the reductions for RORX shows that LORX is the right security definition for their purposes.

The final result of their paper is about the PKCS#5 standard on password-based encryption. Using a very technical proof with lots of different games, they show that the advantage of an adversary trying to break LORX security of your password-based encryption is bounded from above by m times the advantage of an adversary against the used encryption scheme plus 2 times the advantage of an adversary trying to guess the passwords plus 2 times the advantage of an adversary trying to break the KDF. Thus, the conclusion is that if you use a reasonably secure encryption scheme and KDF (e.g. KD1 from PKCS#5), then it all comes down to how hard it is to guess the passwords of your users.

There are some applications where it is really necessary to break all instances. Think of a group of people using a secret sharing scheme who have their shares stored in their email accounts, each protected by a password-based encryption scheme. If the threshold of the secret sharing is equal to the number of players, then an adversary would really need to break all instances of the password-based encryption to be able to retrieve the secret. However, for some applications you might want a less strict definition of security, because having all but one of your instances broken does not seem very secure. It would be interesting to see whether it is possible to create a definition that requires some fraction of the instances to remain secure and whether this leads to any meaningful results.

Tuesday, November 27, 2012

Wrapping up Vampires - VAM 2 Workshop in Graz

The current "Workshop on Physical Attacks" organized by the Vampire Lab (VAM 2) of the Ecrypt II project and the Graz University of Technology serves to wrap up the work of the VAM 2 group at the end of the Ecrypt II project. To this end, the workshop has two different types of talks:
  • Talks by industry representatives who, in the majority, outlined open research problems and
  • talks by VAM 2 researchers giving brief overviews of some of the work done within the VAM 2 research group.
This will be followed by a poster session intended to encourage further collaborative work in the spirit of Ecrypt II.

The day was started by Stefan Mangard (Infineon Technologies, Germany) giving an impressive argument that the "system" perspective for secure hardware modules requires further work. While a lot of research has been done to secure (as well as attack) cryptographic implementations, these implementations always require services from the system that they're part of. For example, keys have to be stored somewhere, randomness has to be generated by the system, and confidentiality and integrity of the keys and the randomness have to be provided within the system. The second big problem he mentioned is the lack of a more systematic approach towards leakage resilient embedded systems. Prompted by a question from the audience he also briefly outlined a final open problem, namely the integration of security modules into bigger embedded systems such as modern multi- and many-core System-on-Chips (SoCs) and how to use them as a sort of "trust anchor" to extend trust onto other parts of the SoC.

Somewhat related to the second point, Christophe Giraud (Oberthur Technologies) spoke on the difficulty of protecting a system against side-channel attacks without creating new leakages in a different leakage model, and of the difficulty of keeping track of the different leakage models that apply to different parts of a system. Also related, Tony Boswell (Siventure) addressed challenges for the Common Criteria framework by which the security of systems (and especially security tokens such as smart cards) is tested and certified.

In the second line of talks, Lejla Batina and Carolyn Whitnall first presented some work done within the VAM 2 group that introduced Mutual Information Analysis to the tool box used for side-channel analysis and made an important contribution to improving our understanding of statistical side-channel distinguishers. Finally, Jörn-Marc Schmidt and Mike Tunstall presented VAM 2 work that improved the state of the art in fault injection attacks and countermeasures against them.

Monday, November 12, 2012

Post-Quantum Cryptography and Quantum Algorithms


Last week, the Lorentz Center hosted a workshop on Post-Quantum Cryptography and Quantum Algorithms, which attracted both cryptographers and quantum algorithms people. The aim of this workshop was to "look into alternative cryptosystems which also withstand attacks using quantum computers" and to encourage cryptographers and quantum algorithms researchers to work together on the two overlapping fields.

When cryptographers consider 'attacks using quantum computers', they mostly think of Shor's algorithm, which allows us to efficiently factor large numbers and solve the discrete logarithm problem on a quantum computer. Thus, Shor's algorithm gives us an exponential speedup over the best known classical algorithms. Cryptosystems that rely on the hardness of such problems, such as RSA and Elliptic Curve Cryptography, will be much less secure (read: broken) once we have a quantum computer. Because these systems are used widely in practice, we would like to find some alternatives that, as far as we know, are still secure against attacks using a quantum computer.

The alternatives discussed at the workshop were based on problems from the fields of lattices, error-correcting codes and multivariate quadratic equations. For each of these fields there was an introductory tutorial and one or two invited talks. Additionally, there were two tutorials and one invited talk on quantum algorithms. The tutorials on quantum algorithms were very interesting, because quantum algorithms are often no more than black boxes to cryptographers and these talks explained some of the 'magic' behind them.

The first quantum algorithms tutorial was given by Stephen Jordan. He gave a high-level overview about the kind of quantum algorithms that provide super-polynomial speed-ups, such as Shor's algorithm. These algorithms solve some variant of the Hidden Subgroup Problem, for various groups. He also described several groups for which we do not yet know how to solve the HSP. For example, if we could solve the HSP over the symmetric group, we would be able to solve the Graph Isomorphism problem using a quantum computer. Finally, he mentioned that we might be able to solve certain lattice problems if we were able to construct certain quantum states corresponding to a Gaussian or similar distribution over lattices. Constructing these states efficiently is an open problem.

The other tutorial on quantum algorithms was given by Frederic Magniez. It focused on applications of Grover's algorithm, which allows us to perform a search of an unsorted database in the square root of the size of the database. When used to find collisions in specific functions, this can sometimes even be improved to a cube root. Thus, this kind of quantum algorithm gives polynomial speedups. For symmetric cryptography, the existence of Grover's algorithm usually means that we need to double the length of the key if we want a comparable security level against a quantum computer. However, it is not always trivial to apply Grover's algorithm to other search problems. This led to some interesting open problems, e.g., whether Grover could be used to speed up other classical algorithms in cryptanalysis.
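The key-doubling rule of thumb can be sketched as simple arithmetic (a back-of-the-envelope model, not a precise cost analysis):

```python
def quantum_security_bits(key_bits: int, collision_search: bool = False) -> int:
    # Grover searches a space of 2^k keys in roughly 2^(k/2) steps, halving
    # the effective security level; quantum collision finding can cut an
    # n-bit search down to roughly 2^(n/3).
    return key_bits // 3 if collision_search else key_bits // 2

assert quantum_security_bits(128) == 64   # AES-128: ~64-bit quantum security
assert quantum_security_bits(256) == 128  # doubling the key restores 128 bits
```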

Besides these talks, there was some room in the program for people to contribute their own talks. The remainder of the time was spent discussing open problems and drinking coffee. The participants were split into three groups: one for lattices, one for error-correcting codes and one for multivariate quadratic equations. Each group had a few people from the area of quantum algorithms as well. For those interested, there is a wiki page with slides of the main talks, a list of open problems and a summary of the topics that were discussed by the three groups.

Wednesday, November 7, 2012

Study group: On Secure Multi-party Computation in Black-Box Groups


Yesterday's study group, driven by Ashish and Jake-1, was on realizing multiparty computation over groups. Traditionally, MPC has been carried out over (finite) fields with some extensions to rings. It is a natural question to ask whether there exist constructions using different algebraic structures, interesting not only from a theoretical point of view, but also because group oriented work may lead to new techniques.

When one is dealing with groups rather than with other algebraic structures equipped with two operations, the first problem that arises is how to evaluate boolean circuits. Group-based circuits have essentially one type of gate (or two, considering gates with built-in constants). Working with non-abelian groups is of particular interest due to a result (section 4) reformulated here (theorem 1) that allows any boolean circuit (expressed as AND and NOT gates) to be translated into a circuit over S5.

In the paper, they provide the means to construct passively secure protocols among n parties against unbounded adversaries, for the evaluation of the n-product function ƒG(x1, ..., xn) = x1 ··· xn, where xi is privately held by party i, and then argue that the construction can be extended to the most general case. Also, for flexibility, they assume black-box access to the group G (i.e. parties only perform the group operation, group inversion, or random sampling from the group).

They first prove that MPC in non-abelian groups requires an honest majority by showing that ƒG is not t-private if t ≥ n/2, where t is the threshold on the number of corrupted parties and n ≥ 4. They do so using a characterization of 1-private 2-input protocols, for the evaluation of a function ƒ, via a certain type of matrix (non-forbidden matrices), and showing by contradiction that if an n/2-private protocol for ƒG exists, then this characterization does not hold.

Then they construct a protocol for the n-product ƒG out of a 2-product subprotocol. This construction is relatively easy using a binary tree with the leaf nodes set to the parties' inputs and the root node to the output of ƒG. The computation of each intermediate node is done by running the subprotocol.
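The tree construction can be sketched as follows, with an illustrative 2-product stand-in (multiplication modulo a prime; in the actual protocol each call would be a run of the secure two-input subprotocol):

```python
def n_product(xs, two_product):
    # Evaluate x1 * x2 * ... * xn via a binary tree: the leaves hold the
    # parties' inputs and each internal node runs the 2-product subprotocol.
    # The left-to-right order is preserved, which matters for non-abelian groups.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return two_product(n_product(xs[:mid], two_product),
                       n_product(xs[mid:], two_product))

P = 101  # toy stand-in group: the integers mod P under multiplication
print(n_product([3, 7, 9, 11], lambda a, b: (a * b) % P))
```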

Given ℓ > t, the only tool needed for the construction of the protocols is an ℓ-out-of-ℓ sharing of a given secret s. The shares are defined to be the vector (s1, ..., sℓ) where, for a fixed j ∈ [ℓ], the shares si for i ≠ j are randomly chosen, and sj is such that s = s1 ··· sℓ. It can be proven that the distribution of the share vector is independent of j.
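A minimal sketch of this sharing, instantiated over a toy group (the integers mod a prime under multiplication; the real construction only needs black-box group operations, and for a non-abelian group the inverses below would multiply sj from the left and right respectively):

```python
import random

P = 1009  # toy black-box group: Z_P^* under multiplication modulo a prime

def mul(a, b): return (a * b) % P
def inv(a):    return pow(a, P - 2, P)  # inverse via Fermat's little theorem

def share(s, l, j, rng):
    # l-out-of-l sharing: every share except the j-th is sampled uniformly;
    # sj is then fixed so that the ordered product s1 * ... * sl equals s.
    shares = [rng.randrange(1, P) for _ in range(l)]
    prefix = 1
    for i in range(j):
        prefix = mul(prefix, shares[i])
    suffix = 1
    for i in range(j + 1, l):
        suffix = mul(suffix, shares[i])
    shares[j] = mul(inv(prefix), mul(s, inv(suffix)))
    return shares

shares = share(42, 5, 2, random.Random(0))
```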

Interestingly, t-private n-party 2-product protocols can be constructed from the combinatorial problem of finding t-reliable n-colourings of graphs. As the subprotocol employed at each node of the tree takes two ℓ-sharings (2ℓ shares) and outputs an ℓ-sharing, the graph also has this structure.

The protocol must provide correctness and privacy. For the former, recall that the working group is non-abelian. They use a stronger notion of planar directed acyclic graphs, which they call admissible PDAGs; essentially, planarity is what ensures that the order of the factors in the multiplication does not get commuted. Privacy is achieved by the t-reliable n-colouring property of the admissible PDAG. Each node of the graph is labelled with a colour in [n] that designates the party that evaluates that particular node; the property states that for any adversarial set of indices I ⊆ [n] there exist indices jx and jy such that a path can be found from the jx (resp. jy) input node to the jx (resp. jy) output node using no colours in I. Intuitively, there are always intermediate values that escape the adversary's control.

As far as efficiency is concerned, if we are willing to keep the threshold t optimal (that is, t < n/2) then the sharing parameter is ℓ = c(n,t), and this leads to exponential communication complexity; lowering the threshold by a graph-dependent constant, however, makes ℓ linear in n.

Saturday, November 3, 2012

Study Group: Real world attacks on bad randomness.

This week’s study group was held on Tuesday 30th October. Gaven and Stefan presented two papers introducing real world attacks on bad randomness. The talks were based on the following papers:
  • Ron was wrong, Whit is right by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter.
  • Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices by Nadia Heninger, Zakir Durumeric, Eric Wustrow and J. Alex Halderman.

Security often depends on keys being chosen uniformly at random, so randomness is critical for cryptography. This area has been studied extensively, and one would expect modern operating systems and server software to generate random numbers securely. Both papers test that proposition empirically by examining the public keys in use on the Internet.
In Ps and Qs the authors conduct a wide survey of two of the most important cryptographic protocols, TLS and SSH, collecting 5.8 million unique TLS certificates from 12.8 million hosts and 6.2 million unique SSH host keys from 10.2 million hosts. Their results give "a macroscopic perspective of the universe of keys". They then analyze this dataset for evidence of inadequate randomness and find that at least 5.57% of TLS hosts and 9.60% of SSH hosts use the same keys as other hosts in an apparently vulnerable manner.
They also compute the private keys for 64,000 (0.50%) of the TLS hosts and 108,000 (1.06%) of the SSH hosts from their data by exploiting known weaknesses of RSA and DSA when used with insufficient randomness. For RSA, distinct moduli that share exactly one prime factor will result in public keys that appear distinct but their private keys can be computed by calculating the greatest common divisor (GCD). Their algorithm computes the GCDs of all pairs of 11 million distinct public RSA moduli in less than 2 hours and they are able to obtain the private keys for 0.50% of TLS hosts and 0.03% of SSH hosts. For DSA, if a key is used to sign two different messages using the same ephemeral key, an attacker can compute the signer’s long-term private key. Their SSH data contained DSA signatures with the same ephemeral keys during signing. This feature resulted in successful computing of the private keys for 1.6% of SSH DSA hosts.
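The RSA part of the attack is easy to sketch. The paper actually uses a quasilinear batch-GCD algorithm rather than the quadratic all-pairs loop below, and the moduli here are tiny illustrative values:

```python
from math import gcd

def shared_factor_attack(moduli):
    # If two RSA moduli were generated with bad randomness and share a prime,
    # their gcd reveals that prime and immediately factors both moduli.
    factored = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:
                factored[moduli[i]] = (g, moduli[i] // g)
                factored[moduli[j]] = (g, moduli[j] // g)
    return factored

# n1 and n2 share the prime 1009; n3 shares nothing and survives the attack
n1, n2, n3 = 1009 * 2003, 1009 * 3001, 1013 * 2011
broken = shared_factor_attack([n1, n2, n3])
```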
An investigation on hundreds of vulnerable hosts showed that nearly all served information identifying them as headless or embedded systems. These kinds of devices usually generate keys automatically during the first boot and they may have limited entropy sources compared to traditional personal computers. The investigation also showed that the examination of clusters of hosts that shared a key or factor links them with a manufacturer or device model. They conclude that the problems are caused by specific defective implementations that generate keys without having collected sufficient entropy. They finally identify vulnerable hardware and software from 54 manufacturers.
In the same paper the authors explore the root causes of these vulnerabilities by investigating popular open-source software components from the population of vulnerable devices. Given the variety of affected devices, it is clear that no single implementation was solely responsible for the vulnerabilities. All the examined software packages relied on /dev/urandom to produce cryptographic keys. Linux's random number generator can present a boot-time entropy hole which causes urandom to produce deterministic output under conditions that can occur in headless and embedded devices. Experiments with OpenSSL and Dropbear SSH showed that repeated output from the system random number generator can lead to repeated long-term keys and also to factorable RSA keys and repeated DSA ephemeral keys.
The authors also provide some recommendations for developers of operating systems, cryptographic libraries and applications, for device manufacturers, certificate authorities, end users and the security and cryptography communities. Their conclusion states that the margin of safety is slimmer than we might prefer but there is no reason to doubt the security of most keys generated interactively by users on traditional personal computers.
To OS developers it is suggested to make sure the operating system maintains a secure PRNG that refuses to return data until it has been seeded with a minimum amount of randomness, and that is continually seeded with fresh entropy collected during operation. The operating system should also provide an interface to indicate how much entropy it has mixed into its PRNG, so that applications can gauge whether the state is sufficiently secure for their needs. Generally speaking, cryptographic libraries should default to the most secure mechanisms available, so library developers should use RSA and DSA defensively. Also, if the OS provides a mechanism to flag that the entropy it has collected is insufficient for cryptographic use, libraries should not proceed with key generation.
Application developers are advised to generate keys on first use, not during installation or first boot. Developers should not work around low-entropy states by ignoring or disabling such warnings (with extremely dangerous results), and they should frequently add entropy from the operating system.
Device makers should generate fresh keys on the device after sufficient randomness has been collected, or force users to upload their own keys, and should avoid factory-default keys or certificates. Makers of embedded (or headless) devices should make sure that effective entropy sources are present and that entropy is collected in advance of cryptographic operations. Devices should be initialized with truly random seeds at the factory. Device makers should test cryptographic randomness on the completed device and use hardware random number generators when possible.
Cryptographic keys and certificates that were included in the device or were automatically generated at first boot, should be manually regenerated by end users. Security and crypto researchers should be aware that the fact that all major OSs now provide cryptographic RNGs might lead experts to assume that any entropy problems that still occur are the fault of developers taking foolish shortcuts. The paper suggests otherwise: entropy-related vulnerabilities can result from complex interaction between hardware, operating systems, applications and cryptographic primitives.
The second paper brings the same alarming results. Despite the fact that numerous studies have been conducted to assess the state of the current public key infrastructure, several problems have been identified that are related to the way certificates are used. A worrisome finding is that among the 4.7 million distinct 1024-bit RSA moduli that were originally collected by the authors, 12720 have a single large prime factor in common. In a set of 11.4 million RSA moduli, 26965 are vulnerable, including ten 2048-bit ones. When exploited, this could affect the expectation of security that the public key infrastructure is intended to achieve. For their experiment the authors collected as many openly accessible public keys as possible from the web, resulting in a set of 11.7 million public keys containing 6.4 million distinct RSA moduli. The remainder is almost evenly split between ElGamal keys and DSA keys, plus a single ECDSA key. All keys were checked for consistency, using compositeness, primality and (sub)group membership tests.
The assumption that entropy of the output distribution (of standardized RSA key generation) is always almost maximal and the outputs are hard to factor if factoring in general is hard, sometimes fails. The findings reveal that duplication of keys is more frequent in the collected set than in sets described in other works. More seriously, authors discovered 12934 different RSA moduli that offer no security.
They conclude that systems such as RSA that require multiple secrets during key set-up are more affected by the difficulty of generating proper random values than systems that require a single secret. For RSA, multiple secrets can be avoided, for instance by using moduli pq for k-bit primes p chosen such that q = (2^(2k−1) + p − (2^(2k−1) mod p))/p is prime.
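A sketch of this single-secret construction, with a toy key size and a standard Miller-Rabin primality test: here q = ⌈2^(2k−1)/p⌉ is fully determined by p, so only one secret random value is ever needed, and the modulus pq always has exactly 2k bits.

```python
import random

def is_prime(n: int, rounds: int = 40) -> bool:
    # Miller-Rabin probabilistic primality test (deterministic bases omitted
    # for brevity; fine for an illustration).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    rng = random.Random(0xBAD5EED)
    for _ in range(rounds):
        x = pow(rng.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def single_secret_modulus(k: int, rng: random.Random):
    # Only p is secret: pick a random k-bit prime p, then q is *determined*
    # as ceil(2^(2k-1) / p); retry p until that q happens to be prime.
    t = 1 << (2 * k - 1)
    while True:
        p = rng.randrange(1 << (k - 1), 1 << k) | 1
        if not is_prime(p):
            continue
        q = (t + p - (t % p)) // p  # the formula from the paper
        if is_prime(q):
            return p, q

p, q = single_secret_modulus(16, random.Random(1))
assert (p * q).bit_length() == 32  # the modulus always has exactly 2k bits
```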