Thursday, December 20, 2012

Hewlett-Packard Colloquium

Every year the UK Cyber Security community gets together at Royal Holloway for a day of talks just before Christmas, all funded by Hewlett-Packard; and today is no exception. Indeed this is the 23rd such event. The main purpose of the event is for everyone to meet, discuss issues and opportunities, and share experiences of the preceding year. Most of the participants are thought leaders from the UK Cyber Security industry, as well as a few academic hangers-on like myself.

A regular part of the proceedings is a pen-and-paper cryptographic challenge set for the audience by someone in HP Labs. This is called the Zaba Memorial Challenge, in honour of Stefek Zaba, a former member of HP Labs who used to set the challenge. Stefek is a much missed member of our community in the UK, and the regular challenge allows us to remember him in a small way.

The talks were given by John Madelin (Area VP for EMEA at Verizon), James Lynne (Director of Technology Strategy at Sophos) and David McGrew (Fellow at Cisco). Most of the talks focused on business issues rather than technological and scientific ones; much of the value I find in the day comes from this exposure to other views.

John, who talked about the "Business of Security", had some interesting anecdotes about how the younger generation are adopting different, new technologies compared to our generation. For example, email is often not used at all by people under the age of 20. This is the third talk I have attended at which this (for me) quite startling observation has been made. It has widespread implications for the increasing use of "Bring-Your-Own" devices at work, a trend which is only going to accelerate as the generations change. An interesting graph showed the growth in cloud-based service take-up in the BRICs countries compared to the traditional high-tech economies. John also discussed the security priorities of companies in North America: a regular survey showed that Cloud Security was the top concern, displacing Intrusion Detection from the previous year's top spot. Indeed IDS had dropped off the top-ten list, which was dominated by concerns about mobile phone security and malware, tablet security and so on. Another interesting point was the difference in technology adoption by the BRICs countries (Brazil, Russia, India and China): such countries do not have the same sunk investment in legacy systems, and so they can adopt new technology more quickly. Finally, he discussed how companies spend money protecting against non-existent threats, simply because they feel more comfortable spending money there, and so miss out on protecting against the real threats which pose the greatest risks.

James talked about "Hacking cyber crime and deja vu", a really cool talk describing a number of attacks which are out there in the wild. A nice histogram (albeit a 3D one) presented the growth in unique pieces of malicious code found per year. Most of this code is distributed by SQL injection into web pages, which is surprising as this weakness has been known about for many, many years. A nice description was given of a cloud-based malware "service" which monitors users' computers; when the malware is detected it phones back to base so that a new version of the malware can be distributed. This is an example of the increasing sophistication of the software used in malware. Another example was some extortion/ransom-based malware which encrypts your files and then asks you to pay up to get access to the password; as an added bonus, if the victim does not pay up, the criminal places incriminating files on the victim's computer and then informs the police of their existence. The talk went on to discuss various other criminal activities which were very close to the old Digicrime joke website, but were scarily real. Most of the rest of the talk discussed an investigation into a specific cyber criminal, and how various security weaknesses in photo uploading sites, social media sites and so on were used to track them down.

David, who gave a talk at the workshop organized by Kenny Paterson and myself earlier in the year, gave a talk today on "The Vernam cipher is better than Quantum Key Distribution". David's background is in Physics, but he switched to Info Sec and has concentrated on communications security since, so he was particularly well suited to discuss QKD technology in the real world. David's basic thesis was that there is too much hyperbole about QKD, with quotes like "QKD will make the internet unhackable" and that it is "absolutely unbreakable"; statements which the Info Sec community knows to be false. He presented a set of ten goals that one might want from a system for secure communication, and then showed that QKD provides very weak (or no) coverage of all bar one of these goals. In fact the only thing QKD did well on was minimizing computational assumptions; but even then this is not quite true, since most QKD systems work in a hybrid manner. David compared QKD to building a tyre out of concrete: it resists punctures but does little else very well. He ended by discussing what would happen if quantum computers became a reality; here his thesis was that one could switch to "post-quantum" schemes such as those based on lattices. In particular, as long as one-way functions still exist we can use digital signatures. In such a situation he claimed that there was still no business reason to adopt QKD.

Tuesday, December 18, 2012

InTrust 2012 (Day 2)

The conference concluded today with three contributed talks (including mine on the security analysis of an open car immobilizer protocol stack) and one invited talk. Andrea Saracino presented a new method to classify the security of Android applications, which could make it easier for users to review the threats involved with the installation of individual apps. Liqun Chen talked about the re-evaluation of attacks on Direct Anonymous Attestation (DAA) schemes, which have been specified for usage with TPMs.

The last talk was by Christof Paar on four very interesting pieces of practical security work (two "constructive" and two "destructive") ranging from work on securing vehicular networks (VANETs) to the breaking of bitstream encryption in FPGA devices.

A Sort of Christmas Message

Someone recently pointed out that perhaps I should blog a bit more on our own Crypto Blog for Bristol, so I have decided to put finger to keyboard and pen this end-of-year message. It is not quite up to the standards of the traditional Christmas message from Her Majesty; for example, this blog does not come in 3D.

So looking back on 2012, what have been the highlights for me? Perhaps most interesting has been the rise in profile of the whole area of "Cyber Security", not only in the UK academic community but also worldwide and in the general media. There is now a widespread understanding that as a society we are ill equipped to deal with the threats posed in the new online world. Whilst some areas are well developed scientifically (for example, cryptography is founded on well established scientific principles), other areas of information security are less well grounded in science. Indeed many problems in information security cannot be solved by technical means alone; thus we need collaboration with other disciplines such as sociology, economics, policy and psychology to solve many of the problems we face.

But progress is being made:
  • In the UK an initiative by EPSRC and GCHQ has set up the "Centres of Excellence in Cyber Security Research", of which Bristol is one of the first eight. This initiative aims not only to recognize the excellent research in various disciplines being carried out across the UK, but also to bring the community together so as to achieve more. We are using our award of a centre to try to bring together the interested parties in the so-called "Cyber Corridor" along the M5 in the UK. Alongside this motorway are a number of large companies, SMEs and government departments with an interest in Cyber Security, so we are hosting a regular series of evening events to bring this community together.
  • A major issue is one of building up the human capital in the area. Many of our problems stem from poor provision in all areas of our education system. Thus another welcome development in 2012 has been the political, educational and industrial push to finally do something about the provision of computing teaching within schools in the UK. Basically, for the non-UK readers, this is a move from something akin to "Digital Literacy" towards a more "Computer Science" based curriculum. This is vital, as almost all high-powered jobs in the future will be driven by the concept of computational thinking. Eventually the changes in the school curriculum will feed through, and we will have a better trained population to deal with the challenges we will face in the future.

In the area of cryptography the highlight for me has been the rapid advance in Fully Homomorphic Encryption (FHE). The year has seen some major advances and publications of various techniques. For example, we have had various simplifications of the basic ideas from Brakerski in relation to scale invariance, the publication of the Brakerski, Gentry and Vaikuntanathan (BGV) leveled FHE scheme, and a deeper understanding of the ring-LWE problem and how to implement schemes based on it, from Lyubashevsky, Peikert and Regev. There have also been advances in computational techniques: Gentry, Halevi and I have shown that one can compute homomorphically with only polylogarithmic blowup in computational cost, we have demonstrated that a leveled Somewhat Homomorphic scheme can compute a high-degree functionality, and we have shown (with Peikert) how to switch between different ring representations. All of these advances have brought the practical goal of FHE closer to reality.

However, perhaps the most interesting outcome of the work on FHE will not be FHE itself. The interest in FHE has provided two key improvements in other areas of cryptography:
  • All FHE schemes are based on lattices and as such are resistant to known quantum algorithms. One by-product of the current interest in lattice-based schemes is that there is now an efficient quantum-secure digital signature scheme based on lattices.
  • In joint work with Aarhus the Bristol team has developed a highly efficient Multi-Party Computation protocol which outperforms (both in terms of security and performance) all existing practical instantiations. The protocol amazingly uses FHE technology to make it go faster, which, given the current state of the art in FHE performance, is at first sight quite surprising.
So what else has Bristol been working on? Quite a lot, it turns out, as a look at our list of publications will show, ranging from very theoretical work through to very applied work. We have looked at various real-world protocols (TLS, EMV, the Helios e-voting protocol) and deployed products (J2ME installations, Android smartphones), as well as examining issues related to DoS attacks and Role Based Access Control.

Finally, I return to the theme at the start. When 2012 started it appeared that the major media interest would focus around the 100th anniversary of the birth of Alan Turing, and indeed a lot of media attention has been devoted to this event. However, a quick glance at any major media outlet will reveal that probably the bigger story has been the coverage of all things Cyber Security related, ranging from the recent story on the pigeon cipher through to major attacks like the follow-ons from the Stuxnet incident. It would seem that not a week goes past without some Cyber Security related story appearing on the BBC website at least. Whilst in some sense this can be construed as bad news, on the other hand "all publicity is good publicity". After all, raising awareness will encourage more students into the area, will raise the profile of the issues amongst the general population, will produce demand for solutions to the problems we face, and will encourage more people to come and innovate.

So looking forward may 2013 be as exciting as 2012 has been.

Monday, December 17, 2012

InTrust 2012 (Day 1)

The fourth installment of InTrust has made it out of Beijing and is currently being held at Royal Holloway, University of London. The first day featured three contributed talks balanced by three keynotes and a panel session.

For me, the first highlight of the day was a practical demonstration by Nicolai Kuntze, who showcased an implementation of establishing trusted connections in mobile ad-hoc networks leveraging Trusted Platform Modules (TPMs). The interconnections of three nodes in the lecture theatre (two embedded ones with a hardware TPM and a laptop with a simulated TPM) were displayed and updated in real time on the projector. Once the software configuration of one of the embedded nodes was changed to a non-trusted setup (by connecting a USB stick to it, which in turn led to the execution of non-trusted code), its TPM detected this change via updates to its PCR(s), which in turn locked out use of the communication key and caused the other two participants of the network to drop their connection to the node and to reject future attempts by it to reconnect. Amazingly enough, the demo went smoothly, to everyone's surprise (and most of all Nicolai's). I guess the demo effect is already on Christmas break :-)
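As a rough illustration of the mechanism behind the demo: a PCR is never written directly; each new measurement is folded into the old value by hashing, so any non-trusted code that gets executed leaves a permanent trace. Below is a minimal sketch in Python (my own toy model, not Nicolai's implementation) of the extend operation and a verifier comparing against an expected reference value; the SHA-1 used internally by TPM 1.2 is assumed here, and all measurement values are invented.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    digest = hashlib.sha1(measurement).digest()
    return hashlib.sha1(pcr_value + digest).digest()

# Boot-time measurements of the trusted software stack (toy values).
trusted_boot = [b"bootloader-v1", b"kernel-v1", b"mesh-daemon-v1"]

pcr = bytes(20)                    # the PCR starts at all zeros
for component in trusted_boot:
    pcr = pcr_extend(pcr, component)
reference = pcr                    # value the other nodes expect to see

# The compromised node additionally runs code from the USB stick ...
pcr_bad = pcr_extend(reference, b"unknown-usb-binary")

# ... so a verifier comparing the reported PCR against the reference
# refuses the connection (in the demo, the key is locked to the PCR).
print("node trusted?", pcr_bad == reference)   # -> False
```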

The second highlight was the panel discussion between Charles Brookson, Nicolai Kuntze, Kenny Paterson and Graeme Proudler (chaired by Shin'ichiro Matsuo) about mobile device trust. The participants shared a rather pessimistic view on the adoption of sound security mechanisms in future mobile devices, as security is often not regarded as a primary goal of a device/system, but rather as a cost factor which needs to be minimized (or eliminated). Only in areas where the major stakeholders are actively aware of the security issues is the adoption of security facilitated. One example given was the "bring your own device to work" use case, where companies have a vested interest in keeping their infrastructure secure despite having connections to employees' devices not under the company's control on the one side, and employees have an interest in keeping their personal data on the device protected from access by the company on the other.

Monday, December 10, 2012

Indocrypt 2012

Indocrypt 2012 is currently on in Calcutta, India. I am there as an invited speaker, along with Vinod Vaikuntanathan and Orr Dunkelman. I expect my invite has more to do with my good friend Steven Galbraith being programme co-chair than anyone wanting to hear me speak.  In any case the hospitality and organization shown by our hosts has been amazing. Unlike the major conferences, conferences like Indocrypt and Africacrypt always have a great deal of speaker/audience interaction outside of the main talks and it is always great to see new faces joining the subject.

The programme is slightly biased towards symmetric cryptography, an area in which I am not really an expert. However, one highlight in this area for me was a tutorial on the RC4 cipher and various cryptanalytic results, given by Subhamoy Maitra. This complemented two tutorials given the day before the conference started, by Steven Galbraith on lattices in cryptography and by Francisco Rodriguez-Henriquez on hardware design for cryptography. Francisco's talk gave a gentle introduction to the different issues one faces when moving from a mindset of software implementation to hardware implementation, as well as an interesting subsection on the implementation of a length-preserving mode of operation using a tweakable block cipher.

The first morning was however much more "up my street". The first session was particularly interesting, with four very good talks focused around the loose theme of protocols: completeness theorems for MPC, aspects of the Fiat-Shamir transform from Sigma protocols to NIZKs, a look at the BB84 QKD protocol, and leakage-resilient MPC protocols.

The BB84 talk, by Arpita Maitra, was particularly appealing as it showed how a simple observation could be overlooked for a long time. The basic idea is that the original BB84 protocol sends (essentially) four qubits per agreed key bit; a problem related to the resulting security guarantee was found a while back, and a modified protocol was presented which sends six qubits per key bit. However, what the Indocrypt talk showed was that if one repeats the two protocols and looks at the bit-security per qubit sent (an obvious metric used in other areas of cryptography), then the original four-qubit BB84 protocol turns out to be better.

Vinod Vaikuntanathan gave a nice invited talk on recent advances in Fully Homomorphic Encryption. Such talks are now becoming very accessible, although perhaps I am not the best person to comment on this. In the past explaining the basic idea behind the original scheme of Gentry was challenging in a one hour slot. But now with the advent of the LWE based schemes, and a better understanding of how the different ideas fit together, such talks are I think more accessible to a general crypto audience. Of course this talk was aided in this respect by Vinod's standard brilliant delivery, which makes even the most complex concept look effortlessly simple.

Thursday, November 29, 2012

Kick off to CARDIS 2012


Today was the first day of the CARDIS 2012 conference, which focuses on both research and applications of smart card devices. The programme for today covered Java Card security and smart card protocols. The pre-proceedings for the conference can be found here: 


This article is just a brief overview of the talks and presentations given during the session.

The first paper presents the problem of static verification with respect to Java Card applets and how the current standards do not efficiently detect fault injection attacks. The authors propose the use of type and bounds protection to achieve efficient run-time verification. The paper presents an attack which would normally compromise Java Card but is thwarted by the countermeasures proposed. The authors simulate and benchmark the countermeasures to show that a hardware implementation of the countermeasures will result in an additional overall overhead of 10%, compared with the software implementation at 115%. Further work is being carried out to develop a toolchain which will facilitate compatibility and transparency for Java Card developers. The follow-up Q&A session highlighted that there has not yet been any work on a hardware implementation and therefore the cost of the additional logic is not yet known.

The second paper focuses on the performance impact that software run-time verifiers have on smart cards. The paper describes a method to implement dynamic security levels on a smart card, which allows the detection of an attack and the activation of additional security countermeasures. A trade-off is made to increase the performance of the card in day-to-day use, at the cost of allowing an adversary a one-shot attack before the security level is raised. Several points were raised during the Q&A session about both the overall practicality of the scheme and the assumption that an attacker will not be able to influence the trigger for the additional security levels.

The third and final paper on Java Card security presented a fault injection attack focused on low-precision, localised targets. The author describes a combination of both logical and hardware attacks to maximise the probability of successful fault injection. The results presented are from a simulation model of a fault injection set-up, and it is therefore difficult to determine the feasibility of the attack. The theoretical outcome of the paper is that an attacker need not have a high degree of precision when injecting a fault to achieve a usable result; such precision is often difficult for an attacker to achieve in the real world.

The first paper on smart card protocols describes an improvement to the cryptoGPS Public-Key authentication protocol which is also practical for real world application. The original protocol requires the use of both a PRF and a hash function in order to generate a valid response. The proposed modifications allow for the use of a block cipher in place of both these functions. The net result offers an asymmetric authentication protocol which lowers the area requirements of the tag and improves the performance of the protocol. The Q&A session offered further insight into the practicality of the scheme. The limited memory of smart cards results in the need for regular updates to the tag in order to maintain a set of valid commitments.

The final paper of the day focused on unlinkability and revocation of smart cards. The authors highlight the leakage of personal data from a smart card device during a legitimate transaction with a trusted entity; that is to say, just because an entity is trusted does not mean that a user will want to share all the details available on the smart card. Schemes exist to achieve this property, but none which also allow for practical revocation and de-anonymisation of a corrupt user. The performance overhead was found to be fairly high in the majority of instances, and therefore further work is being carried out to improve the scheme.

Wednesday, November 28, 2012

Multi-Instance Security and its Application to Password-Based Cryptography

Gareth and Marcel discussed the paper on Multi-Instance Security and its Application to Password-Based Cryptography from this year's CRYPTO in the latest study group. In this paper, Bellare, Tessaro and Ristenpart introduce the theory of multi-instance security to deal with settings where one or more instances can feasibly be compromised, but it should remain infeasible to compromise all of some large number m of instances. Here, infeasible means that the effort of compromising all m instances should at least increase linearly with m. They define security metrics and games to model security in such a setting and examine the relations between their security notions. Finally, they analyse password-based Key Derivation Functions (KDFs) from the PKCS#5 standard to show that they meet their new multi-instance security definition.

Consider the setting of password-based encryption, where a message M is encrypted under a key derived from a password. Passwords are often chosen poorly in practice, which means that the set D of likely passwords (known as a dictionary) is limited and can be exhausted. Thus, an attacker can apply brute force to retrieve the password using cN hashes, where N is the size of the dictionary D and c is the number of hashes performed by the KDF. Now, increasing c would make it harder for the attacker to break a password, but it would also decrease the efficiency of the system. Furthermore, in the multi-user setting, the effort of the attacker to break the passwords of m users is still roughly cN, because he only needs to exhaust the dictionary. To protect against such attacks it is possible to pick a random bitstring called the salt and hash it together with the password when deriving the key. This salt is sent together with the message, so only the password is needed to derive the key. Now, the effort of the attacker is increased to mcN if the salt is large enough, because the attacker would need to create a dictionary for each distinct salt value that is used.
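As a concrete illustration of the salting argument above, here is a small Python sketch (my own toy example, using the standard-library PBKDF2 rather than the specific KDF analysed in the paper) showing how a per-user random salt forces a dictionary attacker to redo the c-iteration key derivation for every user, rather than amortising a single table of cN hashes across all of them.

```python
import os, hashlib

c = 10_000                 # iteration count of the KDF (the "c" above)
dictionary = [b"123456", b"password", b"letmein"]   # toy dictionary D

def derive_key(password: bytes, salt: bytes) -> bytes:
    # PKCS#5-style password-based KDF: c iterations of HMAC-SHA256
    return hashlib.pbkdf2_hmac("sha256", password, salt, c)

# Each user gets an independent random salt, stored next to the ciphertext.
users = {name: os.urandom(16) for name in ["alice", "bob", "carol"]}
stored = {name: derive_key(b"letmein", salt) for name, salt in users.items()}

# The attacker cannot precompute one table of c*N hashes and reuse it:
# every (user, candidate) pair costs a fresh c-iteration derivation,
# i.e. roughly m*c*N work for m users instead of c*N.
for name, salt in users.items():
    for guess in dictionary:
        if derive_key(guess, salt) == stored[name]:
            print(f"recovered {name}'s password: {guess.decode()}")
            break
```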

For their security metric, the authors say that the adversary wins if he breaks all m instances of the system and no fewer. They define security games based on the Left-or-Right (LOR) security definition, where an adversary has access to an encryption oracle that takes messages M0 and M1, and returns an encryption of Mb for some bit b that is fixed in the oracle. The adversary has to guess whether the oracle has the fixed bit b=0 (left oracle) or b=1 (right oracle). To extend this to the multi-instance setting, they define LORX-security, where the X stands for XOR. In this security game, the oracle has a fixed bit bi for each of the instances 1 ≤ i ≤ m. The adversary can query the oracle on a message pair and an index (M0,M1,i) and get the encryption of Mbi under the key associated with instance i. In the end, the adversary must guess the XOR of the bits bi. Another variation of this game is the AND metric, where instead of having to guess the XOR of the bits bi, the adversary has to guess the vector of all bits bi. The reason for using XOR is that even if the adversary figures out m-1 of the challenge bits, it still cannot guess the XOR of all challenge bits with high advantage unless it figures out the last one as well.
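To make the LORX game concrete, below is a minimal sketch of the challenger in Python. It is my own illustrative model (the paper defines the game for arbitrary encryption schemes and includes further details such as salts); the "encryption scheme" used here is a throwaway toy and all names are invented for the example.

```python
import os, hashlib, secrets
from functools import reduce

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    """Toy encryption (NOT secure): XOR with a hash-derived keystream,
    for messages up to 32 bytes."""
    stream = hashlib.sha256(key + len(message).to_bytes(4, "big")).digest()
    return bytes(m ^ s for m, s in zip(message, stream))

class LORXGame:
    """Multi-instance left-or-right game with an XOR decision bit."""
    def __init__(self, m: int):
        self.keys = [os.urandom(16) for _ in range(m)]
        self.bits = [secrets.randbits(1) for _ in range(m)]

    def enc(self, i: int, m0: bytes, m1: bytes) -> bytes:
        # Oracle: encrypt the left or right message for instance i,
        # according to that instance's hidden bit b_i.
        chosen = (m0, m1)[self.bits[i]]
        return toy_encrypt(self.keys[i], chosen)

    def corrupt(self, i: int):
        # Corruption oracle: reveal instance i's key and bit.
        return self.keys[i], self.bits[i]

    def finalize(self, guess: int) -> bool:
        # The adversary wins if it guesses the XOR of ALL the bits b_i.
        return guess == reduce(lambda x, y: x ^ y, self.bits)

# An adversary that corrupts m-1 instances still has to guess the last
# bit, so its advantage over random guessing stays small.
game = LORXGame(m=8)
ct = game.enc(0, b"left msg", b"rightmsg")       # one oracle query
known = [game.corrupt(i)[1] for i in range(7)]   # corrupt 7 of 8 instances
guess = reduce(lambda x, y: x ^ y, known) ^ secrets.randbits(1)
print("ciphertext:", ct.hex(), "| adversary wins:", game.finalize(guess))
```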

The authors also extend the Real-or-Random (ROR) security definition to the multi-instance setting, calling it RORX. In the RORX security game, the adversary has access to an oracle (that has some fixed bits bi) that takes a message M1 and an index i. The oracle then generates a random message M0 of the same length as M1 and encrypts Mbi under the key associated to instance i. In the single-instance setting, the adversary has to decide whether he is dealing with an oracle with fixed bit b = 1 (encrypting real messages) or b = 0 (encrypting random messages). In the RORX security game, the adversary must guess the XOR of the bits bi. Finally, they define the Universal Key-Unrecoverability (UKU) security notion, in which the adversary gets access to an encryption oracle and must recover the key. It should be noted that all these definitions include corruptions of instances, which means the adversary also has access to a corruption oracle. When given an index i, the corruption oracle returns the key associated with instance i and if appropriate the bit bi as well.

Having defined all these security notions and games, the authors show that LORX tightly reduces to RORX, whereas the reverse incurs a loss in tightness of 2m. Similarly, LORX tightly reduces to UKU, whereas the reduction from RORX to UKU loses a factor 2m as well. Similarly to the single-instance setting, UKU does not imply LORX, because the encryption scheme can ignore the key and just output messages in the clear, thus being UKU secure (as encryptions contain no information on the key) but not LORX secure. They also show that LORX implies AND, under mild conditions that are "usually met". According to the authors, the loss of tightness in the reductions for RORX shows that LORX is the right security definition for their purposes.

The final result of their paper is about the PKCS#5 standard on password-based encryption. Using a very technical proof with lots of different games, they show that the advantage of an adversary trying to break the LORX security of the password-based encryption scheme is bounded from above by m times the advantage of an adversary against the underlying encryption scheme, plus 2 times the advantage of an adversary trying to guess the passwords, plus 2 times the advantage of an adversary trying to break the KDF. Thus, the conclusion is that if you use a reasonably secure encryption scheme and KDF (e.g. KD1 from PKCS#5), then it all comes down to how hard it is to guess the passwords of your users.
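In symbols, the shape of the bound described above is roughly the following (my own paraphrase of the prose; the paper's actual statement carries concrete resource parameters and constants):

```latex
\[
\mathbf{Adv}^{\mathrm{lorx}}_{\mathrm{PBE}}(A)
\;\le\;
m \cdot \mathbf{Adv}^{\mathrm{enc}}_{\mathrm{SE}}(B)
\;+\;
2 \cdot \mathbf{Adv}^{\mathrm{guess}}_{\mathcal{P}}(C)
\;+\;
2 \cdot \mathbf{Adv}^{\mathrm{kdf}}_{\mathrm{KD}}(D)
\]
```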

There are some applications where it is really necessary to break all instances. Think of a group of people using a secret sharing scheme, who have their secrets stored in their email accounts, each protected by a password-based encryption scheme. If the threshold of the secret sharing is equal to the number of players, then an adversary would really need to break all instances of the password-based encryption to be able to retrieve the secret. However, for some applications you might want a less strict definition of security, because having all but one of your instances broken does not seem very secure. It would be interesting to see whether it is possible to create a definition that requires some fraction of the instances to remain secure, and whether this leads to any meaningful results.

Tuesday, November 27, 2012

Wrapping up Vampires - VAM 2 Workshop in Graz

The current "Workshop on Physical Attacks" organized by the Vampire Lab (VAM 2) of the Ecrypt II project and the Graz University of Technology serves to wrap up the work of the VAM 2 group at the end of the Ecrypt II project. To this end, the workshop has two different types of talks:
  • Talks by industry representatives who, in the majority, outlined open research problems and
  • talks by VAM 2 researchers giving brief overviews of some of the work done within the VAM 2 research group.
This will be followed by a poster session intended to encourage further collaborative work in the spirit of Ecrypt II.

The day was started by Stefan Mangard (Infineon Technologies, Germany) giving an impressive argument that the "system" perspective for secure hardware modules requires further work. While a lot of research has been done on securing (as well as attacking) cryptographic implementations, these always require services from the system that they are part of. For example, keys have to be stored somewhere, randomness has to be generated by the system, and confidentiality and integrity of the keys and the randomness have to be provided within the system. The second big problem he mentioned is the lack of a more systematic approach towards leakage-resilient embedded systems. Prompted by a question from the audience, he also briefly outlined a final open problem, namely the integration of security modules into bigger embedded systems such as modern multi- and many-core System-on-Chips (SoCs), and how to use them as a sort of "trust anchor" to extend trust onto other parts of the SoC.

Somewhat related to the second point, Christophe Giraud (Oberthur Technologies) spoke on the difficulty of protecting a system against side-channel attacks without creating new leakages in a different leakage model, and of the difficulty of keeping track of the different leakage models that apply to different parts of a system. Also related, Tony Boswell (Siventure) addressed challenges for the Common Criteria framework, by which the security of systems (and especially security tokens such as smart cards) is tested and certified.

In the second line of talks, Lejla Batina and Carolyn Whitnall first presented some work done within the VAM 2 group that introduced Mutual Information Analysis to the toolbox used for side-channel analysis and made an important contribution to improving our understanding of statistical side-channel distinguishers. Finally, Jörn-Marc Schmidt and Mike Tunstall presented VAM 2 work that improved the state of the art in fault injection attacks and countermeasures against them.

Monday, November 12, 2012

Post-Quantum Cryptography and Quantum Algorithms


Last week, the Lorentz Center hosted a workshop on Post-Quantum Cryptography and Quantum Algorithms, which attracted both cryptographers and quantum algorithms people. The aim of this workshop was to "look into alternative cryptosystems which also withstand attacks using quantum computers" and to encourage cryptographers and quantum algorithms researchers to work together on the two overlapping fields.

When cryptographers consider 'attacks using quantum computers', they mostly think of Shor's algorithm, which allows us to efficiently factor large numbers and solve the discrete logarithm problem on a quantum computer. Thus, Shor's algorithm gives us an exponential speedup over the best known classical algorithms. Cryptosystems that rely on the hardness of such problems, such as RSA and Elliptic Curve Cryptography, will be much less secure (read: broken) once we have a quantum computer. Because these systems are used widely in practice, we would like to find some alternatives that, as far as we know, are still secure against attacks using a quantum computer.

The alternatives discussed at the workshop were based on problems from the fields of lattices, error-correcting codes and multivariate quadratic equations. For each of these fields there was an introductory tutorial and one or two invited talks. Additionally, there were two tutorials and one invited talk on quantum algorithms. The tutorials on quantum algorithms were very interesting, because quantum algorithms are often no more than black boxes to cryptographers and these talks explained some of the 'magic' behind them.

The first quantum algorithms tutorial was given by Stephen Jordan. He gave a high-level overview about the kind of quantum algorithms that provide super-polynomial speed-ups, such as Shor's algorithm. These algorithms solve some variant of the Hidden Subgroup Problem, for various groups. He also described several groups for which we do not yet know how to solve the HSP. For example, if we could solve the HSP over the symmetric group, we would be able to solve the Graph Isomorphism problem using a quantum computer. Finally, he mentioned that we might be able to solve certain lattice problems if we were able to construct certain quantum states corresponding to a Gaussian or similar distribution over lattices. Constructing these states efficiently is an open problem.

The other tutorial on quantum algorithms was given by Frederic Magniez. It focused on applications of Grover's algorithm, which allows us to search an unsorted database in time proportional to the square root of the size of the database. When used to find collisions in specific functions, this can sometimes even be improved to a cube root. Thus, this kind of quantum algorithm gives polynomial speedups. For symmetric cryptography, the existence of Grover's algorithm usually means that we need to double the length of the key if we want a comparable security level against a quantum computer. However, it is not always trivial to apply Grover's algorithm to other search problems. This led to some interesting open problems, e.g., whether Grover could be used to speed up other classical algorithms in cryptanalysis.
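The key-length doubling rule of thumb mentioned above is just square-root arithmetic on the search space; a tiny sketch of the arithmetic (my own illustration, ignoring the considerable constant factors of running Grover on real hardware):

```python
def grover_queries_exponent(key_bits: int) -> int:
    # Grover needs on the order of sqrt(2^k) = 2^(k/2) cipher evaluations,
    # versus roughly 2^k trials for classical exhaustive key search.
    return key_bits // 2

for k in (80, 128, 256):
    print(f"{k}-bit key: ~2^{k} classical trials, "
          f"~2^{grover_queries_exponent(k)} Grover iterations")
# A 128-bit key offers only ~64-bit security against Grover, which is why
# the usual advice is to double symmetric key lengths (e.g. AES-128 -> AES-256).
```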

Besides these talks, there was some room in the program for people to contribute their own talks. The remainder of the time was spent discussing open problems and drinking coffee. The participants were split into three groups: one for lattices, one for error-correcting codes and one for multivariate quadratic equations. Each group had a few people from the area of quantum algorithms as well. For those interested, there is a wiki page with slides of the main talks, a list of open problems and a summary of the topics that were discussed by the three groups.

Wednesday, November 7, 2012

Study group: On Secure Multi-party Computation in Black-Box Groups


Yesterday's study group, driven by Ashish and Jake-1, was on realizing multiparty computation over groups. Traditionally, MPC has been carried out over (finite) fields with some extensions to rings. It is a natural question to ask whether there exist constructions using different algebraic structures, interesting not only from a theoretical point of view, but also because group oriented work may lead to new techniques.

When one is dealing with groups rather than with other algebraic structures equipped with two operations, the first problem that arises is how to evaluate boolean circuits. Group-based circuits have essentially one type of gate (or two, if one counts gates with built-in constants). Working with non-abelian groups is of particular interest due to a result (section 4), reformulated here (theorem 1), that allows one to translate any boolean circuit (expressed as AND and NOT gates) into a circuit over S5.
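The reason a non-abelian group such as S5 suffices is that group multiplication alone can emulate an AND gate. One standard way to see this (my own illustration of the idea, not the construction used in the paper) is via commutators: encode a bit x as α^x and a bit y as β^y for a suitable pair of 5-cycles α and β; the commutator α^x β^y α^{-x} β^{-y} is the identity unless x = y = 1, in which case it is again a 5-cycle. The Python sketch below checks this for one suitable choice of α and β.

```python
# Permutations of {0,...,4} represented as tuples: p[i] is the image of i.
ALPHA = (1, 2, 3, 4, 0)      # the 5-cycle (0 1 2 3 4)
BETA = (2, 0, 4, 1, 3)       # the 5-cycle (0 2 4 3 1)
IDENTITY = (0, 1, 2, 3, 4)

def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def encode(bit, generator):
    """Encode a bit b as generator^b (the identity for b = 0)."""
    return generator if bit else IDENTITY

def and_gate(x, y):
    """Commutator [ALPHA^x, BETA^y], computed with group operations only."""
    a, b = encode(x, ALPHA), encode(y, BETA)
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

for x in (0, 1):
    for y in (0, 1):
        result = and_gate(x, y)
        print(x, y, "->", "identity" if result == IDENTITY else "5-cycle")
# Only (1,1) yields a non-identity element, mirroring x AND y. A NOT gate
# needs only a fixed constant and a group inverse, and together with a
# re-normalising conjugation this is the core of why any boolean circuit
# can be pushed into a circuit over S5.
```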

In the paper, they provide the means to construct passively secure protocols among n parties against unbounded adversaries, for the evaluation of the n-product function ƒG(x1, ..., xn) = x1 · x2 ··· xn, where xi is privately held by party i, and then argue that the construction can be extended to the most general case. Also, for flexibility, they assume only black-box access to the group G (i.e. parties can only perform the group operation, group inversion, or random sampling from the group).

They first prove that MPC in non-abelian groups requires an honest majority, by showing that ƒG is not t-private if t ≥ n/2, where t is the threshold on the number of corrupted parties and n ≥ 4. They do so using a characterization of 1-private 2-input protocols for the evaluation of a function ƒ via a certain type of matrix (non-forbidden matrices), and showing by contradiction that if an n/2-private protocol for ƒG exists, then this characterization does not hold.

Then they construct a protocol for the n-product ƒG out of a 2-product subprotocol. This construction is relatively easy, using a binary tree with the leaf nodes set to the parties' inputs and the root node to the output of ƒG; the computation of each intermediate node is done by running the subprotocol.

Given ℓ > t, the only tool needed for the construction of the protocols is an ℓ-out-of-ℓ sharing of a given secret s. The shares are defined to be the vector (s1, ..., sℓ) where, for a fixed j ∈ [ℓ], the shares si for i ≠ j are chosen at random and sj is set so that s = s1 · s2 ··· sℓ (the product taken in order, as the group may be non-abelian). It can be proven that the distribution of the share vector is independent of j.
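A minimal sketch of this sharing, written over the permutation group S5 so that the non-abelian ordering actually matters (my own toy code with invented helper names; the paper works with abstract black-box group access):

```python
import random
from functools import reduce

N = 5  # work in the symmetric group S5; elements are tuples of length 5
IDENTITY = tuple(range(N))

def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(N))

def inverse(p):
    inv = [0] * N
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def random_element():
    perm = list(range(N))
    random.shuffle(perm)
    return tuple(perm)

def ordered_product(elems):
    """Group product of the elements, taken left to right."""
    return reduce(compose, elems, IDENTITY)

def share(secret, ell, j):
    """ell-out-of-ell sharing: all shares except position j are random;
    share j is chosen so the ordered product of the shares equals the secret."""
    shares = [random_element() for _ in range(ell)]
    left, right = ordered_product(shares[:j]), ordered_product(shares[j + 1:])
    shares[j] = compose(compose(inverse(left), secret), inverse(right))
    return shares

def reconstruct(shares):
    return ordered_product(shares)

secret = random_element()
shares = share(secret, ell=4, j=2)
assert reconstruct(shares) == secret
# Any ell-1 of the shares on their own are uniformly distributed, and the
# distribution does not depend on which index j plays the "fix-up" role.
print("reconstructed correctly from", len(shares), "multiplicative shares")
```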

Interestingly, t-private n-party 2-product protocols can be constructed from the combinatorial problem of finding t-reliable n-colourings of graphs. As the subprotocol employed at each node of the tree takes 2ℓ input values and outputs ℓ values, the graph also has this structure.

The protocol must provide correctness and privacy. For the former, recall that the group is non-abelian. They use a strengthened notion of planar directed acyclic graphs, which they call admissible PDAGs; essentially it is planarity that ensures the order of the factors in the multiplication does not get commuted. Privacy is achieved by the t-reliable n-colouring property of the admissible PDAG. Each node of the graph is labelled with a colour in [n] designating the party that evaluates that particular node; the property states that for any adversarial set of indices I ⊆ [n] there exist indices jx and jy such that a path can be found from the jx (respectively jy) input node to the jx (respectively jy) output node containing no colours in I. Intuitively, there are always intermediate values that escape the adversary's control.

As far as efficiency is concerned, if we are willing to keep the threshold t optimal (that is, t < n/2), then the sharing parameter is ℓ = c(n,t), which leads to exponential communication complexity; lowering the threshold by a graph-dependent constant, however, gives ℓ linear in n.

Saturday, November 3, 2012

Study Group: Real world attacks on bad randomness.

This week’s study group was held on Tuesday 30th October. Gaven and Stefan presented two papers introducing real world attacks on bad randomness. The talks were based on the following papers:
  • Ron was wrong, Whit is right by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter.
  • Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices by Nadia Heninger, Zakir Durumeric, Eric Wustrow and J. Alex Halderman.

Security often depends on keys being chosen uniformly at random, so randomness is critical for crypto. This area has been studied at length, and one would expect that modern operating systems and server software generate random numbers securely. Both papers test that proposition empirically by examining the public keys in use on the Internet.
In Ps and Qs the authors conduct a wide survey of two of the most important cryptographic protocols, TLS and SSH, collecting 5.8 million unique TLS certificates from 12.8 million hosts and 6.2 million unique SSH host keys from 10.2 million hosts. Their results give "a macroscopic perspective of the universe of keys". They then analyze this dataset for evidence of inadequate randomness and find that at least 5.57% of TLS hosts and 9.60% of SSH hosts use the same keys as other hosts in an apparently vulnerable manner.
They also compute the private keys for 64,000 (0.50%) of the TLS hosts and 108,000 (1.06%) of the SSH hosts from their data by exploiting known weaknesses of RSA and DSA when used with insufficient randomness. For RSA, distinct moduli that share exactly one prime factor result in public keys that appear distinct, but whose private keys can be computed by calculating the greatest common divisor (GCD). Their algorithm computes the GCDs of all pairs of 11 million distinct public RSA moduli in less than 2 hours, and they were able to obtain the private keys for 0.50% of TLS hosts and 0.03% of SSH hosts. For DSA, if a key is used to sign two different messages using the same ephemeral key, an attacker can compute the signer's long-term private key. Their SSH data contained DSA signatures with repeated ephemeral keys, which allowed them to compute the private keys for 1.6% of SSH DSA hosts.
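To illustrate the RSA part of the attack, here is a tiny Python sketch (my own toy-sized numbers, nothing like the batch-GCD computation the authors run at scale) of how two moduli that share a prime give up their factorizations with a single gcd:

```python
from math import gcd

# Two "devices" that seeded their RNG identically generate the same first
# prime p, but different second primes q1 and q2 (toy-sized primes here).
p, q1, q2 = 10007, 10009, 10037
n1, n2 = p * q1, p * q2          # the two public RSA moduli

shared = gcd(n1, n2)             # one gcd reveals the common factor
print("recovered p  =", shared)
print("recovered q1 =", n1 // shared)
print("recovered q2 =", n2 // shared)
# From p and q the private exponent follows immediately, so both keys
# are completely broken even though n1 and n2 look unrelated.
```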
An investigation of hundreds of vulnerable hosts showed that nearly all served information identifying them as headless or embedded systems. Such devices usually generate keys automatically on first boot and may have limited entropy sources compared to traditional personal computers. The investigation also showed that examining clusters of hosts that shared a key or factor links them to a manufacturer or device model. The authors conclude that the problems are caused by specific defective implementations that generate keys without having collected sufficient entropy, and they identify vulnerable hardware and software from 54 manufacturers.
In the same paper the authors explore the root causes of these vulnerabilities by investigating popular open-source software components from the population of vulnerable devices. Based on the devices examined, it is clear that no single implementation was solely responsible for the vulnerabilities. All the examined software packages relied on /dev/urandom to produce cryptographic keys, but Linux's random number generator can exhibit a boot-time entropy hole which causes urandom to produce deterministic output under conditions that can occur in headless and embedded devices. Experiments with OpenSSL and Dropbear SSH showed that repeated output from the system random number generator can lead not only to repeated long-term keys but also to factorable RSA keys and repeated DSA ephemeral keys.
The authors also provide some recommendations for developers of operating systems, cryptographic libraries and applications, for device manufacturers, certificate authorities, end users and the security and cryptography communities. Their conclusion states that the margin of safety is slimmer than we might prefer but there is no reason to doubt the security of most keys generated interactively by users on traditional personal computers.
To OS developers it is suggested that the operating system should maintain a secure PRNG that refuses to return data until it has been seeded with a minimum amount of randomness, and that is continually seeded with fresh entropy collected during operation. The operating system should also provide an interface to indicate how much entropy it has mixed into its PRNG, so that applications can gauge whether the state is sufficiently secure for their needs. Cryptographic libraries should default to using the most secure mechanisms available, so library developers should use RSA and DSA defensively; and if the OS provides a mechanism to flag that the entropy it has collected is insufficient for cryptographic use, they should not proceed with key generation.
Application developers are advised to generate keys on first use, not during installation or first boot. Developers should not work around low-entropy states by ignoring or disabling such warnings (with extremely dangerous results), and they should frequently add entropy from the operating system.
Device makers should generate fresh keys on the device after sufficient randomness has been collected, or force users to upload their own keys; they should avoid factory-default keys or certificates. Makers of embedded (or headless) devices should make sure that effective entropy sources are present and that entropy is collected in advance of cryptographic operations. Devices should be initialized with truly random seeds at the factory. Device makers should also test cryptographic randomness on the completed device and use hardware random number generators when possible.
End users should manually regenerate any cryptographic keys and certificates that were included in the device or automatically generated at first boot. Security and crypto researchers should be aware that the fact that all major OSs now provide cryptographic RNGs might lead experts to assume that any remaining entropy problems are the fault of developers taking foolish shortcuts. The paper suggests otherwise: entropy-related vulnerabilities can result from complex interactions between hardware, operating systems, applications and cryptographic primitives.
The second paper brings equally alarming results. Although numerous studies have been conducted to assess the state of the current public key infrastructure, several problems have been identified related to the way certificates are used. A worrisome finding is that among the 4.7 million distinct 1024-bit RSA moduli originally collected by the authors, 12720 have a single large prime factor in common; in a set of 11.4 million RSA moduli, 26965 are vulnerable, including ten 2048-bit ones. When exploited, this could undermine the expectation of security that the public key infrastructure is intended to achieve. For this experiment the authors collected as many openly accessible public keys as possible from the web, resulting in a set of 11.7 million public keys containing 6.4 million distinct RSA moduli. The remainder is almost evenly split between ElGamal keys and DSA keys, plus a single ECDSA key. All keys were checked for consistency, using compositeness, primality and (sub)group membership tests.
The assumption that the entropy of the output distribution (of standardized RSA key generation) is always almost maximal, and that the outputs are hard to factor if factoring in general is hard, sometimes fails. The findings reveal that duplication of keys is more frequent in the collected set than in sets described in other work. More seriously, the authors discovered 12934 different RSA moduli that offer no security.
They conclude that systems such as RSA, which require multiple secrets during key setup, are more affected by the difficulty of generating proper random values than systems that require a single secret. For RSA, multiple secrets can be avoided, for instance by using moduli pq for a k-bit prime p chosen such that q = (2^(2k-1) + p − (2^(2k-1) mod p))/p is prime.
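As a sketch of that last suggestion (my own reading of the formula: q is essentially the smallest integer with pq ≥ 2^(2k-1), so the modulus always has 2k bits and only p has to be drawn from the random number generator), here is an illustration rather than the authors' reference construction; sympy is assumed for the primality routines:

```python
from sympy import isprime, randprime

def single_secret_rsa_modulus(k: int):
    """Pick random k-bit primes p until the derived q is prime.

    q = (2^(2k-1) + p - (2^(2k-1) mod p)) / p, i.e. the smallest integer
    with p*q >= 2^(2k-1), so n = p*q always has exactly 2k bits and only
    one secret prime needs fresh randomness.
    """
    bound = 2 ** (2 * k - 1)
    while True:
        p = randprime(2 ** (k - 1), 2 ** k)   # random k-bit prime
        q = (bound + p - bound % p) // p
        if isprime(q):
            return p, q, p * q

p, q, n = single_secret_rsa_modulus(64)   # toy size; real keys need k >= 1024
assert n.bit_length() == 2 * 64
print(f"p = {p}\nq = {q}\nn has {n.bit_length()} bits")
```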