Thursday, December 23, 2010

INTRUST 2010

I attended INTRUST 2010 in Beijing, China. The conference focused on all aspects of trusted systems, such as trusted modules, platforms, services and applications. This year the program was split into two parts: the first two days consisted of technical sessions, where the audience could select talks from main topics such as Hardware Security, Security Analysis, Software Protection and Mobile Trusted Systems, while the third day was reserved for a workshop of invited talks by keynote speakers such as Andrew Yao, Moti Yung, Ahmad-Reza Sadeghi and Liqun Chen.

The best paper award was sponsored by Singapore Management University (US$1000) and was given to "Seamless Integration of Trusted Computing into Standard Cryptographic Frameworks" by Andreas Reiter, Georg Neubauer, Michael Kapfenberger, Johannes Winter and Kurt Dietrich from IAIK, Graz. This was the first presentation of the conference, and in this talk the authors presented a novel design for the Trusted Software Stack (TSS), the interface between applications and the Trusted Platform Module (TPM). The proposed TSS can easily be integrated into existing security frameworks and reuses the application programming interfaces (APIs) of well-known frameworks. The presented stack has nice features such as dynamically loading components via the network, adding, updating or replacing functionality even after deployment, and supporting multiple TPMs. The last feature is especially nice for mobile devices and systems with many virtual TPMs. The proof of concept was done with the BouncyCastle security framework, but according to the authors further enhancements might include integration into the Java Cryptographic Extension and CryptoAPI.

Just after the first session I gave a talk on "Hardware Trojans for Inducing or Amplifying Side-Channel Leakage of Cryptographic Software", where I presented a novel concept of micro-architectural Trojan side channels.

Definitely, the last day of the conference was the best one. Many keynote speakers were invited to give talks: Andrew Yao - "Some Perspectives on Complexity-Based Cryptography"; Moti Yung - "Embedding Cryptography to General IT Engineering System/Project"; Liqun Chen - "Security and Privacy in Vehicular Systems - Case Study: Trusted Anonymous Announcement"; Ahmad-Reza Sadeghi - "Trusted and Secure Computing in Practice: Where are We Now!" and many others.

During this session a very interesting talk was given by DongHoon Lee from Korea University on Security Issues in the Smart Grid. He highlighted security problems in smart grids, such as the privacy of smart meter users and attacks on smart meters. According to Wikipedia, a smart meter is an advanced meter that records consumption in intervals of an hour or less and communicates that information at least daily, via some communications network, back to the utility for monitoring and billing purposes. This consumption information might reveal (if it is not protected) e.g. a user's lifestyle pattern, which might be considered a privacy violation. The speaker also presented a list of possible attacks on smart meters. The last part of the talk was dedicated to the security requirements of smart meters and the need for security standards in this field.

The last talk of the conference was given by Claire Vishik from Intel. She briefly introduced "Directions in Hardware Security R&D in Government, Academic and Industrial Research". The audience got to hear about state-of-the-art security issues from the industry, academia and government perspectives, the advantages and disadvantages of each, and their points of interest in terms of security research. The last part of the talk was focused on Intel's work and vision - goals and future.

Wednesday, December 15, 2010

Highlights of ACSAC 2010

I just returned from this year's Annual Computer Security Applications Conference (ACSAC) in Austin, Texas. Unfortunately, due to the adverse weather conditions and resulting flight delays, I missed out on the first day of the technical program.

ACSAC is dedicated to work on practical security applications and this year its program encompassed such diverse topics as detection of misbehaving network entities (e.g. spammers, malware-infected machines/botnets), malware analysis/mitigation, practical authentication, hardware security, secure OS (components), security in mobile/wireless devices, and social engineering.

The best paper award went to Ang Cui and Salvatore Stolfo from Columbia University, for their assessment of the vulnerability of network-enabled embedded devices. Basically, they performed a scan of the whole Internet in order to find embedded devices (typically gateways or routers) whose network management interface was accessible via the manufacturer's default credentials. They identified a large number of vulnerable devices, especially gateways in ISP networks, which typically act as the network point of entry for private home (NAT) networks. All these devices are potentially there for the taking by uploading malicious firmware, which could make them part of a botnet. This is especially alarming, as a botnet composed of such gateways could be much more potent than traditional botnets built from infected PCs. In terms of workload, such a gateway is just as good as any infected machine residing behind it - in the end what counts is the rate at which packets (e.g. SPAM) can be sent out, and that is limited by the gateway's capabilities anyway. Moreover, most gateways are always switched on, giving rise to much higher availability of the bots. Also, infections on PCs usually get detected at some point through abnormal behavior like unusually high processor or network loads or weird disk access patterns. On the other hand, it seems much harder for a "normal" user - who never even bothered to change the default password - to start suspecting the gateway to be infected with malware.

Another interesting talk, given by Jonathan Valamehr, involved the potential use of 3D chip integration for attaching cryptographic coprocessors to regular embedded processors. 3D chip integration works by stacking several chip dies on top of each other and connecting them by intra-chip vias or similarly fine connections; it allows for much denser integration than would otherwise be possible. The contribution of the teams from UC Santa Barbara and UC San Diego is the development of special connector circuits which allow another die with matching landing points to be added optionally. Their example was an embedded processor which works normally as a single die and can be stacked with a cryptographic coprocessor die for improved processing of security workloads.

In my talk - which immediately followed the 3D talk - I presented a detailed concept and FPGA prototype for a side-channel resistant embedded processor, which has been developed in the context of the Power-Trust project at my former university (Graz University of Technology, Austria). An ASIC prototype is currently under evaluation in a joint effort of Graz and Bristol.

Thomas Longstaff gave a very interesting invited talk on the lack of the scientific method in many works in the applied security community. He argued that certain pressures and realities in the present academic world lead researchers to adapt experiments to fit the hypotheses instead of the (proper) other way around. He also noted that many papers lack a sufficient description of their methodology - which he considered one of the most important parts of any scientific paper. Finally, he pleaded for more care in the choice of program committee members and argued that PCs should contain a minimum proportion of members with formal scientific training as opposed to practitioners in certain fields.

The very last talk of the conference, by Trajce Dimkov from the University of Twente, discussed methodologies for evaluating the threat of social engineering attacks. The presentation compared the practical application of two different methodologies for penetration testing, i.e. basically they sent people into an organization to try to breach security via social engineering. In this case the goal of the penetration tester was to "steal" the laptop of a specific employee. The talk was accompanied by video footage taken during some of the testers' "coups". The two methodologies differed in the number of people involved and in who knows what. In the end there are several conflicting requirements, and one must choose the methodology which will create the least disturbance to the tested organization (e.g. disrupting work or shattering existing trust relations).

Friday, October 22, 2010

Cryptoforma Workshop

Nigel, Essam, Ming-Feng and I are currently attending the Cryptoforma Workshop at the University of Surrey.

The first talk of the day was given by Sriram Srinivasan, of the University of Surrey, on the topic of key sizes (in the context of electronic voting). Typically, when cryptographers give a cryptographic proof they discuss security in terms of the security parameter. However, what actually is the security parameter? For example, a key size of 128 bits alone is meaningless. When considering an asymmetric scheme, a 128-bit modulus really would provide no security at all. If we consider primitives, we find that to achieve the equivalent of 128-bit AES security in the asymmetric setting (for, say, RSA) we need keys that are 3248 bits long. When we want schemes based on elliptic curves or discrete logarithms, the key length is different again. There are various resources available to help choose the appropriate key size; the Ecrypt II Report on Algorithms and Key Sizes being one such document.
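To make the comparison concrete, here is a toy lookup in Python (our own illustration, not from the talk: the RSA figure is the one quoted above from the Ecrypt II report, and the elliptic-curve figure uses the common rule of thumb of twice the security level):

    # Roughly equivalent key sizes at the 128-bit security level.
    # RSA figure as quoted above; EC figure via the usual 2x rule of thumb.
    EQUIVALENT_KEY_BITS = {
        "AES (symmetric)": 128,
        "RSA modulus": 3248,
        "elliptic curve": 2 * 128,
    }

    for primitive, bits in EQUIVALENT_KEY_BITS.items():
        print(f"{primitive}: ~{bits} bits")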

If we now extend our thinking to a protocol built from multiple primitives, we have several questions to answer in order to make choices about the security. How long do we need the protocol to provide security for? If our protocol is a voting scheme, some people may want it to provide security for at least their lifetime; others may want it to remain secure forever. Furthermore, when proving the security of schemes, the proof often introduces a security loss, which may also need to be considered. Sometimes this loss means one needs to increase the key size of the underlying primitives (to retain equivalent security), but sometimes it may not be (practically) relevant. So, the question now would be: what does this all mean, and how on earth do we choose key sizes? And this, unfortunately, is a very open question indeed, which I will not try to answer here, and which was the source of much discussion at the Cryptoforma Workshop.

A second talk today, by Graham Steel on "Attacking and fixing PKCS#11 security tokens", discussed various ways of attacking cryptographic tokens in order to discover the cryptographic keys stored on the device. To do this they have built "Tookan", an automated tool. This tool reverse engineers a device by querying it through its API to construct information on how the device functions (and the commands available, etc.). This data can then be run through a model checker which checks for possible vulnerabilities in the token. For example, say there are two keys on the device, k1 and k2. You ask the device to encrypt (wrap) k1 under k2, receiving back a ciphertext; you then ask the device to decrypt the same ciphertext and receive k1 as the plaintext. These techniques were applied to several real devices: of the 17 commercially available tokens tested, 9 were vulnerable to attacks and 8 had severely restricted functionality. Interesting, and slightly worrying!
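For intuition, here is a minimal sketch of that wrap/decrypt flaw against a mock token (this is our own toy model in Python with a trivial XOR "cipher", not Tookan and not a real PKCS#11 API):

    import os

    class ToyToken:
        """Mock token whose API offers both key wrapping and data decryption."""
        def __init__(self):
            # Two "sensitive" keys that should never leave the token in clear.
            self.keys = {"k1": os.urandom(16), "k2": os.urandom(16)}

        def _xor(self, key_name, data):
            return bytes(a ^ b for a, b in zip(self.keys[key_name], data))

        def wrap(self, wrapping_key, wrapped_key):
            # Export wrapped_key encrypted under wrapping_key.
            return self._xor(wrapping_key, self.keys[wrapped_key])

        def decrypt(self, key_name, ciphertext):
            # Ordinary data decryption under the same key - the flaw.
            return self._xor(key_name, ciphertext)

    token = ToyToken()
    c = token.wrap("k2", "k1")          # wrap k1 under k2
    leaked = token.decrypt("k2", c)     # decrypt the wrapping as normal data
    print(leaked == token.keys["k1"])   # True: k1 recovered in the clear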

Friday, October 8, 2010

Still some more CCS'2010

At CCS'10, Wilko Henecka, Stefan Kögl, Ahmad-Reza Sadeghi, Thomas Schneider and Immo Wehrenberg presented in their paper "TASTY: Tool for Automating Secure Two-partY computations" a new tool to implement a variety of (relatively efficient) secure two-party computation protocols and to compare the results of these protocols. It is based on previous work of the Fairplay project, which uses Yao circuits, and on subsequent improvements thereof. In TASTY, the authors implemented further optimization techniques which mainly focus on shifting as much computation as possible into the setup phase. Additionally, the authors implement the additively homomorphic scheme by Paillier and also allow for a hybrid mix of both approaches.
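To make the Paillier side concrete, here is a minimal sketch of its additive homomorphism with toy parameters (our own illustration, nothing to do with the TASTY code base, and the primes are laughably small):

    import math, random

    p, q = 293, 433                      # toy primes; real moduli are huge
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                            # the standard generator choice
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def enc(m):
        r = random.randrange(1, n)       # fresh randomness per encryption
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    c1, c2 = enc(20), enc(22)
    print(dec(c1 * c2 % n2))             # 42: multiplying ciphertexts adds plaintexts
    print(dec(pow(c1, 5, n2)))           # 100: exponentiation multiplies by a constant

Note that multiplying two *encrypted* values, unlike the operations above, requires interaction in such protocols - which is exactly where the comparison with Yao circuits becomes interesting.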

The interesting part is that the efficiency of both approaches can now be compared. And even though additively homomorphic encryption would be expected to be more efficient for multiplications, this does not always seem to hold: the authors show that for some scenarios, Yao circuits are more efficient at multiplications than the Paillier scheme. This is quite surprising.

Another interesting scheme that was proposed at CCS'10 is that of "Worry-Free Encryption: Functional Encryption with Public Keys" by Hakan Seyalioglu and Amit Sahai. It is based on Yao circuits as well and provides a solution for the following problem:
  • A has data d which may only be read by people who have security clearance x1, but A does not want to reveal that d is only accessible to people with security level x1.
  • B wants to get d from A without knowing which security level is required for d and without having to reveal his/her own security level xb.
Basically, this can be achieved if you have a function f( ) which produces
d=f(x1)
and different (random) output for all other security levels. This function has to look random so that it reveals neither d nor x1, and it may only be evaluated once. Of course, this is just what can be achieved with Yao circuits. (This was really just a very fundamental explanation; please read the paper for an accurate description. For example, a central authority is required as well.)
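Here is a toy of that functionality in Python (our own construction, purely to illustrate the input/output behaviour; the real scheme garbles such a function so that it can be evaluated only once and reveals neither the key, x1 nor d):

    import hmac, hashlib, os

    k = os.urandom(16)                 # PRF key (would be hidden in the garbling)
    d = b"launch codes...."            # the 16-byte document (toy)
    x1 = b"TOP-SECRET"                 # the clearance level that may read d

    def prf(x):
        return hmac.new(k, x, hashlib.sha256).digest()[:16]

    # Publish f as "PRF(x) xor pad"; the pad is chosen so that f(x1) = d.
    pad = bytes(a ^ b for a, b in zip(prf(x1), d))

    def f(x):
        return bytes(a ^ b for a, b in zip(prf(x), pad))

    print(f(x1) == d)                  # True: the right clearance opens d
    print(f(b"CONFIDENTIAL").hex())    # pseudorandom junk for anything else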

Thursday, October 7, 2010

Turing at CCS'10

Today we had two papers at CCS'10 introducing new, Turing-complete languages. The second one is "Return-Oriented Programming Without Returns" by Stephen Checkoway, Lucas Davi, Alexandra Dmitrienko, Ahmad-Reza Sadeghi, Hovav Shacham and Marcel Winandy; it extends the concept of return-oriented programming into "jump-oriented programming", which uses jump instructions instead of return instructions to build gadgets. This has severe security implications, as the authors showed using the examples of x86 processors and Android-based devices running on ARM chips. But the first paper, "Platform-Independent Programs" by Sang Kil Cha, Brian Pak, David Brumley and Richard J. Lipton, was even more impressive.

However, before I continue to write about the paper, I should give a short explanation of Turing-complete languages and why they are important: In 1936, a few years before the first programmable computer was built, the mathematician Alan Turing invented the concept of a Turing machine to prove that a universal (or programmable) machine can be built that can solve any computable problem. Although no universal Turing machine will ever be built, since it requires infinite memory, this is basically the most important result of computer science. If you take a set of instructions which is sufficient to simulate such a Turing machine (with the exception of the infinite memory), this set is called "Turing complete". Nowadays this is not a big deal in itself, since all modern processors have Turing-complete instruction sets. And indeed, in both papers the Turing completeness of the languages is only used to prove that the languages do not lack fundamental concepts.
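Since the concept is easier to grasp with an example, here is a minimal Turing machine simulator in Python (our own toy, with a finite tape that grows on demand, so strictly only an approximation of the real thing); the example machine increments a binary number:

    def run(transitions, tape, state="right"):
        tape, head = list(tape), 0
        while state != "halt":
            if head < 0:                 # grow the tape with blanks as needed
                tape.insert(0, "_"); head = 0
            if head >= len(tape):
                tape.append("_")
            state, tape[head], move = transitions[(state, tape[head])]
            head += 1 if move == "R" else -1
        return "".join(tape).strip("_")

    increment = {
        ("right", "0"): ("right", "0", "R"),  # scan right over the number
        ("right", "1"): ("right", "1", "R"),
        ("right", "_"): ("carry", "_", "L"),  # hit the end, walk back with a carry
        ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, keep carrying
        ("carry", "0"): ("halt",  "1", "L"),  # 0 + carry = 1, done
        ("carry", "_"): ("halt",  "1", "L"),  # ran off the left end: prepend a 1
    }

    print(run(increment, "1011"))             # prints 1100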

So let me now explain what's so special about the language introduced in "Platform-Independent Programs". Commonly the instruction sets of two different processors overlap to a certain extent but are not equal; a program for x86 processors will never run on an ARM processor and vice versa. So the authors of the paper started looking at the overlap of the instruction sets to find jump instructions that have the following effect:
  • If executed on platform a, jump to address x.
  • If executed on platform b, jump to address y.
Now they can place instructions for platform a at position x and instructions for platform b at position y. Out of such short code sequences the authors build gadgets, and all the gadgets together form a Turing-complete language. (The instructions at x and y do not have to have the same effect; on platform a the program might be a harmless desktop gimmick, on platform b it might be malware.) The really amazing thing about this is that (to my knowledge) this is the first language that is at least semi-platform-independent but does not require a virtual machine, such as Java's, or an interpreter to achieve platform independence. (It still needs enough overlap in the instruction sets.)
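To illustrate the trick without real machine code, here is a toy simulation in Python with two entirely made-up instruction sets (the opcode values, the "ISAs" and the payloads are all our own invention): the same header byte is a jump on machine A but a no-op on machine B, steering each machine to its own payload.

    def execute(code, isa):
        pc, out = 0, []
        while pc < len(code):
            op = code[pc]
            if op == 0x01 and isa == "A":     # ISA A: skip the next N bytes
                pc += 2 + code[pc + 1]
            elif op == 0x01 and isa == "B":   # ISA B: two-byte no-op
                pc += 2
            elif op == 0x02:                  # both ISAs: output next byte
                out.append(chr(code[pc + 1])); pc += 2
            elif op == 0xFF:                  # both ISAs: halt
                break
            else:                             # anything else: one-byte no-op
                pc += 1
        return "".join(out)

    payload_b = bytes([0x02, ord("B"), 0xFF])           # only reached on B
    payload_a = bytes([0x02, ord("A"), 0xFF])           # only reached on A
    program = bytes([0x01, len(payload_b)]) + payload_b + payload_a

    print(execute(program, "A"))   # "A": the header jumps over payload_b
    print(execute(program, "B"))   # "B": the header is a no-op, B halts early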

Wednesday, October 6, 2010

Notes from CCS'10 - II

Today there were two papers hammering on almost the same issues. One was "Dismantling SecureMemory, CryptoMemory and CryptoRF" by Garcia, van Rossum, Verdult and Wichers Schreur, in which the authors analyzed the security of three Atmel chip families that claimed to guarantee security based on a proprietary stream cipher that was kept secret. The other paper was "Attacking and Fixing PKCS#11 Security Tokens" by Bortolozzo, Centenaro, Focardi and Steel, which examines the security of 17 tamper-resistant tokens that claimed to implement PKCS#11. In both papers the authors managed to find severe weaknesses, such that many of these devices now have to be considered broken.

In the case of the Atmel chips the biggest issue was that the manufacturer chose a security-by-obscurity approach (possibly to reduce production costs). However, the authors didn't even have to use expensive semiconductor tools to extract the cipher description from the chips; all they needed to do was disassemble a software library and analyze the code for the cipher specification. It took them just three days, which means that even less knowledgeable people would have been able to do it within a reasonable amount of time. Once the algorithm was known, it was quite easy for the authors to break the devices with a combination of side-channel attacks and some cryptanalysis.

In the case of the PKCS#11 tokens the authors constructed an automated tool to analyze the tokens and to exploit a range of vulnerabilities where possible. The result was quite devastating: either the tokens did not offer full PKCS#11 functionality or they had at least one easily exploitable vulnerability. The worst thing was that some of the vulnerabilities would not exist if the standard had been implemented properly.

So both papers address two major engineering issues for secure devices, both resulting from a lack of security awareness:
  • Security by obscurity does not work! If you have a secure algorithm, you can publish it. If it's not secure, it will leak.
  • A security standard is almost worthless if it does not come with automated standard compliance tests, so that customers can verify that the products they want to buy actually are as secure as the standard promises. (There is no way to guarantee security against unknown vulnerabilities.)
The latter point comes with a couple of benefits:
  • The reputation of the standard will not suffer from bad implementations. Bad implementations just ruin the implementer's reputation.
  • The implementation cost of a standard is reduced, since implementation errors are easier to detect. (If you have to implement something, deadlines usually do not allow you to develop your own testing tool for a standard of some hundred pages full of technical details.)
  • Standard compliant devices will be more trustworthy.
Two additional, somewhat speculative, advantages are that automated standard compliance testing will aid independent security testers, and I believe it will help to discover ambiguities in the standard before the standard is adopted, since the automated compliance test has to be implemented by that time.

Tuesday, October 5, 2010

Notes from CCS'10

I would like to point out a paper that was presented at CCS'10 today. I liked "Survivable Key Compromise in Software Update Systems" by Samuel, Mathewson, Cappos and Dingledine because it is an excellent example of how careful engineering can ease (if not solve) the pains of a worst-case scenario. Key compromise, especially of root keys, is the worst-case scenario of any Public Key Infrastructure (PKI), and the PKIs used to authenticate software updates are among those with the highest impact on our entire IT infrastructure. Every piece of software contains some vulnerabilities, and without authenticated software updates it is only a matter of time until attackers exploit those vulnerabilities for their purposes. Even worse, a malicious software update could be used to insert new vulnerabilities into secure systems. If you cannot trust the PKI and the keys used to authenticate a software update, how can you trust the update? The root keys used to establish a PKI are very well protected and rarely used, but unfortunately that doesn't mean they are always secure; it just means they are less likely to be compromised. It still happens that you cannot trust them anymore, as e.g. https://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012.html shows. Replacing the compromised keys with new, trustworthy ones is a delicate task, and regaining the lost trust is difficult. Unfortunately, the PKIs currently used for software updates do not prepare for this case much; therefore the Tor project decided to develop a new PKI system out of existing concepts that is better prepared to cope with this worst case, and the result was presented in the paper. I do hope that bigger software projects, such as the major Linux distributions or Mozilla, pick up on this and continue improving the update infrastructure.

Monday, October 4, 2010

Workshop on Privacy in the Electronic Society 2010

Today there were two interesting talks at the Workshop on Privacy in the Electronic Society (which is co-located with CCS 2010) that relate to the work we're doing at Bristol. The first one was on "Deniable Cloud Storage: Sharing Files via Public-key Deniability", a paper written by Paolo Gasti, Giuseppe Ateniese and Marina Blanton. In their paper they look at the scenario where multiple people collaborate on some files which are stored in a computing cloud and one of these persons is forced to hand over all of his/her keys to the attacker. If such a scenario has to be expected (e.g. because you have to travel to a country where the authorities cannot be trusted), they show that you can prepare for it: based on Paillier's homomorphic scheme and RSA-OAEP they construct a deniable encryption scheme in which the attacker will not be able to tell whether you are revealing the true information or a manufactured false document. (Unless he can exploit a side channel, which in this case might be done using a lie detector.)
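The classic intuition behind deniability can be shown with a one-time pad (this toy is ours and is far simpler and weaker than the paper's Paillier/RSA-OAEP construction): for a given ciphertext you can always craft an alternative key that "decrypts" it to an innocuous file.

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    m_real = b"the true document...."         # what you actually stored
    k_real = os.urandom(len(m_real))
    c = xor(m_real, k_real)                   # ciphertext handed to the cloud

    m_fake = b"holiday photos, 2010!"         # same length, totally innocent
    k_fake = xor(c, m_fake)                   # key you surrender under duress

    print(xor(c, k_real))                     # b'the true document....'
    print(xor(c, k_fake))                     # b'holiday photos, 2010!'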

The other interesting talk was on "Investigating Privacy-Aware Distributed Query Evaluation", a paper written by Nicholas Farnan, Adam Lee and Ting Yu, in which they describe their work on assuring privacy for SQL queries. The problem they are facing is that a query which combines data from multiple databases should not reveal more than necessary to any of the databases: each database should only see the information related directly to the data it is supposed to deliver. In particular, the databases should not learn the entire query; they should only learn the part of the query that has to be answered by them. If you have been reading the previous entries of this blog, that might remind you of the i-Hop homomorphic scheme presented by Gentry et al. at Crypto 2010, and indeed I believe that the i-Hop scheme could be used to solve some of the open issues that Farnan, Lee and Yu listed in their talk today.
However, that is not the approach they took. Instead they started looking at current implementations of SQL: SQL just describes what you want to learn with the query, but it does not say how the answer has to be computed. One technique for doing so is mutant query trees, and these are what Farnan et al. looked at. In their research they ask how to split these trees into queries solvable by each database without revealing more than necessary, and how to »homomorphise« them (this is not the term they used, but I guess it is the best generic description of what they are doing). So instead of designing a secure system that can be used to answer database queries (with a potentially large overhead), they took a very efficient, highly engineered database system and try to retrofit security into it.
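A tiny, made-up example of the splitting goal (the table names and the mediator are our own, not from the paper): the full query must never be shipped as-is, because each database would then learn the other's predicate.

    # The full query joins data held by two mutually untrusting databases.
    full_query = ("SELECT p.name FROM hospital.patients p "
                  "JOIN bank.accounts a ON p.ssn = a.ssn "
                  "WHERE a.balance > 1000000")

    # A privacy-aware planner instead ships only the fragment each side
    # must answer; the join is completed by the querier (or a mediator).
    to_bank     = "SELECT ssn FROM accounts WHERE balance > 1000000"
    to_hospital = "SELECT name, ssn FROM patients"   # filtered after the fact

    print(to_bank); print(to_hospital)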
It would be interesting to see whether both approaches can meet in the middle to solve the security issues that Farnan et al. still have, without suffering too much of an efficiency backlash from using the i-Hop scheme (or similar schemes).

Wednesday, August 25, 2010

2nd SHA-3 candidate conference - day 2

We got presentations from two software benchmarking efforts. One is eBASH by the ECRYPT-2 network of excellence. Its performance figures currently come predominantly from various desktop processors, but with the help of the XBX project, more embedded platforms should be covered soon. The other effort is sphlib, which focuses on portable C implementations for embedded platforms.

After some presentations on specialized software implementations and another round of security analysis presentations, each of the 14 teams had the chance to report on the status of their submission. The arguments on some security observations were carried on in humorous form through the mentioning of so-called "banana attacks" by several presenters (a term that originated in the discussion on the NIST SHA-3 mailing list). Everybody presented arguments why their submission should make it into the final round, highlighting available security proofs and analyses, but also potential advantages for implementation in software and hardware.

As there has been no dramatic break of the security of any of the 14 candidates, NIST is certainly facing a tough choice for the final-round competitors. One point William Burr from NIST made in his closing statement was that the choice will probably aim to select a diverse set of candidates (and not just algorithms based on the same design principle). As we have roughly three categories of candidates (Add-Rotate-XOR based, AES based, sponge based), one might speculate that we will maybe see six finalists with two from each category (rather than just four or five finalists). In any case, in a couple of weeks we will know for sure.

Tuesday, August 24, 2010

2nd SHA-3 candidate conference - day 1

We just finished the first day's program. While the first half focused on observations of properties of some candidates (or components thereof), the second half was about hardware implementation (which is really my cup of tea). We had five presentations from groups which studied the hardware efficiency of all 14 round-2 candidates (two for ASIC implementations, three for FPGAs). The general consensus was that this kind of hardware performance benchmarking should be carried out with implementation parameters (e.g. design goals, interfacing, target technology) agreed as much as possible. Patrick Schaumont and I each presented one of the ASIC studies, while Brian Baldwin, Shin'ichiro Matsuo, and Kris Gaj talked about their FPGA studies. Not all results were a perfect match, but some candidates kept appearing in the top lists more often than others. The ATHENa framework presented by Kris is especially interesting, as it allows for the benchmarking of designs from various authors and is freely available to anyone. It would be great to get something similar for ASICs, but there the availability of design tools and technology information is more complicated than for FPGAs.

The day concluded with a discussion of the next steps of the SHA-3 competition. William Burr from NIST shed a little light on the upcoming selection process for the finalists in the last quarter of this year, and also on what the submitters of the finalists should look out for (e.g. any big tweaks might invalidate previous cryptanalysis and be hurtful to a candidate's chances). Akashi Satoh pointed out that hardware evaluation should focus more on the flexibility of candidates rather than on raw speed or low area alone. Adi Shamir felt that the discussion at this point was already too much in favor of efficiency criteria and too little about security evaluation. Richard Schroeppel even pleaded that the SHA-3 competition should go on for an additional year in order to get more security analysis done on the candidates.

Tomorrow, the second day will focus on software implementations and another round of security analysis. The afternoon will be filled with the 14 teams giving updates for their respective candidates.

Sunday, August 22, 2010

CHES 2010 rump session

Short summary of the 2010 CHES rump session:

There will be a Journal of Cryptographic Engineering (JCEN), a new Springer journal, covering the research areas of the CHES Workshop; the first issue will be published in February 2011. Papers can be submitted from 15 September.

CHES will be held in Tokyo next year in the last week of September. For 2012, the CHES organisation is still up for grabs; however, the conference needs to be held somewhere in Europe.

From an entertainment perspective, the points went to Jean-Jacques Quisquater: he elaborated on how Paul, the World Cup octopus, could serve the crypto community as an oracle. After that, Stefan tried to educate the audience on how NOT to compare hardware designs.

Sergei Skorobogatov is currently developing new technology for effective side-channel analysis that allows a full break of AES on a real device in less than one second. Results can soon be expected on his personal homepage.

Apparently, the second DPA contest is still accepting submissions and participants will be honoured with a free SASEBO-GII board.

As every year, there was an announcement that a new SASEBO board will be available soon; in addition, they also offer EM probes for side-channel analysis. Finally, Josh Jaffe from Cryptography Research explained that side-channel attacks that analyse one billion traces are now possible.

Saturday, August 21, 2010

Revisiting Garbled Circuits

What happens if a garbled circuit performs a computation on masked data, creating masked output, and the computation leaks through side channels? Theory says it does not matter, and it has strong points to bolster its claim. On Friday at CHES 2010, Thomas Schneider showed that garbled circuits made another step towards being practical in his talk "Garbled Circuits for Leakage-Resilience: Hardware Implementation and Evaluation of One-Time Programs". (The paper was written by K. Järvinen, V. Kolesnikov, A.-R. Sadeghi and T. Schneider.)

Their first achievement is that they implemented two architectures to evaluate garbled circuits on an FPGA and thus have the first timing data for garbled circuits in hardware. They obtained their performance figures for a garbled circuit computing AES. It is still rather slow compared to conventionally protected implementations, but it is substantially faster than the currently best software implementation.

But they also manage to make the output verifiable. To do this, they basically wrap an n-out-of-n secret sharing scheme around the garbled circuit. The shares traverse the circuit, and any manipulation of the computation destroys at least one share. In the end, the result is only unmasked if the secret can be reconstructed. The secret sharing, just like the masking and unmasking of the data, has to happen inside trusted hardware. This trusted hardware may be a tamper-proof device, but one can also imagine a scenario where the trusted hardware is your own computer while you outsource the heavy workload computation to rented, untrusted hardware.
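Here is a quick sketch of the n-out-of-n XOR sharing idea (our own minimal version, ignoring everything about the garbling itself): all n shares are needed to reconstruct, so destroying any single share keeps the output masked.

    import functools, os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def share(secret, n):
        parts = [os.urandom(len(secret)) for _ in range(n - 1)]
        return parts + [functools.reduce(xor, parts, secret)]

    def reconstruct(parts):
        return functools.reduce(xor, parts)

    s = b"unmask-me"
    shares = share(s, 4)
    print(reconstruct(shares) == s)     # True: all four shares present
    shares[2] = os.urandom(len(s))      # tampering destroys one share...
    print(reconstruct(shares) == s)     # False: the result stays masked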

Friday, August 20, 2010

Lessons from two attacks on allegedly tamper-proof chips.

Today at CHES 2010, two interesting talks shed new light on the myth of tamper-resistant hardware: Di-Battista showed in his talk "When Failure Analysis Meets Side-Channel Attacks" (paper written by J. Di-Battista, J.-C. Courrège, B. Rouzeyre, L. Torres and P. Perdu) how they used the photon emissions of transistors as a side channel in a DPA attack to recover keys stored inside an Actel A3PE FPGA, which claims to be highly tamper-proof. (The photon emission basically occurs in all chips, no matter whether they're ASICs or FPGAs.)

In the data sheet Actel claims: "The flash cells are located beneath seven metal layers, and many device design and layout techniques have been used to make invasive attacks extremely difficult." The light emissions do not come from the flash cells but from switching transistors; this, however, makes little difference: while it is indeed difficult to probe through all those layers of metal interconnect (and probably no photon would ever get through them), it is much easier to flip the chip upside down and look at its bottom, where it has no interconnect. In the first attack they only use 'natural' photon emissions from the transistors; in the second attack they use laser stimulation to increase the photon emissions. From the observed photons they are able to construct traces for DPA attacks.

One might argue that the attacks of Di-Battista et al. are extremely expensive compared to normal DPA attacks: the equipment needed for their weaker attack costs 500 K$, while the equipment for the stronger attack costs 2 M$ but can be reused and should be available in any semiconductor lab. However, the price argument does not apply to a weaker class of attacks that S. Skorobogatov introduced in an earlier talk today, "Flash Memory 'Bumping' Attacks". Skorobogatov attacks three chips, including an Actel A3P250, with relatively low-cost laboratory hardware and a little bit of cryptanalysis. (That's the 'bumping'.) He is able to break the security claims of the chips regarding configuration extraction; the configuration of an FPGA is equal to the designer's intellectual property (IP) and therefore a major target whenever FPGA-based devices are reverse engineered. However, in the Actel case he only attacked one of multiple options available to secure the IP - to break the other options, the attack of Di-Battista et al. is needed.

In summary, both talks showed the dangers of relying on FPGA-based security claims to protect IP and, at least in the case of the light-based DPA attack, any cryptographic operation. It may be assumed that non-Actel FPGAs and many ASICs suffer from similar problems. Very impressive results.

Lightweight cryptography for IC-printing

Yesterday I attended the first day of the CHES 2010 conference. The first session brought us new results in lightweight cryptography. A quite interesting talk was given by Alex Poschmann, who proposed a new block cipher for IC-printing called PRINTcipher.

Roughly speaking, IC-printing technology allows printing integrated circuits directly on a range of materials, and thus the cost of production is very low. In contrast to conventional silicon manufacturing, IC-printing allows changing the printed circuits in each run at little cost. Yet another fancy new technology that should be checked for cryptographic applications, especially those aimed at the RFID world.

Alex took well-known strategies for designing lightweight cryptography and merged them into a new design for IC-printing applications. So what is the recipe for developing a new lightweight block cipher? Let's take an SP-network with S-boxes that are as small as possible. Since we care predominantly about area, our cipher may be as slow as a turtle and iterate its internal parts as long as needed. In the IC world storage is costly, so we have to get rid of unnecessary storage elements. The first idea is to minimize the key schedule. Yes, we can do this, as long as we aim at RFIDs, which are likely to have fixed keys. This means that instead of using XOR gates for XORing key and data bits, we can directly use inverters and wires. Quite nice, we saved area, but what about reverse engineering and side-channel attacks? This fixed-key feature is where IC-printing matters most: since we are able to change and print ICs at no additional cost, we can produce a bunch of RFID tags with different fixed keys very easily.
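To see what "inverters and wires" means, here is a toy round in Python (our own sketch, not PRINTcipher; the S-box is PRESENT's, used here just for flavour): the key addition is behaviourally an XOR, but with a key fixed at print time it would be realised in hardware as plain wires for the 0 bits and inverters for the 1 bits, with no key register at all.

    SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
            0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT's 4-bit S-box

    FIXED_KEY = 0xB2          # baked into each printed batch, never stored

    def round_fn(state):
        key_added = state ^ FIXED_KEY                  # wires + inverters
        return (SBOX[key_added >> 4] << 4) | SBOX[key_added & 0xF]

    state = 0x3A
    for _ in range(8):        # small and slow: iterate the tiny round often
        state = round_fn(state)
    print(hex(state))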

All in all: a new technology, a new design, and a slightly different approach than in traditional RFID tags. The real-world applications? We will see.

Wednesday, August 18, 2010

Incoercibility

Today I attended a rather interesting talk at Crypto 2010 by Dominique Unruh on the paper Universally Composable Incoercibility by D. Unruh and J. Müller-Quade.

Incoercibility is typically studied in the setting of voting systems. For example, say you vote in an election where you are issued with a receipt saying who you voted for. You want to vote for Charlie, but an adversary forces you to vote for Alice, demanding you show the receipt to them afterwards as proof. In this environment you can be coerced, unless of course you can produce a fake voting receipt. Although voting systems are the most studied setting, there are other situations where incoercibility may be a requirement; it is relevant whenever a protocol leaves traces of its actions. The question of course is: how do we analyse this, and are there any models that allow us to do so? Unruh and Müller-Quade introduce a general model for this under the universal composability framework.

Briefly, the universal composability framework has two worlds. In the real world, there exists an adversary who interacts with parties participating in the real protocol. The ideal world has a simulator who simulates the actions of the real-world adversary, interacting with parties who use an ideal functionality, as opposed to the real protocol. If for every adversary there exists a simulator such that no environment can distinguish which world it interacts with, then the two worlds are considered equivalent.

How could you model this incoercibility requirement? One way might be to have the following two worlds:
(1) The adversary coerces a party, and this coerced party simply forwards messages backwards and forwards between the protocol and the adversary.
(2) The adversary interacts with an un-coercible party who does what he wants, not what the adversary wants (while interacting with the protocol).
If these two worlds were indistinguishable, then one might expect us to have the desired result. However, this fails. For example, say you have an election with only one voter; then the vote tally will reveal which world you are in.
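This counterexample is simple enough to write down (a toy of our own making): with a single voter, the published tally alone separates the two worlds.

    def tally(votes):
        return {candidate: votes.count(candidate) for candidate in set(votes)}

    adversary_choice, own_choice = "Alice", "Charlie"

    world_1 = tally([adversary_choice])  # coerced party forwards the adversary's vote
    world_2 = tally([own_choice])        # un-coercible party votes as she likes

    print(world_1, world_2)              # {'Alice': 1} {'Charlie': 1}
    print(world_1 != world_2)            # True: the environment can distinguish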

So the best result that can be achieved is that "every lie possible in the ideal setting should be possible in the protocol setting". Without going into the details, Unruh and Müller-Quade achieve this in the following way. In the ideal world a "Deceiver" party is introduced. Typically an adversary corrupts a party and completely controls that party, but with the addition of the Deceiver, if the Deceiver controls a corrupted party, then this party behaves as demanded by the Deceiver and not the adversary (unbeknownst to the adversary). This method then leads to the useful result that any attack possible in the real world is possible in the ideal world, and any lie possible in the ideal world is possible in the real world.

Attacking Mobile Communications

Today our favourite paper at Crypto'10 (we being Stephen Williams and I) is "A Practical-Time Related-Key Attack on the KASUMI Cryptosystem in GSM and 3G Telephony" by O. Dunkelman, N. Keller and A. Shamir. It isn't an easy task to describe differential attacks in a blog entry in the first place, and their new sandwich attack uses differentials in a very artful manner, so we're not going to elaborate on the details of the attack, except to note that they spread the previously known boomerang attacks over additional rounds, somewhat similar to super-S-box attacks. (Except of course, there are no S-boxes involved, in either boomerang or sandwich attacks.)

However we would like to point out a couple of things:
  • The authors do not claim to have broken A5/3. They explicitly point out that due to the operation mode of KASUMI in A5/3 they haven't found a way to apply their attack on the entire protocol. We think that this is an important reminder of how much depends on the mode of operation.
  • The attack also reminds us how important it is to take proper care of differentials in the design of ciphers: KASUMI is a simplified version of MISTY and while KASUMI has suffered a serious blow today, MISTY stands untainted.
  • After listening to the talk today, our confidence in A5/3 hasn't grown. We don't know whether ETSI/SAGE (who designed KASUMI) wanted it to be less secure than MISTY but we consider it to be precariously close to being a failure equal to A5/1 and A5/2. NIST has generated a lot more trust in AES by its open design process than ETSI/SAGE managed to generate for A5/3.
  • We greatly appreciate that the development of A5/4 is already under way; at the rump session, ETSI/SAGE called for participation in the evaluation of the proposed new algorithms, which are based on ZUC. The relevant links are http://zucalg.forumotion.net and the GSM webpage.

Tuesday, August 17, 2010

How to rerandomize Yao Circuits

For today's coverage of CRYPTO 2010 we (Stephen Williams and I) picked the "i-Hop Homomorphic Encryption and Rerandomizable Yao Circuits" paper by C. Gentry, S. Halevi and V. Vaikuntanathan. Not only did Vaikuntanathan give a really good talk, but the ideas of the paper are also quite appealing. Suppose you have the following setting:
  • Alice encrypts a message m into a ciphertext c0 and sends c0 to server 1,
  • who has to evaluate a function c1<-F1(c0) such that Dec(c1)=f1(m). He then passes c1 on to server 2
  • who evaluates a similar function c2<-F2(c1) such that Dec(c2)=f2(f1(m)).
  • It goes on like this until the last (xth) server sends cx to Dora, who knows the secret key and computes Dec(cx)=fx(...f2(f1(m))...).
Sounds cool, doesn't it?
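As a toy picture of such a chain (our own, and much weaker than the paper: GHV handle arbitrary functions via garbled circuits, while this ElGamal sketch only supports hops of the form fi(x) = x^ai), each "server" evaluates its function under encryption and re-randomizes before passing the ciphertext on:

    import random

    p, g = 2039, 7                    # toy group parameters; real ones are huge
    x = random.randrange(2, p - 1)    # Dora's secret key
    h = pow(g, x, p)                  # public key

    def enc(m):
        r = random.randrange(2, p - 1)
        return (pow(g, r, p), m * pow(h, r, p) % p)

    def hop(c, a):                    # evaluate f(m) = m^a and re-randomize
        c1, c2 = c
        r = random.randrange(2, p - 1)
        return (pow(c1, a, p) * pow(g, r, p) % p,
                pow(c2, a, p) * pow(h, r, p) % p)

    def dec(c):
        c1, c2 = c
        return c2 * pow(c1, p - 1 - x, p) % p

    c = enc(3)                        # Alice encrypts m = 3
    c = hop(c, 2)                     # server 1 applies f1(m) = m^2
    c = hop(c, 3)                     # server 2 applies f2(m) = m^3
    print(dec(c) == pow(3, 6, p))     # True: Dora obtains 3^6 mod p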

Well, there's a nasty little problem which got solved by Gentry, Halevi and Vaikuntanathan (GHV) in their paper: how do you keep the functions fi that were applied to the ciphertexts secret if Alice, some of the servers and Dora work together to reveal the function that has been used? GHV use garbled Yao circuits to implement the functions evaluated by the servers. While these circuits look completely random to Dora, who gets all of them, they don't look as random to the servers who created them. So if Dora gives the entire garbled circuit to the collaborating servers, they are able to de-garble the circuit, and the functions of the honest servers that should be protected can be reconstructed by the dishonest ones. However, if each server can re-randomize the circuits passed on to him, the previous servers cannot de-randomize their part of the entire circuit, and so each honest server's function remains protected. GHV show in their paper how to achieve this re-randomization using the DDH-based encryption scheme of Boneh, Halevi, Hamburg and Ostrovsky (proposed at Crypto'08). That rocks!

The claim that this could actually be of practical relevance is believable. Yao circuits were once thought to be slow, but recent work by researchers here at Bristol has shown them to be practical. Furthermore, a homomorphic encryption scheme is considered "compact" if the ciphertext size remains constant; using the re-randomizable Yao circuits, GHV create a "non-compact" homomorphic encryption scheme whose ciphertext size grows with every function applied to it. However, and this is the interesting part, the growth of the ciphertext is polynomial and may thus still be practical. As Vaikuntanathan pointed out in his talk, people should look at how many "hops" (or functions) are going to be applied with the homomorphic encryption scheme; if this number is low, their "non-compact" version may be suitable.

So, why does anyone actually care about this? One possible application (here we slightly generalize Vaikuntanathan's example of e-mail filtering) is searching a database for keywords where the keywords are supplied by multiple parties and neither the database nor the keywords of the other parties should be revealed to any party.

Tuesday, August 10, 2010

P vs NP

So here is the first post, and it is a pretty interesting topic to start...

In the last couple of days rumours have spread around the internet about a mathematician from HP Labs, Vinay Deolalikar, having proposed a proof of the conjecture that P does not equal NP. This proof is currently being reviewed by the experts, and if it is found to be correct then this will be the next of the Clay Institute's Millennium Prize problems to be solved.

So why is this interesting at all, and why is it interesting to us cryptographers?

Firstly, as a purely financial justification: the Clay Mathematics Institute is offering a one million dollar prize for a proof of each of seven fundamental problems in mathematics. These seven problems were published in 2000 and consist of the following questions:
  1. P vs NP
  2. The Hodge Conjecture
  3. The Poincaré Conjecture
  4. The Riemann Hypothesis
  5. Yang-Mills existence and mass gap
  6. Navier-Stokes existence and smoothness
  7. The Birch and Swinnerton-Dyer conjecture
Of these problems, the Poincaré conjecture has already been solved (by Perelman in 2003), and of the other six I only really know about three: P vs NP (to which I shall return below), the Riemann Hypothesis (which is at the heart of problems about the distribution of prime numbers) and the Birch and Swinnerton-Dyer conjecture (which is about the values of a function attached to an elliptic curve).

However, the P vs NP question is probably the most philosophically interesting. It basically asks whether it is easier to check a mathematical proof than it is to invent it. Intuitively this is borne out by our experience in lectures; students find it easier to follow and understand a lecture than to come up with the material from scratch on their own.

But it is not just a question about proofs. The two expressions P and NP refer to sets of problems. The set P is the set of problems for which we can decide whether a given instance is true or false efficiently, i.e. in polynomial time (hence the P). These correspond, using the above analogy, to the problems for which we can come up with a proof efficiently. The set NP, on the other hand, is the set of problems for which, given the problem and an additional piece of information called a witness (or proof), we can check the proof efficiently. Why this is called NP is technical and will not be gone into here.
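A concrete way to see the asymmetry is the subset-sum problem, which lies in NP (this snippet is our own illustration, not from the post): checking a proposed witness takes polynomial time, while the obvious way of finding one tries exponentially many subsets.

    from itertools import combinations

    def verify(numbers, target, witness):
        # Polynomial-time check (ignoring multiset subtleties in this toy).
        return all(w in numbers for w in witness) and sum(witness) == target

    def search(numbers, target):
        # Brute force: exponential in len(numbers) in the worst case.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    w = search(nums, 9)                # slow in general; returns e.g. [4, 5]
    print(w, verify(nums, 9, w))       # the check itself is fast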

The P vs NP question is whether the set P is equal to the set NP. It is clear that P is contained in NP, since we can take the witness to be empty for all problems in P. But if P does equal NP, then all sorts of things follow: a lot of interesting problems for which we have no efficient algorithms could be solved efficiently. More importantly to us cryptographers, we would all be out of a job, since all known practical encryption schemes are related to problems in NP.

Vinay Deolalikar claims to have a proof that P does not equal NP. If his proof is found to be correct then this is great news for science in general, and cryptographers can breathe a sigh of relief for a few more years. However, even if the proof is found to contain some errors, it is likely to stimulate new ideas.

We are really living in interesting times.

Introduction

This is the start of a blog for the Cryptography Research group at the University of Bristol. In this blog we aim to discuss matters of interest to our research. For example this could be items such as:
  • Security news and our thoughts about it
  • Reports on conferences we have been to
  • Bringing attention to things we have seen and read
  • Discussion of matters cryptographic
Anyway, maybe this will die a death and never be used, maybe it will be used a lot. But you don't know until you try.....