After a brief session on the foundations of hardness (which I will not blog about due to the hardness of foundations), the Monday afternoon was dedicated to Cryptanalysis. There were seven talks on the topic (thankfully spread over two sessions with a break in between).
The first talk was, surprisingly, against cryptanalysis. Marc Stevens introduced a new paradigm which he had coined "counter-cryptanalysis". For this work he received the conference's Best Young Researcher Paper Award, and Marc did not disappoint with his presentation.
Previously, Marc (and coauthors) had won the Best Paper Award at Crypto'09 for creating a rogue CA certificate. For that work, Marc and his coauthors had crafted two messages that collided under MD5. One message was relatively harmless and was submitted to a higher-level certificate authority to be signed; the other was far from harmless and contained the credentials of a new, rogue certificate authority. Due to the collision, the signature on the harmless message was equally valid for the harmful one. At the time, this attack clearly demonstrated a serious problem with a hash-then-sign infrastructure built on hash functions that are by now broken.
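To see why a collision transfers a signature, consider a minimal sketch of hash-then-sign. Everything below is illustrative (a fake "signature" over the MD5 digest, not a real CA's RSA operation); the point is only that verification never sees anything beyond the digest.

```python
import hashlib

SECRET = b"ca-private-key"  # stand-in for the CA's signing key

def sign(message: bytes) -> bytes:
    digest = hashlib.md5(message).digest()
    # A real CA would apply RSA etc. to the digest; we fake it here.
    return hashlib.sha256(SECRET + digest).digest()

def verify(message: bytes, signature: bytes) -> bool:
    digest = hashlib.md5(message).digest()
    return hashlib.sha256(SECRET + digest).digest() == signature

harmless = b"please certify example.org"
sig = sign(harmless)
assert verify(harmless, sig)

# If an attacker crafts `harmful` with md5(harmful) == md5(harmless),
# then verify(harmful, sig) also returns True: the signature covers the
# digest only, so it is equally valid for both colliding messages.
```

This digest-only dependence is exactly what the rogue-CA attack exploited.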
As Marc explained, migration from MD5 (and SHA-1) to SHA-2 or beyond is in practice not as easy as it might sound from a theoretical point of view. Any individual signer could migrate, but as long as a few do not, verifiers will still have to accept the old, insecure schemes. An attacker would then simply seek out the signers still using outdated crypto, and the problem persists. A verifier could stop accepting weak signatures, but again there is a practical problem: there are simply too many valid signatures around to invalidate them all.
The solution proposed by Marc is to keep accepting old, valid signatures, yet to refuse both to sign forged messages and to accept signatures on them. While filtering out forged messages sounds like a great suggestion, the obvious question is how to determine whether a message is honest or forged. This is where Marc's expertise and the new paradigm of counter-cryptanalysis come into play. As Marc defined it, the goal is "to detect cryptanalytic attacks at the cryptographic level; the design itself should not change."
To understand why this is at all possible, some background on modern collision attacks is required. It turns out that dedicated attacks are highly specialized and tend to introduce certain unavoidable anomalies. For MD5 there is a small number of differential steps used by all known attacks, regardless of the full differential trail or path. Moreover, it is hard to see how these differential steps could be avoided without blowing up the attack complexity considerably. If a message pair was crafted to lead to a collision, then, given just one of the messages, guessing the right differential at the right place allows efficient recovery of the companion message and the full differential path. For a normal, honest message, on the other hand, no collision will be found this way and the message passes the test. Thus a hash function can be augmented with this additional test, which returns a single bit indicating whether the message led to a collision. The extra check can be used online by both the signer and the verifier to avoid, or at least limit, the damage of forgeries.
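The one-bit test can be modelled with a toy example. Here `weak_hash` has a planted flaw: for messages carrying a marker, the final byte is ignored. The marker stands in for the many special conditions a real crafted MD5 collision pair must satisfy; none of this is Stevens' actual algorithm, just the shape of the check: apply each known differential, rehash, and flag if a collision appears.

```python
import hashlib

def weak_hash(m: bytes) -> bytes:
    # Planted flaw: for "crafted" messages only, the last byte is irrelevant.
    if m.startswith(b"EVIL"):
        m = m[:-1] + b"\x00"
    return hashlib.sha256(m).digest()

def apply_delta(m: bytes) -> bytes:
    # The known "differential": flip the lowest bit of the last byte.
    return m[:-1] + bytes([m[-1] ^ 0x01])

def collision_detected(m: bytes) -> bool:
    """The one-bit side output: does a known differential yield a collision?"""
    return weak_hash(m) == weak_hash(apply_delta(m))

assert collision_detected(b"EVIL payload")       # crafted message: flagged
assert not collision_detected(b"honest update")  # honest message: passes
```

The key property mirrored here is that the check is cheap, needs only one message of the pair, and honest messages sail through.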
The final part of the talk concerned offline uses of the new technique. Recently a new and highly sophisticated virus, Flame, was discovered. It specifically targeted the Middle East and had gone undetected for over five years. What makes it interesting from a cryptographic perspective is the way it operated. Somewhere in Microsoft Windows' chain of trust, there turned out to be a hash-then-sign service authenticating the Windows Update functionality. Flame (or its developers) had created an illegitimate sub-CA certificate using a collision attack on MD5. Although only one of the messages involved was known, using counter-cryptanalysis Marc could recover the colliding message (which must have been signed directly at some point) as well as the differential path. Surprisingly, the path was novel, indicating that some dedicated cryptanalytic effort had gone into creating the Flame virus.
The second talk of the session, and the last I will blog about, concerned attacks on locking systems. The presentation was given by Daehyun Strobel, though he represented a fairly large team of collaborators. Traditional, mechanical locks can be picked quite easily, and as a result electronic locks and access-control systems are becoming increasingly popular. One of the more popular models is the SimonsVoss 3060 G2, which consists of a locking cylinder and a transponder to open it. The system supports a large number of digital locking cylinders and transponders, so the right people can access the right rooms. The system is based on proprietary software.
The first step performed by the Bochum team was to open up some of the hardware. This revealed several components, and further investigation showed that the PIC16F886 microchip carried out all the cryptographically relevant operations. SimonsVoss had protected this chip by disabling straightforward code extraction. However, by adapting a known fault attack involving decapsulation of the chip and subsequent bathing in UV-C light, Daehyun and his colleagues managed to disable the standard protection mechanism. This allowed reconstruction of the authentication protocol, which turned out to be a multi-round challenge-based protocol using two proprietary functions.
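A generic multi-round challenge-response handshake has the following shape. HMAC-SHA256 stands in for SimonsVoss' two proprietary functions; the round structure, key handling and names are assumptions for illustration, not the protocol the team reverse-engineered.

```python
import hashlib
import hmac
import os

def respond(key: bytes, challenge: bytes) -> bytes:
    # Stand-in for the proprietary response function.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def authenticate(lock_key: bytes, transponder_key: bytes, rounds: int = 2) -> bool:
    for _ in range(rounds):
        challenge = os.urandom(16)                    # lock sends a fresh nonce
        answer = respond(transponder_key, challenge)  # transponder replies
        if not hmac.compare_digest(answer, respond(lock_key, challenge)):
            return False                              # wrong key: door stays shut
    return True

shared = os.urandom(32)
assert authenticate(shared, shared)          # matching keys: access granted
assert not authenticate(shared, bytes(32))   # wrong key: rejected
```

With a sound response function this design is fine; the attack described next worked because the actual proprietary functions were cryptographically weak, not because of the challenge-response structure itself.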
Both functions are based on a modified version of DES combined with a proprietary obscurity function. Closer analysis revealed several weaknesses that, when combined, led to an attack on a live lock taking a measly 0.65 seconds on a standard PC. SimonsVoss has since released a patch updating the challenge protocol. Neither Daehyun nor his coauthors had yet investigated this new challenge protocol or, indeed, the update mechanism itself.