Monday, September 30, 2013

Formal Verification of Security Properties in RTL Hardware Designs

Once more, the entire topic of embedded systems security is awfully under-represented at the otherwise excellent¹ Embedded Systems Week. Only the fairly small but highly interesting WESS Workshop addressed the topic properly; one of the most interesting papers presented at WESS is "Automatized High-Level Evaluation of Security Properties for RTL Hardware Designs" by Andrea Höller (who also gave the talk), Christopher Preschern, Christian Steger, Christian Kreiner, Armin Krieg, Holger Bock and Josef Haid.

Let me start by quickly explaining what "RTL Hardware Designs" are. "RTL" stands for "Register-Transfer Level" and basically defines the logic design phase within the circuit (or system) design process. At the register-transfer level, hardware description languages such as Verilog or VHDL are used to describe the logic circuits that operate on data flows between registers. That means the RTL phase comes after the functional description of a circuit/system has been developed and after the system components and their interfaces have been specified; at the RTL level, the system components get "implemented", and the finished RTL design is then translated into a netlist using a specific hardware library, placed and routed.

Formal Verification is in some sense the holy grail of security researchers and engineers: Ideally, we would like to have formal proofs and assurances all the way from the theoretic foundations down to the most obscure implementation detail of security-relevant products. In other words, we would like to have an exhaustive model of the product based on which we can check that all required security properties are provided by the product at all times and under all circumstances. This requires two steps:
  1. Extraction of an accurate model from the implementation.
  2. Model checking, i.e. verification whether the model maintains the specified security properties.
In one way this is nice: it can be automated to a large degree and thus removes the human error factor from the security equation. But exhaustive formal verification is extremely difficult at best and quite often absolutely infeasible, so it will never replace implementation-specific and experimental security verification. We can, however, use formal verification to cut down on development costs and time-to-market: the sooner security problems are identified during the design stage, the easier (and cheaper) they are to fix. With each fixed security problem, the chance that your prototype manages to pass (e.g. Common Criteria) security evaluation increases.

In some respects the Common Criteria security evaluation process already requires formal verification techniques to be employed. For example, each product needs to come with a model extracted from its functional specification and a proof that the model (and therefore the functional specification) indeed maintains all required security properties. But as far as formal verification of hardware design is concerned, this is all that is required by Common Criteria, leaving a large gap towards exhaustive formal verification of the finished product. This is where Höller et al.'s work comes into play. They show a possible extension to the Common Criteria verification process by extracting a verifiable model from the RTL specification and using conventional model checkers such as NuSMV (http://nusmv.fbk.eu/) together with the LTL (linear temporal logic) property specifications to verify whether the RTL implementation still meets the security requirements. It should be noted that the LTL property specifications are already required within the existing Common Criteria process and can be reused for Höller et al.'s additional verification step. In the second part of their paper Höller et al. show how RTL model checking can also be used to check whether an implementation is resistant against certain types of faults.
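To give a flavour of what such property specifications look like, here is a hypothetical LTL property (my own illustration, not one taken from the paper or from any Common Criteria document) of the kind one might check against an RTL model: whenever a secret key is loaded into the key register, the debug interface must be disabled, and the register must eventually be cleared again. In practice the property would be written in the model checker's own LTL syntax.

```latex
% Hypothetical LTL property over RTL signals (illustration only).
% G = "globally / at all times", F = "eventually".
\[
  \mathbf{G}\bigl(\mathit{key\_loaded} \rightarrow \neg\,\mathit{debug\_enabled}\bigr)
  \;\wedge\;
  \mathbf{G}\bigl(\mathit{key\_loaded} \rightarrow \mathbf{F}\,\mathit{key\_cleared}\bigr)
\]
```

A model checker such as NuSMV will either confirm that every execution of the RTL model satisfies the property or produce a concrete counterexample trace.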

My personal best paper award for WESS definitely goes to Höller et al.

---
¹) Disclaimer: The mainline conferences haven't actually started yet. My assessment of ESWeek's excellence is based on past experience.

Monday, September 16, 2013

Open Letter From UK Security Researchers

Dear Reader,

The first set of publications based on Edward Snowden’s files were concerned with surveillance of internet communication happening more indiscriminately and on a much larger scale than previously thought. The more recent publications, presenting the systematic undermining of cryptographic solutions and standards, are the cause of much more substantial worry. As some of the leading UK researchers in the field of cryptography and computer security we feel compelled to comment on the recent revelations. 

The UK and US governments recently dramatically increased the funding available to various agencies to help protect our countries against Cyber Attack. Such attacks are now commonplace on both corporations and individuals. We now all rely on cryptography to secure our mobile phones, credit cards, internet communications etc. and because of this we welcome the government’s prioritization of this area in an era of fiscal squeeze. As researchers in security we understand that the NSA and GCHQ are tasked with conducting operations for purposes of national security.

However, the documents released show that NSA and GCHQ worked to weaken international cryptographic standards, and to place "backdoors" into security products; such backdoors could of course be potentially exploited by others than the original creators.  One of the prime missions of the security services is to protect citizens and corporations from Cyber Attack. By weakening cryptographic standards, in as yet undisclosed ways, and by inserting weaknesses into products which we all rely on to secure critical infrastructure, we believe that the agencies have been acting against the interests of the public that they are meant to serve. We find it shocking that agencies of both the US and UK governments now stand accused of undermining the systems which protect us. By weakening all our security so that they can listen in to the communications of our enemies, they also weaken our security against our potential enemies. 

We call on the relevant parties to reveal what systems have been weakened so that they can be repaired, and to create a proper system of oversight with well-defined public rules that clearly forbid weakening the security of civilian systems and infrastructures. The statutory Intelligence and Security Committee of the House of Commons needs to investigate this issue as a matter of urgency. In the modern information age we all need to have complete trust in the basic infrastructure that we all use.

Yours

Prof. Kenneth Paterson,
Prof. Mark Ryan,
Prof. Peter Ryan,
Prof. Vladimiro Sassone,
Prof. Steve Schneider,
Prof. Nigel P. Smart,
Dr Eerke Boiten, Dr George Danezis, Dr Jens Groth, Dr Feng Hao.

Saturday, September 14, 2013

Redesigning the Internet

In light of recent revelations over government spying and the NSA's infiltration of crypto standards, Bruce Schneier last week argued that:

"We engineers built the internet – and now we have to fix it"

Adrian Perrig (ETH Zurich) kicked off ESORICS at Royal Holloway on Monday by proposing how. In the keynote talk he outlined a radical strategy of "revolution, not evolution" on the Internet, to fix current issues and ensure a reliable and secure architecture for the future.

So what are the problems?

The root of the problem is that the Internet was not designed with security in mind. The Border Gateway Protocol (BGP) is the decentralized routing protocol that makes all the key decisions as to where your data goes, and is particularly vulnerable to attack. With techniques like prefix hijacking and DNS spoofing, it is all too easy for Internet traffic to be re-routed through unwanted destinations, for example via China or perhaps GCHQ's cables.

Even if no-one's maliciously trying to re-route your traffic, BGP provides no control over the paths of incoming connections, so there's no way of explicitly avoiding specific nodes. As well as security problems, this can lead to route inconsistencies and stability issues, for example problems with an ISP in Australia can affect connectivity in the US. BGP also suffers from scalability issues caused by large routing tables and increasingly frequent routing updates, leading to more complex route calculation and greater power consumption in data centres.

SCION: building a secure Internet

To combat these issues, Adrian argued we need a system that allows explicit trust for network operations (so you know precisely which nodes you need to trust) and minimizes the number of trusted entities required for any operation, with no single root of trust. He proposed a novel Internet architecture called SCION. The core principle of SCION is trust domains, which divide the network into independent, isolated areas along geographic or political boundaries, protecting routing in one domain from malicious activity in another.

All the nodes in a trust domain must agree on a core root of trust within their domain, which is responsible for communicating with other trust domains. These cores might be today's top-level ISPs, for example. Paths from a source to a destination are computed jointly, so users have greater control over both outgoing and incoming routes, and shortcuts can allow users of different trust domains to directly communicate if they want.
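As a toy illustration of the isolation idea (my own sketch, not SCION's actual beaconing and path-construction protocol; the node names, links and domains below are invented), consider a handful of ASes tagged with trust domains, where a path is only accepted if every transit node lies in a domain the endpoints are willing to trust:

```python
# Toy topology: five ASes in two trust domains (purely illustrative).
links = {
    "A1": ["A2", "B1"], "A2": ["A1", "A3"], "A3": ["A2", "B2"],
    "B1": ["A1", "B2"], "B2": ["B1", "A3"],
}
domain = {"A1": "EU", "A2": "EU", "A3": "EU", "B1": "X", "B2": "X"}

def paths(src, dst, trusted, path=None):
    """Enumerate loop-free paths whose transit nodes stay inside trusted domains."""
    path = path or [src]
    if src == dst:
        yield path
        return
    for nxt in links[src]:
        if nxt in path:
            continue
        if nxt != dst and domain[nxt] not in trusted:
            continue            # refuse to transit an untrusted domain
        yield from paths(nxt, dst, trusted, path + [nxt])

print(list(paths("A1", "A3", trusted={"EU"})))        # only the all-EU route
print(list(paths("A1", "A3", trusted={"EU", "X"})))   # the detour via B1-B2 appears too
```

The point of the sketch is the containment property: a compromised node in domain X can, at worst, disturb routes that were explicitly willing to transit X; it cannot inject itself into the all-EU path.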

The key concept of isolation seems a good, general way of preventing attacks. The main weakness behind the current BGP-based system is that any adversary who can get between a source and destination can attack the route. The strict isolation in SCION forces attacks to be localized to routes within the adversary's trust domain, and massively limits the effectiveness of many attacks you see today.

The proposal was a radical one, and provoked many questions, but the details seem well thought out. According to their experiments, most existing paths discovered by BGP are likely to be found through SCION's route propagation method, alongside a whole load more routes. And the advantages of isolation, scalability and path control make it an attractive alternative. My only disappointment with the talk was the lack of discussion as to the feasibility of actually implementing the architecture and phasing out BGP. No doubt re-engineering the Internet is a mammoth task with many complex issues, but hopefully those responsible will give SCION some serious consideration.

Cool. But how can I encrypt?

To conclude, Adrian reminded us that as well as a strong Internet architecture, to truly secure our communications we need more usable security. He showed a video demonstration of SafeSlinger, a neat tool for easy-to-use, secure end-to-end and group messaging. The app, which is available for Android and iOS (a Gmail plugin is in the works), has a simple, intuitive system for securely establishing keys between users, with an attractive interface that should appeal to the younger generation - certainly a lot more than PGP! With a committed development team - the app was 6 years in the making - and regular security audits (hopefully avoiding recent Android security issues), the tool shows promise in an area direly lacking in usable options.

Tuesday, September 10, 2013

Practical secure logging ... (Esorics day one)

First of all congratulations to Peter Scholl and the rest of the Bristol SPDZ team for winning the best paper award at Esorics this year.

Just after lunch, Marson gave a talk on seekable sequential key generators that I liked for several reasons. First of all, the talk was well presented: good slides that supported the presenter rather than interfering, and a new take on the usual pictures for the adversary etc.

A sequential key generator is a widget that has some state and outputs a key. Every now and then, you give it a prod and it updates its state and outputs a new key - in such a way that if you break into it and read its state, you can't get at any of the past keys (technically, they're indistinguishable from random). Kelsey and Schneier built such a widget without a formal proof, and Bellare and Yee developed a new widget, security model and proof.
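To make the idea concrete, here is a minimal Python sketch of such a widget built from a hash function (my own toy illustration, not the construction from the paper): the state evolves by hashing, the output key is derived from the state under a different label, and forward security follows from the one-wayness of the hash.

```python
import hashlib

class SequentialKeyGenerator:
    """Toy forward-secure sequential key generator (illustration only)."""

    def __init__(self, seed: bytes):
        self.state = seed

    def next_key(self) -> bytes:
        # Derive this epoch's key and evolve the state, using separate labels.
        key = hashlib.sha256(b"key|" + self.state).digest()
        self.state = hashlib.sha256(b"evolve|" + self.state).digest()
        # An attacker who later steals self.state cannot invert the hash,
        # so keys output before the break-in stay hidden.
        return key

gen = SequentialKeyGenerator(b"initial secret seed")
k1, k2 = gen.next_key(), gen.next_key()
```

In the logging application described next, k1 would protect the first segment of the log, k2 the second, and so on.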

What's this good for? One application is logging. Logs need to be tamper-evident, so you could MAC your logfiles with a rolling key derived from such a widget, the idea being that an attacker who breaks into the system can't alter any past logs without being noticed.

Sequential key generators on their own aren't that hard to build out of hash functions. What's new in this talk is that the generator is also seekable: with an additional key, you can recover any of the individual keys in much less time than it would take you to step the widget through its entire sequence. In the logging example, a log auditor could use the additional "express" key to check any of the individual logs and you could have a smaller key update interval without making the auditor's job hard. (The paper gives two constructions based on a nice little bit of number theory.)
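The following Python sketch shows one way such seeking can work in principle. I am using iterated squaring modulo an RSA modulus as the state update, which is in the same number-theoretic spirit as the paper but should not be read as the authors' exact scheme; the parameters are toy values chosen purely for illustration.

```python
import hashlib

# Toy parameters: a real deployment needs a large RSA modulus whose
# factorisation (and hence phi(N)) is known only to the log auditor.
p, q = 104729, 1299709             # small, well-known primes; illustration only
N, phi = p * q, (p - 1) * (q - 1)
seed = 123456789 % N               # initial secret state, coprime to N

def key(state: int) -> bytes:
    """Epoch key = hash of the current state."""
    return hashlib.sha256(state.to_bytes(16, "big")).digest()

def step(state: int) -> int:
    """Evolve the state by squaring mod N.  Going backwards means taking
    square roots mod N, believed hard without the factorisation of N,
    which is where forward security comes from."""
    return pow(state, 2, N)

def seek(i: int) -> int:
    """Express seeking: state_i = seed^(2^i) mod N, computed directly by
    reducing the exponent modulo phi(N) instead of squaring i times."""
    return pow(seed, pow(2, i, phi), N)

# Stepping 1000 times agrees with a single seek to epoch 1000.
state = seed
for _ in range(1000):
    state = step(state)
assert key(state) == key(seek(1000))
```

The auditor holding the seed and phi(N) reaches any epoch in essentially constant time, while the logging device only ever squares forward and can safely forget everything else.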

In addition to a new idea, the authors have also extended the security model and, of course, proved their scheme secure. Beyond revealing the widget state at some point in time, the adversary may now also ask for any of the past keys; keys she hasn't asked for still remain pseudorandom. (One question at the end of the talk was whether this is strictly stronger than the original model. I think a separating example could be made if knowledge of a key also gave you knowledge of the next key in the sequence; this would be fine in the old model, where the adversary only ever learns the state at the point of compromise and everything bad happens "going forward", but not in the new one, where a revealed past key would leak its unrevealed successor. I haven't tried to work this out formally though.)

Finally, what rounds off an already good piece of work and talk: they've also implemented their seekable generator as an extension to Linux's journald logging system. (They even wrote some code to output keys as QR codes using ASCII drawing in a terminal - a new one-way output channel that is worth remembering for future applications.) I can only stress how important it is for our cryptography community to actually implement and test our ideas if we want to "sell" them outside our own community, and Marson and Poettering have done a good job here.

Thursday, September 5, 2013

Leakage-Resilient Symmetric Encryption via Re-Keying


This blog post is on the final session of CHES - apologies for the delay in posting this, but I have been busy with submission deadlines.

This final session contained three talks; "Using Bleichenbacher's Solution to the Hidden Number Problem to Attack Nonce Leaks in 384-Bit ECDSA" by De Mulder et al., "A New Model for Error-Tolerant Side-Channel Cube Attacks" by Li et al., and "Leakage-Resilient Symmetric Encryption via Re-Keying" by Abdalla et al.

While I enjoyed all three of the talks, based on my own research interests, I am going to talk about the last talk of the session (and hence the last talk of CHES).

Before I explain what was done within the paper, I will explain the background needed to put the work into context.

Side Channel Attacks (SCA): While "standard" cryptography proves that schemes cannot be broken provided the adversary only interacts through the intended input/output channels, in SCA the adversary also has access to information from side channels. Common side channels include power, timing and EM, but have also been known to include more exotic things such as light and sound.

Leakage Resilient Cryptography: The goal of leakage resilient cryptography is to create schemes that are still provably secure in the face of side channels. Clearly we have to make some assumptions on what is allowed to leak in this model (else the leakage could be "give me all your secrets"); the assumptions used here are as follows (a toy sketch after the list illustrates them):

  • Only Computation Leaks Information - reasonably self-explanatory: if a piece of data isn't used in this round then it can't leak in this round.
  • Bounded leakage per invocation - only a certain amount can leak per round (where a round is defined by the scheme), so the entire key cannot leak out the moment it is used.
  • Unlimited leakage overall - while the leakage per invocation is bounded, the adversary can make an unlimited number of invocations and so can collect an unlimited amount of leakage in total.
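Since it is easy to lose track of what these restrictions actually mean, here is a toy Python sketch of how one might model them (my own illustration, not a formalism from the paper): each round, the adversary picks a leakage function, but it is only applied to the data touched in that round and its output is truncated to a fixed number of bits.

```python
import hashlib

LAMBDA = 8  # maximum number of bits leaked per invocation ("bounded leakage")

def bounded_leak(touched: bytes, leakage_fn) -> int:
    """Adversarially chosen leakage, applied only to data used this round
    ("only computation leaks information") and truncated to LAMBDA bits."""
    return leakage_fn(touched) & ((1 << LAMBDA) - 1)

key = b"\x00" * 16
for round_no in range(10):
    new_key = hashlib.sha256(key).digest()[:16]   # the round's computation
    # The adversary learns at most LAMBDA bits about the data just used...
    observed = bounded_leak(key, lambda s: int.from_bytes(s, "big"))
    key = new_key
# ...but she observes every round, so her total leakage is unbounded.
```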


Re-keying: This does exactly what it says on the tin; you take the key you are using and pass it through a re-keying function f to get a new key. There are two types of re-keying method for deriving a new key k_i: serial (k_i = f(k_{i-1}, p_i) for public p_i) and parallel (k_i = f(k_0, p_i)). While parallel is faster (you don't have to calculate all previous keys to get the one you want), serial is more secure against SCA because you do not repeatedly use the same key (this helps prevent DPA-style attacks).
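To illustrate the difference, here is a short Python sketch of the two derivations. HMAC-SHA256 merely stands in for the re-keying function f (the paper instantiates everything with AES), and the p_i are made-up public indices.

```python
import hmac, hashlib

def f(key: bytes, p: bytes) -> bytes:
    """Stand-in re-keying function; HMAC-SHA256 purely for illustration."""
    return hmac.new(key, p, hashlib.sha256).digest()

def serial_key(k0: bytes, publics) -> bytes:
    """Serial: k_i = f(k_{i-1}, p_i).  Each key is used only briefly (good
    against DPA), but reaching key i means walking the whole chain."""
    k = k0
    for p in publics:
        k = f(k, p)
    return k

def parallel_key(k0: bytes, p_i: bytes) -> bytes:
    """Parallel: k_i = f(k_0, p_i).  Seeking is constant time, but the
    master key k_0 is re-used in every single derivation."""
    return f(k0, p_i)

k0 = bytes(32)
publics = [i.to_bytes(4, "big") for i in range(1, 11)]
k10_serial = serial_key(k0, publics)          # walks the whole chain
k10_parallel = parallel_key(k0, publics[-1])  # one call, but re-uses k_0
```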

Now that I have gone through the required background I can talk about the paper itself. The paper constructs a leakage resilient symmetric encryption scheme using (serial) re-keying. The intuition behind the security of this scheme is that if you change the key each time you encrypt, then the block cipher (in this particular case AES) only needs to be SPA resistant, because the key is not used more than once and hence DPA is not a viable option. The re-keying mechanism of course needs to be protected against both DPA and SPA, but this is cheaper to do than protecting the block cipher completely. This approach has been treated as intuitively secure for a while, but this paper manages to prove a particular instantiation secure under certain assumptions.

They first show that using a non-adaptive leakage resilient, non-adaptive PRF (that is, both the leakage and the inputs must be chosen non-adaptively) as a re-keying function for a block cipher produces a non-adaptive leakage resilient encryption scheme. However, I feel the main contribution of the paper is showing that a PRF is in fact too strong a requirement for what is needed. They go on to simplify the leakage resilient PRF of [1], taking inspiration from skip lists. While the new construction is no longer a PRF, they still prove that the whole construction is a non-adaptive leakage resilient encryption scheme (and give the various speed/assumption trade-offs that go along with it). The advantage of this new scheme is the efficiency of calculating a key (for resynchronisation purposes); for example, using their skip-list-like construction with 5 stages, calculating key 10,000 takes only 20 calls to the key update function, whereas a standard serial key update would take 10,000 calls! The other nice feature of this scheme is that everything can be instantiated with AES, so no extra infrastructure is required.


[1] S. Faust, K. Pietrzak and J. Schipper: Practical Leakage-resilient Symmetric Cryptography. CHES 2012