Last week the MtGox bitcoin exchange, a website originally built to trade Magic: The Gathering cards online, went offline. The "attack" on bitcoin this time exploited a problem known as "transaction malleability". Following the MtGox incident, the same problem allowed denial-of-service attacks on the remaining bitcoin exchanges and further stretched the time needed to confirm a bitcoin transaction. The malleability issue in bitcoin has been known for well over two years now. At the time it seems to have been dismissed as ugly, but not worth bothering about. Now the issue is back to bite us.
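The root of bitcoin's problem is that ECDSA signatures are intrinsically malleable: if (r, s) is a valid signature, so is (r, n - s), and anyone can make that change without knowing the private key. The sketch below (a toy pure-Python secp256k1 with a made-up key and a fixed nonce, purely for illustration, not production code; the "txid" is a simplified stand-in for bitcoin's double-SHA256 of the serialized transaction) shows a third party turning one valid signature into a second valid one, which changes the transaction's hash.

```python
import hashlib

# secp256k1 domain parameters (curve y^2 = x^3 + 7 over F_p)
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(P, Q):
    # Affine point addition; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(d, z, k):
    r = mul(k, G)[0] % n
    return (r, pow(k, -1, n) * (z + r * d) % n)

def verify(Q, z, sig):
    r, s = sig
    w = pow(s, -1, n)
    R = add(mul(z * w % n, G), mul(r * w % n, Q))
    return R is not None and R[0] % n == r

d = 12345                      # toy private key (made up for the demo)
Q = mul(d, G)                  # corresponding public key
z = int.from_bytes(hashlib.sha256(b"pay 1 BTC to Alice").digest(), "big")

sig    = sign(d, z, k=67890)   # fixed nonce: fine for a demo, fatal in real life
mauled = (sig[0], n - sig[1])  # anyone can flip s -> n - s, no key needed

assert verify(Q, z, sig) and verify(Q, z, mauled)

# A txid is a hash over the serialized transaction, signature included,
# so the mauled (still valid!) signature yields a different txid.
txid = lambda s: hashlib.sha256(s[1].to_bytes(32, "big")).hexdigest()[:16]
print(txid(sig) == txid(mauled))   # False: same payment, different txid
```

The point is that the money moves identically either way, but any system that tracks transactions by their hash, as MtGox's software reportedly did, can be confused into thinking a payment never went through.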
I'm a theoretical cryptographer who spends his time shifting letters around in mathematical formulae that describe abstractions of cryptographic protocols such as encryption or digital signatures. Whether or not this has any relation to how cryptography is used in the real world is a matter of debate. One thing I know for sure, though, is that in my formulae, when the issue of malleability pops up you can't just dismiss it as not worth bothering about: in many cases, allow things to be malleable and the maths simply doesn't work any more.
We've known about this for far more than two years. David Chaum and others first proposed cryptographic "digital cash" back in the 1990s. It was based on digital signatures; double-spending and malleability were already known hazards back then, and mitigations were developed for both. Sadly, Chaum's e-cash never really got off the ground, although it's still a staple of cryptography courses around the world. (e-cash differed from bitcoin in that it didn't derive value from cryptographic operations; banks would issue e-coins backed by other instruments of value.)
If malleability was a known and solved problem so long ago, why did Satoshi miss it in the design of bitcoin? I can't answer that question. I can, however, show two more examples of systems in which malleability is known to exist but was dismissed by the designers. Satoshi is not alone in ignoring these issues when designing a cryptographic system.
Direct Anonymous Attestation
The Trusted Computing Group maintains a standard for a protocol known as Direct Anonymous Attestation, or DAA for short. This protocol is built into cryptographic coprocessors known as trusted platform modules. Proponents of DAA say that these modules have been available in all high-end laptops for years and that over 500 million such modules have been shipped. Opponents might counter that that's all well and good, but could someone name a single application that actually uses DAA? I couldn't. (The trusted platform module itself is used in practice to support BitLocker hard-disk encryption in the "Ultimate" editions of Windows, but that's not DAA.)
DAA is described, in the original paper and in every presentation I've seen since, as a form of signature. Signatures, the theoretician says, are not supposed to be malleable without good reason. Yet a couple of years ago I discovered that the standardised and shipped protocol actually contains a case in which signatures are malleable.
A DAA signature signs a pair of objects, a message and a so-called basename. DAA signatures come in two forms, ones in which the basename is empty and ones in which it's a kind of identifier for the recipient. The idea is that signatures by the same signer on the same basename are 'linked' whereas others are not: if I sign two different messages for the basename 'amazon' then anyone can tell that these two signatures were made by the same person, but if I give you one signature for 'amazon' and one for 'ebay' then no-one can tell if it's two signatures from the same user or from two different ones. And signatures with an empty basename are always supposed to be unlinkable to any other signatures.
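The linking mechanism can be sketched in a toy model (an illustration of the idea only; the key name, the exponentiation-based pseudonym and the tiny modulus are my assumptions, while real DAA uses pairing-based groups and zero-knowledge proofs): a signer with secret key f attaches a pseudonym H(basename)^f to each signature, so signatures by the same signer under the same basename carry the same pseudonym, while different basenames give unrelated ones.

```python
import hashlib

# Toy prime modulus (assumption: far too small for real use).
p = 2**127 - 1

def pseudonym(f, basename):
    # nym = H(basename)^f: deterministic in (signer key, basename),
    # so equal inputs always produce the same linkable tag.
    base = int.from_bytes(hashlib.sha256(basename.encode()).digest(), "big") % p
    return pow(base, f, p)

f = 0xC0FFEE   # the signer's secret key (made up)

assert pseudonym(f, "amazon") == pseudonym(f, "amazon")  # same basename: linked
assert pseudonym(f, "amazon") != pseudonym(f, "ebay")    # different: unlinkable
```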
The problem is that in the shipped protocol, you can take any signature under any basename and claim that it's a signature for the empty basename: it will still look valid. This way, you can produce signatures that look like they're on the empty basename, and so shouldn't reveal that they're from the same user, but which, reinterpreted under a non-empty basename, may very well link. And you can claim someone signed a message under the empty basename when in fact they didn't. The problem isn't at all hard to fix --- it would require only a very minor change to the protocol.
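In the same toy style, the flaw can be sketched like this (again a model of the idea, not the standardised protocol; the functions and parameters here are my assumptions): with an empty basename the signer picks the pseudonym base freely, so a verifier has no way to distinguish a random base from a hashed basename, and a basename-bound signature is accepted as basename-free.

```python
import hashlib, random

p = 2**127 - 1   # toy prime modulus (assumption; real DAA uses pairing groups)
f = 0xC0FFEE     # the signer's secret key (made up)

def sign(basename):
    if basename:   # basename-bound: the base is derived by hashing the basename
        base = int.from_bytes(hashlib.sha256(basename.encode()).digest(), "big") % p
    else:          # empty basename: the signer picks the base at random
        base = random.randrange(2, p)
    return (base, pow(base, f, p))    # (base, pseudonym)

def verify_empty(sig):
    # Stand-in for the real zero-knowledge check: the verifier only sees
    # that (base, nym) is consistent -- it cannot tell whether base was
    # chosen at random or derived from some basename.
    base, nym = sig
    return nym == pow(base, f, p)

bound = sign("amazon")       # a signature linked to the basename 'amazon'
assert verify_empty(bound)   # ...yet it passes as basename-free
```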
If you haven't heard of this vulnerability before, it's probably because no-one actually uses DAA. But suppose DAA suddenly becomes the next big thing: I'd almost be willing to bet money that the malleability problem will come back with a vengeance and bite us in a few years' time.
Cryptographic voting

Cryptographic voting is perhaps even more popular than cryptographic currencies: Estonia, Norway and Israel have all used it in some form in national-level political elections. Even the US has deployed it in some local elections, such as the mayoral election in Takoma Park, Maryland.
Here's how it works. Every voter encrypts her vote and posts the resulting ciphertext on a "bulletin board", which has the same role as the block chain in bitcoin: a public collection of all "transactions" (ballots).
The real magic is that you can tally such a board in a way that you get the result and only the result --- so not, for example, how I voted --- and you can be sure that the talliers haven't been cheating. Anyone can download and audit the board and unless the claimed result matches exactly what's in the ballots, it won't verify. No more urns of ballots for the opposition "accidentally" falling off a truck on the way to the counting centre.
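The tallying trick can be sketched with exponential ElGamal, the kind of additively homomorphic encryption Helios-style systems use (the parameters below are toy assumptions and hopelessly insecure): multiplying the encrypted ballots together adds up the votes inside them, so the talliers decrypt only the sum, never an individual ballot.

```python
import random

p, g = 1019, 2             # toy group parameters (real systems use 2048-bit+ groups)
x = 123                    # election secret key, held by the talliers
h = pow(g, x, p)           # election public key

def encrypt(vote):
    # Exponential ElGamal: Enc(v) = (g^r, g^v * h^r) for fresh randomness r.
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

ballots = [encrypt(v) for v in (1, 0, 1, 1, 0)]    # five yes/no votes

# Component-wise product of all ciphertexts encrypts the SUM of the votes.
c1 = c2 = 1
for a, b in ballots:
    c1, c2 = c1 * a % p, c2 * b % p

gm = c2 * pow(c1, -x, p) % p                       # decrypt: recover g^(sum)
tally = next(m for m in range(len(ballots) + 1) if pow(g, m, p) == gm)
print(tally)               # 3 yes-votes; no individual ballot was decrypted
```

Real systems add zero-knowledge proofs on top, so that each ballot provably encrypts 0 or 1 and the claimed decryption of the product is provably correct.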
The problem with malleability here is that I could potentially just copy someone else's ballot and call it my own, to vote the same way as them. Or even copy and twiddle their ballot to vote for the exact opposite of their choice, thus effectively erasing their ballot. Not only does this create new potential forms of protest vote beyond the usual spoilt ballots, but two researchers, Véronique Cortier and Ben Smyth, have actually shown, in a case study of French legislative elections, that ballot-copying could be used to gain information on how individuals voted, were these elections to use cryptographic voting without worrying about malleable ballots.
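With a malleable encryption scheme, both attacks are one-liners. Here is a sketch using toy exponential-ElGamal ciphertexts (made-up parameters, purely illustrative): copying Alice's ciphertext duplicates her vote, and homomorphically computing Enc(1 - v) from Enc(v) flips it, all without knowing any key or what v is.

```python
import random

p, g = 1019, 2            # toy group parameters (hopelessly small, demo only)
x = 123                   # election secret key
h = pow(g, x, p)          # election public key

def enc(v):
    # Exponential ElGamal: Enc(v) = (g^r, g^v * h^r).
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, v, p) * pow(h, r, p) % p)

def dec(c):
    gm = c[1] * pow(c[0], -x, p) % p        # recover g^v
    return {1: 0, g: 1}.get(gm)

alice = enc(1)            # Alice votes 'yes'

copy = alice              # copy attack: resubmit her ciphertext as my own ballot

# Flip attack: Enc(1 - v) = Enc(1) * Enc(v)^(-1), computed without any key.
r2 = random.randrange(1, p - 1)
flip = (pow(alice[0], -1, p) * pow(g, r2, p) % p,
        pow(alice[1], -1, p) * g * pow(h, r2, p) % p)

assert dec(copy) == 1 and dec(flip) == 0    # her 'yes' is cancelled out
```

This is exactly what non-malleable ballot encryption (and weeding out duplicate ciphertexts on the bulletin board) is meant to prevent.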
One protocol that Cortier and Smyth studied is called Helios; it has been used, among other things, to elect the president of the Université catholique de Louvain and the board of the International Association for Cryptologic Research. And guess what: it has malleable ballots. In my own work, together with Cortier, Smyth, my supervisors and the Helios authors, I discovered further flaws: a "bad ballot" that erases one candidate's tally under certain conditions, and a way for the voting authorities to set up a "trapdoor" election in which they can alter the election result without anyone being able to notice. (This will be fixed in the next version of Helios.)
Malleability is bad. Stop it.
Transaction malleability isn't quite as well-known a problem as SQL injection or XSS, but if, in the wake of the fallout from the NSA revelations and the popularity of bitcoin, sophisticated cryptographic protocols start to become more common, then I wouldn't be surprised if signature malleability made its way onto the OWASP Top 10 some day. Whether you're building a currency, a cryptographic coprocessor with attestation features or a voting scheme, malleability is a hazard. And if you do the maths and malleability breaks your security analysis, "that's annoying, but I don't think that's too much of a problem in practice" just doesn't work for me. Eliminate it.