A blog for the cryptography group of the University of Bristol. To enable discussion on cryptography and other matters related to our research.
Thursday, March 24, 2016
Study group: The moral character of cryptographic work
This study group covered Phillip Rogaway's essay "The Moral Character of Cryptographic Work". His premise is that cryptography rearranges power, for good or for ill, and is therefore inescapably political. As such, researchers have a moral responsibility to identify the direction of influence of our current and potential future work, and to make value-aligned decisions accordingly.
He begins with a broad-brush overview of the wider context: the social responsibility of scientists and engineers generally. This was thrown into sharp relief by the second world war and its aftermath. The invention of the atom bomb, as well as the convictions of many Nazi-serving scientists in the Nuremberg Trials (where 'following orders' was not considered an exculpating defense), left little room for the idea that science was somehow neutral. Many in the academic community were at that time actively campaigning for an ethic of responsibility — for example, Bertrand Russell and Albert Einstein, who collected signatures from 11 leading scientists (9 of them Nobel Prize recipients) on a 1955 statement urging world leaders to seek peaceful resolution in the face of the growing nuclear threat, and the physicist Joseph Rotblat, who founded the Pugwash Conferences to work proactively towards that same end. Rogaway also cites the growing environmental movement as a catalyst for heightened ethical awareness in the four decades or so following the war — after which, he suggests, science started to lose its sense of responsibility, with individualism and, especially, 'extreme technological optimism' as causal components.
Part two expands on the political character of cryptographic work in particular. There are two prevailing cryptographer "archetypes": the spy of fiction and history — explicitly political, often heroic — and (as we are more inclined to view ourselves) the scientist — theoretical, politically neutral, going about our business quietly. But in its early days as an academic field, cryptography was not so detached from socio-political concerns. Diffie and Hellman have long been explicitly engaged, e.g. in their criticism of DES's key length, and work such as Chaum's on private communication embedded a concern for democracy and individual autonomy. But, Rogaway argues, the field has since fragmented: secure messaging and other privacy applications have been sidelined by the IACR community as "security" rather than "cryptography", and have spawned their own, far more socio-politically aware, community (see, e.g., PETS), whilst cryptography has tended further towards an assumption of neutrality.
At the other extreme, the loosely organised "Cypherpunk" movement has been pressing for the use of cryptography as a tool for protecting individual autonomy against abuses of power since the 1980s. Projects such as Bitcoin, PGP, Tor and WikiLeaks are consciously designed to challenge authority and nurture basic human freedoms (e.g. of speech, movement, and economic engagement).
Rogaway points out, though, that cryptography doesn't always favour "ordinary people" as the cypherpunks sometimes seem to suppose. Depending on the mechanisms involved, it can just as easily entrench power and erode autonomy. Identity-based encryption (IBE), for example, assumes a trusted central authority, and so naturally embeds key escrow. Differential privacy frames the database owner as the honest party and assumes mass data collection to be a public good. But the former is seldom unambiguously the case, and the latter is highly debated. And fully homomorphic encryption (FHE), he argues, is in danger of becoming a utopian distraction from real, more 'practical' issues, and a convenient cover for defense and intelligence agencies.
However, perhaps as big a worry for the academic community (in Rogaway's eyes) is our lack of any sort of impact. Agencies show tellingly little interest in our activity (see, e.g., the dismissive review of Eurocrypt 1992 in an NSA newsletter, which was followed by two decades' worth of silence on the subject). Our research, he proposes, divides into three branches: crypto-for-security, impacting the commercial domain; crypto-for-privacy, pertaining to socio-political matters; and crypto-for-crypto, exhibiting no ostensible benefit beyond academic interest. The argument is that we are operating far too much along this third branch and not enough along the second.
This under-attention to crypto-for-privacy has helped sustain the growth of pervasive surveillance which, in part 3, is highlighted as a significant threat to human autonomy and social progress. Rogaway compares what law enforcement would have us believe about surveillance with alternative perspectives from the surveillance-studies community. Law enforcement frames privacy and security as being in conflict: a personal and a collective good respectively, in trade-off with one another and thrown out of balance by modern technology. The current disproportionate strength of privacy (e.g. widespread encryption) gives the edge to the ‘bad guys’ — terrorists, child abusers and drug dealers — and we must sacrifice some privacy to avoid ‘Going Dark’. Conversely, surveillance studies views privacy as a social good, one which enhances security as much as it conflicts with it. Mass surveillance, increasingly enabled by modern technology, is an instrument of power which has, historically, been used to protect the status quo and stifle change (see, for example, the FBI’s use of surveillance data to try to induce Martin Luther King Jr. to commit suicide). Today, dissidents are imprisoned, drone targets assassinated, and investigative journalism suppressed via the use of phone and internet monitoring. Cryptography is (or can be) a tool to stem surveillance creep and the uniformity, erosion of autonomy and social stagnation it engenders.
Part 4 offers some proposals for action. In the first place, cryptographers should gear problem selection more towards crypto-for-privacy. Rogaway gives some examples of his own ongoing projects of this nature, including (game-based, provably) secure messaging in an untrusted-server model, and ‘bigkey’ cryptography, in which the total key material is too huge to exfiltrate (megabytes to terabytes) and random extractions from it yield keys of ‘normal’ length for use in protocols, in such a way that a derived key is indistinguishable from uniformly random even if the adversary gets the extraction randomness and a large amount of information about the bigkey (see the sketch below). Promising projects from other researchers include ‘Riposte’, a protocol for anonymous file sharing in the presence of pervasive monitoring, and scrypt and Argon2, hash functions designed to slow password checking and thwart dictionary attacks; they consume lots of memory as well as time, so that they can't be accelerated by custom hardware.
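As a rough illustration of the bigkey idea, here is a minimal sketch in Python. The names, parameters and the use of SHA-256 as the extractor are our own simplifications for exposition, not Rogaway's actual construction, which comes with carefully analysed extractors and provable bounds.

```python
import hashlib
import secrets

# Toy sketch of 'bigkey' subkey derivation (illustrative only). The
# bigkey is too large to exfiltrate in full; each derivation probes a
# few random positions and hashes what it finds, together with the
# public randomness, into a normal-length subkey.

BIGKEY_BYTES = 1 << 20  # 1 MiB for the demo; real proposals go far larger
NUM_PROBES = 512        # positions read per derivation

def keygen() -> bytes:
    """Sample a uniformly random bigkey."""
    return secrets.token_bytes(BIGKEY_BYTES)

def derive_subkey(bigkey: bytes) -> tuple[bytes, bytes]:
    """Return (public randomness, 256-bit derived subkey)."""
    randomness = secrets.token_bytes(32)
    h = hashlib.sha256(randomness)
    for i in range(NUM_PROBES):
        # The i-th probe position is derived from the public randomness,
        # so an adversary cannot predict which bytes will be read.
        probe = hashlib.sha256(randomness + i.to_bytes(4, "big")).digest()
        pos = int.from_bytes(probe, "big") % BIGKEY_BYTES
        h.update(bigkey[pos:pos + 1])
    return randomness, h.digest()

bigkey = keygen()
randomness, subkey = derive_subkey(bigkey)
print(subkey.hex())
```

The point is that each derivation touches only a few unpredictable positions, so even an adversary who has already exfiltrated large portions of the bigkey is unlikely to hold every probed byte.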
More generally, Rogaway suggests, research should be slower-paced and value-based. Provable security should be more practice-oriented, keeping enthusiasm for definitions and proofs subordinate to the end goal of utility, and should be extended to the realm of mixnets and secure messaging. We should be discerning about where our funding comes from, and only work with organisations whose values align with our own (which perhaps sounds "easier said than done" to not-yet-established researchers). We should exercise our academic freedom in speaking our minds; resist dogma and be open to new models and approaches; foster a more expansive, systems-level view, and actually learn to use some privacy tools ourselves; replace cutesy cartoonised adversaries with suitably frightening pervasive threats — both in our imaginations and on our slides; choose our language with a considered awareness of the impression it makes; and boost commonly available resources of information (starting with an overhaul of relevant Wikipedia pages) and tools. Above all, he urges that this is a collective responsibility: we need to change what is systematically valued and prioritised, so that the whole field moves towards a considered, socially beneficial exercise of the power that cryptography has whether we like it or not.
Thursday, March 3, 2016
Cryptographic Reverse Firewalls
Since Edward Snowden informed us that our internet traffic is under constant surveillance and that the US government has likely backdoored a standardised algorithm, threat models in provable security have evolved to consider an adversary within the user's machine. One of the first works addressing this was the algorithm-substitution attack model of Bellare, Paterson and Rogaway, which considered an adversary trying to corrupt an implementation so as to reveal a secret without being detected by the user. A potential attack vector identified there was to use the randomness involved in operations to leak information while still appearing to be an honest implementation.
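To make that attack vector concrete, here is a toy Python illustration of our own (in the spirit of that line of work, not taken from the paper): a tampered implementation resamples its 'random' nonce until a keyed bit of it matches the secret bit to be leaked, so each output still looks perfectly random to anyone without the exfiltration key.

```python
import hashlib
import secrets

# Toy randomness covert channel (our example). The tampered code keeps
# resampling its nonce until a keyed bit of the nonce equals the secret
# bit it wants to leak; outputs remain uniformly distributed to anyone
# who lacks EXFIL_KEY, so the implementation appears honest.

EXFIL_KEY = secrets.token_bytes(16)  # known only to the attacker

def leak_bit(nonce: bytes) -> int:
    """The bit the attacker reads out of a published nonce."""
    return hashlib.sha256(EXFIL_KEY + nonce).digest()[0] & 1

def tampered_nonce(secret_bit: int) -> bytes:
    """Resample until the nonce encodes secret_bit (two tries expected)."""
    while True:
        nonce = secrets.token_bytes(12)
        if leak_bit(nonce) == secret_bit:
            return nonce

secret = 0b1011                                # four key bits to exfiltrate
recovered = 0
for i in range(4):
    nonce = tampered_nonce((secret >> i) & 1)  # goes out on the wire
    recovered |= leak_bit(nonce) << i          # attacker's computation
assert recovered == secret
```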
Today's study group continued in this area, taking a look at Mironov and Stephens-Davidowitz's paper on cryptographic reverse firewalls. While a traditional firewall sits between your machine and the network deciding what traffic gets through to you, protecting you from outside threats, a cryptographic reverse firewall sanitises your incoming and outgoing traffic to protect you from your own machine's misbehaviour. A reverse firewall does not need to know Alice's secret keys, and so can be run by anyone; in addition, firewalls can be "stacked", so Alice can hedge her bets by having multiple reverse firewalls in place at a time.
The desired properties of a reverse firewall are as follows.
- Maintains functionality - If a protocol is implemented correctly then it should continue to provide the same functionality when the firewall is in place.
- Preserves security - If a protocol provides a security guarantee then it should continue to do so when the firewall is in place.
- Resists exfiltration - The firewall should prevent a tampered implementation from leaking information to the outside world. This is modelled strongly by asking that an adversary cannot tell the difference between a tampered and an honest implementation behind the firewall.
The original paper goes on to consider reverse firewalls for multi-party computation, but we looked at the simpler case of signature schemes, as studied by Ateniese, Magri and Venturi. For signature schemes, maintaining functionality is straightforward to define: honestly generated signatures modified by the firewall should still verify.
Preserving security is slightly more complicated. The natural model asks the adversary to forge a signature given access to a normal signing oracle and to an oracle that executes tampered signing algorithms of his choice and then processes their output through the firewall; but this model admits a generic attack. To launch it, the adversary submits a tampered signing algorithm that iterates over the bits of the secret key, returning an honest signature when the current bit is 1 and a string of zeroes when it is 0, eventually recovering the secret key. To avoid this attack, the firewall is allowed to output a special symbol that renders the oracle useless from that point on. This can be seen as the tampered implementation being detected and shut down.
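A minimal sketch of that generic attack in Python (our own rendering: an HMAC stands in for the signature scheme purely for illustration, and None plays the role of the special symbol):

```python
import hashlib
import hmac
import secrets

# Sketch of the generic key-recovery attack on the naive security
# model. An HMAC stands in for the signature scheme; note a real
# firewall would verify with the public key, whereas our keyed stand-in
# conflates signing and verification keys. The firewall can only pass
# or block signatures, so each query leaks one key bit through "valid
# signature came back" versus "blocked".

KEY = secrets.token_bytes(16)

def honest_sign(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verifies(msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, honest_sign(msg))

def tampered_sign(msg: bytes, i: int) -> bytes:
    """Leak bit i of KEY: honest signature if it is 1, zeroes if it is 0."""
    bit = (KEY[i // 8] >> (i % 8)) & 1
    return honest_sign(msg) if bit else b"\x00" * 32

def firewall(msg: bytes, sig: bytes):
    """Pass valid signatures; None plays the role of the special symbol."""
    return sig if verifies(msg, sig) else None

# The adversary reads KEY out of the firewalled oracle bit by bit. (In
# the fixed model, the first special symbol shuts the oracle down for
# good, so this attack no longer works.)
recovered = bytearray(16)
for i in range(128):
    if firewall(b"m", tampered_sign(b"m", i)) is not None:
        recovered[i // 8] |= 1 << (i % 8)
assert bytes(recovered) == KEY
```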
Resisting exfiltration for signature schemes is impossible if arbitrary tampering is allowed: the tampered algorithm can, for example, output a string of zeroes, which the firewall has no way of turning into something indistinguishable from a valid signature.
As mentioned before, a potential conduit for leaking information is the randomness used in an algorithm. The functionality-maintaining and security-preserving firewall given for signature schemes meeting a re-randomisability condition prevents such attacks by re-randomising valid signatures, and by returning the special symbol when given a signature that fails to verify.
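To illustrate, here is a toy re-randomising firewall in Python. The scheme is our own stand-in, not the construction from the paper: a 'signature' is a pair (tag, r) where verification checks only the tag, so the signer-chosen r is exactly the kind of malleable component a tampered implementation could abuse, and the kind the firewall can freshly re-randomise.

```python
import hashlib
import hmac
import secrets

# Toy re-randomising reverse firewall (our own toy scheme). As in the
# previous sketch, an HMAC stands in for the signature and the firewall
# is given the key, whereas a real firewall would only need the public
# verification key. The firewall re-randomises r in valid signatures
# and outputs the special symbol (None) otherwise.

KEY = secrets.token_bytes(16)

def sign(msg: bytes) -> tuple[bytes, bytes]:
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()
    return tag, secrets.token_bytes(16)      # honest signers pick r at random

def verify(msg: bytes, sig: tuple[bytes, bytes]) -> bool:
    tag, _r = sig                             # r is ignored by verification
    return hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest())

def firewall(msg: bytes, sig: tuple[bytes, bytes]):
    if not verify(msg, sig):
        return None                           # detected: shut the channel down
    tag, _leaky_r = sig
    return tag, secrets.token_bytes(16)       # fresh r kills any covert channel

# A tampered signer tries to smuggle the key out through r...
tag, _ = sign(b"m")
sanitised = firewall(b"m", (tag, KEY))        # r replaced by the leaked key
assert sanitised is not None and verify(b"m", sanitised)
assert sanitised[1] != KEY                    # ...but the leak never leaves
```

The design choice mirrors the text above: the firewall never needs to create signatures itself, only to check validity and refresh the randomisable part, which is why functionality and security are preserved while the covert channel is destroyed.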