Saturday, December 5, 2015

Secure Computation from Millionaire

In the last session of AsiaCrypt 2015, Muthuramakrishnan Venkitasubramaniam presented his paper "Secure Computation from Millionaire", joint work with abhi shelat. Given a function f, the standard way to perform secure computation of f consists of two steps: first, f is transformed into an arithmetic/Boolean circuit or a RAM program; second, a generic secure computation protocol (for example Yao- or GMW-based, or information-theoretic) is applied to perform the actual evaluation. In his talk Muthuramakrishnan described a different approach to designing secure computation protocols.

The method he described first appeared at Eurocrypt 2004, in the work of Aggarwal, Mishra and Pinkas, where it was used for computing the median: Alice holds a private dataset S_A and Bob a private dataset S_B. Each party computes the median of its own dataset, say m_A for Alice and m_B for Bob, and the two medians are jointly compared. If, for example, m_A < m_B, then Alice deletes all the values in S_A smaller than m_A and Bob deletes all the values in his dataset bigger than m_B; they then recursively repeat this step on the smaller datasets. By replacing each comparison with a small secure protocol, it is possible to prove security of the overall protocol in the semi-honest model. This technique was later generalized by Brickell and Shmatikov, who applied it to the shortest path problem.
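
To make the recursion concrete, here is a minimal (and insecure) sketch of the halving step, with the millionaire comparison abstracted into a stub; in the real protocol that comparison is the only securely computed step, revealing just the single bit m_A < m_B. The function names and the power-of-two/distinctness assumptions are mine, for illustration only.

    def compare(m_a, m_b):
        # Stub for the secure millionaire comparison protocol.
        return m_a < m_b

    def joint_median(s_a, s_b):
        # Both datasets sorted, of equal power-of-two size, all elements
        # distinct; returns the lower median of the union.
        while len(s_a) > 1:
            half = len(s_a) // 2
            m_a, m_b = s_a[half - 1], s_b[half - 1]   # local medians
            if compare(m_a, m_b):
                # The joint median lies above m_A and below m_B, so Alice
                # drops her lower half and Bob drops his upper half.
                s_a, s_b = s_a[half:], s_b[:half]
            else:
                s_a, s_b = s_a[:half], s_b[half:]
        return min(s_a[0], s_b[0])

    print(joint_median([1, 4, 7, 10], [2, 3, 8, 9]))  # 4

Since each iteration halves both datasets, only a logarithmic number of secure comparisons is needed overall.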

The natural question which arises is: what else can we compute using only the comparison (millionaire) function? The authors generalized the techniques of both aforementioned papers to a large class of problems (matroid optimization, convex hull, job scheduling, set cover) that can be seen as instantiations of the greedy-algorithm paradigm. The main idea is to reduce these problems to iterated comparison operations and to gradually reveal the answer, as in the median computation. Unfortunately, this implies that simulation-based security can only be guaranteed (under certain conditions) in the semi-honest and covert settings, because a malicious adversary could adaptively abort in the middle of the computation.


Muthuramakrishnan concluded his talk with interesting open problems, e.g. finding examples that admit malicious security, and generalizing the framework to other primitives or paradigms.

Friday, December 4, 2015

Workshop On Lattice Cryptography

It is the day after AsiaCrypt 2015 and there are two workshops being held in Auckland. The one most relevant for my research is that on Lattice Based Cryptography, which consists of four talks: one by Jung Hee Cheon on "Multilinear Maps and Their Cryptanalysis", one by Amit Sahai on "Obfuscation", one by Fre Vercauteren on "Weak Instances of RLWE" and one by Martin Albrecht on "Small Secret LWE".

Cheon first described a very naive version of multi-linear maps and then went on to show how this can be attacked by creating non-trivial encodings of zero and then taking greatest common divisors. He then generalised this naive scheme to the CLT scheme (which is a bit like the DGHV FHE scheme). The naive attack does not apply to CLT, as the dimension is increased so that taking naive greatest common divisors no longer works. Cheon then showed how to extend the naive attack to the CLT case by turning the gcd extraction into an eigenvalue extraction problem. This is done by building quadratic forms which represent encodings of zero. The result is that for the CLT scheme one can break the equivalent of the DLP problem.
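
As a toy illustration of the naive attack (my own reconstruction of the warm-up scheme, not the actual CLT attack, which replaces the gcd step by the eigenvalue computation described above): in a naive DGHV-style scheme an encoding of m is c = m + r*p for a secret prime p, so non-trivial encodings of zero are random multiples of p.

    import math
    import random

    p = 1000003                       # the secret prime (tiny, for illustration)

    def encode(m):
        return m + random.randrange(1, 2**20) * p

    z1, z2, z3 = encode(0), encode(0), encode(0)   # encodings of zero
    p_guess = math.gcd(math.gcd(z1, z2), z3)       # recovers p w.h.p.

    c = encode(42)                    # any other encoding now leaks its message
    print(p_guess == p, c % p_guess)  # True 42

Raising the dimension, as CLT does, defeats exactly this gcd step.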

Cheon then went on to present the GGH scheme, which is a bit like the NTRU FHE scheme, except that instead of encrypting via c=[(m+r*p)/z] for an integer p, one encodes via c=[(m+r*g)/z] for a polynomial g which generates the ideal lattice <g>. Modifying the prior attack in this situation allows one to recover a basis of this ideal, but finding a short vector in this lattice can be hard. However, by utilizing encodings of zero one can actually solve the equivalent of the CDH problem.

Both attacks rely heavily on the presence of encodings of zero, so they do not apply to applications in which such encodings are not published, such as indistinguishability obfuscation (iO).

Amit Sahai then gave an introduction to iO; he motivated it via an analogy of an attacker who captures your brain and is able to read and tamper with every neuron, yet we still do not want the attacker to know what we are thinking about. This is the problem which obfuscation tries to solve in the computing realm. Martin pointed out that this would be a great way to produce malware!

Amit then put Multi-Party Computation within this analogy. He suggested we can think of MPC as protecting our brain against the tampering adversary by dividing the brain up into portions. As long as one portion is kept out of the adversary's control, we can use MPC to protect our thoughts. Obfuscation tries to do the same, without there needing to be an honest part of the brain.

Any program which is suitable for obfuscation must be unlearnable from query access to the program, since otherwise the adversary could learn the program from its input/output behaviour. However, black-box obfuscation has been shown to be impossible, essentially because there are contrived programs which are unlearnable but for which one cannot produce an obfuscation, since any obfuscated version of such a program has an explicit attack against it.

This is why iO was introduced as a concept, since it at least seems possible to achieve. The idea is that if you take two equivalent programs and obfuscate one of them, then the adversary cannot tell which one was obfuscated. One way of thinking of this is as a pseudo-canonicalizer. The question is what useful things one can do with an obfuscator satisfying the iO definition. Amit gave the application of building demo versions of software, without needing to re-engineer the software.
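
In symbols, the guarantee is (a standard formulation, not quoted verbatim from the talk): for all circuits $C_0, C_1$ with $C_0(x) = C_1(x)$ for every input $x$ and $|C_0| = |C_1|$, we require $iO(C_0) \approx_c iO(C_1)$, i.e. the two distributions of obfuscated programs are computationally indistinguishable.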

Fre Vercauteren then gave a more in-depth analysis of a paper from CRYPTO this year on Weak Instances of Ring-LWE. The CRYPTO paper gave instances where decision Ring-LWE was easy but search appeared to be hard. However, Fre's talk showed that the search problem was in fact easy from the start, and thus the CRYPTO paper was less surprising than it at first seemed. As with all things Ring-LWE, the question arises as to how to choose the error distributions.

Fre spent the first part of his talk discussing the geometry of number fields, and in particular the Minkowski embedding. The Ring-LWE problem generates errors according to a discrete Gaussian distribution in the Minkowski embedding, whereas Poly-LWE generates the errors according to a discrete Gaussian in the polynomial (coefficient) embedding.
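
As a small illustration of the two embeddings (my own example, not from the talk): for K = Q[x]/(f), the polynomial embedding of an element is just its coefficient vector, while the Minkowski embedding evaluates it at the roots of f. A Gaussian that is spherical in one coordinate system can be badly skewed in the other, depending on f.

    import numpy as np

    f = [1, 0, 0, 0, 1]                 # f(x) = x^4 + 1, coefficients high to low
    roots = np.roots(f)                 # the four complex roots of f

    def minkowski(a):
        # Embed a(x), given by low-to-high coefficients, as (a(alpha_i))_i.
        return np.array([np.polyval(a[::-1], r) for r in roots])

    a = np.array([3, 1, 4, 1])          # a(x) = 3 + x + 4x^2 + x^3
    print(a)                            # the polynomial (coefficient) embedding
    print(minkowski(a))                 # the Minkowski embedding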

Eisentrager et al discussed cases for which Poly-LWE is easy; these were then extended by Elias et al to special cases of decision Ring-LWE. They did this by mapping the special Ring-LWE instance to a special Poly-LWE instance, pulling the problem back from Ring-LWE to Poly-LWE via the matrix which defines the Minkowski embedding. The Poly-LWE attack requires that q is larger than f(1), and hence q will "kind of show up" in the coefficients of the defining polynomial f. So the fields being attacked are very special indeed.
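
A rough sketch of the attack idea (my reconstruction, heavily simplified): when f(1) ≡ 0 mod q, evaluation at 1 is a ring homomorphism to Z_q, so a Poly-LWE sample (a, b = a*s + e) collapses to b(1) = a(1)s(1) + e(1) mod q, and s(1) can be found by exhaustive search over Z_q, since only the right guess leaves residues as small as e(1).

    import random

    q, n = 257, 8        # toy parameters; assume f(1) = 0 mod q holds

    secret = [random.choice([-1, 0, 1]) for _ in range(n)]

    def evaluated_samples(num):
        # Simulate the images of Poly-LWE samples under evaluation at 1.
        out = []
        for _ in range(num):
            a1 = sum(random.randrange(q) for _ in range(n)) % q
            e1 = sum(random.choice([-1, 0, 1]) for _ in range(n))
            out.append((a1, (a1 * sum(secret) + e1) % q))
        return out

    samples = evaluated_samples(30)

    def small(x):
        # Could x be e(1) for a ternary error vector? Then |x| <= n mod q.
        return min(x, q - x) <= n

    guesses = [g for g in range(q)
               if all(small((b - a * g) % q) for a, b in samples)]
    print(sum(secret) % q in guesses)   # True; w.h.p. the only survivor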


Thursday, December 3, 2015

Garbling with constant gate size


In the final MPC session of Asiacrypt, Carmen Kempka from NTT presented a Garbling scheme for formulas with constant size of garbled gates. The talk described an interesting new technique for garbling Boolean formulas (i.e. circuits with one output) based on trapdoor permutations and "backwards" garbling that, remarkably, achieves a constant size of just 4 bits per garbled gate.

Garbling schemes are a popular technique for constructing secure two-party computation protocols, and most practical approaches are based on the classic technique of Yao, where for each gate of the circuit the resulting garbled circuit contains several ciphertexts that must be transmitted in the 2-PC protocol, each of size $O(k)$ bits. At Eurocrypt this year, Zahur, Rosulek and Evans proposed the half-gate technique for garbling with just two ciphertexts per gate, and also showed that any "linear" garbling scheme cannot possibly do better than this. So, to reduce the gate size any further, new non-linear techniques are needed; Carmen's talk gave some possible such techniques.
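
For reference, here is a minimal sketch of the classical baseline being improved upon: a single Yao-garbled AND gate with four ciphertexts, each of size $O(k)$. The hash choice, key length and the omission of point-and-permute bits are simplifications of mine.

    import hashlib
    import os
    import random

    K = 16  # key length in bytes (k = 128 bits)

    def H(ka, kb):
        # Key-derivation hash; a real scheme would also bind a gate index.
        return hashlib.sha256(ka + kb).digest()[:K]

    def xor(x, y):
        return bytes(s ^ t for s, t in zip(x, y))

    # A pair of wire keys (for bit values 0 and 1) per wire.
    keys = {w: (os.urandom(K), os.urandom(K)) for w in ("a", "b", "out")}

    # Garble: encrypt the correct output key under each pair of input keys.
    table = [xor(H(keys["a"][x], keys["b"][y]), keys["out"][x & y])
             for x in (0, 1) for y in (0, 1)]
    random.shuffle(table)   # hide which row corresponds to which inputs

    # Evaluate on a = 1, b = 1: exactly one row decrypts to a valid key;
    # point-and-permute bits would tell the evaluator which row that is.
    ka, kb = keys["a"][1], keys["b"][1]
    candidates = [xor(H(ka, kb), row) for row in table]
    print(keys["out"][1] in candidates)  # True

Half-gates bring this down to two ciphertexts per AND gate; Carmen's scheme removes the per-gate ciphertexts entirely, at the price of trapdoor-permutation-based ciphertexts for the inputs and the fan-out-one restriction discussed below.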

The main idea of the scheme is to use randomly generated, independent ciphertexts for each gate, so that the circuit evaluator can reconstruct these from just a single seed. The main difficulty is to ensure that the evaluator cannot then do the same as the circuit constructor, and create additional garbled circuits other than the one specified. To overcome this problem, they use trapdoor permutations (as opposed to typical garbling schemes, which just use hash functions) combined with a backwards garbling technique, where the input wire keys for each gate are chosen by solving a system of equations in the output keys and the ciphertexts.

The overall size of a garbled circuit is 4 bits per garbled gate, plus a ciphertext for each input, so this is the first scheme to achieve a constant gate size without a huge expansion in the input key size (as in Kolesnikov's scheme from 2005). However, since each input wire is uniquely determined by the output wires, gates can only have fan-out one, so the scheme is restricted to garbling formulas only. Extension to general circuits is possible, but at the cost of including 2 ciphertexts per gate.

Overall, the idea is neat, and it seems like a very interesting open problem to overcome the limitations of the scheme for general circuits, and reduce the size of the TDP-based ciphertexts for input gates.


Asiacrypt 2015: The Moral Character of Cryptographic Work

The distinguished IACR lecture at this year's Asiacrypt was given by Phillip Rogaway, who chose to talk more about the political implications of cryptographic work rather than the technology itself. This was certainly refreshing to see.

Phil started his talk by highlighting some historical events concerning the relationship between ethics and science: the nuclear bomb, the Nazi doctors, and the environmental movement. There is now the general idea that scientists should not harm society, but actually contribute to the social good, as an obligation arising from their professional role. This manifests itself in various ethical codes and organizations; even the IACR bylaws oblige it to serve the public welfare. According to the talk, however, these values are in decline. The military has no problem recruiting scientists, and it provides more and more funding for science. At the same time, cryptography, like any technology, is used as a tool of power, something recognized much more in popular culture than by cryptographers themselves.

An older generation of cryptographers seems to be more aware of this, for example David Chaum, who mentioned big data collection as early as the 1980s, as well as its possible effects on behavior. Comparing the citations of David Chaum's most cited paper, on electronic mail, with Goldwasser and Micali's, on probabilistic encryption, one can see that only the latter is picked up by the usual cryptography conferences. Phil argues that this split is more political than anything else and that cryptographic primitives do not inherently favor individuals or powerful entities. For example, conventional encryption can be used to protect one's information just as well as to take control away from the user, as in trusted computing.

All these issues gained much more attention through the Snowden leaks in the summer of 2013, which revealed that mass surveillance is rife, obscured by both secrecy and complexity. Unsurprisingly, there is significant disagreement between government agencies and surveillance studies. While the former argue that cryptography destroys the balance between security and privacy, the latter show that surveillance is simply an instrument of power that makes people conform (or get killed by drones). Furthermore, they also argue that security and privacy are not necessarily mutually exclusive. There is historic evidence of unsavory uses of political surveillance, from the FBI's letter to Martin Luther King Jr. trying to convince him to commit suicide, to the totalitarian regimes of today.

Considering all this, Phil claimed that while cryptography for security is a success, cryptography for privacy is not, and moreover, that cryptography is becoming more and more self-serving ("crypto-for-crypto"). To counter this, he presented a few suggestions for interesting problems, such as xMail and big-key cryptography. The former is about sending messages via an untrusted server without allowing the server to link sender and receiver; the latter assumes that an attacker has already subverted the machine holding a key but only has limited bandwidth to exfiltrate information.

The last part of the talk consisted of twelve suggestions for cryptographers, essentially calling for a more holistic view of our work. The suggestions cover quite a range, from thinking twice about military funding to no longer drawing the adversary as a cute little devil when it is in fact a large state-sponsored agency. The most interesting suggestion in my opinion is that we should taste our own medicine, that is, use privacy tools ourselves and improve them where necessary. However, there was also the suggestion to write fewer but more relevant papers, which runs against the current incentives in science.

Phil concluded with the quote "Just because you don't take an interest in politics doesn't mean politics won't take an interest in you."

There is an accompanying essay on his website.