## Friday, January 27, 2012

### Verifiable Computation

Today's study group was on Verifiable Computation and was given by Ashish and Enrique.
Verifiable Computation (VC) allows a computationally limited client to outsource an expensive computation on some input to a computationally powerful server. The main issue in such delegation is for the client to be able, despite its limited resources, to verify the correctness of the result returned by the server. An obvious application of such protocols is cloud computing.

We will denote the function that needs to be computed by F, the input of the client on which he wishes to evaluate the function by x and the output of such a computation by y where y=F(x). The key point is that the verification of the correctness of the returned result should be substantially less expensive than re-computing F(x).

Generally, the VC protocol consists of 3 phases, which are as follows:
• Preprocessing: This is only required once. In this phase, the client generates some auxiliary information (private & public) related to the function F.
• Input Preparation: This phase is run for every input x: the client generates some auxiliary information (private & public) related to the input x.
• Computation and Verification: The public information related to both the function F and the input x are given to the server. The server then returns to the client a value σy which is an encoding of y=F(x). The client can then recover F(x) and verify its correctness.
A VC protocol consists of the following algorithms:
• KeyGen: This algorithm is run by the client and takes a security parameter λ and the function F. It outputs a pair of public/secret keys (pk,sk). The secret key is only known to the client, while the public key is sent to the server.
• InputGen: This algorithm uses the secret key sk to generate secret information τx and public information σx which encode the input x. The public information σx is given to the server.
• Compute: This algorithm is given the public key pk and the public information σx and it generates an encoded version σy of the function output on x, i.e. y=F(x).
• Verify: This algorithm uses the secret key sk and the secret information τx to recover y from σy and outputs Accept/Reject depending on whether or not it is correct.
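To make the interplay of these four algorithms concrete, here is a deliberately trivial Python sketch. The `ToyVC` class and its verify-by-recomputation check are our own illustrative inventions, not any real scheme: a real VC protocol must make Verify far cheaper than recomputing F.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToyVC:
    F: Callable[[int], int]  # the function to outsource

    def keygen(self):
        # KeyGen: in this toy, both keys are just (descriptions of) F itself.
        return self.F, self.F  # (pk, sk)

    def inputgen(self, sk, x):
        # InputGen: sigma_x goes to the server, tau_x stays with the client.
        return x, x  # (sigma_x, tau_x)

    @staticmethod
    def compute(pk, sigma_x):
        # Compute (server side): evaluate the public encoding of F on sigma_x.
        return pk(sigma_x)

    def verify(self, sk, tau_x, sigma_y):
        # Verify: this toy recomputes F, which a real scheme must avoid --
        # verification has to be much cheaper than computing F(x).
        return (sigma_y, True) if sk(tau_x) == sigma_y else (None, False)

vc = ToyVC(F=lambda x: x * x + 1)
pk, sk = vc.keygen()
sigma_x, tau_x = vc.inputgen(sk, 5)
sigma_y = ToyVC.compute(pk, sigma_x)
print(vc.verify(sk, tau_x, sigma_y))      # (26, True)
print(vc.verify(sk, tau_x, sigma_y + 1))  # (None, False)
```

The point of the sketch is only the data flow: which values cross to the server (pk, σx, σy) and which never leave the client (sk, τx).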
There are a number of desired properties such a protocol offers, which we briefly summarize as follows:
• Correctness: If both the client and the server behave honestly, then the client should accept the result returned by the server.
• Soundness: The probability that the server returns an incorrect result and the client fails to spot that is negligible.
• Input/Output Privacy: The server does not learn the input x and/or the output F(x).
• Efficiency: This concerns the efficiency of both the client and the server sides of the computation as well as the number of rounds involved.
• Public Verifiability: Trusted third parties can verify the correctness of the computation.
• Public Delegation: No pre-computation from the client.
• Generality of the Function: How general a class of functions the protocol can evaluate.
• Assumptions: What type of cryptographic assumptions the security of the protocol rests on.
It is still an open problem to have a protocol that satisfies all of the above properties simultaneously.

Three construction approaches were discussed:
The first one is by Gennaro, Gentry and Parno from Crypto 2010 [pdf]. This approach makes use of Yao garbled circuits [pdf] and a Fully Homomorphic Encryption (FHE) scheme. The function F is represented by a Boolean circuit C (hence this construction works for any function) and then the gates are garbled so as to hide the structure of their underlying truth tables.

In this construction, the client is allowed to run expensive computation during the preprocessing phase (possibly through a powerful trusted third party) but it is computationally bounded in the online phase.

To garble a gate G where w_z = G(w_a, w_b), we choose random labels k_a^i and k_b^i for i = 0,1 (i.e. one label per possible value of each input wire). The truth-table row for input bits (i,j) is then double-encrypted as γ_{i,j} = E_{k_a^i}(E_{k_b^j}(G(i,j))) for all combinations of i, j ∈ {0,1}. This results in one ciphertext per row of the gate's truth table; the 4 ciphertexts are then shuffled to obtain the garbled gate.
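Garbling and evaluating a single gate can be sketched in Python. The cipher below (XOR with a SHAKE-derived pad, plus a block of zero bytes to make wrong-key decryptions detectable) is a toy stand-in chosen for this illustration, not the encryption scheme from the paper:

```python
import hashlib, os, random

BLOCK = 16  # zero-byte redundancy block used for the range check

def enc(key: bytes, msg: bytes) -> bytes:
    # One-time XOR cipher with a SHAKE-derived pad; the appended zero
    # block lets decryption under a wrong key be detected.
    pt = msg + bytes(BLOCK)
    pad = hashlib.shake_256(key).digest(len(pt))
    return bytes(a ^ b for a, b in zip(pt, pad))

def dec(key: bytes, ct: bytes):
    pad = hashlib.shake_256(key).digest(len(ct))
    pt = bytes(a ^ b for a, b in zip(ct, pad))
    # "Efficiently verifiable range": wrong keys fail this check w.h.p.
    return pt[:-BLOCK] if pt[-BLOCK:] == bytes(BLOCK) else None

def garble_gate(G):
    # One random label per (input wire, bit): k_a^0, k_a^1, k_b^0, k_b^1.
    ka = [os.urandom(16), os.urandom(16)]
    kb = [os.urandom(16), os.urandom(16)]
    # gamma_{i,j} = E_{k_a^i}(E_{k_b^j}(G(i,j))) for all i, j in {0,1}
    rows = [enc(ka[i], enc(kb[j], bytes([G(i, j)])))
            for i in (0, 1) for j in (0, 1)]
    random.shuffle(rows)  # hide which row corresponds to which inputs
    return ka, kb, rows

def eval_gate(rows, ka_i, kb_j):
    # Given the two input-wire labels, exactly one row decrypts twice.
    for ct in rows:
        inner = dec(ka_i, ct)
        if inner is not None:
            msg = dec(kb_j, inner)
            if msg is not None:
                return msg[0]
    raise ValueError("no row decrypted correctly")

AND = lambda i, j: i & j
ka, kb, rows = garble_gate(AND)
print(eval_gate(rows, ka[1], kb[1]))  # 1
print(eval_gate(rows, ka[0], kb[1]))  # 0
```

Note that the evaluator learns G(i,j) for its one pair of labels but, thanks to the shuffle and the hidden labels, nothing about the other three rows.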

For the Yao construction to be considered secure, the symmetric encryption function E must have three properties:
• Indistinguishability of Ciphertexts: Encryptions of two different vectors of messages are computationally indistinguishable.
• Elusive Range: A ciphertext produced under one key lies in the range of a different key only with negligible probability.
• Efficiently Verifiable Range: Given a key and a ciphertext, one can efficiently verify if the ciphertext is in the correct range for the given key.

We can summarize the VC construction as follows:
• KeyGen: The circuit C representing the function F is garbled and the secret key is the set of labels used for the wires, i.e. sk = ∪_i {k_i^0, k_i^1} for all wires i, whereas the public key is the set of shuffled ciphertexts, i.e. pk = ∪_g {γ_g^{0,0}, γ_g^{0,1}, γ_g^{1,0}, γ_g^{1,1}} for all gates g.
• InputGen: A pair of secret/public keys (skFHE,pkFHE) for the FHE scheme is generated. Let w1,…,wk be the bits of the input x; the labels for those wires are encrypted under pkFHE to get the ciphertexts c1,…,ck. The public information is then set to σx=(pkFHE,{c1,…,ck}), whereas the secret information is τx=skFHE.
• Compute: Using the homomorphic property of the FHE scheme, the server computes ciphertexts containing the labels of the circuit's final output wires. We let w̃i for i=1,…,n be the labels of those output wires (i.e. they encode the bits of y) and c̃i the corresponding ciphertexts. Then σy={c̃1,…,c̃n}.
• Verify: Use skFHE to decrypt c̃i and then use the secret key of the encryption used in the Yao circuit to map the obtained value to the corresponding value of the bit i of y. Reject if either the mapping or the decryption fails.
The above protocol offers input/output privacy but the offline phase is not efficient.

The second construction is by Chung, Kalai and Vadhan from Crypto 2010 [pdf]. In their paper, they give different variants of VC protocols. Their approach works by allowing the client, in the preprocessing phase, to sample a random input r from the same distribution as the input x and compute F(r). Then, in the online phase, the client sends (r,x) in an arbitrary order (unknown to the server) to the server. The server evaluates the function F on both inputs and returns the results to the client, who compares the F(r) returned by the server with the value it already computed in the preprocessing phase and accepts if they are the same. The soundness error of this protocol is 1/2.
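A toy simulation of this basic variant shows the 1/2 soundness error. The function F, the input domain and the cheating strategy below are illustrative choices of ours, not from the paper:

```python
import random

def F(u):
    return u * 31 + 7  # a toy function; any F would do

def client_delegate(x, server):
    # Offline phase: sample r like x and precompute F(r).
    r = random.randrange(2 ** 16)
    precomputed = F(r)
    # Online phase: send (r, x) in a random order unknown to the server.
    pair = [r, x]
    random.shuffle(pair)
    answers = dict(zip(pair, server(pair)))
    if answers[r] != precomputed:
        return None  # reject: the server was caught cheating on r
    return answers[x]  # accept the claimed F(x)

def honest(pair):
    return [F(u) for u in pair]

def cheater(pair):
    # Guesses that the first input is x and corrupts only that answer.
    return [F(pair[0]) + 1, F(pair[1])]

random.seed(1)
print(client_delegate(123, honest))  # 3820 = F(123)
trials = 10_000
caught = sum(client_delegate(123, cheater) is None for _ in range(trials))
print(caught / trials)  # roughly 0.5: the soundness error is 1/2
```

The cheater is detected exactly when its guess lands on r, i.e. half the time; in the other half the client accepts a wrong F(x), which is why this variant on its own is not enough.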

Note here that if the server learns x, it can easily cheat, and therefore x needs to be hidden from the server. In another variant of the protocol, a deterministic FHE scheme is used to encrypt the input x by computing x̂=EFHE(pkFHE,x). Also, during the preprocessing phase the client computes r̂=EFHE(pkFHE,0…0) and, exploiting the homomorphic property of the FHE scheme, computes F(r̂). Instead of being given (x,r) in the clear, the server is given (x̂,r̂) and, in a similar way to the previous variant, correctness is determined by comparing the precomputed F(r̂) with the one returned by the server. Again, the soundness error of this protocol is 1/2. To further reduce the soundness error, many instances x̂i and r̂i can be used instead of a single instance, which makes the soundness error exponentially small in the number of instances used.

All the variants we mentioned so far are one-time, i.e. the values r̂i cannot be reused. To convert the previous variant to a many-time use protocol, a double encryption is used where the values {pkFHE,{r̂i},{x̂i}} are further encrypted using a different public key pk'FHE.

The four variants we mentioned so far require the client to perform heavy computation in the offline phase. The authors also propose another variant in which the offline phase is made more efficient but involves interaction. The idea is that the computation in the offline phase is also delegated.

The final construction is by Canetti, Riva and Rothblum from ACM CCS 2011 [pdf]. The protocol involves two servers, S0 and S1, at least one of which is honest; the client, however, does not know which one.

The servers are modeled as deterministic Turing machines (i.e. they have no random tape). The client sends the same input to both servers and, if the returned results differ, it searches through the sequence of states of the two computations (a binary search can be used for this task) and stops when it identifies a mismatch between two consecutive states state_i and state_{i+1}. At that point, the client computes the transition from state_i to state_{i+1} itself to identify which of the two servers returned the incorrect result.
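This referee step can be sketched in a few lines of Python, with an invented toy transition function standing in for the servers' Turing machine:

```python
def delta(state):
    # Toy transition function: one step of the deterministic machine.
    return (state * 3 + 1) % 1000

def honest_trace(x, steps):
    s, trace = x, [x]
    for _ in range(steps):
        s = delta(s)
        trace.append(s)
    return trace

def dishonest_trace(x, steps, corrupt_from):
    # Correct up to some step, then wrong from there onwards.
    trace = honest_trace(x, steps)
    for t in range(corrupt_from, steps + 1):
        trace[t] += 7
    return trace

def referee(trace0, trace1):
    # Binary search for the first step where the two claimed transcripts
    # diverge; invariant: they agree at lo and differ at hi.
    lo, hi = 0, len(trace0) - 1
    assert trace0[lo] == trace1[lo] and trace0[hi] != trace1[hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace0[mid] == trace1[mid]:
            lo = mid
        else:
            hi = mid
    # Recompute the single transition state_lo -> state_{lo+1} locally.
    correct = delta(trace0[lo])  # trace0[lo] == trace1[lo]
    return 0 if trace0[hi] == correct else 1  # index of the honest server

t0 = honest_trace(5, 100)
t1 = dishonest_trace(5, 100, corrupt_from=40)
print(referee(t0, t1))  # 0: server S0 is the honest one
```

The client's work is logarithmic in the number of steps plus one local transition, which is what makes the referee cheap even for long computations.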

This approach could be extended to more than two servers. This construction does not satisfy input/output privacy. On the other hand, it satisfies public verifiability and public delegation. Also, this approach does not require any preprocessing from the client.