Tuesday, May 29, 2012

Distance Bounding

Correction: the notion of tainted refers to sessions and not to adversaries. There was a small typo. (Thanks to Bogdan)

Today's reading group was led by Bogdan and Marcel.

In the first part of the presentation, Bogdan talked about security definitions for distance-bounding protocols. These definitions best fit RFID authentication, and the protocols are designed as countermeasures against man-in-the-middle attacks: measuring the time elapsed between sending a challenge and receiving the response makes, in theory, this type of attack infeasible.
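As a back-of-the-envelope illustration (the numbers and the helper function are ours, not from the talk), the round-trip time of a single challenge/response physically upper-bounds the prover's distance:

C = 299_792_458  # speed of light in m/s

def distance_upper_bound(rtt_seconds, processing_seconds=0.0):
    # The signal travels to the tag and back, so d <= c * (rtt - t_proc) / 2.
    return C * (rtt_seconds - processing_seconds) / 2

print(distance_upper_bound(100e-9))  # a 100 ns round trip bounds the tag to ~15 m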

The verifiers are called Readers, and the provers are RFID Tags.

The main threats that authentication protocols face are:
1) Mafia fraud: the adversary acts as the Reader when communicating with the Tag.
2) Terrorist fraud: the Tag leaks information to the adversary.
3) Distance fraud: the Tag claims to be closer than it actually is.

Identification protocols are divided into disjoint rounds of two types, time-critical and lazy phases, with N_c time-critical phases in total. The time elapsed in each critical phase is compared against some threshold t_max (this is usually done by the Reader). Since time-measurement errors may occur, up to T_max < N_c critical phases are allowed to exceed the threshold. Similarly, E_max < N_c is the maximum number of erroneous critical transmissions allowed. Depending on these parameters, the identification protocol produces its verdict. In this way, the proposed model ties together security and channel reliability.
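A minimal sketch of how such a verdict could be computed (the parameter names follow the text above, but the logic is our reading of the model, not pseudocode from the paper):

def reader_verdict(phases, t_max, T_max, E_max):
    # phases: one (elapsed_time, response_correct) pair per time-critical phase
    too_slow = sum(1 for elapsed, _ in phases if elapsed > t_max)
    errors = sum(1 for _, correct in phases if not correct)
    return too_slow <= T_max and errors <= E_max

# Five critical phases; one late and one wrong answer are still tolerated.
phases = [(0.8, True), (1.2, True), (0.9, False), (0.7, True), (0.95, True)]
print(reader_verdict(phases, t_max=1.0, T_max=1, E_max=1))  # True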

Security is game-based. The communication model is as follows: each identification session is assigned a session identifier sid, known only to the adversary. The key shared between Tag and Reader is static across sessions. The adversary learns whether or not an authentication session succeeds. Moreover, simultaneously and in different sessions, the adversary can
1) impersonate the tag (reader-adversary session),
2) impersonate the reader (tag-adversary session),
3) simply be honest but curious (tag-reader session).
The transcripts of these interactions are also known to the adversary, and its winning advantage is parameterized by its running time and the number of sessions (of each type) it engages in.

In the paper they use the notion of tainted sessions to define security. In the mafia-fraud model, this formalizes the following: a phase is tainted whenever the adversary relays messages between critical phases of different sessions without changing them, and these crossings are made in order. The adversary wins if he manages to force the Reader to accept while tainting at most T_max critical phases.

For the case of terrorist fraud, tainted phases model the case where A queries the Tag for help at some point during a time-critical phase in which A is engaged with the Reader. The paper distinguishes how this help is provided. It is trivial if the information provided by the Tag allows the adversary to authenticate himself to the Reader in future sessions without further help from the Tag (e.g. leaking the key). A terrorist attack corresponds to the case where the Tag provides non-trivial help. To achieve security against terrorist attacks they use simulators S, with the restriction that S only gets the transcript of A in an offline phase (no communication between S and the Tag in the online phase). S then tries to make the Reader accept in further sessions. The advantage of A is defined as p_A - p_S, where p_A (resp. p_S) is the probability that A (resp. S) wins. If A, aided non-trivially by the Tag, succeeds, then p_A is significantly bigger than p_S; in other words, if the advantage of A is negligible and A succeeds, then the Tag aided A only trivially.

In the last attack, distance fraud, the adversary must send his messages before he receives any communication from the Reader. This notion is captured by having A commit to his first answer (and only to his first message, which makes for a powerful adversary). A phase is tainted if A sends out a non-committed message. He wins whenever the Reader accepts, provided there are at most T_max tainted phases.

All of the above are properties of time-critical phases. Impersonation resistance models the case where A tries to impersonate a Tag using only the lazy phases. A wins when the Reader accepts and A is not merely relaying messages (of possibly different sessions) between the legitimate Tag and the Reader.

Under these security definitions, the authors claim that the above attacks are independent of each other, although this is an informal result, and proofs are not provided in this version of the paper.

In the second part of the presentation, Marcel discussed security against the four types of attack for several protocols that make use of critical and lazy phases for authentication.

Among other ideas, the Tag may sign (in a lazy phase) the concatenation of the random bits exchanged in earlier time-critical phases. According to the paper, this protocol is neither terrorist-fraud resistant nor distance-fraud resistant. It is also possible to exchange nonces (in a lazy phase) and use them as the seed of a PRF (keyed with the shared secret) to generate a string; over repeated time-critical phases, the Reader then checks that the Tag holds the same string as it does (see the sketch below). Again, according to the paper, this protocol is neither impersonation resistant nor terrorist-fraud resistant. The last protocol is similar to the previous one, but now the lazy phase generates two values: the output of the PRF is viewed as an ephemeral key, which is used to encrypt the master key. The bits of the ephemeral key and of the encrypted master key are hidden within a binary tree, and the Reader's challenges sent in the time-critical phases determine which path the Tag must follow along the tree.
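A toy sketch of the nonce/PRF protocol just described (HMAC stands in for the PRF, the bit layout is our simplification, and the actual per-bit timing of the critical phases is omitted):

import hmac, hashlib, secrets

key = secrets.token_bytes(16)                    # key shared by Tag and Reader

def derive_string(key, nonce_reader, nonce_tag):
    return hmac.new(key, nonce_reader + nonce_tag, hashlib.sha256).digest()

def bit(buf, i):
    return (buf[i // 8] >> (i % 8)) & 1

# Lazy phase: exchange nonces; both sides derive the same string.
n_r, n_t = secrets.token_bytes(8), secrets.token_bytes(8)
reader_str = derive_string(key, n_r, n_t)
tag_str = derive_string(key, n_r, n_t)

# Time-critical phases: the Reader challenges bit positions one at a time, and
# would also check each response arrives within t_max (timing not modelled here).
N_c = 32
ok = all(bit(tag_str, i) == bit(reader_str, i) for i in range(N_c))
print("Reader accepts:", ok)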

Wednesday, May 23, 2012

PKC'12, Day 1: Anonymous Broadcast Encryption

In the last session of the first day at PKC there were two talks dealing with anonymity in broadcast encryption. The aim of broadcast encryption (BE) is to efficiently send a message to a large set of receivers from some universe; application-wise it is epitomised by Pay TV. The notion was introduced by Fiat and Naor in 1993, and since then, security notions have evolved, now considering adaptive adversaries and collusion-resistance.
While the focus of recent research has been on minimizing the ciphertext length, one aspect of security seems to have been overlooked: in the common model of BE, the decryption algorithm requires, besides a user's secret key, knowledge of the set of recipients. Applied to Pay TV, this means a subscriber must know who else subscribed to the programme.
This arguably represents a severe breach of users' privacy, and the paper by Libert, Paterson and Quaglia, presented by Elizabeth, addresses this issue. They define the notion of anonymous BE, which guarantees that a ciphertext not only hides the encrypted message, but also the set of users it is broadcast to. Their first, straightforward instantiation leads to ciphertexts whose length is linear in the size of the universe. They then improve on this by presenting a scheme whose ciphertext size only depends on the number of recipients. As previous schemes also require the recipient set to be transmitted, this means that, asymptotically, anonymity essentially comes for free.
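To see why the straightforward instantiation is linear in the universe size, here is a toy sketch: the broadcast carries one fixed slot per user in the universe, holding either an encryption of the message (for recipients) or an unopenable dummy (for everyone else), so the ciphertext itself reveals nothing about the recipient set. The homemade authenticated cipher below is an insecure stand-in of our own; a real construction needs a key-private, semantically secure scheme.

import hashlib, secrets

def enc(key, msg):                              # toy cipher: XOR pad + check tag
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(key + nonce).digest()[:len(msg)]
    tag = hashlib.sha256(key + nonce + msg).digest()[:8]
    return nonce + bytes(a ^ b for a, b in zip(msg, pad)) + tag

def dec(key, ct, msg_len):
    nonce, body, tag = ct[:16], ct[16:16 + msg_len], ct[16 + msg_len:]
    pad = hashlib.sha256(key + nonce).digest()[:msg_len]
    msg = bytes(a ^ b for a, b in zip(body, pad))
    return msg if hashlib.sha256(key + nonce + msg).digest()[:8] == tag else None

universe = {u: secrets.token_bytes(16) for u in range(10)}      # user -> key
recipients = {2, 5, 7}
msg = b"pay-tv frame"

def slot(u):          # dummy slots are encrypted under a fresh random key
    if u in recipients:
        return enc(universe[u], msg)
    return enc(secrets.token_bytes(16), secrets.token_bytes(len(msg)))

broadcast = [slot(u) for u in sorted(universe)]  # one equal-length slot per user

print(dec(universe[5], broadcast[5], len(msg)))  # b'pay-tv frame'
print(dec(universe[3], broadcast[3], len(msg)))  # None: user 3 learns nothing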
The second talk was on outsider anonymity in BE, introduced by Fazio and Perera, which guarantees only that the ciphertext hides the set of recipients from non-receivers. So, in the application to Pay TV, in order to find out who is receiving a programme, it would suffice to subscribe to it too. By relaxing the security notion, they manage to improve efficiency, giving a scheme with sub-linear ciphertext length.

Public Key Encryption Against Related Key Attacks


In the first of his three talks at this year's PKC in Darmstadt, Hoeteck Wee (George Washington University) discussed his paper on linear related key attacks (RKAs) [pdf]. The paper presents public key schemes that are resilient against RKAs under standard assumptions in the standard model, with a ciphertext overhead of only an additive constant number of group elements.

The security definition is that of Bellare et al. [pdf] with attacks on the secret key of the public key schemes, so the focal point is chosen-ciphertext related key attacks (CC-RKA). In this setting an adversary sends decryption queries of the form (C, Δ), where Δ is the linear shift, and the oracle responds with the decryption of ciphertext C under sk + Δ. It is important to note that linear shifts do not capture practical attacks, but this is an important step in understanding RKAs as a whole.

In this attack the decryption oracle refuses to act only when the ciphertext it is given matches the challenge ciphertext and the derived key equals the real one. One can modify this attack model to what is called 'weak-CC-RKA' where the decryption oracle refuses to act whenever the ciphertext it is given matches the challenge ciphertext.
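A sketch of the two refusal rules (the decryption function and the key arithmetic are abstract placeholders, and the toy "cipher" below is purely illustrative):

def make_rka_oracle(decrypt, sk, challenge_ct, weak=False):
    def oracle(C, delta):
        if C == challenge_ct:
            if weak:
                return None        # weak-CC-RKA: always refuse on the challenge
            if delta == 0:
                return None        # CC-RKA: refuse only if the derived key is sk
        return decrypt(sk + delta, C)
    return oracle

# Toy instantiation: "encryption" is addition mod 2^32 (illustrative only).
sk = 123456
enc = lambda k, m: (m + k) % 2**32
dec = lambda k, c: (c - k) % 2**32
c_star = enc(sk, 42)

strong = make_rka_oracle(dec, sk, c_star)
print(strong(c_star, 5))   # 37: the challenge decrypted under sk + 5
print(strong(c_star, 0))   # None: challenge ciphertext with the real key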

The paper presents strong public key encryption based on bilinear DDH and the hardness of factoring, and weak-CC-RKA schemes based on DDH and LWE. The constructions exploit two properties: 'key homomorphism', which means that the decryption of a ciphertext C using a secret key sk + Δ equals the decryption of some other (efficiently computable) ciphertext C' using the original secret key sk, and 'fingerprinting', meaning it is hard to maul ciphertexts in a meaningful way, to avoid querying Dec(sk, ·) on the challenge ciphertext (more details in the paper).
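Key homomorphism can be seen concretely in plain ElGamal (our example for linear shifts, not one of the paper's constructions): decrypting (c1, c2) under sk + Δ gives the same result as decrypting C' = (c1, c2 · c1^{-Δ}) under sk, and C' is efficiently computable from the ciphertext and Δ alone.

import secrets

P, g, q = 2039, 4, 1019     # toy order-q subgroup of Z_P^* (insecure parameters)
sk = secrets.randbelow(q)
pk = pow(g, sk, P)

def enc(m):
    r = secrets.randbelow(q)
    return pow(g, r, P), m * pow(pk, r, P) % P

def dec(key, c):
    c1, c2 = c
    return c2 * pow(c1, (q - key % q) % q, P) % P    # c2 / c1^key

delta, m = 7, 123
C = enc(m)
c1, c2 = C
C_prime = (c1, c2 * pow(c1, (q - delta) % q, P) % P)  # c2 * c1^{-delta}

# Decrypting C under the shifted key agrees with decrypting C' under sk.
print(dec((sk + delta) % q, C) == dec(sk, C_prime))   # True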

The constructions based on bilinear DDH and standard DDH require strong one-time signature schemes based on discrete log. The LWE-based scheme utilises trapdoors for lattices, as devised by Micciancio and Peikert [pdf] - the trapdoor for inverting f_A(s) = As + e is key homomorphic. The technique of Hofheinz and Kiltz [pdf] to base security on the hardness of factoring is employed along with one-time signatures to fulfil the two properties described above and create a strong IND-CC-RKA scheme.

Naturally the main open problem is developing schemes that are secure against a larger class of attacks, while keeping the overhead as low as possible.

Tuesday, May 22, 2012

Deterministic Public-Key Encryption Schemes



Today's reading group was given by Ming-Feng and Peter. Their presentation focused on public-key deterministic encryption. In particular, Ming-Feng outlined the results in the Crypto 2008 paper by Boldyreva, Fehr and O'Neill [BFO], while Peter described more recent results by Fuller, O'Neill and Reyzin [FOR].

Ming-Feng started the presentation by describing three security definitions: PRIV, PRIV-1 and PRIV-IND. PRIV requires that, given the encryption of a list of messages, it is hard to guess any partial information about these messages, as long as each message has high entropy given the others.
PRIV-1 is the single-message version of PRIV, and PRIV-IND requires that it is hard to distinguish the encryptions of two messages drawn from two different high-entropy distributions on the message space.
Ming-Feng mentioned that the above three definitions are equivalent. He then moved on to describe two general constructions: a CPA-secure scheme and a CCA-secure one.
For the CPA-secure construction we need the notion of deterministic encryption with Hidden Hash Mode (HHM). Informally, we say that AE = (K, K1, E, D) is an HHM deterministic encryption scheme if (K, E, D) is a deterministic encryption scheme and K1 is a key-generation algorithm which produces only a public key and induces a hash function, such that any poly-time adversary has negligible advantage in distinguishing the first output of K from that of K1. For a formal definition of HHM deterministic encryption see [BFO] (page 14). An HHM deterministic encryption scheme where the hash function induced by K1 is universal is PRIV-secure.
Finally, Ming-Feng described the CCA-secure construction, which makes use of universal target-collision-resistant hash functions and lossy trapdoor functions that operate in two possible modes, an injective mode and a lossy mode. Details of the construction are in [BFO] (page 15).

In the second part of the presentation, Peter gave the details of, and the idea behind, the deterministic encryption scheme based on trapdoor functions in [FOR]. The scheme follows the Encrypt-with-Hardcore (EwHCore) paradigm: let E be any public-key encryption algorithm and let f be a TDF with a hardcore function hc. A message m is encrypted by first computing y = f(m) and then encrypting y using E with hc(m) as the randomness (i.e., the encryption of m is E(f(m); hc(m))). The output of hc must be long enough to provide sufficient randomness for the encryption algorithm, and it must satisfy an extra property called "robustness".
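A toy sketch of the EwHCore template (textbook RSA plays the TDF f, toy ElGamal plays E, and a truncated SHA-256 merely stands in for a genuine hardcore function; all of these placeholder choices are ours):

import hashlib, secrets

# f: textbook RSA as the trapdoor function (classic toy parameters, insecure)
N, e_rsa, d_rsa = 3233, 17, 2753
f = lambda m: pow(m, e_rsa, N)
f_inv = lambda y: pow(y, d_rsa, N)

# hc: SHA-256 standing in for a hardcore function of f (illustrative only)
hc = lambda m: int.from_bytes(hashlib.sha256(m.to_bytes(2, "big")).digest(), "big")

# E: toy ElGamal taking its randomness explicitly; P > N so y fits in the group
P, g, q = 4007, 4, 2003
sk = secrets.randbelow(q)
pk = pow(g, sk, P)
E = lambda y, r: (pow(g, r % q, P), y * pow(pk, r % q, P) % P)
D = lambda c: c[1] * pow(c[0], (q - sk % q) % q, P) % P

def det_enc(m):                 # deterministic: the same m always re-derives
    return E(f(m), hc(m))       # the same "randomness" hc(m)

def det_dec(c):
    return f_inv(D(c))

m = 1234
assert det_enc(m) == det_enc(m)        # determinism
assert det_dec(det_enc(m)) == m        # correctness
print(det_enc(m))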

Finally, Peter mentioned that possible instantiations of hardcore functions can be obtained from iterated trapdoor permutations and lossy TDFs.

Tuesday, May 15, 2012

Information-Theoretically Secure MPC

Multi-Party Computation (MPC) allows a group of people, each holding some input, to compute a function of their inputs. An MPC protocol must be correct and secure: all honest parties must get the correct function value, and the dishonest parties must learn nothing more about the honest parties' inputs than what follows from the result.

Information-theoretically secure MPC achieves these properties even against computationally unbounded adversaries. It does not rely on any computational assumptions and is "forward secure" in the sense that future advances in breaking cryptographic algorithms cannot harm an already executed protocol.

This strong security property comes at a cost: such protocols can only exist if strictly fewer than a third of the parties are dishonest (or fewer than half, if the dishonest parties do not actively cheat in the protocol).

The basic operation in IT-MPC is Shamir's polynomial secret sharing (other options like arithmetic codices are also available). The function to be computed is modelled as a circuit over a suitably large finite field with INPUT, ADD, MUL, RAND and OUTPUT gates.
Inputs are simply secret-shared, and addition gates can be computed by adding shares locally; to obtain an output, one reconstructs the sharing involved.
Multiplying Shamir shares gives a "sharing" by a polynomial of twice the degree which is, moreover, not uniformly distributed. What is required is a kind of "refresh" operation to get such a sharing back down to an ordinary one.
(Substitute "ciphertext" for "share" everywhere and read "noise" for "degree" to see an interesting conceptual analogy to fully-homomorphic encryption.)
The refresh operation is performed with the help of everyone having shared some random values for each multiplication gate in advance. More such random values can be used to handle random gates.
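A minimal sketch of this pipeline over a toy field (parameters ours): sharing, local addition, and the degree doubling on multiplication that makes the refresh necessary.

import secrets

p = 2039          # toy prime field
n, t = 5, 2       # 5 parties, degree-t polynomials; party i holds the value at x = i

def share(secret, deg=t):
    coeffs = [secret] + [secrets.randbelow(p) for _ in range(deg)]
    return [sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
            for x in range(1, n + 1)]

def reconstruct(shares, deg):
    xs = list(range(1, deg + 2))           # any deg+1 shares suffice
    total = 0
    for xi in xs:                          # Lagrange interpolation at 0
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * -xj % p
                den = den * (xi - xj) % p
        total += shares[xi - 1] * num * pow(den, p - 2, p)
    return total % p

a, b = 123, 456
sa, sb = share(a), share(b)

# ADD gate: purely local on the shares.
assert reconstruct([(x + y) % p for x, y in zip(sa, sb)], t) == (a + b) % p

# MUL gate: pointwise products lie on a degree-2t polynomial, so 2t+1 shares
# are needed, and the sharing must then be "refreshed" back down to degree t.
prod = [(x * y) % p for x, y in zip(sa, sb)]
assert reconstruct(prod, 2 * t) == (a * b) % p
print("add and (degree-2t) mul check out")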

Setting up these random sharings is easy against passive adversaries but harder against active ones. Earlier papers used verifiable secret sharing: this works but has a known lower bound of Omega(n^2) per VSS operation, making it rather inefficient.
A better way is to use player elimination. The idea is that everyone cross-checks their values and complains if something is amiss; in other words, we replace error correction with the (easier) error detection. If anyone has cheated then someone is guaranteed to notice, as honest players are assumed to outnumber dishonest ones more than 2:1. If there is a complaint then everyone is forced to reveal all their inputs for this round, and the complainer and at least one complainee are eliminated. All but one of the eliminated players must have been dishonest, so, in the spirit of the "Mafia" party game, the honest players can use their numerical advantage to eliminate all cheaters (or get a sharing set up if the dishonest players decide not to cheat).

The technical tool presented to achieve player elimination in random sharing is a hyperinvertible matrix. This is a square matrix all of whose square submatrices are invertible.
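One standard construction (due to Beerliova-Trubiniova and Hirt) builds such a matrix from Lagrange interpolation: it maps values at points alpha_1, ..., alpha_n to the values of the same polynomial at disjoint points beta_1, ..., beta_n. A toy sketch that builds one over a small prime field and brute-force checks the defining property:

from itertools import combinations

p, n = 97, 4                             # toy field GF(p) and matrix size
alphas = list(range(1, n + 1))           # interpolation points
betas  = list(range(n + 1, 2 * n + 1))   # evaluation points (disjoint)

def lagrange_coeff(j, x):
    # lambda_j(x) = prod_{k != j} (x - alpha_k) / (alpha_j - alpha_k) mod p
    num = den = 1
    for k in range(n):
        if k != j:
            num = num * (x - alphas[k]) % p
            den = den * (alphas[j] - alphas[k]) % p
    return num * pow(den, p - 2, p) % p

M = [[lagrange_coeff(j, betas[i]) for j in range(n)] for i in range(n)]

def det_mod_p(rows, cols):
    # determinant of the chosen submatrix via Gaussian elimination mod p
    a = [[M[r][c] for c in cols] for r in rows]
    m, det = len(rows), 1
    for i in range(m):
        piv = next((r for r in range(i, m) if a[r][i]), None)
        if piv is None:
            return 0
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            det = -det % p
        det = det * a[i][i] % p
        inv = pow(a[i][i], p - 2, p)
        for r in range(i + 1, m):
            factor = a[r][i] * inv % p
            for c in range(i, m):
                a[r][c] = (a[r][c] - factor * a[i][c]) % p
    return det

for size in range(1, n + 1):             # every square submatrix is invertible
    for rows in combinations(range(n), size):
        for cols in combinations(range(n), size):
            assert det_mod_p(rows, cols) != 0
print("hyperinvertible")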
(One could also take the "PCP" approach and open a random linear combination of everyone's shares: unless everyone created a correct sharing this is only negligibly likely to verify. But we want the soundness error to be exactly zero, hence this approach is not good enough.)

Tuesday, May 8, 2012

Study group: On the Circular Security of Bit-Encryption

The study group on May 8th 2012 was given by Anna Lisa and Gaven. They talked about circular security.
In particular, they focused on the paper by Rothblum titled "On the Circular Security of Bit-Encryption".
Gaven started by intuitively introducing circular security: an attacker is able to obtain encryptions of messages that are related to the private key. He then pointed out the following conjecture:

(Conjecture 1) every semantically secure public-key bit-encryption scheme is circular secure.

Afterwards, he moved on to formally define the notion of circular security by means of three different definitions: (1) with respect to key recovery; (2) with respect to message recovery; (3) with respect to indistinguishability of oracles.
All definitions allow the adversary to query an oracle (called the KDM oracle) which, on input a value i, outputs the encryption of the i-th bit of the private key. In the first definition, the adversary's goal is to recover the entire private key given the public key. In the second definition, the attacker, given the public key and the encryption of a bit b, is asked to find the value of b. Finally, in the third definition the adversary must distinguish whether she is talking to the KDM oracle or to an oracle that always outputs the encryption of zero. The three definitions above are equivalent.
Gaven gave the ideas behind the proofs that definition (i) implies definition (i+1), for i in {1,2}; and to complete the circle of equivalences, he showed that indistinguishability of oracles implies key recovery: intuitively, a key-recovery adversary can be used to distinguish between the KDM oracle and the other oracle (the one that always answers with an encryption of zero). Indeed, if such an adversary is talking to the KDM oracle then with non-negligible probability she finds the decryption key, whereas otherwise it should be infeasible to find the key.
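To make the oracle-indistinguishability definition concrete, here is a toy mock-up: on query i the oracle returns Enc(pk, sk_i) when its hidden bit is 0, and Enc(pk, 0) when it is 1. Goldwasser-Micali bit encryption with tiny parameters plays the scheme, and treating the bits of the prime factor as "the secret key bits" is our simplification.

import secrets

p, q = 499, 547                          # toy primes (insecure)
N = p * q

def is_qr(a, prime):                     # Euler's criterion
    return pow(a, (prime - 1) // 2, prime) == 1

# y: a quadratic non-residue modulo both p and q
y = next(a for a in range(2, N) if not is_qr(a, p) and not is_qr(a, q))

def enc(bit):                            # GM: bit 0 -> square, bit 1 -> non-square
    r = secrets.randbelow(N - 2) + 2     # (ignoring the tiny chance gcd(r, N) > 1)
    return pow(r, 2, N) * pow(y, bit, N) % N

def dec(c):                              # requires the factorisation, i.e. the key
    return 0 if is_qr(c % p, p) else 1

sk_bits = [int(b) for b in bin(p)[2:]]   # our stand-in for "the key's bits"

def kdm_oracle(i, hidden):
    return enc(sk_bits[i] if hidden == 0 else 0)

# Sanity check: a distinguisher that could decrypt would win immediately;
# the definition demands this be infeasible without the secret key.
hidden = secrets.randbelow(2)
guess = 0 if any(dec(kdm_oracle(i, hidden)) == 1 for i in range(len(sk_bits))) else 1
print("distinguished correctly:", guess == hidden)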

Anna Lisa described and analyzed a bit-encryption scheme which is semantically secure but not circular secure. The semantic security of the scheme relies on the l-multilinear Symmetric External Diffie-Hellman (l-MSXDH) assumption, which is an extension of SXDH.
To recall, the SXDH assumption extends the Decisional Diffie-Hellman (DDH) assumption to bilinear groups by assuming that there exist two cyclic groups of prime order p, equipped with a bilinear map e, such that the DDH assumption holds in both groups. The l-MSXDH assumption extends SXDH by assuming that there exist l groups G1,…,Gl, equipped with a multilinear map e, such that the DDH assumption holds in all of them.
For l > 2, however, there is no candidate l-multilinear group for which SXDH is conjectured to be hard; on the other hand, there is also no evidence that such groups do not exist. Therefore, the construction that Anna Lisa explained should not be interpreted as a counterexample, but rather as an obstacle to proving the bit-encryption conjecture (Conjecture 1).

The encryption scheme (KeyGen, Enc, Dec) is as follows:
Let GS be any l-multilinear group scheme, i.e. a probabilistic polynomial-time algorithm that, for every security parameter n, produces a description of l+1 groups G1,…,Gl,GT of order p, with generators g1,…,gl,gT respectively, along with an efficiently computable l-multilinear map e that maps the first l groups to the (l+1)-th group.

KeyGen(1^n)
  1.     Invoke the group scheme algorithm to obtain
                                      params = (p, (G1,…,Gl,GT), (g1,…,gl,gT), e).
  2.     Select a 2 x l matrix X whose elements are chosen at random in Z_p.
  3.     Set the 2 x l matrix U in such a way that, for each j=1, ..., l:
          - U[0,j] = gj^{X[0,j]};
          - U[1,j] = gj^{X[1,j]};
  4.     Select s at random in {0,1}^l and set α = \sum_{i=1}^{l} X[s_i,i] mod p.
  5.     The public encryption key is (params, U, α) and the private decryption key is (X, s).
         
         
Enc_{(params,U,α)}(σ)
   1.   Select r1,…,rl at random in Z_p.
   2.   Output (g1^{r1}, (U[σ,1])^{r1}), ..., (gl^{rl}, (U[σ,l])^{rl}).
         
Dec_{(X,s)}((c1,d1),...,(cl,dl))

   1. If c1^{X[0,1]} = d1 output 0, otherwise output 1.
  
Anna Lisa pointed out that the bit string s and the public value α are used in neither the encryption nor the decryption algorithm. Indeed, they are there to help the KDM attacker. If α is ignored and l = 1, the scheme is semantically secure under DDH, hence there would be no need to use a bigger l. The idea is that having a big l >> log p helps to preserve semantic security even though α (which is public) is related to s. In the KDM setting, instead, the attacker can obtain extra information about s (which is part of the private key) and use it to verify that α is consistent with s.
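Note that the algorithms above never evaluate the multilinear map e (only the KDM attack needs it), so their data flow can be sketched directly. In the toy sketch below, a single small prime-order group stands in for all of G1,…,Gl, and l is kept small, whereas the paper needs l >> log p; everything is illustrative only.

import secrets

P, g, p = 2039, 4, 1019      # toy modulus, generator and group order (insecure)
l = 8

def keygen():
    X0 = [secrets.randbelow(p) for _ in range(l)]
    # forcing X[0,j] != X[1,j] keeps the toy's decryption exact (the real
    # scheme is only correct with overwhelming probability)
    X1 = [(X0[j] + 1 + secrets.randbelow(p - 1)) % p for j in range(l)]
    X = [X0, X1]
    U = [[pow(g, X[b][j], P) for j in range(l)] for b in range(2)]
    s = [secrets.randbelow(2) for _ in range(l)]
    alpha = sum(X[s[j]][j] for j in range(l)) % p
    return (U, alpha), (X, s)            # public key, private key

def enc(pk, sigma):
    U, _ = pk
    cts = []
    for j in range(l):
        r = 1 + secrets.randbelow(p - 1)         # non-zero exponent
        cts.append((pow(g, r, P), pow(U[sigma][j], r, P)))
    return cts

def dec(sk, cts):
    X, _ = sk
    c1, d1 = cts[0]
    return 0 if pow(c1, X[0][0], P) == d1 else 1

pk, sk = keygen()
assert dec(sk, enc(pk, 0)) == 0 and dec(sk, enc(pk, 1)) == 1
print("correctness holds")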
         
Afterwards, Anna Lisa showed the correctness of the scheme, which is straightforward, and then sketched the proof that the scheme is semantically secure under the l-multilinear SXDH assumption. The proof follows a standard hybrid argument.
Finally, she described the attack that shows the scheme is not circular secure. The details of the attack can be found in the paper [Ro12] (page 12).


Gaven then presented a further result: the bit-encryption conjecture (Conjecture 1) cannot be proved by a black-box reduction. In other words, there cannot be any reduction of the circular security of a bit-encryption scheme to the semantic security of the same scheme that uses both the encryption scheme and the adversary as black boxes. In fact, a stronger result is proved: even the circular security of CCA2-secure bit-encryption schemes cannot be shown by means of a black-box reduction.
Finally, Gaven stressed the meaning of this last result. The impossibility does not rule out a black-box construction of a circular-secure bit-encryption scheme from any semantically secure bit-encryption scheme; it only rules out proving, via a black-box reduction, that every semantically secure scheme is itself circular secure, without making any change to the encryption scheme.