- Efficient Authentication from Hard Learning Problems (Kiltz et al, Eurocrypt 2011) [pdf]
- Parallel and Concurrent Security of the HB and HB+ Protocols (Katz et al, Eurocrypt 2006) [pdf]
- HB#: Increasing the Security and Efficiency of HB+ (Gilbert et al, Eurocrypt 2008) [pdf]
The motivating problem for HB-related schemes is secure authentication for low-cost devices such as RFID tags, whose power and storage constraints necessitate lightweight protocols. The motivation for the original HB protocol (Hopper and Blum, Asiacrypt 2001) [pdf] was actually secure human-executable identification, performed using only such undemanding computations as dot products and XORs -- so, although their proposal does not fulfill all the desired security requirements, it is nevertheless an excellent starting point for developing (very) low complexity schemes.
HB authentication derives its security from the 'Learning Parity with Noise' (LPN) problem, which corresponds to the task of decoding random linear codes (a problem whose general, worst-case version is NP-hard):
- Let A be a random (q × k) binary matrix, x a random k-bit vector, η ∈ (0, ½) a (fixed) noise parameter, and ν a random q-bit vector such that HW(ν) ≤ ηq. Given A, η and z = (Ax) ⊕ ν, the LPN problem is to find a k-bit vector x' s.t. HW((Ax') ⊕ z) ≤ ηq.
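To make the definition concrete, here is a minimal Python sketch of generating an LPN instance. The helper names and the parameter values (q, k, η) are illustrative choices, not anything prescribed by the papers; the noise bits are drawn Bernoulli(η), so HW(ν) ≈ ηq.

```python
import random

def rand_bits(n):
    """Uniformly random list of n bits."""
    return [random.randrange(2) for _ in range(n)]

def dot(a, b):
    """Inner product over GF(2)."""
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def lpn_instance(q=128, k=32, eta=0.125):
    """Return (A, z) -- q noisy parities of a random k-bit secret x -- plus x itself."""
    x = rand_bits(k)
    A = [rand_bits(k) for _ in range(q)]
    nu = [1 if random.random() < eta else 0 for _ in range(q)]   # noise vector, HW(nu) ~ eta*q
    z = [dot(row, x) ^ e for row, e in zip(A, nu)]
    return A, z, x
```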
The basic HB protocol is as follows (a toy simulation is sketched after the list):
- Tag and reader share a key x ∈ {0,1}^n.
- For rounds j = 1, ..., r:
  - Reader draws a_j ∈_R {0,1}^n and sends it to the tag
  - Tag computes u_j = a_j · x and draws ε_j ∈_η {0,1}
  - Tag sends z_j = u_j ⊕ ε_j to the reader
- If z_j = a_j · x for a clear majority of j ∈ {1, ..., r} then the reader accepts the tag as authentic.
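Putting the rounds together, a minimal simulation might look like the following. The key length, round count and noise level are illustrative (the papers' recommended parameters and acceptance thresholds differ); the reader here accepts on a simple majority of correct rounds, as in the description above.

```python
import random

def rand_bits(n):
    """Uniformly random list of n bits."""
    return [random.randrange(2) for _ in range(n)]

def dot(a, b):
    """Inner product over GF(2)."""
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def hb_authenticate(x, r=80, eta=0.25):
    """One HB authentication run; the reader accepts on a clear majority of correct rounds."""
    n = len(x)
    wrong = 0
    for _ in range(r):
        a = rand_bits(n)                          # reader -> tag: challenge a_j
        eps = 1 if random.random() < eta else 0   # tag's Bernoulli(eta) noise bit
        z = dot(a, x) ^ eps                       # tag -> reader: z_j = u_j xor eps_j
        if z != dot(a, x):                        # reader checks z_j against a_j . x
            wrong += 1
    return wrong < r // 2

x = rand_bits(128)
print(hb_authenticate(x))    # almost always True for an honest tag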
This is clearly attractive as the only computations performed on the tag are dot products (computable bit by bit) and single-bit XORs. However, it turns out to be vulnerable to an active attack in which the adversary retrieves x bit by bit by sending challenges a_j whose k-th bit is 1 and all other bits are 0, that is: (0,0,...,1,...,0,0). Then z_j = x_k ⊕ ε_j, so the majority over r rounds reveals x_k.
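The attack is easy to demonstrate against a toy tag. The helper names and the number of repeated queries below are illustrative; majority voting over repeated responses cancels the Bernoulli(η) noise.

```python
import random

def rand_bits(n):
    return [random.randrange(2) for _ in range(n)]

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def tag_response(a, x, eta=0.25):
    """An honest HB tag: noisy inner product of the challenge with the key."""
    return dot(a, x) ^ (1 if random.random() < eta else 0)

def recover_key(n, query, repeats=200):
    """Active attack: unit-vector challenges leak x one bit at a time."""
    x_guess = []
    for k in range(n):
        a = [0] * n
        a[k] = 1                                         # challenge (0,...,0,1,0,...,0)
        ones = sum(query(a) for _ in range(repeats))     # each response is x_k xor noise
        x_guess.append(1 if ones > repeats // 2 else 0)  # majority vote cancels the noise
    return x_guess

x = rand_bits(64)
print(recover_key(len(x), lambda a: tag_response(a, x)) == x)   # True with high probability
```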
To resist the above attack, Juels and Weis (Crypto 2005) [pdf] proposed HB+ (again sketched in code after the list):
- Tag and reader share two keys x, y ∈ {0,1}^n.
- For rounds j = 1, ..., r:
  - Tag draws b_j ∈_R {0,1}^n and sends it to the reader
  - Reader draws a_j ∈_R {0,1}^n and sends it to the tag
  - Tag computes u_j = (a_j · x) ⊕ (b_j · y) and draws ε_j ∈_η {0,1}
  - Tag sends z_j = u_j ⊕ ε_j to the reader
- If z_j = (a_j · x) ⊕ (b_j · y) for a clear majority of j ∈ {1, ..., r} then the reader accepts the tag as authentic.
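A round of HB+ differs from HB only by the blinding term b_j · y. Intuitively, the unit-vector attack no longer works because each response now also carries b_j · y for a fresh, tag-chosen b_j, so the responses no longer average out to x_k. A minimal sketch (parameters again illustrative, acceptance again by simple majority):

```python
import random

def rand_bits(n):
    return [random.randrange(2) for _ in range(n)]

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def hbplus_authenticate(x, y, r=80, eta=0.25):
    """One HB+ run; the reader accepts on a clear majority of correct rounds."""
    n = len(x)
    wrong = 0
    for _ in range(r):
        b = rand_bits(n)                             # tag -> reader: blinding vector b_j
        a = rand_bits(n)                             # reader -> tag: challenge a_j
        eps = 1 if random.random() < eta else 0
        z = dot(a, x) ^ dot(b, y) ^ eps              # tag -> reader: noisy response z_j
        if z != (dot(a, x) ^ dot(b, y)):             # reader's per-round check
            wrong += 1
    return wrong < r // 2

x, y = rand_bits(128), rand_bits(128)
print(hbplus_authenticate(x, y))
```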
This is provably secure in the 'detection-based' adversarial model, and therefore no longer vulnerable to the active attack described above. However, several problems remain:
- The revised protocol introduces the problem of generating random vectors on the tag.
- The proof supposes that the rounds are performed sequentially and does not extend to a parallelised implementation (desirable as a strategy to reduce communication complexity).
- The false rejection rates (under the proposed acceptance thresholds) are very high -- as much as 44% for 80 rounds with a noise level of ¼ (see Table 1 of Gilbert et al, Eurocrypt 2008).
- The adequacy of the adversarial model (which supposes the adversary can interact only with the tag before impersonating it to the reader) has been questioned.
In fact, HB+ is vulnerable to a man-in-the-middle attack in which the adversary is able to interact with the reader as well as the tag before attempting to impersonate (Gilbert et al, 2005) [pdf]. We discussed the fact that such an attack poses a material threat: whilst it would be difficult to carry out an opportunistic MIM on a nearby tag carried by a passer-by, an attacker in possession of a tag with incentive to clone it could very well use a reader of his own to do so, with comparatively unconstrained time and resources.
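For reference, the attack is simple to state: the adversary XORs a fixed perturbation δ into every challenge a_j of a full run; if the reader still accepts, then δ · x = 0 (with high probability), otherwise δ · x = 1, and taking δ over the unit vectors recovers x. The sketch below follows that outline rather than the paper's exact notation, with illustrative parameters.

```python
import random

def rand_bits(n):
    return [random.randrange(2) for _ in range(n)]

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def hbplus_run(x, y, r=80, eta=0.25, tamper=None):
    """A full HB+ run in which a MIM may modify each challenge before it reaches the tag."""
    n = len(x)
    wrong = 0
    for _ in range(r):
        b = rand_bits(n)
        a = rand_bits(n)                               # reader's challenge
        a_tag = tamper(a) if tamper else a             # what the tag actually receives
        z = dot(a_tag, x) ^ dot(b, y) ^ (1 if random.random() < eta else 0)
        if z != (dot(a, x) ^ dot(b, y)):               # reader checks against its own a
            wrong += 1
    return wrong < r // 2

def grs_attack(n, run):
    """Recover x from the accept/reject outcomes of runs with perturbed challenges."""
    x_guess = []
    for k in range(n):
        delta = [0] * n
        delta[k] = 1
        accepted = run(lambda a: [ai ^ di for ai, di in zip(a, delta)])
        x_guess.append(0 if accepted else 1)           # accept  <=>  delta . x = 0 (w.h.p.)
    return x_guess

x, y = rand_bits(32), rand_bits(32)
print(grs_attack(len(x), lambda t: hbplus_run(x, y, tamper=t)) == x)
```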
The same authors solve this problem, and also address the parallelisation challenge, with RANDOM-HB#, in which the secrets become (n_X × m) and (n_Y × m) binary matrices X and Y rather than n_X- and n_Y-bit vectors x and y, and the protocol operates in one go in matrix form rather than round-by-round, so that the final verification consists of comparing the two m-bit vectors (a · X) ⊕ (b · Y) and z. This adaptation is secure against the above MIM (known as GRS-MIM after its authors), as well as having a much smaller false rejection rate. However, when the adversary is afforded more power (i.e. he is allowed to modify more of the elements of the protocol) another MIM attack has been found, though with increased complexity.
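A sketch of the one-shot exchange, treating X and Y as lists of rows over GF(2). The dimensions, noise level and acceptance threshold below are illustrative stand-ins, not the parameters proposed in the paper; the reader simply checks that z is within Hamming distance t of (a · X) ⊕ (b · Y).

```python
import random

def rand_bits(n):
    return [random.randrange(2) for _ in range(n)]

def rand_matrix(rows, cols):
    return [rand_bits(cols) for _ in range(rows)]

def vec_mat(v, M):
    """v . M over GF(2): XOR of the rows of M selected by the 1-bits of v."""
    out = [0] * len(M[0])
    for vi, row in zip(v, M):
        if vi:
            out = [o ^ r for o, r in zip(out, row)]
    return out

def random_hb_sharp(X, Y, eta=0.25, threshold=None):
    """One RANDOM-HB# exchange: a single matrix-form exchange instead of r scalar rounds."""
    nX, nY, m = len(X), len(Y), len(X[0])
    t = threshold if threshold is not None else int(0.35 * m)    # illustrative threshold
    b = rand_bits(nY)                                            # tag -> reader: blinding vector
    a = rand_bits(nX)                                            # reader -> tag: challenge
    u = [p ^ q for p, q in zip(vec_mat(a, X), vec_mat(b, Y))]    # a.X xor b.Y, known to both sides
    z = [ui ^ (1 if random.random() < eta else 0) for ui in u]   # tag -> reader: noisy m-bit response
    return sum(zi ^ ui for zi, ui in zip(z, u)) <= t             # reader: Hamming-distance check

X, Y = rand_matrix(80, 256), rand_matrix(512, 256)
print(random_hb_sharp(X, Y))
```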
The last paper we looked at was concerned with constructing MACs based on the hardness of LPN (Kiltz et al, Eurocrypt 2011). The protocol can be informally described as follows (a toy sketch appears after the list):
- Tag and reader share a key x ∈ {0,1}^n.
- Reader selects a random subset of bits of x and sends to the tag.
- Tag computes a noisy inner product z of the selected bits and sends to the reader.
- Reader verifies that z matches the inner product (without noise) 'most of the time'.
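The toy sketch below follows only the informal description above, not the paper's actual construction (which fixes the weight of the subset challenge, has the tag contribute its own randomness in a specific way, and uses concrete thresholds); the parameters and the choice to have the tag send several noisy inner products are my own illustrative simplifications.

```python
import random

def rand_bits(n):
    return [random.randrange(2) for _ in range(n)]

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) & 1

def two_round_auth(x, n_samples=128, eta=0.125):
    """Toy 2-round exchange following the informal description (not the paper's exact scheme)."""
    n = len(x)
    # Round 1 (reader -> tag): a random subset of the key positions.
    subset = sorted(random.sample(range(n), n // 2))
    x_sub = [x[i] for i in subset]
    # Round 2 (tag -> reader): random vectors plus noisy inner products with the selected bits.
    R = [rand_bits(len(subset)) for _ in range(n_samples)]
    z = [dot(r, x_sub) ^ (1 if random.random() < eta else 0) for r in R]
    # Reader recomputes the inner products and accepts if 'most' of them match.
    mismatches = sum(zi != dot(r, x_sub) for r, zi in zip(R, z))
    return mismatches < n_samples // 2      # illustrative acceptance threshold

x = rand_bits(256)
print(two_round_auth(x))
```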
This achieves 2-round authentication with active security. The proof derives from the hardness of the subspace LPN problem (Pietrzak 2010 [pdf]) and, because it does not use rewinding techniques, it remains valid against quantum adversaries.