Wednesday, March 18, 2015

52 Things: Number 24: Describe the binary, m-ary and sliding window exponentiation algorithms.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. Continuing the section on implementation details, we consider some different methods for calculating modular exponentiations.

Brief reminder of what we already know

A large number of cryptographic algorithms are based around the security of exponentiation-based problems, such as RSA or DH. Therefore, an efficient implementation of modular exponentiation is required for modern cryptography. We should begin by addressing the naive solution: to calculate $x^a \pmod N$ we use our favourite exponentiation algorithm to calculate $x^a$, then reduce the answer modulo $N$. However, for most cryptographic uses $x^a$ will be excessively large. Fortunately, most traditional methods can be modified simply by reducing modulo $N$ at every possible stage, and this leads to some of the common techniques. Here we present a few possible methods for calculating $X^E \pmod N$.
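
As a quick illustration of why reducing as you go matters, the snippet below (a Python sketch with arbitrarily chosen toy numbers) compares the naive approach with an implementation that reduces at every step; both give the same answer, but the naive version materialises the full value of $x^a$ first.

```python
# Toy numbers chosen purely for illustration.
x, a, N = 2790, 17, 3233

naive = (x ** a) % N      # computes the full x^a before reducing
reduced = pow(x, a, N)    # Python's built-in modular exponentiation,
                          # which reduces modulo N as it goes
assert naive == reduced
print(reduced)
```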


Binary

The binary modular exponentiation method closely resembles the traditional square-and-multiply method for calculating an exponentiation. Indeed, the only difference is that at each stage we reduce modulo $N$ (preventing any of the intermediate values from growing larger than necessary). We can do this from left-to-right or right-to-left (either going down or up the bits of the exponent), and there are subtleties that may affect your choice; a sketch of both variants is given below.
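
As a rough illustration, here is a minimal Python sketch of both variants (the function names are my own, not standard): the left-to-right version scans the exponent from its most significant bit downwards, the right-to-left version from its least significant bit upwards, and in both every intermediate value is reduced modulo $N$.

```python
def modexp_left_to_right(x, e, n):
    """Binary (square-and-multiply) exponentiation, scanning the exponent
    from the most significant bit down to the least significant bit."""
    result = 1
    for bit in bin(e)[2:]:              # bits of e, most significant first
        result = (result * result) % n  # always square
        if bit == '1':
            result = (result * x) % n   # multiply when the bit is set
    return result


def modexp_right_to_left(x, e, n):
    """Binary exponentiation scanning the exponent from the least
    significant bit upwards, keeping a running square of the base."""
    result, base = 1, x % n
    while e > 0:
        if e & 1:
            result = (result * base) % n
        base = (base * base) % n
        e >>= 1
    return result


assert modexp_left_to_right(2790, 17, 3233) == pow(2790, 17, 3233)
assert modexp_right_to_left(2790, 17, 3233) == pow(2790, 17, 3233)
```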

m-ary

The m-ary method works in a similar way, but instead of considering the exponent as a sequence of bits, we consider it as a sequence of elements from an alphabet of $M=2^m$ elements (i.e. $m$-bit strings). In fact, the binary method can be thought of as the m-ary method with $M=2$ (that is, $m=1$; hence the name). So, how does it work? Well, first we calculate a lookup table of $X^i$ for $i=1$ to $2^m-1$.
Then we work our way through the exponent $E$ (written in base $M$): for each 'digit' of $E$ we simply look up the appropriate value from our table, and instead of squaring once per bit we square $m$ times (a shift by $m$, i.e. raising the running result to the power $2^m$) before multiplying in the table entry.
Compared to the binary technique, this means more precomputation (i.e. calculating more powers of $X$) and so requires more space, but in return we have to perform fewer multiplications. A sketch follows below.
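
A possible Python sketch of this (again with a function name of my own choosing, and with $m=4$ as a default purely for illustration) is:

```python
def modexp_mary(x, e, n, m=4):
    """Fixed-window (2^m-ary) modular exponentiation sketch.

    Precomputes X^0 .. X^(2^m - 1) mod n, then processes the exponent
    m bits at a time from the most significant digit downwards: m
    squarings per digit (the 'shift'), plus at most one table lookup
    and multiplication."""
    M = 1 << m

    # Lookup table of small powers of x modulo n.
    table = [1] * M
    for i in range(1, M):
        table[i] = (table[i - 1] * x) % n

    # Digits of e in base M, most significant first.
    digits = []
    while e > 0:
        digits.append(e & (M - 1))
        e >>= m
    digits.reverse()

    result = 1
    for d in digits:
        for _ in range(m):
            result = (result * result) % n    # shift by m (m squarings)
        if d:
            result = (result * table[d]) % n  # single table lookup
    return result


assert modexp_mary(2790, 22, 3233) == pow(2790, 22, 3233)
```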

Sliding Window

So, the m-ary method certainly reduces the number of multiplications we have to do, but can we do better? The answer is (unsurprisingly, given the rhetorical question) yes, in the right circumstances. Suppose we are working with $m=4$, and $E=22=(0,0,0,1,0,1,1,0)_2=(1,6)_{2^{4}}$. We could use the 4-ary method, but perhaps if we 'reposition' our window we can do even better: after all, there are only 3 set bits, and they all fall within a window of 4 bits. If we knew this in advance, we could apply our lookup table at exactly that position, and so only have to do one lookup.

So, for the sliding window method, we first look through the binary representation of the exponent $E$, and use it to find a representation $E=\sum_i e_i 2^i$ where $0\leq e_i \leq 2^m-1$ and as many of the $e_i$ are zero as possible. This requires even more precomputation work than the m-ary method, since we have to calculate an efficient representation of $E$ as well as the lookup table used previously. However, if $E$ is known to be sufficiently sparse, a sliding-window method will almost certainly be more efficient.
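
To make this concrete, here is a rough Python sketch of a left-to-right sliding-window method (the function name and default window size are my own choices, not from the original post). Because every window is chosen to begin and end on a set bit, only the odd powers $X^1, X^3, \dots, X^{2^m-1}$ need to be precomputed, and zero bits between windows cost only a squaring each.

```python
def modexp_sliding_window(x, e, n, m=4):
    """Left-to-right sliding-window modular exponentiation sketch.

    Precomputes only the odd powers of x up to x^(2^m - 1); each window
    of at most m bits is positioned so that it starts and ends on a set
    bit, and zero bits outside any window cost one squaring each."""
    if e == 0:
        return 1 % n

    x = x % n
    x_squared = (x * x) % n
    table = {1: x}
    for i in range(3, 1 << m, 2):          # odd powers x^3, x^5, ...
        table[i] = (table[i - 2] * x_squared) % n

    bits = bin(e)[2:]
    result = 1
    i = 0
    while i < len(bits):
        if bits[i] == '0':
            result = (result * result) % n  # lone zero bit: just square
            i += 1
        else:
            # Longest window of at most m bits that ends on a set bit.
            j = min(i + m, len(bits))
            while bits[j - 1] == '0':
                j -= 1
            window = int(bits[i:j], 2)      # odd value, at most m bits
            for _ in range(j - i):
                result = (result * result) % n
            result = (result * table[window]) % n
            i = j
    return result


# E = 22 = 0b10110: one window covering '1011' plus a trailing zero bit,
# so a single table lookup suffices.
assert modexp_sliding_window(2790, 22, 3233) == pow(2790, 22, 3233)
```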

2 comments:

  1. ... the binary method can be thought of as the m-ary method with m=2 — Shouldn't that be m=1?
