A key issue in securing data is that encryption alone is often not enough. Even if you encrypt the data in a database, your access patterns to that database can reveal a lot of information about your queries. For example, suppose you look up the IBM stock price and then buy IBM stock. An attacker who cannot read what you access, but who can observe which locations you access, can correlate those accesses with your purchase of IBM stock. This raises the question of whether one can hide the access pattern itself.
The solution to this problem is to use Oblivious RAM (ORAM), and that was the topic of two papers on the Tuesday afternoon of CCS. In the first, the authors presented the Path ORAM algorithm, a method for hiding access patterns. Data is held in a tree structure, with a bucket of multiple blocks at each node, and the tree is assumed to be held on a remote server. To access a specific block, the user requests a single root-to-leaf path in the tree which contains their desired block. The server learns only the path, and not which specific block on it is accessed.
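To make the tree layout concrete, here is a small illustrative helper (my own sketch, not code from the paper): if the buckets of a complete binary tree of a given depth are stored in an array heap-style, with node `i` having children `2*i+1` and `2*i+2`, then the root-to-leaf path the client requests is just a short list of array indices.

```python
def path_indices(leaf, depth):
    """Node indices from the root down to the given leaf (0 .. 2**depth - 1)
    in a complete binary tree stored heap-style in an array."""
    node = leaf + (2 ** depth - 1)  # array index of the leaf node
    path = [node]
    while node > 0:
        node = (node - 1) // 2      # step up to the parent
        path.append(node)
    return list(reversed(path))

print(path_indices(2, 2))  # → [0, 2, 5]
```

Whatever block the client is after, the server only ever sees one such list of bucket indices per access.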
This deals with reading data, but we also need to write data back. Here the client re-randomizes the positions of all the blocks on the retrieved path and then writes the path back. By clever design of this re-randomization step the method can be made remarkably efficient and practical.
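The read-then-re-randomize-then-write-back loop above can be sketched in a few dozen lines. The following is a toy, in-memory model under my own naming (`ToyPathORAM` and its methods are illustrative, not the paper's code), and it omits the encryption and fixed-size dummy blocks a real client would use: the client keeps a position map (block → leaf) and a stash, fetches a whole path per access, assigns the touched block a fresh random leaf, and greedily pushes stash blocks back into buckets on the fetched path.

```python
import random

class ToyPathORAM:
    """Toy Path ORAM sketch: a server-side tree of buckets (up to z blocks
    each) plus a client-side position map and stash. Illustrative only."""

    def __init__(self, depth, z=4):
        self.depth, self.z = depth, z
        self.n_leaves = 2 ** depth
        self.tree = [[] for _ in range(2 ** (depth + 1) - 1)]  # server state
        self.pos, self.stash = {}, {}                          # client state

    def _path(self, leaf):
        """Node indices from the leaf bucket up to the root."""
        node = leaf + self.n_leaves - 1
        nodes = [node]
        while node:
            node = (node - 1) // 2
            nodes.append(node)
        return nodes

    def access(self, block_id, new_value=None):
        # Look up (or invent) the block's leaf, then re-randomize it.
        leaf = self.pos.get(block_id, random.randrange(self.n_leaves))
        self.pos[block_id] = random.randrange(self.n_leaves)
        path = self._path(leaf)
        for node in path:                 # read the whole path into the stash
            for bid, val in self.tree[node]:
                self.stash[bid] = val
            self.tree[node] = []
        if new_value is not None:         # write, or serve the current value
            self.stash[block_id] = new_value
        result = self.stash.get(block_id)
        for node in path:                 # write back, deepest bucket first
            fits = [b for b in self.stash
                    if node in self._path(self.pos[b])]
            for bid in fits[: self.z]:
                self.tree[node].append((bid, self.stash.pop(bid)))
        return result
```

For example, `ToyPathORAM(depth=3).access("ibm", new_value=182.5)` stores a block and later `access("ibm")` retrieves it, while the server side only ever sees full-path reads and writes at uniformly random leaves.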
In the second talk, the Path ORAM algorithm was implemented to create a secure processor, namely a processor which operates on secure data. The resulting processor, called PHANTOM, is novel in that it also hides its memory access patterns, which it does by using a Path ORAM implementation in hardware. That this can even be considered is an indication that Path ORAM is an efficient methodology for obtaining ORAM in practice.
The talk on PHANTOM explained some of the engineering challenges in implementing ORAM in hardware, and some of the changes needed to the basic algorithm to realise it efficiently given the underlying hardware constraints. Heavy use is made of pipelining, so as to ensure that operations are performed in parallel. Care is also taken to avoid timing side-channels, by adding a DRAM buffer to mitigate delays between the RAM and the secure processor. The concrete timings obtained were about 25-35 microseconds per ORAM access, a slowdown of around 32 to 44 times over standard (non-secured) RAM access.