Friday, February 3, 2012

Study group: Virtualisation

Today's study group was led by Dan and Marcin on the topic of virtualisation: the creation of virtual environments that emulate the physical environments on which software is designed to run. In a traditional computer system, application multi-tasking is enabled by the OS, a software layer between hardware and applications which manages shared resources and protects applications from one another. Modern virtualisation is the natural next step: allowing multiple OSs to share the same hardware (whilst remaining isolated from one another). This is particularly desirable in the context of cloud computing, since it facilitates dynamic resource provisioning that is highly flexible to current demand and permits sales in small units, because multiple customers can share the same hardware resources.

However, this isn't trivial, because each OS expects to run at the most privileged 'ring zero' level, so how does the server now decide which OS gets which resources, and when? The solution is to insert another layer between the hardware and the multiple operating systems, called a virtual machine manager (VMM), which is endowed with 'ring -1' privileges and tasked with allocating resources between OSs and isolating them from one another. In particular, no VM should be able to access the data or software of another VM (either directly or via a side-channel) or to affect another VM's availability.
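
As an aside on how this extra privilege level is realised in practice: modern VMMs rely on hardware virtualisation extensions (Intel VT-x or AMD-V) to run below ring 0 - informally, the 'ring -1' mentioned above. A minimal sketch, assuming a Linux host that exposes /proc/cpuinfo, of checking whether a machine advertises these extensions:

    # Minimal sketch: check whether the CPU advertises hardware virtualisation
    # extensions (Intel VT-x appears as the 'vmx' flag, AMD-V as 'svm').
    # Assumes a Linux host exposing /proc/cpuinfo.

    def virtualisation_flags(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {"vmx": "vmx" in flags, "svm": "svm" in flags}
        return {"vmx": False, "svm": False}

    if __name__ == "__main__":
        print(virtualisation_flags())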

Economic motives have driven the increasing trend towards virtualisation, but whilst it makes 'good business sense' it introduces novel security problems which need to be understood and dealt with. The VMM is designed in such a way as to protect a VM from other potentially malicious VMs on the same server, but how can we be sure if this objective has been achieved?

The first paper we looked at, "Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds" (Ristenpart et al., 2009) [pdf], explores the possible vulnerabilities of virtualised cloud services by way of a case study. Amazon EC2 has three levels of infrastructure: a region (e.g. US, Asia, etc), an availability zone (i.e. a data centre), and an instance type (e.g. Linux 32-bit). The user creates a VM image and asks Amazon to 'run' it, at which point it is placed onto a physical server (and acquires an internal and external IP address and domain).
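
For concreteness, here is a rough sketch of what such a launch request looks like programmatically - using the modern boto3 SDK rather than the 2009-era API the paper worked against, with a placeholder AMI ID and availability zone:

    # Rough sketch of launching an EC2 instance with chosen placement parameters.
    # Uses the boto3 SDK (not the 2009-era API from the paper); the AMI ID and
    # availability zone are placeholders.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",                        # placeholder VM image
        InstanceType="t2.micro",                       # instance type
        Placement={"AvailabilityZone": "us-east-1a"},  # availability zone (data centre)
        MinCount=1,
        MaxCount=1,
    )

    # Once running, the instance has both internal (private) and external
    # (public) IP addresses, as described above.
    instances[0].wait_until_running()
    instances[0].reload()
    print(instances[0].private_ip_address, instances[0].public_ip_address)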

The authors describe the three-part challenge facing a would-be attacker:
  1. To 'map' the address space of the cloud and instantiate a VM on the desired machine (i.e. the one hosting the target VM).
  2. To check for co-residency (i.e. confirm desired placement).
  3. To attack the target VM (for example, extract information).
Their subsequent investigation uncovers possible strategies for achieving each of these goals, and leads naturally to straightforward recommendations for improved security:
  1. The address space can be 'mapped' simply by launching VMs with different parameters and seeing what IPs they are assigned. It turns out that 'similar' requests are placed in 'similar' areas of the map, so that carefully chosen parameters can increase the probability of being placed on the same server as the target VM - even more so if the launch can be timed to coincide. (Non-static IPs would remove the possibility to do this).
  2. The most conclusive check for co-residency is by observing the 'Dom 0' address of a packet sent to or from the target (each VMM has an IP address associated which will be attached to any packet passing through it). Alternatively, round-trip times for packets sent to the target can be measured: if these are small (or, if they are similar to those of packets sent from the adversary to itself), co-residency is likely. Lastly, numerically close IP addresses can also be an indicator. The authors suggest security policy mitigations to increase the challenge to the attacker.
  3. Having achieved and confirmed co-residency, the attacks they suggest relate to previously-discovered micro-architectural strategies, such as side-channel leakage from shared cache memory (see, e.g., "Cache-timing attacks on AES" (Bernstein, 2005) [pdf].
The second paper we looked at, "NoHype: Virtualized Cloud Infrastructure Without the Virtualization" (Keller et al., 2010) [pdf], proposes to mitigate the vulnerabilities of cloud infrastructure by getting rid of the virtualisation layer altogether. The authors justify this by observing that the increasing number of cores per processor means fine-grained provisioning no longer depends on the ability to put multiple VMs on a single core. They then set out to demonstrate that some of the tasks performed in software by the VMM could be performed equally well in hardware, highlighting ways in which existing technology could be tweaked to achieve this as well as identifying gaps which new technologies would need to fill.
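
As a loose, process-level analogy for the core-dedication idea (not the NoHype implementation itself), the effect of giving each guest its own core rather than time-sharing can be mimicked on Linux with CPU affinity; the sketch below pins the current process to a single core using os.sched_setaffinity, a Linux-only call:

    # Loose analogy for NoHype's core dedication (NOT the actual NoHype design):
    # pin the current process to one core so it does not time-share that core.
    # os.sched_setaffinity is Linux-only.
    import os

    def dedicate_core(core_id):
        os.sched_setaffinity(0, {core_id})  # 0 = the calling process
        return os.sched_getaffinity(0)

    if __name__ == "__main__":
        print(f"{os.cpu_count()} cores available; pinning to core 0")
        print("now running on cores:", dedicate_core(0))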
