DRM is passive and active

In a post regarding DRM (based on another post), Alun Jones of Microsoft says:

“Passive DRM protects its content from onlookers who do not have a DRM-enabled client. Encryption is generally used for Passive DRM, so that the content is meaningless garbage unless you have the right bits in your client. I consider this ‘passive’ protection, because the data is inaccessible by default, and only becomes accessible if you have the right kind of client, with the right key.

Active DRM, then, would be a scheme where protection is only provided if the client in use is one that is correctly coded to block access where it has not been specifically granted. This is a scheme in which the data is readily accessible to most normal viewers / players, but has a special code that tells a DRM-enabled viewer/player to hide the content from people who haven’t been approved.”

The whole problem is that his two categories are a false distinction. You can’t arbitrarily draw a line through a system and say “this is passive, this is active.” For his CSS example, if you consider a given player’s decryption code along with an arbitrary encrypted DVD, you have a system with both active and passive elements. If you leave out either of those elements, you have a disc that won’t play or a player with no disc, the only perfectly secure system (assuming your cryptography is good.)

When judging the efficiency of new compression schemes, the size of the decoder is added to the size of the compressed data to get a fair assessment of its efficiency. Otherwise you could win contests with a one-byte file and a 10 GB decoder program that simply contains all the actual data.
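The arithmetic here is trivial but worth pinning down; a toy sketch (all numbers invented):

```python
# Toy illustration: judge a compressor by compressed size *plus*
# decoder size, so a "decoder" that simply embeds the data cannot cheat.
def effective_size(compressed_bytes: int, decoder_bytes: int) -> int:
    return compressed_bytes + decoder_bytes

# A 1-byte "compressed" file with a 10 GB decoder loses to an honest
# 5 MB file with a 100 KB decoder.
cheater = effective_size(1, 10 * 2**30)
honest = effective_size(5 * 2**20, 100 * 2**10)
assert honest < cheater
```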

Whichever way you design a system, complexity is being pushed from one party to another but never eliminated. For DVD, where most of the complexity is in the player, there is a huge variety of player implementations, each with its own bugs. Because of that, the author of every disc needs to test against many combinations of players.

Likewise, if you push the complexity onto the disc by including executable code there, the player gets simpler but the disc could be buggy. However, in that case, the content author will get a bad reputation for the buggy disc (see the Sony rootkit fiasco he mentions).

This doesn’t just apply to DRM. While he might consider an MPEG-4 AVC video file “passive” in his terminology, it is really a complex series of instructions to the decoder. Look at the number of different but valid ways to encode video and you’ll see it’s closer to a program than to “passive” data.

Now in his definition of “Active DRM”, he is not actually describing the general class of software protection techniques. He is describing a poorly-designed system, often the result of an attempt to retrofit DRM onto an existing system that lacked it. Of course, if there are two ways to access the content, one with DRM and one without, the added complexity accomplishes nothing against the end-user or mass copiers. It may make economic sense to the content author, but they also have to weigh the potential risks to their business (annoying users vs. stopping some casual copying.)

Even assuming his terminology makes sense, the Windows Media Center system he references is actually a combination of “active” and “passive”. The cable video stream is encrypted (“passive”), and the Windows DRM component is “active”. In particular, it has a “black box” DLL that checks the host environment and hashes various items to derive a key, hence the problem.

All I can distill from what Alun says is “an unprotected system is made more complex by adding DRM.” I agree, but this doesn’t say anything larger about “active” versus “passive” DRM.

Full disclosure: I was previously one of the designers of the Blu-ray protection layer (BD+), a unique approach to disc protection that involves both cryptography and software protection. You can consider me biased, but my analysis should be able to stand on its own.

IOMMU – virtualization or DRM?

Before deciding how to enable DMA protection, it’s important to figure out what current and future threats you’re trying to prevent. Since there are performance trade-offs with various approaches to adding an IOMMU, it’s important to figure out whether you need one, and if so, how it will be used.

Current threats using DMA have centered on the easiest-to-use interface, Firewire (IEEE 1394). Besides being a peripheral interconnect method, Firewire provides a message type that allows a device to directly DMA into a host’s memory. Some of the first talks on this include “0wned by an iPod” and “Hit by a Bus”. I especially like the latter method, where the status registers of an iPod are spoofed to convince the Windows host to disable Firewire’s built-in address restrictions.

Yes, Firewire already has DMA protection built in (see the OHCI spec.) There are a set of registers that the host-side 1394 device driver can program to specify what addresses are allowed. This allows legitimate data transfer to a buffer allocated by the OS while preventing devices from overwriting anything else.  Matasano previously wrote about how those registers can be accessed from the host side to disable protection.
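The filtering those registers provide amounts to a range check on each device request. Here is a sketch in Python; the class and method names are hypothetical, and the real OHCI physical request filter format differs:

```python
# Sketch of host-side DMA address filtering in the spirit of OHCI's
# physical request filters. Names and register layout are hypothetical.
class DmaAddressFilter:
    def __init__(self):
        self.allowed_ranges = []  # (start, end) physical address ranges

    def allow(self, start, length):
        """Driver programs a window the device may DMA into."""
        self.allowed_ranges.append((start, start + length))

    def check(self, addr, length):
        """Controller-side check applied to each device request."""
        return any(start <= addr and addr + length <= end
                   for start, end in self.allowed_ranges)

f = DmaAddressFilter()
f.allow(0x10000, 0x1000)           # OS-allocated buffer
assert f.check(0x10000, 0x800)     # inside the buffer: allowed
assert not f.check(0x0, 0x100)     # anywhere else: denied
```

The attacks above work not by breaking this check but by reprogramming the registers themselves from the host side.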

There’s another threat that is quite scary once it appears but is probably still a long way off. Researchers, including myself, have long talked about rootkits persisting by storing themselves in a flash-updateable device and then taking over the OS on each boot by patching it via DMA. This threat has not emerged yet for a number of reasons. It’s by nature a targeted attack since you need to write a different rootkit for each model of device you want to backdoor. Patching the OS reliably becomes an issue if the user reinstalls it, so it would be a lot of work to maintain an OS-specific table of offsets. Mostly, there are just so many easier ways to backdoor systems that it’s not necessary to go this route.  So no one even pretends this is the reason for adding an IOMMU.

If you remember what happened with virtualization, I think there’s some interesting insight into what is driving the deployment of these features.  Hardware VM support (Intel VT, AMD SVM) was being developed around the same time as trusted-computing chipsets (Intel SMX, AMD skinit).  Likewise, DMA blocking (Intel NoDMA, AMD DEV) appeared before IOMMUs, which only started shipping in late 2007.

My theory about all this is that virtualization is something everyone wants.  Servers, desktops, and even laptops can now fully virtualize the OS.  Add an IOMMU and each OS can run native drivers on bare hardware.  When new virtualization features appear, software developers rush to support them.

DRM is a bit more of a mess.  Features like Intel SMX/AMD skinit go unused.  Where can I download one of these signed code segments all the manuals mention?  I predict you won’t see DMA protection being used to implement a protected path for DRM for a while, yet direct device access (i.e., faster virtualized IO) is already shipping in Xen.

The fundamental problem is one of misaligned interests.  The people who have an interest in DRM (content owners) do not make hardware or software.  Thus new capabilities that are useful for both virtualization and DRM will always support virtualization first.  We haven’t yet seen any mainstream DRM application support TPMs, and those have been out for four years.  So when is the sky going to fall?

Protecting memory from DMA

Previously, we discussed how DMA works in the PC architecture. The northbridge is only aware of physical addresses and directs transactions to the appropriate devices or RAM based solely on that address.

Unlike the CPU, which performs virtual memory translation and protection internally, the chipset previously did not translate physical addresses or place restrictions on which addresses could be accessed. From the RAM’s perspective, a memory access that originated from the CPU or from the integrated Ethernet is exactly the same. Stability and security depended on the device driver properly programming the device to DMA only to physical addresses within the buffer assigned by the OS. This was fine, barring device driver or hardware bugs, since ring 0 code was trusted.

With system-wide virtualization and DRM becoming more common, ring 0 code is no longer trusted. To avoid DMA corruption between guests or the host, the hypervisor previously would create a fake device for each OS instance. The guest talked to the fake device, and the host would multiplex the transactions over the real device. This has a lot of overhead, so it would be preferable to let the guest talk directly to the real device.

An IOMMU provides translation and protection for physical addresses. The hypervisor sets up a page table within the northbridge that groups page table entries by their device IDs. Then, when a DMA request arrives at the northbridge from a device, it is looked up by its ID, translated into the actual destination physical address, and allowed or denied based on the protection settings. If a write is denied, no data is transferred to RAM. If it’s a read, all bits are set to 1 in the response. Either way, an abort error is returned to the device as well.
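The lookup just described can be modeled in a few lines. This is a simplified sketch of the behavior, not the actual AMD IOMMU or Intel VT-d table format:

```python
# Minimal model of IOMMU behavior: per-device page tables keyed by
# device ID; a denied write transfers nothing, a denied read returns
# all bits set, and the device sees an abort either way.
PAGE = 4096

class Iommu:
    def __init__(self):
        # tables[device_id][device_page] = (host_page, writable)
        self.tables = {}

    def map_page(self, dev, dev_page, host_page, writable):
        self.tables.setdefault(dev, {})[dev_page] = (host_page, writable)

    def dma_write(self, dev, addr, value, ram):
        entry = self.tables.get(dev, {}).get(addr // PAGE)
        if entry is None or not entry[1]:
            return "abort"                      # denied: no data reaches RAM
        host_page, _ = entry
        ram[host_page * PAGE + addr % PAGE] = value
        return "ok"

    def dma_read(self, dev, addr, ram):
        entry = self.tables.get(dev, {}).get(addr // PAGE)
        if entry is None:
            return 0xFF, "abort"                # denied read: all bits set
        host_page, _ = entry
        return ram[host_page * PAGE + addr % PAGE], "ok"

ram = bytearray(2 * PAGE)
iommu = Iommu()
iommu.map_page(dev=7, dev_page=0, host_page=1, writable=True)

assert iommu.dma_write(7, 0x10, 0xAB, ram) == "ok"
assert ram[PAGE + 0x10] == 0xAB                       # translated to host page 1
assert iommu.dma_write(9, 0x10, 0xAB, ram) == "abort" # unmapped device
assert iommu.dma_read(9, 0x10, ram) == (0xFF, "abort")
```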

DMA protection (AMD: DEV, Intel: NoDMA table) is currently available in shipping products and physical address translation (AMD: IOMMU, Intel: VT-d) is coming very soon. While these features were implemented separately, it is expected that they will usually be used together.

There have been a few surprising studies of IOMMU performance. The first paper, by IBM researchers, shows that the overhead of setting up and tearing down mappings consumed up to 60% more CPU than running without an IOMMU. They discuss various mapping allocation strategies to address this, but each has its disadvantages. One strategy, setting up the mappings at guest startup and never changing them, interferes with the hypervisor technique called “ballooning”, where resources are only allocated to a guest as it uses them. This is what allows VMware to run guests with more RAM available to them than the host actually has. Read the paper for analysis of their other strategies.

Another paper, by Rice University researchers, proposes virtualization support built into the devices themselves (“CDNA”). They build a NIC that maintains a unique set of registers for each guest. Each guest believes it has direct access to the NIC, although requests to set up DMA go through the hypervisor. The NIC hardware manages the fair scheduling of DMA among all the register contexts, so actual packets going out on the wire will be balanced between the various guests sending them. This approach requires no IOMMU, but each device needs to be capable of maintaining multiple register contexts. Again, read this paper for a different take on device virtualization.

This research shows that an IOMMU is not the only way to achieve DMA protection, and it’s important to carefully design how a hypervisor uses an IOMMU to prevent a loss of performance. Next time, we’ll examine some usage scenarios for IOMMUs, both in virtualization and DRM.

PC memory architecture overview

The topics of DMA protection and a new Intel/AMD feature called an IOMMU (or VT-d) are becoming more prevalent. I believe this is due to two trends: increased use of virtualization and hardware protection for DRM. It’s important to first understand how memory works in a traditional PC before discussing the benefits and issues with using an IOMMU.

DMA (direct memory access) is a general term for architectures where devices can talk directly to RAM, without the CPU being involved. In PCs, the CPU is not even notified when DMA is in progress, although some chipsets do report a little information (i.e., a bus mastering status bit, BM_STS). DMA was conceived to provide higher performance than the alternative, which is for the CPU to copy each byte of data from the device to memory (aka programmed IO). To write data to a hard drive controller via DMA, the driver running on the CPU writes the memory address of the data to the hardware and then goes on to other tasks. The drive controller finishes reading the data via DMA and generates an interrupt to notify the CPU that the write is complete.
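That hand-off sequence can be sketched with a toy model (class and register names are hypothetical, not any real controller’s interface):

```python
# Toy model of the DMA write sequence: the driver hands the controller
# a physical address, the device copies the data itself, then raises
# an interrupt when done.
class DiskController:
    def __init__(self, ram):
        self.ram = ram
        self.pending = None
        self.interrupt_pending = False

    def start_dma_write(self, phys_addr, length):
        # The CPU returns immediately; the device works on its own.
        self.pending = (phys_addr, length)

    def tick(self):
        if self.pending is None:
            return
        phys_addr, length = self.pending
        self.disk = bytes(self.ram[phys_addr:phys_addr + length])
        self.interrupt_pending = True   # notify the CPU on completion

ram = bytearray(64)
ram[8:12] = b"data"
ctl = DiskController(ram)
ctl.start_dma_write(8, 4)      # driver programs the transfer and moves on
ctl.tick()                     # device completes asynchronously
assert ctl.disk == b"data" and ctl.interrupt_pending
```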

DMA can actually be slower than programmed IO if the overhead in talking to the DMA controller to initiate the transaction takes longer than the transaction itself. This may be true for very short data. That’s why the original PC parallel port (LPT) doesn’t support DMA. When there are only 8 bits of data per transaction, it doesn’t make sense to spend time telling the hardware where to put the data, just read it yourself.

While the basic concept of DMA is common to nearly all modern architectures, the PC has a particular breakdown of responsibilities among its chips. The CPU executes code and talks to the northbridge (Intel MCH). Integrated devices like USB and Ethernet are all located in the southbridge (Intel ICH), with the exception of on-board video, which is located in the northbridge. Between each of these chips is an Intel or AMD proprietary bus, which is why your Intel CPU won’t work with your AMD chipset, even if you were to rework the socket to fit it. Your RAM is accessed only via the northbridge (Intel) or via a bus shared with the northbridge (AMD).

Interfacing with the CPU is very simple. All complexities (privilege level, paging, task management, segmentation, MSRs) are handled completely internally. On the external bus shared with the northbridge, a CPU has a set of address and data lines and a few control/status lines. Besides power supply, the address and data pins are the most numerous. In the Intel quad-core spec, there are only about 60 types of pins. Only three pins (LINT[0:1], SMI#) are used to signal all interrupts, even on systems with dozens of devices.

Remember, these addresses are purely physical addresses as all virtual memory translation is internal to the CPU. There are two types of addresses known to the northbridge: memory and IO space. The latter are generated by the in/out asm instructions and merely result in a special value being written to the address lines on the next clock cycle after the address is sent. IO space addresses are typically used for device configuration or legacy devices.

The northbridge is relatively dumb compared to the CPU. It is like a traffic cop, directing the CPU’s accesses to devices or RAM. Likewise, when a device on the southbridge wants to access RAM via DMA, the northbridge merely routes the request to the correct location. It maintains a map, set during PCI configuration, which says something like “these address ranges go to the southbridge, these others go to the integrated video”.

With integrated peripherals, PCI is no longer a bus; it’s merely a protocol. There is no set of PCI bus lines within your southbridge hooked to the USB and Ethernet components of the chip. Instead, only PCI configuration remains in common with external devices on a PCI bus. PCI configuration is merely a set of IO port reads/writes that walk the logical device hierarchy, programming the northbridge with which regions it decodes to which device. It’s setting up the table for the traffic cop.
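Concretely, the legacy configuration mechanism is a pair of IO ports: a 32-bit address written to 0xCF8 selects bus/device/function/register, and the data then moves through 0xCFC. A sketch of how that address word is encoded:

```python
# Legacy PCI configuration mechanism #1 address encoding.
CONFIG_ADDRESS_PORT = 0xCF8
CONFIG_DATA_PORT = 0xCFC

def pci_config_address(bus, device, function, offset):
    """Build the 32-bit value written to IO port 0xCF8 to select a
    config register; reads/writes of the data go through 0xCFC."""
    assert bus < 256 and device < 32 and function < 8
    return (0x80000000            # enable bit
            | (bus << 16)
            | (device << 11)
            | (function << 8)
            | (offset & 0xFC))    # dword-aligned register offset

# Selecting bus 0, device 2, function 0, register 0 (vendor ID):
assert pci_config_address(0, 2, 0, 0) == 0x80001000
```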

Next time, we’ll examine the advent of IOMMUs and DEVs/NoDMA tables.

Mesh design pattern: error correction

Our previous mesh design pattern, hash-and-decrypt, requires the attacker either to run the system to completion or reverse-engineer enough of it to limit the search space. If any bit of the input to the hash function is incorrect, the decryption key is completely wrong. This could be used, for example, in a game to unlock a subsequent level after the user has passed a number of objectives on the previous level. It could also be used with software protection to be sure a set of integrity checks or anti-debugger countermeasures have been running continuously.
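A minimal sketch of hash-and-decrypt, using SHA-256 for key derivation and XOR as a stand-in cipher (a real system would use a proper cipher, and the checkpoint strings here are invented):

```python
import hashlib

# Hash-and-decrypt sketch: the key is the hash of every input bit,
# so a single wrong bit yields a completely unrelated key.
def derive_key(checkpoints: bytes) -> bytes:
    return hashlib.sha256(checkpoints).digest()

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"next level data"
good_state = b"objective1|objective2|no-debugger"
ciphertext = xor_decrypt(secret, derive_key(good_state))

assert xor_decrypt(ciphertext, derive_key(good_state)) == secret
# Get even one byte of the accumulated state wrong and the output is garbage:
bad_state = b"objective1|objective2|no-debuggep"
assert xor_decrypt(ciphertext, derive_key(bad_state)) != secret
```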

Another pattern that is somewhat rare is error correction. An error correcting code uses compressed redundancy to allow data that has been modified to be repaired. It is commonly used in network protocols or hard disks to handle unintentional errors but can also be useful for software protection. In this case, an attacker who patches the software or modifies its data would find that the changes have no effect as they are silently repaired. This can be combined with other techniques (e.g., anti-debugging) to require an attacker to locate all points in the mesh and disable them simultaneously. Turbo codes, LDPC, and Reed-Solomon are some commonly used algorithms.

Hashing and error correction are very similar. A cryptographic hash is analogous to a compressed form of the original data, since by design it is extremely difficult to generate a collision (two sets of data that have the same fingerprint.) Instead of comparing every byte of two data sets, many systems just compare the hash. You can build a crude form of error correction by storing multiple copies of the original data and throwing out any that have an incorrect hash due to a patching attempt or other error. However, this results in bloat, and it’s relatively easy for the reverse engineer to find all copies of the identical data in memory, even if the hash is somewhat hidden.
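The crude replicate-and-hash scheme just described might look like this sketch:

```python
import hashlib

# Crude "error correction": keep several copies of the data and
# discard any copy whose hash no longer matches.
def protect(data: bytes, copies: int = 3):
    digest = hashlib.sha256(data).digest()
    return digest, [bytearray(data) for _ in range(copies)]

def recover(digest, copies):
    for copy in copies:
        if hashlib.sha256(bytes(copy)).digest() == digest:
            return bytes(copy)
    raise ValueError("all copies corrupted")

digest, copies = protect(b"critical code")
copies[0][0] ^= 0xFF                    # attacker patches one copy
assert recover(digest, copies) == b"critical code"
```

As noted, the identical copies are easy for a reverse engineer to spot in memory, which is what a real code avoids.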

Turbo codes are an efficient form of error correction. To put it simply, three different chunks of data are stored: the message itself (m bits) and two parity blocks (n/2 bits each). The total storage required is m + n bits, coding data at a rate of m / (m + n). You can think of this as a sort of crossword puzzle where one parity block stores the clues for “across” and the other stores the clues for “down”. Two decoders process the parity blocks and vote on their confidence in the output bits. If the vote is inconclusive, the process iterates.
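A toy version of the crossword intuition, using plain row/column parity rather than a real turbo code (actual turbo codes use convolutional encoders and iterative soft-decision decoding; this sketch only locates and repairs a single flipped bit):

```python
# "Crossword" parity toy: row parity plays the role of the "across"
# clues, column parity the "down" clues. A single bit error shows up
# as one bad row and one bad column, pinpointing the bit to flip.
def encode(rows):  # rows: list of equal-length lists of bits
    row_par = [sum(r) % 2 for r in rows]
    col_par = [sum(col) % 2 for col in zip(*rows)]
    return rows, row_par, col_par

def repair(rows, row_par, col_par):
    bad_r = [i for i, r in enumerate(rows) if sum(r) % 2 != row_par[i]]
    bad_c = [j for j, c in enumerate(zip(*rows)) if sum(c) % 2 != col_par[j]]
    if bad_r and bad_c:                 # intersection is the flipped bit
        rows[bad_r[0]][bad_c[0]] ^= 1
    return rows

msg = [[1, 0, 1], [0, 1, 1]]
rows, rp, cp = encode([r[:] for r in msg])
rows[1][2] ^= 1                         # single-bit error
assert repair(rows, rp, cp) == msg
```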

[Figure: turbo code diagram (turbocode.png)]

To use error correction for software protection, take a block of data or instructions that are important to security. Generate an encoded block for it using a turbo code. Now, insert a decoder in the code which calls into or processes this block of data. If an attacker patches the encoded data (say, to insert a breakpoint), the decoder will generate a repaired version of that data before using it.
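As a minimal stand-in for a real turbo decoder, a repetition code with bitwise majority voting shows the decode-before-use idea; the code fragment being protected is invented:

```python
# Decode-before-use sketch with a trivial repetition code (a real
# deployment would use a turbo or Reed-Solomon code instead).
def majority_decode(c0: bytes, c1: bytes, c2: bytes) -> bytes:
    # Bitwise majority vote, written to a fresh buffer (not in place),
    # so a patched copy is silently outvoted and never observed.
    return bytes((a & b) | (a & c) | (b & c)
                 for a, b, c in zip(c0, c1, c2))

data = b"jmp level2"
stored = [bytearray(data) for _ in range(3)]
stored[0][0] = 0xCC                 # attacker inserts a breakpoint byte
assert majority_decode(*stored) == data
```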

This has a number of advantages. If the decoding is not done in-place, the attacker will not see the data being repaired, just that the patch had no effect. The parity blocks look nothing like the original data itself so it looks like there is only one copy of the data in memory. The decoder can be obfuscated in various ways and inlined to prevent it from being a single point of failure. The calling code can hash the state of the decoder as part of hash-and-decrypt so that errors are detected as well, allowing the software protection to later degrade the experience rather than immediately failing. This hides the location of the protection check (temporal distance.)

Like all mesh techniques, error correction is best used in ways that are mutually reinforcing. The linker can be adapted to automatically encode data and insert the decoding logic throughout the program, based on control flow analysis. Continually-running integrity checking routines can be encoded with this approach. The more intertwined the software protection, the harder it is to bypass.