Before deciding how to enable DMA protection, it’s important to figure out what current and future threats you’re trying to prevent. Since there are performance trade-offs with the various approaches to adding an IOMMU, you need to figure out whether you need one at all, and if so, how it will be used.

Current threats using DMA have centered on the easiest-to-use interface, Firewire (IEEE 1394). Besides being a peripheral interconnect, Firewire provides a message type that allows a device to DMA directly into a host’s memory. Some of the first talks on this include “0wned by an iPod” and “Hit by a Bus”. I especially like the latter method, where the status registers of an iPod are spoofed to convince the Windows host to disable Firewire’s built-in address restrictions.
Yes, Firewire already has DMA protection built in (see the OHCI spec). There is a set of registers that the host-side 1394 device driver can program to specify which addresses a device is allowed to access. This allows legitimate data transfer to a buffer allocated by the OS while preventing devices from overwriting anything else. Matasano previously wrote about how those registers can be accessed from the host side to disable this protection.
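As a rough illustration of how this per-node filtering works: the OHCI spec defines physical request filter registers where bit N grants node N direct access to host memory. The sketch below is just a conceptual model of that logic, not driver code, and it glosses over details like the physical upper bound register and the asynchronous receive contexts:

```python
# Conceptual model of OHCI 1394 physical request filtering.
# Real controllers expose this as memory-mapped registers
# (e.g., PhysicalRequestFilterHi/Lo); here we just model the rule:
# bit N of the filter grants bus node N direct DMA to host memory.

class OhciPhysicalFilter:
    def __init__(self):
        self.filter_bits = 0  # all nodes denied by default

    def allow_node(self, node_id):
        self.filter_bits |= 1 << node_id

    def deny_node(self, node_id):
        self.filter_bits &= ~(1 << node_id)

    def physical_request_allowed(self, node_id):
        return bool(self.filter_bits & (1 << node_id))

f = OhciPhysicalFilter()
assert not f.physical_request_allowed(3)  # denied until the driver opts in
f.allow_node(3)                           # e.g., for a legitimate transfer buffer
assert f.physical_request_allowed(3)

# "Disabling protection" from the host side amounts to setting every bit,
# which is what the attack described above tricks the driver into doing:
f.filter_bits = (1 << 64) - 1
assert f.physical_request_allowed(62)
```

The key point is that the filter is under the control of the *host-side* driver, so anything that can influence that driver (like a spoofed device) can widen it.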
There’s another threat that is quite scary once it appears but is probably still a long way off. Researchers, including myself, have long talked about rootkits persisting by storing themselves in a flash-updateable device and then taking over the OS on each boot by patching it via DMA. This threat has not emerged yet for a number of reasons. It’s by nature a targeted attack since you need to write a different rootkit for each model of device you want to backdoor. Patching the OS reliably becomes an issue if the user reinstalls it, so it would be a lot of work to maintain an OS-specific table of offsets. Mostly, there are just so many easier ways to backdoor systems that it’s not necessary to go this route. So no one even pretends this is the reason for adding an IOMMU.
If you remember what happened with virtualization, I think there’s some interesting insight into what is driving the deployment of these features. Hardware VM support (Intel VT, AMD SVM) was being developed around the same time as trusted-computing chipsets (Intel SMX, AMD skinit). Likewise, DMA blocking (Intel NoDMA, AMD DEV) appeared before full IOMMUs, which only started shipping in late 2007.
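To see why DMA blocking is the simpler first step, consider a toy model along the lines of AMD’s Device Exclusion Vector (DEV): a bitmap with one bit per 4 KB physical page, where a set bit means devices may not touch that page. This is a simplification (the real DEV also groups devices into protection domains), but it captures the mechanism:

```python
# Toy model of DMA blocking in the style of AMD's Device Exclusion
# Vector (DEV): one bit per 4 KB physical page; a set bit blocks
# device reads/writes to that page. Simplified -- the real DEV also
# supports multiple protection domains per device.

PAGE_SHIFT = 12  # 4 KB pages

class ExclusionVector:
    def __init__(self, mem_bytes):
        # One bit per page, rounded up to whole bytes.
        self.bits = bytearray((mem_bytes >> PAGE_SHIFT) // 8 + 1)

    def protect_page(self, paddr):
        page = paddr >> PAGE_SHIFT
        self.bits[page // 8] |= 1 << (page % 8)

    def dma_allowed(self, paddr):
        page = paddr >> PAGE_SHIFT
        return not (self.bits[page // 8] & (1 << (page % 8)))

dev = ExclusionVector(64 * 1024 * 1024)
dev.protect_page(0x5000)             # protect one page of kernel memory
assert not dev.dma_allowed(0x5123)   # same page: blocked
assert dev.dma_allowed(0x6000)       # neighboring page: still open
```

Note that this only blocks or allows; unlike a full IOMMU it cannot *remap* device addresses, which is why running native drivers inside each guest had to wait for the real IOMMUs.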
My theory about all this is that virtualization is something everyone wants. Servers, desktops, and even laptops can now fully virtualize the OS. Add an IOMMU and each OS can run native drivers on bare hardware. When new virtualization features appear, software developers rush to support them.
DRM is a bit more of a mess. Features like Intel SMX/AMD skinit go unused. Where can I download one of these signed code segments all the manuals mention? I predict you won’t see DMA protection being used to implement a protected path for DRM for a while, yet direct device access (i.e., faster virtualized IO) is already shipping in Xen.
The fundamental problem is one of misaligned interests. The people who have an interest in DRM (content owners) do not make hardware or software. Thus new capabilities that are useful for both virtualization and DRM will always support virtualization first. We haven’t yet seen any mainstream DRM application support TPMs, and those have been out for four years. So when is the sky going to fall?