rdist

March 22, 2007

Media binding techniques

Filed under: Hacking,Security,Software protection — Nate Lawson @ 3:12 pm

Media binding is a set of techniques for preventing access to protected software without the original media, and it is the first step of copy protection, a specific type of software protection. If the attacker can't simply duplicate the media itself, he has to attack the software's implementation of media binding.

Content protection (a specific form of copy protection) starts by encrypting the video/audio data with a unique key, and then uses media binding and other software protection techniques to regulate access to that key. Since the protected data is completely passive (i.e., stands alone as playable once decrypted), managing access to the key is critical. However, software copy protection is more flexible since software is active and its copy protection can be integrated throughout its runtime.

The Blu-ray content protection system, BD+, is more of a software protection scheme (versus a content protection scheme) since the per-disc security code is run throughout disc playback to implement the descrambling process. Thus, each disc is more of a unique software application instead of passive video/audio data. AACS is more of a traditional, key-based content protection system.

There are three ways to bind software to media:

  1. Verify unique characteristics of the original media
  2. Encode data in a form that can’t be written by consumer equipment
  3. Attack particular copying processes and equipment involved

Verifying the media involves one or more checks for physical aspects that are difficult to duplicate. Checks include intentional errors, invalid filesystem data, data layout alignment, and the timing characteristics of the drive or media. The key point is that some logic in the software is making a decision (i.e., if/then) based on the results of these checks. So attackers will often go after the software protection if they can't easily duplicate the media characteristics.
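As a toy sketch of this first approach (the drive-access functions here are hypothetical stand-ins; a real check talks to the drive hardware), the protection logic branches on whether the original media's fingerprints are still present:

```python
import time

# Hypothetical stand-ins for real drive I/O.
def read_sector(n):
    # The original pressing has an intentional bad sector at 1000;
    # a normal copy of the data reads cleanly there.
    if n == 1000:
        raise IOError("uncorrectable ECC error")
    return b"\x00" * 2048

def seek(a, b):
    # The original data layout makes this particular seek slow.
    time.sleep(0.01 if abs(a - b) > 500 else 0.001)

def media_looks_original():
    # Check 1: the intentional error must still be present.
    try:
        read_sector(1000)
        return False  # a clean read means this is a copy
    except IOError:
        pass
    # Check 2: the timing of a long seek must match the original layout.
    start = time.monotonic()
    seek(0, 1000)
    return time.monotonic() - start > 0.005
```

The if/then decision here is exactly what an attacker patches out when the physical characteristics can't be duplicated, which is why such checks need to be combined with protection of the checking code itself.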

Encoding data involves modifying the duplication equipment, consumer drive, and/or recordable media such that the real data cannot be read or written on unmodified consumer equipment and media. This is usually more powerful than verification alone because attackers have to modify their drives or circumvent the software protection before getting access to the software in the first place. Game consoles often use this approach since they can customize their drives and media, even though they usually start with consumer designs (i.e., DVD). Custom encodings can be used with verification of the encoded data for increased strength.
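A minimal sketch of the encoding idea (the XOR whitening below is purely illustrative; real systems modify the low-level channel encoding, not a byte-level scramble):

```python
# Hypothetical whitening sequence burned into the custom drive's firmware.
SECRET = bytes((i * 37 + 11) & 0xFF for i in range(16))

def master_sector(plain):
    # Duplication equipment writes the data in the custom encoding.
    return bytes(b ^ SECRET[i % len(SECRET)] for i, b in enumerate(plain))

def custom_drive_read(raw):
    # The console's customized drive reverses the encoding in hardware.
    return bytes(b ^ SECRET[i % len(SECRET)] for i, b in enumerate(raw))

def consumer_drive_read(raw):
    # A stock consumer drive has no descrambler and returns garbage.
    return raw

payload = b"game executable code"
disc = master_sector(payload)
assert custom_drive_read(disc) == payload
assert consumer_drive_read(disc) != payload
```

The point of the sketch is the asymmetry: an attacker must either modify a drive or defeat the software protection before even seeing the real data, which is a higher bar than patching a verification check.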

Attacking the copying process involves analyzing the equipment used to make copies and exploiting its flaws. For example, most DVD ripping software uses UDF filesystem information to locate files, while hardware DVD players use simpler metadata. So phantom files that only the copying software sees can be placed on the disc, corrupting the rip. The problem with attacking the copying process is that it is relatively easy to update copying software, so this technique usually has a short shelf life. However, it can be useful as part of an overall strategy of increasing an attacker's costs.
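A toy model of the phantom-file trick (the data structures and capacity check are invented for illustration; a real ripper parses actual UDF structures):

```python
# The UDF directory advertises a phantom file with a bogus extent,
# while the player's simpler navigation metadata never references it.
udf_directory = {
    "VIDEO_TS/VTS_01_1.VOB": (100, 500),      # real file: (start, length)
    "VIDEO_TS/VTS_99_9.VOB": (10**9, 10**9),  # phantom file
}
player_navigation = ["VIDEO_TS/VTS_01_1.VOB"]

DISC_SECTORS = 700_000  # toy disc capacity

def rip_disc():
    # Copying software walks the filesystem and chokes on the phantom.
    for name, (start, length) in udf_directory.items():
        if start + length > DISC_SECTORS:
            raise IOError("read error while ripping " + name)

def play_disc():
    # A hardware player only touches what the navigation data references.
    return list(player_navigation)
```

The player works normally while the rip fails, but only until the ripper is updated to skip files the navigation data never references, which is why this class of tricks has a short shelf life.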

Obviously, if the attacker has access to the same duplication equipment that the author uses, nothing can prevent them from making their own media. Other mechanisms, such as revocation, must handle this case farther down the chain.

Next, we’ll discuss protecting the protection code itself.

5 Comments

  1. not to criticize the approaches you propose … a few comments

    for example,

    let’s call encryption coding requiring a secret such that the encrypted information can be extracted by those with the secret.
    the secret is a key which is a family of functions or index of functions.

    the primitives for cryptography are based on the computational difficulty of cracking the key. it is a yea or nay on access to extract the plaintext. but such computational complexity would appear to limit enabling a market for content or information that may be subject to the digital copy problem. this is not to say conditional access, white box drm, etc. are not useful. it is to say the competition is for time and attention and may best be served by relaxing the primitive to include information that requires access to enable fair pricing (who says what is fair?)

    but, a simpler way to do security, in the sense that keys are easy to change and not based on a strict access vis-a-vis encryption but a nuanced use of a key to alter the associated signal in a manner that makes possible measurement of the complexity or quanta of “security” applied. additionally functions as part of the key may include specific information which may increase or decrease the computational requirements for rendering the content. no new players just key readers … (call em predetermined or media or content keys, whatever)

    i think a fair way to observe security options are: proprietary coding (though the perceptual model is easy to replicate as it is based on human observation) – encryption is transport layer only and does not reveal anything about the plaintext — ideally; content extensions or wrapping which are active controls but not integral with the content – they aren’t digital signatures that survive transform conversions; watermarking for plausible deniability/traceability and integrity; format manipulation relies on a key describing how the encoding has been manipulated…

    the manipulation can be a measure of security or simply a way to measure how to split the money from transactions.

    code protection, by the way, is related in the sense that the code level is interoperating with digitized signals and vice versa. the index of functions relates inputs to outputs in a way that can be manipulated for any number of threat scenarios.

    hope i am not rambling … would like to hear some thoughts …

    Comment by S Moskowitz — October 31, 2007 @ 9:49 pm

  2. I don’t understand what you mean. It sounds like you are talking about software protection in general. Can you summarize?

    This post was about one very specific security goal — tie some set of bits to its original media. Even with a system like AACS that uses cryptography (NNL key tree), preventing bit-for-bit copying of the disc comes down to inserting some piece of data (i.e., 128-bit Volume ID) in a location that can’t be written on recordable media. DVD-Rs that come with the CSS key block area already overwritten with zeros use a similar approach.

    So in this very narrow goal (tying bits to their original media), encryption does not get you anything. It ultimately comes down to one of the three categories of media binding I discuss in the article.
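    As a sketch of that narrow goal (this is not the actual AACS key derivation, just the shape of it): mix a per-disc identifier from the unwritable area into the key, so a bit-for-bit copy onto recordable media derives the wrong key.

```python
import hashlib

def derive_content_key(media_key, volume_id):
    # Toy derivation; AACS uses AES-based functions, not SHA-256.
    return hashlib.sha256(media_key + volume_id).digest()

media_key = b"recovered from the NNL key tree"  # copies along with the data
original_vid = b"\x01" * 16  # lives in the area recorders cannot write
copied_vid = b"\x00" * 16    # what a blank recordable disc returns there

assert derive_content_key(media_key, original_vid) != \
       derive_content_key(media_key, copied_vid)
```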

    Comment by Nate Lawson — November 1, 2007 @ 3:03 pm

  3. another pass …

    bind key material which is signal-specific, to any level of granularity of the content itself. the access-restriction constraint inherent to *any* crypto cipher (the ciphertext does not reveal or leak information concerning the plaintext – security held “only” in the key) can be relaxed … plainly: if the basic structure of the data is a bit, then we are binding more than a bit, up to a level of granularity that is consistent with the processing of the media to be “protected” … speed bump the process … do not completely access-restrict the data — that is wasted computation (an opinion).

    this could be the frames of AAC or MPEG or some other “formatting” but it is simplistic enough to generate and replace keys (a la itunes), by focusing on the encoding itself and relaxing the crypto cipher constraint … this has overlap with chaffing and winnowing and was developed earlier than that work.

    another explanation to the very specific goal you are discussing …

    if a digital watermark is encoded with key material eg you can differentiate between 2 copies with knowledge of the key material but otherwise the content is perceptually the same … there needs to be at least one bit of difference between the original unmarked content and the marked content …

    now, instead of a watermark key which for purposes of discussion describes how the watermark is encoded into the content, you have a key which describes the encoding or binding of the media — it is not as secure as crypto keys in the traditional sense but secure enough to enable upgrades with smaller computational overhead …

    we do this at minimal overhead and are able to match a specific piece of content with a specific key — the key is the binding between analog/dsp I/O and “level of complexity” to discourage piracy using devices or software which render the content … the key and content are specific to each other and relate input to output with a measurable amount of complexity …

    this is the narrow goal, achieved…

    Comment by S Moskowitz — November 2, 2007 @ 9:06 am

  4. If you’re working purely in the digital domain (preventing a set of bits from being moved off the original media), the characteristics of the media itself are your foundation. It sounds like you’re talking more from a background of watermarking, where copying involves a decoding transformation.

    While watermarking can provide a social deterrent to copying, it doesn’t prevent the act itself. As part of an overall software protection scheme, I do think marking has its own place.

    Comment by Nate Lawson — November 6, 2007 @ 8:36 am

    actually, i am using the notion of a watermark key which, instead of embedding some other independent data into a signal, manipulates the bits of the signal at the granularity of the signal characteristics — where the key is the complexity between how the signal is manipulated and how it is intended to be rendered.

    call it a media key … it differs from encryption as the index of functions to be determined (for a given application of media binding depending for instance on use, bandwidth, type of signal, frame setting, other schemes to be integrated, etc.) is not directed at the signal in the sense of an encryption …

    if you know that the signal will have to be rendered in the clear (phish :: all that i see cant be taken from me) wasting computation on access restriction instead of units of complexity enables you to maintain legacy versions of the content … at lower computational cost … the physical media or network settings can provide embedded or meta-data to affect other aspects of the system.

    it’s gotta be my shoes … sorry to be tongue tied on this one

    Comment by S Moskowitz — November 13, 2007 @ 5:24 pm

