Forged CA cert talk at 25C3

A talk entitled “MD5 considered harmful today” (slides) is being presented at 25C3. The authors describe forging a CA cert that will be accepted by all browsers, exploiting the fact that several trusted root CAs still sign certs with MD5. This allows them to impersonate any SSL-enabled website on the Internet, and the forged cert will look perfectly valid to the user.

The growing number of trusted root CAs included in standard browsers has been an issue for a while. Every root CA is equal in the eyes of the browser, so the end-user’s security is only as good as the weakest root CA. The default Firefox install will accept a Yahoo cert signed by “TurkTrust” or any other of more than 100 root certs. I don’t know how good each of those companies is at securing its keys, implementing strict cert chain validation, and checking the identity of every submitter. So it’s a good bet that putting crypto authority in the hands of that many parties will result in some failures, repeatedly.

The attack is interesting because it exploits more than one flaw in a single CA. First, the researchers found a CA that still uses MD5 for signing certs. MD5 has been broken for years, and no CA should still have been doing this. Next, they prepared an innocent-looking cert request containing the “magic values” necessary to cause an MD5 collision. A second flaw made this possible: the CA in question used an incrementing serial number instead of a random one. Since the serial is part of the signed data, a random serial is a cheap way to inject unpredictability, and it would have thwarted this particular attack, at least until a pre-image vulnerability is found in MD5. Don’t count on that for security! MD4 fell to a second pre-image attack a few years after the first collision attacks, and attacks only get better over time.
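To see why an unpredictable serial helps, here is a minimal sketch (with a hypothetical field layout, not any CA’s actual code): a chosen-prefix collision requires the attacker to predict every byte of the to-be-signed data in advance, and 128 random bits make that prediction infeasible.

```python
import hashlib
import secrets

def to_be_signed(serial: int, subject: str) -> bytes:
    # Stand-in for the DER-encoded TBSCertificate contents; the real
    # structure is more complex, but the serial is part of the signed data.
    return serial.to_bytes(16, "big") + subject.encode()

# Predictable: an attacker who observes serial N can guess the next one.
next_sequential = 1001

# Unpredictable: 128 bits the attacker cannot know in advance, so any
# precomputed collision blocks no longer line up with the signed data.
random_serial = secrets.randbits(128)

guessed = hashlib.md5(to_be_signed(next_sequential, "CN=victim")).hexdigest()
actual = hashlib.md5(to_be_signed(random_serial, "CN=victim")).hexdigest()
```

The attacker’s precomputed digest (`guessed`) almost certainly differs from what the CA actually signs (`actual`), so the collision blocks embedded in the request are wasted effort.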

This talk definitely points out that crypto attacks are not being addressed quickly enough in the real world. While it is difficult to roll out a new root CA cert, it’s better to do so over the years we have known MD5 to be insecure than in a rush after an attack has already occurred. Another excellent talk at 25C3 on the iPhone described how the baseband processor was compromised via the lack of validation of RSA signature padding. What’s intriguing is that Apple’s own RSA implementation in their CDSA code was not vulnerable to this flaw, but apparently a different vendor supplied the baseband code.

To paraphrase Gibson, “Crypto security is available already, it just isn’t equally distributed.”

3 thoughts on “Forged CA cert talk at 25C3”

  1. I have been discussing this attack at work, and my initial point of view mirrored yours in seeing the incrementing serial number as a fundamental flaw. However, it was pointed out to me that the underlying problem is that the X.509 certificate format does not _require_ a (hopefully) unpredictable field to add non-user-assigned entropy.

    As you have so rightly pointed out in the past, bad crypto happens when you start to rely on assumptions, and it should not be necessary for a serial number to perform any other function than that of a unique identifier. Being unpredictable was never part of the deal.

    Of course, the fundamental problem is that MD5 is broken and should not be used. That is a given. However, as the researchers point out in their presentation, we need defense in depth. Having non-user-assigned, unpredictable data in the cert would at least buy us some time to switch from a broken hash to something better if (for example) a devastating chosen-prefix attack on SHA-1 is published tomorrow.

    I find it interesting as well that the attack appears to rely on the CA issuing the certificate exactly 6 seconds after receiving the CSR (otherwise the researchers could not predict the certificate expiry, which is set to one year after the issue time according to the presentation). The devil in me can’t help but wonder whether this fixed delay was meant to prevent timing attacks, in which case their implementation has just exposed another weakness. If they had implemented random delays instead, this attack would quite possibly be impractical, depending on the size of the delay.

    However, in the real world, things tend not to change unless there is a ‘real and present danger’ in the form of a demonstrable attack. This research proves that point, as you correctly note. So maybe even defense in depth is of no use; it is in the nature of (some) people to ignore the possible until it becomes the practical.
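    The fixed-delay point above can be sketched as follows (the timestamps are made up for illustration): with a constant 6-second signing delay and a fixed one-year lifetime, both validity timestamps in the signed cert are predictable from the submission time alone.

    ```python
    from datetime import datetime, timedelta

    # Attacker chooses when to submit the CSR, so this moment is known.
    submitted = datetime(2008, 11, 3, 7, 45, 30)

    # Fixed delay -> predictable notBefore; fixed lifetime -> predictable
    # notAfter. Both fields are part of the data the CA signs.
    not_before = submitted + timedelta(seconds=6)
    not_after = not_before + timedelta(days=365)
    ```

    A random delay would make `not_before` (and hence `not_after`) uncertain, forcing the attacker to precompute collision blocks for every plausible timestamp.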

  2. Yes, I agree with designing crypto to have more defense-in-depth. A favorite example of mine is SSL3, designed in 1996 by Paul Kocher. He used two hash functions together in an HMAC-like construction, before HMAC had been standardized, because MD5 seemed like it might weaken in the future and SHA-1 was too new to have received much review. (Remember that at the time, SHA-1 had just replaced SHA-0 due to a weakness found in SHA-0 shortly after its release.) He made the right decision, and 13 years later we’re not replacing SSL3/TLS because of MD5.

    One rule of thumb I use when evaluating protocols is that a signer should always know exactly what he is signing. The more random or user-specified fields a request provides, the more “wiggle room” an attacker has to insert potentially malicious data. Conversely, adding random data to a server-controlled area can increase the attacker’s uncertainty.

    What crypto designers should do is pay more attention to implementation details, including adding defense-in-depth from the beginning. A lot of SSL accelerator implementers complained about having to implement two different hash functions, and it did seem silly at the time. A theoretician who didn’t care about implementation details would never have done this hack, since you should “just use a secure hash function”. However, my credit card is never transmitted over theoretical algorithms; it’s sent over SSL/TLS implementations running on x86 hardware.
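    As a toy illustration of the belt-and-suspenders idea (this is the general principle, not SSL3’s actual MAC construction), one can require a forgery to collide in two hash functions at once:

    ```python
    import hashlib

    def dual_digest(data: bytes) -> bytes:
        # Concatenating the two digests means a successful forgery must
        # collide in MD5 and SHA-1 simultaneously; breaking either one
        # alone is not enough.
        return hashlib.md5(data).digest() + hashlib.sha1(data).digest()
    ```

    The combined output is 36 bytes (16 for MD5, 20 for SHA-1), a small cost for surviving the failure of either hash.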

  3. Also, every CA should be logging the entire CSR to disk. Consider a high-volume CA that has signed 1 billion certs, each twice the size of Amazon’s cert (1271 bytes). That’s only about 2.5 terabytes total, or a couple of commodity hard drives, to keep the entire history of a CA.

    Such an archive would allow CAs to go back and look for requests which conformed in some way to known attack data, after an attack was discovered. As it stands, they have no way of knowing how many MD5 collision certs have been maliciously signed before this flaw was announced.
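    The storage estimate above is easy to check with quick arithmetic:

    ```python
    # Back-of-envelope: one billion CSRs, each twice the size of
    # Amazon's 1271-byte cert, measured in terabytes.
    certs = 1_000_000_000
    bytes_each = 2 * 1271
    total_tb = certs * bytes_each / 10**12
    ```

    That works out to roughly 2.5 TB, small enough to archive indefinitely on commodity disks.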
