There’s a new flaw in the W3C XMLDsig standard, found by Thomas Roessler. This standard specifies the procedure for signing and validating XML data, such as SOAP messages. A document is validated with an HMAC; however, an additional parameter, <HMACOutputLength>, can be supplied, allowing the signer to use a truncated HMAC.
As you may remember, an HMAC is a way of validating data using a secret key and a cryptographic hash algorithm. I avoid using the term “keyed hash” as that leads to bad implementations like “HASH(key || data)”. The output of an HMAC is the size of the underlying hash’s output, say 160 bits for SHA-1. This value is sometimes truncated in space-constrained designs. For example, only the first 128 bits might be sent and verified. This is sometimes acceptable because circumventing an HMAC requires a second-preimage attack (2^n work), unlike forging a signature, which only requires a collision (2^(n/2) work).
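For concreteness, here’s a minimal Python sketch of how a truncated HMAC is normally computed and verified (the function names are mine, not from any spec or library):

    import hmac
    import hashlib

    def truncated_hmac_sha1(key, data, out_bits=128):
        # Full HMAC-SHA1 output is 160 bits; keep only the leading out_bits.
        tag = hmac.new(key, data, hashlib.sha1).digest()
        return tag[:out_bits // 8]

    def verify(key, data, received_tag, out_bits=128):
        expected = truncated_hmac_sha1(key, data, out_bits)
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(expected, received_tag)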
The problem is that there is no specified lower bound on <HMACOutputLength>. So this is a perfectly valid signature field in XMLDsig:
    <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#hmac-sha1">
        <HMACOutputLength>0</HMACOutputLength>
    </SignatureMethod>
Yes, an attacker can send any request and any signature and it will be accepted if the server follows the field blindly and validates only 0 bits. Even a server that checked for 0 might allow a truncated length of 1 bit or 8 bits or whatever.
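Here’s a sketch of the broken logic, assuming a verifier that takes the attacker-supplied <HMACOutputLength> at face value. With a length of 0, both sides of the comparison are empty, so every forgery verifies:

    import hmac
    import hashlib

    def verify_blindly(key, data, received_tag, output_length_bits):
        # BROKEN: output_length_bits comes straight from the attacker's
        # <HMACOutputLength> element with no lower bound enforced.
        expected = hmac.new(key, data, hashlib.sha1).digest()
        n = output_length_bits // 8
        # With output_length_bits == 0, this compares b"" to b"": always True.
        return hmac.compare_digest(expected[:n], received_tag[:n])

    # Any message with any (even empty) signature is accepted:
    assert verify_blindly(b"secret", b"attacker-chosen request", b"", 0)

As I understand the erratum, the fix is simply to reject any requested length below a sane floor (at least half the digest length, and never below 80 bits) before comparing anything.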
According to the advisory, it took six months to release this simple spec erratum because the flaw affected so many vendors. This is a great example of how a little leniency in crypto can have a huge impact on security.
Also, I’m not certain this bug is actually fixed. For example, the hash algorithm itself is specified by the same element’s Algorithm attribute. So could an attacker specify a broken algorithm like MD4, which has a second-preimage attack? If the server’s crypto library just maps the given Algorithm field to whatever hash function it names, this might be possible.
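For illustration (these function names are hypothetical, not from any real XMLDsig implementation), here’s the difference between a verifier that trusts the Algorithm URI and one that allowlists it:

    import hashlib

    def digestmod_for_uri_naive(algorithm_uri):
        # BROKEN: derives the digest name from the URI and accepts anything
        # the underlying library happens to support, possibly including MD4.
        # The returned name is usable as hmac.new(key, msg, digestmod=name).
        return algorithm_uri.rsplit("#hmac-", 1)[-1]  # e.g. "sha1", "md4"

    # Safer: allowlist exactly the algorithms you intend to accept.
    ALLOWED = {
        "http://www.w3.org/2000/09/xmldsig#hmac-sha1": hashlib.sha1,
        "http://www.w3.org/2001/04/xmldsig-more#hmac-sha256": hashlib.sha256,
    }

    def digestmod_for_uri(algorithm_uri):
        try:
            return ALLOWED[algorithm_uri]
        except KeyError:
            raise ValueError("unsupported SignatureMethod algorithm")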
The spec says:
Requirements are specified over implementation, not over requirements for signature use. Furthermore, the mechanism is extensible; alternative algorithms may be used by signature applications.
So it’s not clear whether or not this is a problem, although ambiguity in crypto is one of the biggest sources of flaws.
The most annoying thing about this entire vulnerability is the question of why truncated HMACs were included in the standard at all. This is XML we’re talking about, not some packed binary protocol. The difference between full HMAC-SHA1 and the new minimum allowed truncated length is 10 bytes (plus base64 encoding expansion). You’re telling me that it’s worth exposing all your implementers to this kind of crypto risk to save 10 bytes? Really?
“You’re telling me that it’s worth exposing all your implementers to this kind of crypto risk to save 10 bytes? Really?”
And if you are using XML in the first place, you’ve already embraced some level of verbosity…
Exactly my point. There is no valid justification for this field. Eliminating the field completely saves more space than it could ever save by existing: the tag overhead more than offsets the truncation savings.
You can’t even argue this is there for interoperability. Anything that was already using truncated MACs didn’t support this standard, because it was a new standard.
We’ve seen *exactly* the same truncation stupidity before with SNMPv3:
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-0960
Someone seems to be really keen on creating the HMAC-idiotic-truncation bug class.
Astounding.
This is actually a specific case of a more general problem in security protocols, that the standards authors never (or at least only rarely) bother specifying sensible limits for values, and because they’re not required by the spec, developers often don’t bother imposing them (there’s a long-standing vulnerability in a particular mechanism used in PKCS #11, for example, that does the same thing, allowing you to guess a key a bit at a time). It’s somewhat scary how many security-protocol implementations you can attack simply by specifying unexpectedly large or small values in fields. Typical behaviour is to spend inordinate amounts of time or consume inordinate amounts of memory trying to do whatever it is the other side has instructed you to do (“iterate the hashing over this 128kB salt ten billion times”, “allocate a window size of four hundred megabytes”, that sort of thing). You can take out printers, routers, servers, CAs (!!), simply by sending a perfectly legitimate (according to the spec) field value as part of a security protocol that’s meant to be keeping them safe from attack.
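To make that concrete, the defense is usually just a few lines of bounds checking before doing any work the peer requests. A sketch, with limits invented purely for illustration (the right numbers depend on the protocol and deployment):

    MAX_SALT_LEN = 1024         # bytes
    MAX_ITERATIONS = 1_000_000  # e.g. a KDF iteration count
    MAX_WINDOW = 1 << 20        # bytes of buffer a peer may ask us to allocate

    def check_peer_params(salt, iterations, window_size):
        # Reject pathological values before burning CPU or memory on them.
        if len(salt) > MAX_SALT_LEN:
            raise ValueError("salt too large")
        if not 1 <= iterations <= MAX_ITERATIONS:
            raise ValueError("iteration count out of range")
        if not 1 <= window_size <= MAX_WINDOW:
            raise ValueError("window size out of range")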
Nice examples, Dave.
I still can’t get over this. You can only safely shave 10 bytes off SHA-1 to get an 80-bit truncated hash, which works out to about 12 bytes saved once base64-encoded (28 characters down to 16). Given that the XML start/end tags for this field alone total 37 bytes, there’s a net loss of space in exchange for much less security. Lose-lose!
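The arithmetic, spelled out in Python:

    import base64

    full = len(base64.b64encode(bytes(20)))   # 160-bit HMAC-SHA1 -> 28 chars
    short = len(base64.b64encode(bytes(10)))  # 80-bit truncation -> 16 chars
    saved = full - short                      # 12 chars saved on the wire
    tags = len("<HMACOutputLength>80</HMACOutputLength>")  # 39 bytes of overhead
    print(saved, tags)  # 12 39: the field costs more than it can ever save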