DSA requirements for random k value

Most public key systems fail catastrophically if you ignore any of their requirements. You can decrypt RSA messages if the padding is not random, for example. With DSA, many implementation mistakes expose the signer’s private key.

Many crypto protocols use a nonce. The traditional requirement for a nonce is that it never be repeated, so a simple counter can suffice, as long as it is safely persisted. Using a timestamp is problematic if the clock ever goes backwards (say, due to NTP) or if you need to generate two messages within a short interval.

In DSA, the k value is not a nonce. In addition to being unique, the value must be unpredictable and secret. This makes it more like a random session key than a nonce. When an implementer gets this wrong, they expose the private key, often with only one or two signatures.

If k is predictable, there is a way to recover the private key from a single signature with straightforward algebra. Since a past weakness in the Debian PRNG resulted in only 32767 possible outputs, an attacker could recover any DSA private key where a single signature was generated on a vulnerable Debian system. The key could have been generated securely on a system without this flaw, but that single signature would compromise it.

But what about the requirement that k be unique? Assume there’s an email signing system that has a secure PRNG, but it is rate-limited: two requests that arrive quickly from the same process get the same output. As an attacker, you can’t predict k, but you suspect the signing system may have this weakness.

To generate a DSA signature, the signer calculates (r, s) as follows:

r = (g^k mod p) mod q
s = k^-1 * (H(m) + x*r) mod q
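These equations can be sketched with toy numbers (p = 23, q = 11, g = 4 are illustrative only; real DSA parameters are far larger). The sketch also shows why a predictable k is fatal, as described above: with k known, a single signature reveals x.

```python
# Toy DSA parameters for illustration only; a real q is 160+ bits.
p, q, g = 23, 11, 4
x = 7              # signer's private key
k = 5              # per-signature secret; assume an attacker can predict it
H = 3              # H(m) mod q for the signed message

# Signer computes (r, s) per the equations above.
r = pow(g, k, p) % q
s = (pow(k, -1, q) * (H + x * r)) % q

# An attacker who knows k solves the s equation for x:
#   x = r^-1 * (s*k - H) mod q
x_rec = (pow(r, -1, q) * (s * k - H)) % q
assert x_rec == x
```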

If you see a repeated r value, then you know k has been repeated. Build a web crawler to find DSA signatures. Dump its output into a parser for a bunch of formats (X509v3, PGP, PKCS#7, etc.). From each signature file, insert each r value into a key-value database and look for a match. If you find any two matching r values, you can recover the signer’s private key.
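A hypothetical sketch of that matching step (the function and record layout are assumptions, not any particular tool): index each parsed signature by its r value and report collisions between different signatures.

```python
# Each record is (r, s, h), where h = H(m) mod q, as extracted from
# whatever container format (X509v3, PGP, PKCS#7, ...) held the signature.
def find_repeated_r(signatures):
    seen = {}                          # r -> first (s, h) observed
    for r, s, h in signatures:
        if r in seen and seen[r] != (s, h):
            yield r, seen[r], (s, h)   # same r from two distinct signatures
        else:
            seen.setdefault(r, (s, h))

# Two entries share r = 1: enough to recover the signer's private key.
sigs = [(1, 2, 3), (9, 4, 5), (1, 7, 6)]
assert list(find_repeated_r(sigs)) == [(1, (2, 3), (7, 6))]
```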

The algebra to do so is pretty straightforward. Given two signatures for messages M_A and M_B, we want to solve for k.

S_A = k^-1 * (H(M_A) + x*r) mod q
S_B = k^-1 * (H(M_B) + x*r) mod q

Subtract the two signatures. (The modular reduction step is implicit from here on for readability.)

S_A - S_B = k^-1 * (H_A + x*r) - k^-1 * (H_B + x*r)

Redistribute. Since the two k values are identical, so are their inverses.

S_A - S_B = k^-1 * (H_A + x*r - H_B - x*r)

The x*r values cancel out.

S_A - S_B = k^-1 * (H_A - H_B)

Solve for k. (Division here means multiplying by the modular inverse of S_A - S_B mod q.)

k = (H_A - H_B) / (S_A - S_B)

Once you have k, you can solve for x directly as in the previous post. There is a more complicated technique for solving for x if only a few bits of k are known, requiring more signatures.
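The two-signature recovery above can be checked end to end with toy numbers (p = 23, q = 11, g = 4 and the message hashes are made up; division mod q uses Python's pow(..., -1, q)):

```python
# Toy parameters for illustration; real DSA uses much larger p and q.
p, q, g = 23, 11, 4
x, k = 7, 5                        # private key, and the k reused twice

def sign(h):
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q
    return r, s

hA, hB = 3, 6                      # hashes of the two signed messages
rA, sA = sign(hA)
rB, sB = sign(hB)
assert rA == rB                    # a repeated k shows up as a repeated r

# Attacker: k = (H_A - H_B) / (S_A - S_B), then x from either signature.
k_rec = ((hA - hB) * pow(sA - sB, -1, q)) % q
x_rec = (pow(rA, -1, q) * (sA * k_rec - hA)) % q
assert (k_rec, x_rec) == (k, x)
```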

It is extremely important that all bits of k be unique, unpredictable, and secret. With two DSA signatures on separate messages with the same k, you can recover the signer’s private key.

16 thoughts on “DSA requirements for random k value”

  1. This is something that’s always concerned me about ECC algorithms, on paper they may (or may not, depending on who you work for) be better than the more traditional RSA, but in practice they’re critically vulnerable to the requirement for a ready supply of high-quality random numbers. In particular, the very constrained environments that ECC is usually pushed for are the ones that are the least likely to have a good supply of random numbers.

    (Since the ECC vs. RSA comparison is only ever done in terms of mathematics, this never seems to be a consideration).

    1. Excellent point, Dave. However, if your embedded environment has hardware AES or SHA and keys in ROM, you can build a secure PRNG from that as long as you don’t repeat the sequence.

      I think the biggest risk is to embedded environments that try to reuse all off-the-shelf software (e.g., unmodified Linux). They have neither the custom hardware to support a PRNG as above nor a good source of entropy for the traditional /dev/random. Since the device is closed, no one finds out about this weakness until it’s too late.
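      One way to sketch that construction (hypothetical; SHA-256 stands in for the hardware AES/SHA primitive, and the counter must be persisted so the sequence never repeats):

```python
import hashlib

class HashCtrPrng:
    """Counter-mode PRNG keyed by a device-unique secret (sketch only)."""
    def __init__(self, rom_key: bytes, counter: int = 0):
        self.key = rom_key
        self.counter = counter               # persist this across reboots

    def next_block(self) -> bytes:
        # Output block i = SHA-256(key || i); never reuse a counter value.
        out = hashlib.sha256(self.key + self.counter.to_bytes(8, "big")).digest()
        self.counter += 1
        return out

prng = HashCtrPrng(b"device-unique-secret")
a, b = prng.next_block(), prng.next_block()
assert a != b and len(a) == 32
```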

    2. I realize I’m late to the party here, but I’m curious enough to ask anyway: How does ECC exacerbate problems with randomness? I’ve never heard this claim before, and in any of the algorithms I can think of, you just need a random number modulo a prime as you usually would in traditional dlog-based crypto. Is there a gap in the theory vs practice in that reasoning?

      The issue with DSA above is really a design flaw in the signature scheme itself. I’ve never understood why a randomized signing algorithm was permissible, especially one with this issue.

      I will mention as an aside that this DSA randomness vulnerability came up in the context of virtual machine resets, as detailed in this paper.

      1. It’s not specifically ECC, it’s any DLP/ECDLP-based system, so the problems you mention for DSA also apply to ECDSA. Not only do you need a good source of strong random numbers but you also need to be very, very careful how you apply the algorithm in order to avoid leaking private-key bits to an observer. One of these issues hit a number of DSA implementations a few years ago: if you generate k the same size as q and reduce it mod q, the result is biased, so you have to make k somewhat bigger than q to eliminate the bias. You also run into problems with the ability to forge signatures if you use shared public parameters and forget to authenticate them (as an early DSA X.509 certificate scheme did)… I really don’t like DLP-based sigs, they’re waaay too brittle and susceptible to the slightest implementation error, even if they’re mathematically more fun than RSA is.
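        The k-bias fix mentioned here can be sketched as the “extra random bits” approach (function name hypothetical; this mirrors the FIPS 186-style method of drawing k well over the size of q before reducing):

```python
import secrets

def gen_k(q: int) -> int:
    # Draw 64 bits more than q needs, then reduce: the result is
    # nearly uniform in [1, q-1] instead of biased toward small values.
    extra = secrets.randbits(q.bit_length() + 64)
    return (extra % (q - 1)) + 1

q = 11                  # toy modulus; a real q is 160+ bits
k = gen_k(q)
assert 1 <= k <= q - 1
```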

    1. The YouTube embedding system didn’t keep the #t=5m30s in the original link. Skip to 5:30.

    2. Damn, you beat me to it for saying “I toldja so” :-). I’ll bet this is another case of someone so blinded by the cromulence of ECC that they forgot about, or didn’t even know about, any other security requirements.

  2. Implementation is hard. I’ve seen several systems that either failed to provide a true random value, or failed to provide sufficient entropy with that random number (e.g. hashing the random value to hex output, then using it in bit form). I’d love to see a true post-mortem from Sony to explain how this fell through the cracks.

    1. Absolutely, that’s why many cryptographers say you need to spend 10x your development cost on review and validation.

      In the case of a signing tool as important as this one, Root Labs would have used multiple entropy sources, failure tests on each raw input, and multiple checks for repeating outputs. We would try to make it hard to break this even with developer access to the source (e.g., the Debian PRNG bug).

      You’ll never see public discussion about this flaw by Sony.

      1. The timing on this blog post is a bit creepy, but I’m sure Nate will vouch that he got no advance warning on our part.

        Nate, maybe you can speculate on this — I’m a little vague on the details, I’m sure Segher will come and correct any mistakes I make — but apparently this wasn’t a simple failure in random-number generation, it was apparently designed that way. Each key (for each different type of loader) seems to have an associated random number ‘m’ — the numbers follow no pattern, but they are consistent between different signatures on different versions of the same loader — almost as if they treated ‘m’ as one of the parameters of the key. Any idea what error in understanding might have caused that?

        Another thing that concealed this for a few months (years?) was that the signatures themselves were encrypted (obfuscated), so it wasn’t immediately obvious that the ‘R’s repeated across different binaries until the plaintext signatures were recovered. Would that be an argument for or against plaintext signatures?

      2. Of course I didn’t hear about your PS3 hack in advance. If anything, you stole the idea from my blog. ;-) I had posted over 1.5 years ago about the weakness of having a predictable k value with DSA in that it allows you to recover the private key with just one signature.

        With DSA, the entropy of the random signature value k is critical. It is so critical that knowledge of only a few bits of k can reveal your entire private key to an attacker. Interestingly enough, the Wikipedia article on DSA doesn’t mention this concern.


        I realized later that I had not described in detail how to recover the private key if k is reused (but otherwise unknown) and you have two signatures, so I wrote this followup post. I still have not written the post on how to recover the private key if only a few bits of k are known but the rest are random. I’ll get to that one of these days.

        From your description, it sounds like they either hashed the key or a key ID to get the secret nonce. Or they randomly generated it once and stored it with the key. The (EC)DSA spec is pretty clear about this not being ok.

        And sure, obfuscation can make it take longer to hack a system. But I wouldn’t argue for that as a design parameter other than in systems like content protection or game consoles. Encrypting signatures does not make them more secure in general.

  3. >You’ll never see public discussion about this flaw by Sony.

    You wouldn’t even have seen discussion inside Sony. Their corporate culture is very stovepiped, quite dysfunctionally so, since what would be regarded as normal communication channels in other companies (even the highly regulated ones that exist in Japan, where as an engineer or developer you’re given a task and perform it to the best of your ability without thinking of questioning any of it) simply don’t exist. So for something like this, development team A would have been handed a fait accompli by development team B without any ability to question it, or even an ability to provide feedback if they noticed a problem. In fact, the first a team may hear about some new technology is when it gets shipped to them from some other development group. (People complain about the lack of technical info from Sony to work with the PS3, but it’s not much better for people working inside the company, who have extreme difficulty getting the information they need.)

    So not only would Sony not have employed Root Labs to look at this, they wouldn’t have involved anyone else at Sony outside the narrow stovepipe that worked on it.

    1. So can you at least guess what happened inside Sony when “Other OS” support was removed in response to a hack by GeoHot, even though it required hardware modifications?

    2. That hack did expose the hypervisor’s code. Was the hypervisor that poorly designed?

Comments are closed.