A few months ago, there was an article on attacking an RSA private key by choosing specific messages to exercise multiplier bugs that may exist in modern CPUs. Adi Shamir (the “S” in “RSA”) announced the basics of this ongoing research, and it will be interesting to review the paper once it appears. My analysis is that this is a neat extension to an existing attack and another good reason not to implement your own public key crypto, but if you use a mainstream library, you’re already protected.
The attack depends on the target using a naively-implemented crypto library on a machine that has a bug in the multiplier section of the CPU. Luckily, all crypto libraries I know of (OpenSSL, crypto++, etc.) guard against this kind of error by checking the signature before outputting it. Also, hardware multipliers probably have fewer bugs than dividers (ahem, FDIV) due to the greater logic complexity of the latter. An integer multiplier is usually implemented as a set of adders with additional control logic to perform an occasional shift, while a divider actually performs successive approximation (aka “guess-and-check”). The design of floating point divider logic is clever, and I recommend that linked paper for an overview.
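For intuition, here’s a toy Python model of the shift-and-add idea. This is purely illustrative, not how any particular CPU is actually wired:

def shift_and_add_multiply(a, b):
    # Toy model of a hardware multiplier: for each set bit of b,
    # add a copy of a shifted into that bit position.
    product = 0
    shift = 0
    while b:
        if b & 1:
            product += a << shift
        b >>= 1
        shift += 1
    return product

assert shift_and_add_multiply(1234, 5678) == 1234 * 5678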
The basic attack was first discovered in 1996 by Boneh et al. and applied to smart cards. I even spoke about this at the RSA 2004 conference, see page 11 of the linked slides. (Shameless plug: come see my talk, “Designing and Attacking DRM” at RSA 2008.) Shamir has provided a clever extension of that attack, applied to a general purpose CPU where the attacker doesn’t have physical access to the device to cause a fault in the computation.
It may be counterintuitive, but a multiplication error in any one of the many multiplies that occur during an RSA private key operation is enough for an attacker who sees the erroneous result to quickly recover the private key. He doesn’t need to know which multiply failed or how far off it is from the correct result. This is an astounding conclusion, so be sure to read the original paper.
The standard RSA private key operation is:
m^d mod n
It is typically implemented in two stages that are later recombined with the Chinese Remainder Theorem (CRT). This is done for performance: p and q are about half the size of n, so each of the two exponentiations is much faster than a single exponentiation mod n.
s1 = m^d mod p
s2 = m^d mod q
S = CRT(s1, s2)
The power analysis graph in my talk clearly shows these two phases of exponentiation with a short pause in between. Remember that obtaining either p or q is sufficient to recover the private key since the other can be found by dividing, e.g. n/p = q. Lastly, d can be obtained once you know p and q.
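For concreteness, here’s a minimal Python sketch of the CRT split with a toy key and no padding (real keys are 2048+ bits, and libraries precompute the reduced exponents d mod (p-1) and d mod (q-1) shown below; requires Python 3.8+ for pow(e, -1, m)):

p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent
q_inv = pow(q, -1, p)                 # precomputed CRT coefficient

def sign_crt(m):
    s1 = pow(m, d % (p - 1), p)       # exponentiation mod p
    s2 = pow(m, d % (q - 1), q)       # exponentiation mod q
    h = (q_inv * (s1 - s2)) % p       # Garner recombination
    return s2 + h * q

m = 42
assert sign_crt(m) == pow(m, d, n)    # same result as m^d mod n, computed faster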
The way to obtain the key from a faulty signature, assuming a glitch appeared during the exponentiation mod p, is:
q = GCD((m - S’^e) mod n, n)
Remember that the bad signature S’ is the CRT combination of the correct value s2 = m^d mod q and some garbage value G of approximately the same size. GCD (which is fast) can be used to calculate q since the difference m - S’^e is divisible by q but almost certainly not by p.
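Here’s a small Python simulation of the recovery, reusing the same toy key as above: we corrupt the mod-p half of the computation, recombine, and pull q back out with a single GCD.

from math import gcd

p, q = 61, 53                          # same toy key as before
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))
q_inv = pow(q, -1, p)

def sign_crt_glitched(m):
    s1 = pow(m, d % (p - 1), p) ^ 1    # simulate a fault during the mod-p exponentiation
    s2 = pow(m, d % (q - 1), q)        # the mod-q half is still correct
    h = (q_inv * (s1 - s2)) % p        # CRT recombination as before
    return s2 + h * q

m = 42
S_bad = sign_crt_glitched(m)
recovered_q = gcd((m - pow(S_bad, e, n)) % n, n)
assert recovered_q == q                # one GCD recovers a factor of n
recovered_p = n // recovered_q
recovered_d = pow(e, -1, (recovered_p - 1) * (recovered_q - 1))
assert recovered_d == d                # and the full private key follows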
The advance Shamir describes involves how this attack is mounted. In the past, it required physical possession of the device so glitches could be induced via voltage or clock transients. Such glitch attacks were once used successfully against pay television smart cards.
Shamir may be suggesting he has found some way to search the vast space (2^128) of possible values for A * B for a given device and find some combination that is calculated incorrectly. If an attacker can use such values in a message that is to be signed or decrypted with the private key, he can recover the private key via the Boneh et al. attack. This indeed would be a great advance since it could be mounted remotely.
There are two ways to prevent these attacks. The easiest is just to verify each signature after generating it:
S = m^d mod n
m’ = S^e mod n
if m’ != m:
    fatal error, discard S
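In Python, the check is only a few lines. This is a sketch with raw modular exponentiation; real code would use a library’s sign/verify API and proper padding:

def sign_with_check(m, d, e, n):
    S = pow(m, d, n)                   # private key operation (with or without CRT)
    if pow(S, e, n) != m:              # re-verify with the public key before releasing S
        raise RuntimeError("verification failed, possible fault; discarding signature")
    return S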
Also, randomized padding schemes like RSASSA-PSS can help.
All crypto libraries I know of implement at least the former approach, and RSASSA-PSS is also available nearly everywhere. So the moral is, use an off-the-shelf library but also make sure it has countermeasures to this kind of attack.
A friend of mine and I discussed this attack and its implications when it first came out. Both of our day jobs involve fault analysis of crypto against smartcards and other embedded devices (along with other forms of side-channel analysis). Quite apart from the problems of finding the multiplication bug given such a large search space, we were surprised at Shamir’s claim that the new attack “seems to be much more dangerous in its implications [than the old fault attacks].”
We cannot see how to apply this attack in the real world (of course, always assuming we are acting remotely since this is one of the claimed advantages of the bug attack). The attacker wants the private key (obviously), which is only used in decryption and signing. He must be able to control inputs to the CRT exponentiation and see its output. It seemed to us that:
* For decryption, it is unlikely that we can supply an arbitrary ciphertext to the target PC and get it to send back the plaintext; normally the device would keep the decrypted text secret (it may, however, perform some action using it or based on its contents).
* For signature, we can normally send a message we want to have signed and have the signature returned. However, real-world applications will typically hash the message and sign the hash, not the message itself. This means we are not choosing the true input value. Even assuming the hash function is public, choosing an input to the exponentiation then reduces to finding a preimage of the chosen value under the hash function; researchers are moving closer to making that feasible, but it still makes the attack vastly more computationally expensive.
It would be interesting to see if anyone can come up with suggestions for how to get around these problems; we scratched our heads for a few minutes after reading and felt we couldn’t quite see the wood for the trees. Basically, my questions are:
* Does anyone know of any real world scenarios where you send ciphertexts to a remote device and it sends back plaintexts?
* Does anyone know of any real world scenarios where you send data to a remote device and it signs it directly without hashing first?
* Are my colleague and I complete donkeys and totally missing something fundamental here?
Byron, thanks for the thoughtful comment. A lot of this is supposition since details haven’t yet appeared, just the initial announcement. One wild guess is that this is an extension of the multi-core timing attack work. Perhaps they have found a way to induce multiplier faults in a parallel core when executing from the other one. This is just a guess with no evidence, so take it with a grain of salt until we know more.
Your main question remains — how to retrieve the faulty signature or plaintext in a real world remote environment. For decryption, it’s possible that some VPN products would be a good candidate. But I think it’s more likely a signature scenario.
There are some systems that sign a hash without generating it themselves. For example, an internal CA that creates certs for clients might just sign a hash. Such designs are dangerous since it’s a bad idea to sign a hash without knowing the plaintext it matches. You’re attesting to something you’ve never seen. This kind of system would be easy to attack with Shamir’s method since you can just specify the hash directly.
There’s another scary thing about allowing the user to submit a hash that is worth its own post. If the hash size is a big enough fraction of the modulus size, you can forge signatures on arbitrary plaintext once you’ve obtained one signature on your malicious input. I’ll have to leave an explanation of that for later.
Can you please send me the source code for the RSA fault attack and Seifert’s attack?