May 7, 2007

Glitch attacks revealed

Filed under: Embedded,Hacking,Hardware,Security — Nate Lawson @ 6:00 am

(First in a series of articles on attacking hardware and software by inducing faults)

One of the common assumptions software authors make is that the underlying hardware works reliably. Very few operating systems add their own parity bits or CRCs to memory accesses. Even fewer applications check the results of a computation. Yet when it comes to cryptography and software protection, the attacker controls the platform in some manner, and thus faulty operation has to be considered.
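As a toy illustration of result-checking (my sketch, not something from this post; the helper name is hypothetical): a computation can be run twice and compared before its result is released, so a single transient glitch is detected rather than propagated. Real designs often prefer an inverse check instead, such as verifying a signature before outputting it.

```python
# Toy sketch of a fault-detection countermeasure: perform a computation
# twice and compare before releasing the result. A single transient
# glitch is unlikely to corrupt both runs identically.
# "checked" is a hypothetical helper name, not an API from the post.

def checked(f, *args):
    """Run f twice; release the result only if both runs agree."""
    first = f(*args)
    second = f(*args)
    if first != second:
        raise RuntimeError("fault detected: results disagree")
    return first

print(checked(pow, 2, 10))  # → 1024
```

The obvious limitation, which is why hardware countermeasures go further, is that an attacker who can glitch both runs the same way defeats a plain duplicate-and-compare check.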

Fault induction is often used to test hardware during production or simulation runs. It was probably first observed when mildly radioactive material that is a natural part of chip packaging led to random memory bit flips.

When informed that an attacker in possession of a device can induce faults, most engineers respond that nothing useful could come of that. This is similar to the response when buffer overflows were first discovered in software (“so what, the software crashes?”). I often find this “engineering mentality” gets in the way of improving security, sometimes even insisting that exploitability be proven before a problem is fixed.

A good overview paper is “The Sorcerer’s Apprentice Guide to Fault Attacks” by Bar-El et al. In their 1997 paper “Low Cost Attacks on Tamper Resistant Devices,” Anderson and Kuhn conclude:

“We have improved on Differential Fault Analysis. Rather than needing about 200 faulty ciphertexts to recover a DES key, we need between one and ten. We can factor RSA moduli with a single faulty ciphertext. We can also reverse engineer completely unknown algorithms; this appears to be faster than Biham and Shamir’s approach in the case of DES, and is particularly easy with algorithms that have a compact software implementation such as RC5.”

This is quite a powerful class of attacks, and it is sometimes applicable to software-only systems as well. For instance, a signal handler can often be triggered remotely, inducing faults in execution if the programmer wasn’t careful.
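The RSA claim in the Anderson and Kuhn quote above is the well-known fault attack on CRT-based RSA signatures (due to Boneh, DeMillo, and Lipton at Bellcore, and Lenstra): if one half of the CRT computation is glitched, the faulty signature is still correct modulo one prime but wrong modulo the other, and a single gcd recovers a factor of the modulus. A minimal sketch with toy parameters (all values are illustrative; three-argument `pow` for modular inverses needs Python 3.8+):

```python
# Sketch of the Bellcore/Lenstra fault attack on CRT-RSA signatures.
# Toy key for illustration only; real keys are 2048+ bits.
from math import gcd

p, q = 61, 53
n = p * q                           # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign_crt(m, fault=False):
    """RSA signing via the CRT; optionally glitch the mod-p half."""
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault:
        sp ^= 1                     # a single induced bit flip
    # Garner recombination of the two half-signatures
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h

m = 1234
s_good = sign_crt(m)
s_bad = sign_crt(m, fault=True)

# The faulty signature is still correct mod q but wrong mod p, so
# s_bad^e - m is divisible by q and not by p: the gcd reveals q.
factor = gcd(pow(s_bad, e, n) - m, n)
print(factor)  # → 53 (= q): one faulty signature factors n
```

This is also why a standard countermeasure is to verify `s^e mod n == m` before releasing any signature.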

Of course, glitch attacks are most applicable to smart cards, HSMs, and other tamper-resistant hardware. Given the movement toward DRM and trusted computing, we can expect this category of attack and its defenses to become more sophisticated. Why rob banks? Because that’s where the money is.


  1. On this topic, I just interviewed the Director of the Cryptographic Module Validation Program at NIST last week, and he mentioned this is one of the areas that FIPS 140-3 addresses. Specifically, there’s a new “non-invasive protection mechanism” section that will cover such topics as power analysis, EMP tolerance, optical something-or-other (ahh, my hand-written notes), and fault induction.

    The first draft is done, but it is making its way around the Department of Commerce/NIST internally before it becomes available for the 90-day public review/comment period.

    Comment by Jordan Wiens — May 7, 2007 @ 9:23 am

  2. Jordan, that’s exciting news. I heard rumors that some of these areas might have recommendations or requirements in 140-3 but didn’t know where that was in the standards process. Currently, 140-2 merely requires that if devices have countermeasures, they must be listed on the certificate. That results in a pretty vague description of the protection, and there’s no way to compare devices. It will be interesting to see how this turns out since it’s tough to give metrics for resistance to glitch attacks or side channel leakage.

    Comment by Nate Lawson — May 7, 2007 @ 5:39 pm

  3. NIST had a physical security workshop, which Cryptography Research presented at, to discuss these issues. I think one of the problems is that, and I am being a bit presumptuous here, the FIPS labs don’t have anywhere near the expertise needed to successfully mount side channel attacks. These sorts of attacks require very specialized knowledge and, for some (e.g. laser fault injection), expensive equipment. I imagine that if they do address this at all, it will be more at the design review phase rather than in any testing.

    Comment by Shawn F — May 7, 2007 @ 10:24 pm

  4. Shawn, yes, I helped prepare for that presentation. As far as I know, after that workshop, NIST is preparing 140-3 on their own without further consultation with industry. This may be a good thing in preventing too much vendor influence.

    It will be interesting to see what they come up with because it’s hard to establish metrics for these things. What’s level 1 glitch resistance vs. level 2? You’re right that testing will be more dependent than ever on the individual tester’s expertise and creativity. It’s also sad that vendors are incentivized to pick the worst lab (the least creative in attacks) because it gives them a better chance at a higher rating. That’s something that has been difficult with FIPS 140 since the beginning.

    Comment by Nate Lawson — May 9, 2007 @ 7:21 pm

  5. Nate, yeah, the process is apparently: public workshop to get ideas, NIST goes off and makes changes, it’s shopped around internally for a while (I imagine this is where they get NSA and CSE (Canada’s NSA) to give feedback as well), then everybody else gets 90 days. I like your comment about this keeping the vendors from fiddling too much. It seems like FIPS is working pretty well as opposed to, say, some of the criticism various internet standards bodies have been getting lately, and that process is probably part of the reason why. ;-)

    Also, to your comment about the different glitch resistance levels, I got the impression that only the higher levels will take those attacks into account, though I could be wrong. And speaking of higher levels, the requirement for a formal model has been moved to a new level 5. Apparently that was a huge sticking point for a lot of the level 4 modules, so creating a new level 5 that’s similar to the previous level 4 allows for a bit more differentiation.

    Comment by Jordan — May 22, 2007 @ 7:20 pm
