February 28, 2008

Preventing RSA cache timing attacks

Filed under: Crypto,Network,PC Architecture,Security — Nate Lawson @ 1:24 pm

It has been known for a while that side-channel information from crypto operations (e.g., timing) can provide enough information to recover secret keys. Kocher's original paper even identified RAM cache behavior as a potential source of timing variance.

In 2005, Osvik and Percival separately published papers on cache timing attacks. The latter caused a PR incident for Intel since Hyperthreading, which AMD doesn't have, was identified as the key vantage point: a spy process could observe the timing behavior of crypto running on the sibling logical processor of the same physical core. Since cache-based side channels are exploitable even on single-CPU systems, crypto libraries needed to be updated to fix this issue whether Hyperthreading was present or not.

For RSA, it’s necessary to understand modular exponentiation in order to understand the attack. When performing a private-key operation on a message m (decrypting or signing) with the private exponent d, the server computes m^d mod n. To do this efficiently with big numbers, a number of techniques are used.

The basic technique, square-and-multiply, takes advantage of the fact that squaring the accumulator doubles the exponent accumulated so far — a one-bit left shift of the partial exponent in binary. This method walks through each bit of the exponent, most-significant first. If the bit is 0, the accumulating result (initially set to m) is squared. If it is 1, it is squared and then multiplied by m. In nearly all implementations, a square followed by a multiply takes longer than a square alone, giving a very distinguishable timing difference. An attacker can figure out which bits are 0 or 1 from this difference.
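The timing difference is visible directly in a sketch of the algorithm (illustrative Python; here the accumulator starts at 1, which is equivalent to starting at m and skipping the leading 1-bit):

```python
def square_and_multiply(m, d, n):
    """Left-to-right binary exponentiation: returns m^d mod n."""
    acc = 1
    for i in reversed(range(d.bit_length())):
        acc = (acc * acc) % n        # every bit costs a squaring
        if (d >> i) & 1:
            acc = (acc * m) % n      # 1-bits cost an extra multiply (the leak)
    return acc
```

An attacker timing each loop iteration sees a long iteration for every 1-bit and a short one for every 0-bit of d.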

Since basic square-and-multiply is still too slow, a method called “windowed exponentiation” was invented. Instead of examining each individual bit of the exponent, it walks through groups of bits of size w. There are both fixed and sliding window variants, but I’ll stick to fixed-window here for the sake of simplicity. Square-and-multiply is simply fixed-window exponentiation with w = 1.

For the common size w = 3, these values are precomputed mod n: m^2, m^3, m^4, m^5, m^6, and m^7. Obviously, m^0 is 1 and m^1 is just m, so they are already known.

Just like in square-and-multiply, the accumulator is always first “squared”. However, since we are operating on multiple bits at once, it’s actually raised to the 2^w power — 2^3 = 8 in our example — via w consecutive squarings. Then the value of the bits in the window is checked. If 000b, nothing more happens. If 001b, the accumulator is multiplied by m. If 010b, it’s multiplied by the precomputed m^2, and so forth. This speeds up processing by handling batches of exponent bits, at the expense of some small precomputation.
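A minimal sketch of fixed-window exponentiation as described above (illustrative Python, not OpenSSL's implementation):

```python
def fixed_window_exp(m, d, n, w=3):
    """Fixed-window modular exponentiation: returns m^d mod n."""
    # Precompute m^0 .. m^(2^w - 1) mod n; m^0 and m^1 come for free.
    table = [1]
    for _ in range((1 << w) - 1):
        table.append((table[-1] * m) % n)
    # Process the exponent w bits at a time, most-significant window first.
    nbits = max(w, ((d.bit_length() + w - 1) // w) * w)
    acc = 1
    for i in range(nbits - w, -1, -w):
        for _ in range(w):
            acc = (acc * acc) % n    # w squarings: acc = acc^(2^w)
        bits = (d >> i) & ((1 << w) - 1)
        if bits:
            acc = (acc * table[bits]) % n  # table lookup: leaks via the cache
    return acc
```

Note that w = 1 degenerates to square-and-multiply, and the table lookup indexed by secret exponent bits is exactly what the cache attack observes.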

Those precomputed values were typically stored sequentially in an array in memory, each entry the size of the modulus n (e.g., 256 bytes each for a 2048-bit n). Since the cache line size on modern x86 CPUs is 64 bytes, a multiply takes a little longer if the precomputed value isn’t already in the cache.

An attacker could repeatedly cause these values to be evicted from the cache and then, based on the computation time, see which ones were used. Or he could “touch” an array of garbage aligned along cache lines, access the SSL server, then time how long it took to read back each element of his own array. An element that came back slowly had been evicted, meaning the server’s RSA implementation likely used the precomputed value mapping to the same cache line. For example, if the 2nd element of the array had been evicted, the server probably accessed its precomputed m^3, and thus the corresponding bits of the private exponent were 011b.
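The spy's inference step is simple bookkeeping. A toy sketch, assuming the w = 3 layout above, where slot k of the attacker's cache-line-aligned array maps to the same cache line as the precomputed m^(k+2):

```python
def window_bits_from_slot(slot):
    """Map the array slot implicated by the timing probe back to
    exponent window bits, for a table holding m^2 .. m^7 (w = 3)."""
    return format(slot + 2, "03b")
```

For instance, `window_bits_from_slot(1)` (the 2nd element) yields the bits for m^3, matching the example above.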

This attack could be repeated many times until the entire key was extracted. Hyperthreading provided a good vantage point since the spy process could access its array over and over while running simultaneously with the SSL implementation on the other logical processor of the same physical core. However, this same attack can work on a single-CPU system as well, albeit with more samples required.

To address these attacks for OpenSSL, Matthew Wood (Intel) submitted a patch that first appeared in version 0.9.7h. The key change is to the modular exponentiation function in openssl/crypto/bn/bn_exp.c.

This patch stripes the precomputed values m^2…m^(2^w − 1) across cache lines instead of storing them sequentially in the array. That is, the first byte of m^2 would be at address 0, the first byte of m^3 at address 1, etc. A memory dump of this region with w = 3 would look like this:

0: m^2[0], m^3[0], … m^7[0]
64: m^2[1], m^3[1], … m^7[1]
128: m^2[2], m^3[2], … m^7[2]

Thus, the access pattern for reading any pre-computed value is exactly the same as any other: 256 sequential cache line reads. This is a clever way of removing the timing leak. I think it is commendable that Intel spent the time developing this code and contributing it to OpenSSL, especially after the widespread criticism that surrounded this problem was directed primarily at them.
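The striping can be sketched as follows, assuming 64-byte lines and 256-byte values as above (an illustration of the idea, not the actual OpenSSL buffer code):

```python
LINE = 64        # assumed x86 cache line size in bytes
VAL_BYTES = 256  # bytes per precomputed value for a 2048-bit modulus

def store_striped(values):
    """Place byte j of table entry i at offset j*LINE + i."""
    buf = bytearray(VAL_BYTES * LINE)
    for i, value in enumerate(values):
        for j, b in enumerate(value):
            buf[j * LINE + i] = b
    return buf

def load_striped(buf, i):
    """Reconstruct entry i by reading one byte from each of
    VAL_BYTES consecutive cache lines -- the same access pattern
    no matter which entry is requested."""
    return bytes(buf[j * LINE + i] for j in range(VAL_BYTES))
```

Because every load touches all 256 lines in order, cache timing no longer distinguishes which table entry was read.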

There are two problems with this approach. The first is that it can’t fix problems with AES cache timing (see also this detailed analysis of Bernstein’s work and this proposed cache architecture).

The second may only affect certain hardware. The patch is configurable for machines with a 32- or 64-byte cache line size; the default is 64. If an older machine or a non-x86 CPU with a smaller cache line size runs the default compile of OpenSSL, it will still have a small leak if a large window is used — one whose striped table of precomputed values spans more than one actual cache line. For example, consider a machine with an 8-byte cache line size and a 12-bit window. OpenSSL would store m^2…9[0] in the first cache line and m^10…11[0] in the second, allowing an attacker to determine if a given set of exponent bits was less than 2^10 or not. This might be exploitable in certain situations.

To be truly safe against timing attacks, you need to carefully examine the behavior of the underlying hardware. Be especially careful if adapting an existing crypto implementation to a new platform and get an expert to review it.

February 24, 2008

Memory remanence attack analysis

Filed under: Crypto,Hacking,PC Architecture,Security,Software protection — Nate Lawson @ 6:00 am

You have probably heard by now of the memory remanence attack by Halderman et al. They show that it is easy to recover cryptographic keys from RAM after a reset or moving the DIMM to another system. This is important to any software that stores keys in RAM, and they targeted disk encryption. It’s a nice paper with a very polished but responsible publicity campaign, including a video.

Like most good papers, some parts of the attack were known for a long time and others were creative improvements. Memory remanence has been a known issue ever since the first key had to be zeroed after use. In the PC environment, the trusted computing efforts have been aware of this as well. (See “Hardware Attacks”, chapter 13 — S3 is suspend-to-ram and SCLEAN is a module that must be run during power-on to clear RAM). However, the Halderman team is publishing the first concrete results in this area and it should shake things up.

One outcome I do not want to see from this is a blind movement to closed hardware crypto (e.g., hard disk drives with onboard encryption). Such systems are OK in principle, but in practice often compromise security in more obvious ways than a warm reboot. For example, a hard drive that stores encryption keys in a special “lock sector” that the drive firmware won’t access without a valid password can be easily circumvented by patching the firmware. Such a system would be less secure in a cold power-on scenario than well-implemented software. The solution is to ask vendors for documentation of their security implementation before making a purchase, or to buy only hardware that has been reviewed by a third party whose report matches your expectations. (Full disclosure: I perform this kind of review at Root Labs.)

Another observation is that this attack underlines the need to apply software protection techniques to other security applications besides DRM. If an attacker can dump your RAM, you need effective ways to hide the key in memory like white-box crypto, obfuscate and tamper-protect software that uses it, and randomize each install to prevent “break once, run everywhere” attacks. Yes, this is the exact same threat model DRM has faced for years but this time you care because you’re the target.

It will be interesting to see how vendors respond to this. Zeroing memory on reboot is an obvious change that addresses some of their methods. A more subtle hack is to set up page mapping and cache configuration such that the key is loaded into a cache line and never evicted (as done for fast IP routing table lookup in this awesome paper). However, none of this stops attacks that move the DIMM to another system. On standard x86 hardware, there’s no place other than RAM to put keys. However, the VIA C7 processors have hardware AES built into the CPU, and it’s possible more vendors will take this approach to providing secure key storage and crypto acceleration.

Whatever the changes, it will probably take a long time before this attack is effectively addressed. Set your encrypted volumes to auto-detach during suspend or a reasonable timeout and keep an eye on your laptop.

February 18, 2008

BitTorrent peer list encryption

Filed under: Crypto,Network,Security — Nate Lawson @ 11:27 am

BitTorrent has added a new method for encrypting peer lists. This is an attempt to avoid ISPs (such as Comcast) blocking connections that seem to be P2P traffic, as I previously wrote. This extension’s advantages and limitations are a good example for illustrating the fundamental leverage both sides have in this battle.

The actual extension is pretty straightforward.  Trackers identify files by the SHA-1 hash of their metadata (aka the infohash).  The extension specifies that a tracker RC4-encrypt the peer list with a key of SHA-1(infohash).  Thus, a peer must know the infohash of the file it is requesting in order to decrypt the peer list.  Obviously, it has the infohash, since it needed it to look up the file in the first place.
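A sketch of the scheme as described, with a textbook RC4 since Python's standard library doesn't provide one (the real extension's message framing and field names may differ):

```python
import hashlib

def rc4(key, data):
    """Textbook RC4 (illustration only -- RC4 is weak, and keystream
    reuse makes it worse)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                            # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

infohash = hashlib.sha1(b"torrent metadata here").digest()
peer_list_key = hashlib.sha1(infohash).digest()   # key = SHA-1(infohash)
encrypted = rc4(peer_list_key, b"ip1:port1,ip2:port2")
```

Since RC4 is symmetric, any peer that knows the infohash can derive the same key and decrypt the list — which is the point, and also why an eavesdropper who sees the infohash gets it too.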

There are a couple weaknesses in this design.  If an ISP can read the infohash from the peer’s tracker connection, then they can also decrypt the peer list.  This is mitigated by some trackers supporting SSL connections.  Also, the specification allows for reuse of the RC4 keystream, a definite no-no.

Encryption is only a small skirmish in this battle.  Eventually, all traffic will be encrypted and probably encapsulated in SSL-like headers to avoid signature detection.  That will leave ISPs with traffic analysis as their only tool.  Since it appears that at least Comcast is already doing this and not relying on data in the stream, it’s unclear how effective this additional layer of obfuscation will be.

The BitTorrent developers have the advantage in that they control all endpoints (peers and trackers). Their software can take any measures it wants to obfuscate its data signatures (i.e., encryption) and traffic characteristics (i.e., timing or message sizes).  However, the biggest disadvantage is that they don’t control the behavior of non-BitTorrent hosts.

The ISPs have an advantage in that they see all the traffic between the hosts they care about and the Internet.  They can forge packets or even add filters to block particular IPs.  They know the mapping between IP or MAC address and subscriber account.   However, they can’t be sure what software is running on an endpoint and could lose subscribers if their response is too drastic or the wrong application is affected.

Even though conventional wisdom says BitTorrent has the advantage, I think the ISPs have the technical edge in this battle.  Since the BitTorrent developers can’t control hosts that don’t run their software, they will be forced to conform to the traffic profile of web browsing.  Because that profile is asymmetrical (the uplink is only used for small ACKs), the performance advantages of BitTorrent would be eliminated.

However, it’s likely that political and judicial moves will have a longer-term impact on which side wins.  I think this would be a good thing.  Since there are only two kinds of broadband circuit providers in the US (telco or cable), competition won’t prevent a progression of increasingly onerous requirements for endpoints.  Without intervention, I could see a slow creep toward requiring virus scanning of endpoints or prohibitions on specific applications.  I already have to lie and claim to be running Windows to get tech support to run a line test when my DSL goes down (“oh yeah, it rebooted fine but the line is still down”).

Assuming a bad law doesn’t get passed, I think regulation of ISPs would be a good thing to prevent further interference with our traffic.  I refuse to use the Internet as a TV.

February 11, 2008

2008 security predictions

Filed under: Misc,Security — Nate Lawson @ 6:15 pm

Even though it’s a bit late, Thomas Ptacek and I wrote up our predictions for 2008 over on Matasano’s blog.  If you’re wondering how we did for 2007, you can find out here.  Were we more conservative or crazy this time?

February 10, 2008

Next Baysec: February 27 at Pete’s Tavern

Filed under: Security — Nate Lawson @ 2:45 pm

The next Baysec meeting is at Pete’s Tavern again. Come out and meet fellow security people from all over the Bay Area. As always, this is not a sponsored meeting, there is no agenda or speakers, and no RSVP is needed.

See you on Wednesday, February 27th, 7-11 pm.

Pete’s Tavern
128 King St. (at 2nd)
San Francisco

February 7, 2008

Panasonic CF-Y4 laptop disassembly

Filed under: Hardware,Misc — Nate Lawson @ 6:00 am

I’m a big fan of the lightweight Panasonic ultraportable laptops.  The R-series is small but still usable.  The Y-series offers a full 1400×1050 screen, a built-in DVD-RW drive, and long battery life in a 3-pound package.  As a FreeBSD developer, I also find that the BIOSes in Panasonic and Lenovo/IBM laptops are mostly ACPI-compliant, meaning suspend/resume and power management work fine.

Recently, I upgraded the hard drive on my CF-Y4.  I found that these disassembly instructions (another good source) for the CF-Y2 are mostly accurate.  However, there are a few caveats I wanted to note for others with the R/W/T/Y series laptops.

First, all the notes about 3.3-volt versus 5-volt logic for the hard drive no longer apply.  The Toshiba hard drive that came in my Y4 uses 5-volt logic, along with a 5-volt motor supply.  In fact, the pins are tied together internally.  It was straightforward to swap in a WD 250 GB drive, with no pin clipping necessary.  This may apply to the newer R-series as well, though I haven’t verified it.  If in doubt, use an ohmmeter to verify there is no resistance between pins 41 and 42 on the stock hard drive.

Next, heed the warnings about stripping the top two large hinge screws.  They screw directly into plastic, while the other two hinge screws have a steel sleeve.  Use a good jeweler’s screwdriver for the small screws.  You don’t need to remove the two screws that hold the VGA connector to the case.

When removing the keyboard, pry smoothly in multiple places but don’t be afraid to put a little effort into it.  The glue used to hold it down is surprisingly strong.  Be sure you removed all the small screws from the bottom, of course, otherwise it won’t pop out.

Be sure to clean the CPU’s heat sink connection carefully and use some good thermal paste when reassembling.  These laptops have no fan (awesome!) but that means it’s critical to make a good connection between the CPU and the keyboard heat sink area.  Also, don’t forget the GPU, which sinks heat through the bottom of the motherboard.  I cut a small piece of plastic to use as a spreader to eliminate any bubbles.  I also put a thin amount of paste along other parts of the internal skeleton where it touches the keyboard.  Once you reassemble the case, monitor the system temperature for a while to be sure you didn’t make a mistake.  I found my temperature actually dropped compared to the factory thermal paste.
