Are single-purpose devices the solution to malware?

I recently watched this fascinating talk by Jonathan Zittrain, author of The Future of the Internet — And How to Stop It. He covers everything from the Apple II to the iPhone and trusted computing. His basic premise is that malware is driving the resurgence of locked-down, single-purpose devices.

I disagree with that conclusion. I think malware will always infect the most valuable platform. If the iPhone were as widely deployed as Windows PCs, you can bet people would be targeting it with keyloggers, closed platform or not. In fact, people's motivation to find ways around the vendor's protection on their own phones creates a great malware channel (trojaned jailbreak apps, anyone?).

However, I like his analysis of what makes some open systems resilient (his example: Wikipedia defacers) and others susceptible to being gamed (Digg users selling votes). He claims it’s a matter of how much members consider themselves part of the system versus outside it. I agree that designing in accountability and aggregated reputation helps, whereas excessive perceived anonymity can lead to antisocial behavior.

Warning signs you need a crypto review

At the RSA Conference, attendees were given a SanDisk Cruzer Enterprise flash drive. I decided to look up the user manual and see what I could find before opening up the part itself. The manual appears to be an attempt at describing its technical security aspects without giving away too much of the design. Unfortunately, it seems more targeted at buzzword compliance and leaves out some answers critical to determining how secure its encryption is.

There are some good things to see in the documentation. It uses AES and SHA-1 instead of keeping quiet about some proprietary algorithm. It appears to actually encrypt the data (instead of hiding it in a partition that can be made accessible with a few software commands). However, there are also a few troublesome items that are good examples of the warning signs that a more in-depth review is needed.

1. Defines new acronyms not used by cryptographers

Figure 2 is titled “TDEA Electronic Code Book (TECB) Mode”. I had to scratch my head for a while. TDEA is another term for Triple DES, an older NIST encryption standard. But this documentation says the device uses AES, which is the replacement for DES. Either the original design used TDEA and they later moved to AES, or someone got their terms mixed up and confused a cipher name with a mode of operation. Either way, “TECB” is meaningless.
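To make the distinction concrete, here is a quick sketch using the PyCryptodome library (my choice for illustration only; I have no idea what the drive's firmware actually looks like). ECB is a mode that can wrap any block cipher, so a term like “TECB” jams a cipher name (TDEA) into the spot where a mode name belongs:

```python
# Illustration only (PyCryptodome assumed): ECB is a mode parameter,
# not a cipher, so it applies equally to AES or Triple DES (TDEA).
from Crypto.Cipher import AES, DES3

aes_key = bytes(16)                                    # 128-bit AES key (zeros, demo only)
tdes_key = DES3.adjust_key_parity(bytes(range(24)))    # 192-bit Triple DES key

aes_ecb = AES.new(aes_key, AES.MODE_ECB)       # "AES in ECB mode"
tdes_ecb = DES3.new(tdes_key, DES3.MODE_ECB)   # "TDEA in ECB mode" -- presumably what "TECB" meant
```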

2. Uses ECB for bulk encryption

Assuming they do use AES-ECB, that’s nothing to be proud of. ECB encrypts one cipher-block-sized chunk at a time, which only “spreads” the data by the cipher block size. Patterns remain visible because identical 16-byte plaintext blocks always encrypt to identical ciphertext blocks.

All flash memory is accessed in pages much bigger than the block size of AES: page sizes are typically 1024 bytes or more versus AES’s 16-byte block size. So there’s no reason to encrypt in independent 16-byte units. Instead, a cipher mode like CBC, where all the blocks in a page are chained together, would be more secure. A good review would probably recommend that, along with careful analysis of how to generate the IV, how to supply integrity protection, etc.
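Here’s a rough sketch of the difference (again using PyCryptodome as a stand-in; the drive’s actual implementation is unknown). Encrypting a flash-sized page of repeated data in ECB yields one repeated ciphertext block, while CBC chaining hides the repetition:

```python
# Sketch only: compare ECB and CBC over a 1024-byte "page" of repeated data.
import os
from Crypto.Cipher import AES

key = os.urandom(16)
page = b"\x00" * 1024                       # 64 identical 16-byte plaintext blocks

ecb_ct = AES.new(key, AES.MODE_ECB).encrypt(page)
cbc_ct = AES.new(key, AES.MODE_CBC, os.urandom(16)).encrypt(page)

blocks = lambda ct: {ct[i:i + 16] for i in range(0, len(ct), 16)}
print(len(blocks(ecb_ct)))                  # 1: every repeated block leaks as a repeat
print(len(blocks(cbc_ct)))                  # 64: chaining masks the repetition
```

Of course, CBC alone isn’t a silver bullet; per-page IV generation and integrity protection are exactly the kind of details a reviewer would dig into.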

3. Key management not defined

The device “implements a SHA-1 hash function as part of access control and creation of a symmetric encryption key”. It also “implements a hardware Random Number Generator”.

Neither of these statements is sufficient to understand how the bulk encryption key is derived. Is it a single hash iteration of the password? Then it is more open to dictionary attacks. Passphrases longer than the hash’s input size would also be less secure if the second half of the password is hashed by itself. That is the same attack that worked against Microsoft LANMAN hashes, but that scheme was designed in the late 1980s, not 2007.
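For comparison, here is the kind of difference a reviewer would probe for, sketched with Python’s hashlib (purely illustrative; the drive’s actual derivation is undocumented). A single unsalted SHA-1 of the password is cheap to run through a dictionary, while a salted, iterated KDF such as PBKDF2 makes each guess far more expensive:

```python
# Illustration of the concern, not the drive's actual scheme.
import hashlib, os

password = b"correct horse battery staple"

# Worst case: one unsalted hash of the password -- fast dictionary attacks.
weak_key = hashlib.sha1(password).digest()

# What a review would push toward: salted, iterated derivation (PBKDF2 here).
salt = os.urandom(16)
strong_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
```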

4. No statements about tamper resistance, side channels, etc.

For all its faults, the smart card industry has been hardening chips against determined attackers for many years now. I have higher hopes for an ASIC design that originated in the satellite TV or EMV world, where real money is at stake, than for a complex system-on-chip design. They just have a different pedigree. Some day, SoC designs may have weathered their own dark night of the soul, but until then, they tend to be easy prey for Christopher Tarnovsky.

Finally, I popped open the case (glued, no epoxy) to analyze it. Inside are the flash chips and a single system-on-chip that contains the ARM CPU, RAM, USB, and flash controller. It would be interesting to examine the test points for JTAG, decap it, etc.

Knowing only what I’ve found so far, I would be uncomfortable recommending such a device to my clients. There are many signs that an independent review would yield a much clearer picture of the security architecture and could even lead to fixing various questionable design choices.

Next Baysec: May 7 at Pete’s Tavern

The next Baysec meeting is this Wednesday at Pete’s Tavern again. Come out and meet fellow security people from all over the Bay Area.  As always, this is not a sponsored meeting, there is no agenda or speakers, and no RSVP is needed.

See you on Wednesday, May 7th, 7-11 pm.

Pete’s Tavern
128 King St. (at 2nd)
San Francisco

History of TEMPEST and side channel attacks

A very interesting paper, “TEMPEST: A Signal Problem”, was recently declassified by the NSA. It gives a short history of TEMPEST and other side channel attacks, that is, ways to discover a secret cryptographic key by monitoring indirect channels like RF emissions, sound, or power. Besides the new information that TEMPEST was discovered by Bell Labs in 1943, there are a number of lessons to learn from this paper.

It’s interesting that RF emissions, spotted as perturbations on an oscilloscope, were the first side channel found. After that, the paper covers acoustic, power line, seismic, and flooding attacks. The details of the last two are uncertain since that part of the text is still classified. In modern terminology, the attacks were SPA (simple side-channel analysis), since the plaintext was read directly from distinct “fingerprints” that appeared on the scope. DPA (differential side-channel analysis) involves more complex acquisition and statistical correlation of many samples, something the paper gives no indication was known at the time.
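To illustrate the distinction with a toy model (mine, not the paper’s): in an SPA setting the leak is big enough to read off a single trace, whereas DPA recovers a leak far smaller than the noise by averaging many traces partitioned by a guessed key bit. A minimal sketch, assuming NumPy and made-up numbers:

```python
# Toy model of DPA-style averaging; all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_traces, leak = 2000, 0.05                  # leak amplitude well below the noise floor
key_bit = rng.integers(0, 2, n_traces)       # key-dependent intermediate value per trace
traces = rng.normal(0.0, 1.0, n_traces) + leak * key_bit

print(round(traces[0], 3))                   # "SPA" view: one trace, leak lost in noise
diff = traces[key_bit == 1].mean() - traces[key_bit == 0].mean()
print(round(diff, 3))                        # DPA difference of means: ~0.05 emerges
```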

The history of attempted countermeasures is a good case study. First, Bell Labs tried to produce a heavily shielded cipher machine. The army would not purchase it because it wasn’t a simple retrofit to the existing model. The final recommendation was that field personnel control a zone 200 feet in diameter around all cipher machines. No study showed this was the ideal range, only that most field offices could manage it without difficulty and it probably helped. This is very similar to countermeasures today, where cost and deployment effort are often more important than achieving the best security.

The categories of countermeasure they identified were:

  • Shielding/filtering: reducing the signal strength of emissions
  • Masking: adding random noise to the environment

If you think of a side channel attack as a communications problem, it’s obvious these are the two sides of the classic signal-to-noise ratio: shielding reduces the signal, masking raises the noise. The paper states that they had trouble shielding devices effectively and that masking wasn’t effective either. That fits with the environment today, where addressing side-channel leakage in a modern embedded system is extremely difficult.

As power consumption naturally decreases with shrinking hardware, things improve, but the sensitivity of monitoring equipment improves as well. Also, processors and FPGAs get faster every day, allowing for more complicated signal processing. As the paper concluded in 1972, side-channel attacks still tend to lead countermeasures in sophistication. If you’re concerned about such attacks, be sure to get your design reviewed.

Do fuzzed bugs look different?

During a conversation with Thomas Ptacek about bug-hunting techniques, I came up with an interesting question. Do patches for bugs found through fuzzing or other automated techniques look any different from those for bugs found manually? Of course, the bugs themselves will likely be similar, but will the patches also have some signature?

I have a hunch that bugs found via fuzzing show up at the perimeter of the code, whereas those found manually may be deeper down the call stack. Or they may usually be the same class of header-based integer overflow, fixed by similar range checks.
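Something like this hypothetical fix is what I have in mind (format, names, and field layout invented purely for illustration): a parser trusts a header-declared length, and the post-fuzzing patch is a bounds check bolted on at the perimeter.

```python
# Hypothetical example of a perimeter range-check fix; not from any real codebase.
import struct

def parse_record(buf: bytes) -> bytes:
    if len(buf) < 4:
        raise ValueError("truncated header")
    (declared_len,) = struct.unpack_from(">I", buf, 0)
    # The typical post-fuzzing patch: bound the attacker-controlled length
    # against the data actually received before using it.
    if declared_len > len(buf) - 4:
        raise ValueError("declared length exceeds payload")
    return buf[4:4 + declared_len]
```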

Can anyone who has more experience in this area enlighten me?  Halvar and Ero, got some neat bindiff stats to show?

Designing and Attacking DRM talk slides

I gave a talk this morning at RSA 2008 on “Designing and Attacking DRM” (pdf of slides). It was pretty wide-ranging, covering everything from how to design good DRM to the latest comparison of Blu-ray protection, AACS vs. BD+. Those interested in the latter should see the last few slides, especially with news that AACS MKBv7 (appearing on retail discs later this month) has already been broken by Slysoft.

The timeline slide (page 25) is an attempt to capture the history of whether discs were released into an unbroken environment or not. You want the line to be mostly green, with a few brief red segments here and there. AACS so far has had the inverse: a long red line with a brief segment of green (a couple of weeks out of the past year and a half).

I also introduced two variables for characterizing the long-term success of a DRM system, L and T. That is, how long each update survives before being hacked (L), and how frequently updates appear (T).

In the case of AACS, L has been extremely short (if you discard the initial 8-month adoption period). Out of three updates, two were broken before they were widely available and one was broken a few weeks after release.

Additionally, T has been extremely long for AACS. Throwing out the initial year it took to get the first MKB update (v3), they’ve been following an approximate schedule of one update every 6 months. That is much too long in a software player environment. I don’t know of any vendor of a popular win32 game who would expect it to remain uncracked for 6 months, for example.
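A back-of-the-envelope way to combine the two (my simplification, not something from the talk): if each update survives roughly L months and updates ship every T months, the fraction of time discs ship into an unbroken environment is about min(L/T, 1). A quick sketch with rough, assumed numbers:

```python
# Rough model only; the month figures below are assumptions for illustration.
def protected_fraction(survival_months: float, update_interval_months: float) -> float:
    """Approximate fraction of each release cycle that remains unbroken."""
    return min(survival_months / update_interval_months, 1.0)

print(protected_fraction(0.5, 6.0))   # AACS-like: weeks of survival per 6-month cycle -> ~0.08
print(protected_fraction(3.0, 3.0))   # hypothetical scheme with longer L, shorter T -> 1.0
```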

Of course, people in glass houses should not throw rocks. As someone who had a part in developing BD+, I am biased toward thinking a different approach than mere broadcast encryption is the only thing that has a chance of success in this rough world. The first BD+ discs were cracked in mid-March, and it remains to be seen how effective future updates will be. Unfortunately, I can’t comment on any details here. We’ll just have to watch and see how things work out the rest of this year.

2008 will prove whether a widely deployed scheme based on software protection is ultimately better than or merely equivalent to the AACS approach. I have a high degree of confidence it will survive in the long run, with both a longer L and a shorter T than the alternative.