At the RSA Conference, attendees were given a SanDisk Cruzer Enterprise flash drive. I decided to look up the user manual and see what I could find before opening up the part itself. The manual appears to be an attempt at describing its technical security aspects without giving away too much of the design. Unfortunately, it seems more targeted at buzzword compliance and leaves out some answers critical to determining how secure its encryption is.
There are some good things to see in the documentation. It uses AES and SHA-1 instead of keeping quiet about some proprietary algorithm. It appears to actually encrypt the data (instead of hiding it in a partition that can be made accessible with a few software commands). However, there are also a few troublesome items that are good examples of why a more in-depth review is needed.
1. Defines new acronyms not used by cryptographers
Figure 2 is titled “TDEA Electronic Code Book (TECB) Mode”. I had to scratch my head for a while. TDEA is another term for Triple DES, an older NIST encryption standard. But this documentation said it uses AES, which is the replacement for DES. Either the original design used DES and they moved to AES, or someone got their terms mixed up and confused a cipher name for a mode of operation. Either way, “TECB” is meaningless.
2. Uses ECB for bulk encryption
Assuming they do use AES-ECB, that’s nothing to be proud of. ECB encrypts one cipher-block-sized unit at a time, so the data is only “spread” within each 16-byte block. Patterns remain visible: every identical 16-byte plaintext block encrypts to the same ciphertext block.
All flash memory is accessed in pages much bigger than the block size of AES. Flash page sizes are typically 1024 bytes or more versus AES’s 16-byte block size. So there’s no reason to only encrypt in 16-byte units. Instead, a cipher mode like CBC, where all the blocks in the page are chained together, would be more secure. A good review would probably recommend that, along with careful analysis of how to generate the IV, supply integrity protection, etc.
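The difference is easy to demonstrate. The sketch below uses a toy keyed-hash stand-in for AES (stdlib hashlib only, and not a real invertible cipher) to show the structural point: ECB repeats ciphertext blocks wherever plaintext blocks repeat, while CBC chaining makes every ciphertext block distinct. Nothing here reflects SanDisk’s actual implementation; it is purely illustrative.

```python
import hashlib

BLOCK = 16  # AES block size in bytes

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES: a keyed hash truncated to one block.
    # Not invertible, so not a usable cipher -- just enough to
    # show how ECB and CBC treat repeated plaintext blocks.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, data: bytes) -> bytes:
    # ECB: each block encrypted independently.
    out = b""
    for i in range(0, len(data), BLOCK):
        out += toy_block_cipher(key, data[i:i + BLOCK])
    return out

def cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    # CBC: each plaintext block is XORed with the previous
    # ciphertext block before encryption.
    out, prev = b"", iv
    for i in range(0, len(data), BLOCK):
        block = bytes(a ^ b for a, b in zip(data[i:i + BLOCK], prev))
        prev = toy_block_cipher(key, block)
        out += prev
    return out

key = b"k" * 16
iv = b"\x00" * 16
page = b"A" * (16 * 4)  # four identical plaintext blocks, as in a flash page

ecb = ecb_encrypt(key, page)
cbc = cbc_encrypt(key, iv, page)

# ECB: identical plaintext blocks yield identical ciphertext blocks.
print(len({ecb[i:i + BLOCK] for i in range(0, len(ecb), BLOCK)}))  # 1
# CBC: chaining makes every ciphertext block distinct.
print(len({cbc[i:i + BLOCK] for i in range(0, len(cbc), BLOCK)}))  # 4
```

The single distinct ECB block is exactly the pattern leak described above: an attacker looking at the raw flash can see which pages (and which regions within a page) contain repeated data.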
3. Key management not defined
The device “implements a SHA-1 hash function as part of access control and creation of a symmetric encryption key”. It also “implements a hardware Random Number Generator”.
Neither of these statements is sufficient to understand how the bulk encryption key is derived. Is it a single hash iteration of the password? Then it is more open to dictionary attacks. Passphrases longer than the hash input size could also be less secure if the second half of the password is hashed by itself. This is the same attack that worked against Microsoft LANMAN hashes, but that scheme was designed in the late 1980s, not 2007.
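Since the manual never says how the key is derived, here is a sketch of the contrast, assuming the worst-case reading (one unsalted SHA-1 of the password) versus a standard salted, iterated derivation. The PBKDF2 call is the Python stdlib’s `hashlib.pbkdf2_hmac`; the password and parameters are made up for illustration.

```python
import hashlib
import os

password = b"correct horse"

# Worst-case reading of the manual: a single unsalted SHA-1 of the
# password, truncated to a 128-bit key.  An attacker can test
# dictionary words at the full speed of SHA-1.
naive_key = hashlib.sha1(password).digest()[:16]

# A standard alternative: salted, iterated PBKDF2.  The random salt
# defeats precomputed tables, and the iteration count multiplies the
# cost of every guess.  (Iteration count chosen arbitrarily here.)
salt = os.urandom(16)
slow_key = hashlib.pbkdf2_hmac("sha1", password, salt, 100_000, dklen=16)

print(len(naive_key), len(slow_key))  # 16 16
```

A review would want to know which of these (or something else entirely) the device actually does, and how the hardware RNG output is mixed in, if at all.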
4. No statements about tamper resistance, side channels, etc.
For all its faults, the smart card industry has been hardening chips against determined attackers for many years now. I have higher hopes for an ASIC design that originated in the satellite TV or EMV world, where real money is at stake, than for a complex system-on-chip design. They just have a different pedigree. Some day, SoC designs may have weathered their own dark night of the soul, but until then, they tend to be easy prey for Christopher Tarnovsky.
Finally, I popped open the case (glued, no epoxy) to analyze it. Inside are the flash chips and a single system-on-chip that contains the ARM CPU, RAM, USB, and flash controller. It would be interesting to examine the test points for JTAG, decap it, etc.
Knowing only what I’ve found so far, I would be uncomfortable recommending such a device to my clients. There are many signs that an independent review would yield a report better suited to understanding the security architecture and even lead to fixing various questionable design choices.
Oftentimes, marketing-type white papers are not written by people in the know; however, that’s no excuse for not performing a proper technical review. Looking at their FIPS 140 Security Policy, it does appear they are using AES in ECB mode, which is rather worrying, and I can’t think of any compelling reason why they would choose to do this. I also can’t imagine they are implementing any countermeasures against side-channel attacks; these types of devices almost never do.
All in all, the whole thing smells funny…
Shawn, I’ve noticed a lot of people don’t know the difference between a FIPS algorithm cert and a FIPS 140 review. The algorithm cert just tests some known values (“yep, that’s AES”). A FIPS 140 review used to mean someone actually questioned how good the implementation was.
One deficiency in the FIPS 140 program is a lack of a meta-score — how good is the reviewing lab at finding problems? Instead, all labs and certs are considered equivalent from a govt purchasing perspective, leading to a race to the bottom. The most profitable lab is the one that charges the least while doing the least work but just enough to still keep their NVLAP certification.
I’d like to see FIPS evaluation levels tied to the quality of issues a lab finds. For example, a lab that has never found a software flaw can only issue algorithm certs. To hand out the highest levels, including ratings for tamper resistance, a lab would have to demonstrate exploiting power analysis attacks on a sample device and probing attacks on boards.
I thought FIPS-140 meant it gets closed up tight with dark epoxy :-P
Hi Chris. I think the FIPS-140 program has gone down in quality over the years. This was inevitable given the economic model: the people paying for the review are the vendors, whose goal is a passing certificate with the fewest findings at the lowest cost.
It would be better if the FIPS-140 program were paid for by the ultimate customers of those products. If there were some way to pool purchaser resources to pay for reviews, the reports would be more useful. Paying customers would also get more assistance from vendors during the review, whereas today vendors tend to hide information (“just do a black-box review”).
The best model is always one where the person who needs the security pays for it, with the fewest middle-men.