rdist

January 11, 2010

Smart meter crypto flaw worse than thought

Filed under: Crypto,Embedded,Hacking,Hardware,RFID,Security — Nate Lawson @ 1:08 pm

Travis Goodspeed has continued finding flaws in TI microcontrollers, branching out from the MSP430 to ZigBee radio chipsets. A few days ago, he posted details of a flaw in the random number generator of these chips. Why is this important? Because the MSP430 and ZigBee are found in many wireless sensor systems, including most smart meters.

Travis describes two flaws: the PRNG is a 16-bit LFSR, and it is not seeded with very much entropy. However, the datasheet recommends this random number generator be used to create cryptographic keys. It’s extremely scary to find such a poor understanding of crypto in a device that, if compromised, could forge billing records or turn off the power to your house.

The first flaw is that the PRNG is not cryptographically secure. Its entropy pool is extremely small (16 bits), which can be recovered by a brute-force search in a fraction of a second, even if the output were generated by a secure PRNG such as Yarrow. Also, the PRNG is never re-seeded, which could have limited the damage if implemented properly.
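Here’s a minimal sketch of how cheap that search is. The LFSR below is a maximal-length 16-bit register from the literature, clocked eight times per output byte; it’s a stand-in for the chip’s PRNG, not TI’s actual implementation.

```python
def step(s):
    # One clock of a maximal-length 16-bit Fibonacci LFSR
    # (taps for x^16 + x^14 + x^13 + x^11 + 1).
    bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1
    return (s >> 1) | (bit << 15)

def rand_bytes(seed, n):
    # Toy PRNG: clock the LFSR 8 times per output byte.
    s, out = seed, bytearray()
    for _ in range(n):
        for _ in range(8):
            s = step(s)
        out.append(s & 0xFF)
    return bytes(out)

observed = rand_bytes(0xBEEF, 4)   # four bytes an attacker sniffed off the air
matches = [s for s in range(1, 1 << 16) if rand_bytes(s, 4) == observed]
print([hex(s) for s in matches])   # ['0xbeef'] -- the whole search takes about a second
```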

Even if the entropy pool were much larger, the design would still be vulnerable because an LFSR is not a cryptographically-secure PRNG. An attacker who has seen some subset of the output can recover the LFSR taps (even if they’re secret) and then predict every future output.
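To make that concrete with the toy register above: an LFSR’s output is a linear function of its state, so observed bits translate directly into the internal state. (If the taps were secret, the Berlekamp-Massey algorithm recovers the feedback polynomial too, from roughly twice as many output bits.)

```python
def raw_bits(seed, n):
    # Emit the bit about to shift out of the toy LFSR defined earlier.
    s, out = seed, []
    for _ in range(n):
        out.append(s & 1)
        s = step(s)
    return out

# For this register, 16 consecutive raw output bits are literally the
# register contents shifting out, LSB first:
bits = raw_bits(0xBEEF, 16)
state = sum(b << i for i, b in enumerate(bits))
print(hex(state))   # 0xbeef -- every future bit now follows for free
```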

The second problem is that the PRNG is seeded from a random source that has very little entropy. Travis produced a frequency count graph for the range of values returned by the random source, ADCTSTL, a radio register. As you can see from that graph, a few 8-bit values are returned many times (clustered around 0 and 100) and some are not returned at all. This bias could be exploited even if the source were feeding a cryptographically-secure PRNG.
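A quick way to quantify that kind of bias is the Shannon entropy of the observed frequency counts. The sample values below are hypothetical, just to show the calculation; real reads of ADCTSTL would go in their place.

```python
import math
from collections import Counter

# Hypothetical reads standing in for repeated samples of ADCTSTL:
samples = [0, 1, 0, 100, 101, 0, 99, 1, 100, 0, 2, 100]
counts = Counter(samples)
total = len(samples)
H = -sum(c / total * math.log2(c / total) for c in counts.values())
print(f"{H:.2f} bits per sample (a uniform byte would give 8.00)")
```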

These problems are each enough to make the system trivially vulnerable to a simple brute-force attack, as Travis points out. However, it gets worse because the insecure PRNG is used with public-key crypto. The Z-Stack library includes ECC code written by Certicom. I have not reviewed that code, but it seems reasonable to use a library from a company that employs cryptographers. But the ECC code makes the critical mistake of leaving implementation of primitives such as the PRNG up to the developer. Other libraries (such as OpenSSL, Mozilla’s NSS, and Microsoft’s Crypto API) all provide their own PRNG, even if seeding it has to be left up to the developer. That at least reduces the risk of PRNG flaws.

ECC, like other public key crypto, falls on its face when the design spec is violated. In particular, ECDSA keys are completely exposed if even a few bits of the random nonce are predictable. Even if the keys were securely generated during manufacturing, a predictable PRNG completely exposes them in the field. Since this kind of attack exploits poor entropy, it would still work even if TI replaced their PRNG with one that is cryptographically secure.
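To see why, work the ECDSA signing equation backwards. The sketch below skips the curve arithmetic entirely and uses made-up toy numbers (the modulus, key, hash, and r value are all hypothetical), but the algebra is the genuine signing equation.

```python
# Toy values only: n stands in for the curve's group order, r for the
# x-coordinate of k*G reduced mod n.
n = 2**61 - 1                        # a prime, standing in for the group order
d = 123456789                        # the victim's private key
h = 987654321                        # hash of the signed message
k = 0xACE1                           # nonce from a 16-bit PRNG: guessable
r = pow(7, k, n)                     # hypothetical stand-in for (k*G).x mod n
s = pow(k, -1, n) * (h + d * r) % n  # the real equation: s = k^-1 (h + d*r) mod n

# The attacker sees (r, s, h) and tries each possible nonce k:
d_recovered = (s * k - h) * pow(r, -1, n) % n
print(d_recovered == d)              # True: the private key falls out
```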

Given that these chips are used in critical infrastructure such as smart meters and this attack can be mounted remotely, it is important that the flaw be fixed carefully. The fix will be difficult since it requires hardware changes to the entropy source, and there is already an unknown number of devices in the field. Once again, crypto proves fragile and thorough review is vital.

December 28, 2009

Interesting talks at 26c3

Filed under: Crypto,Embedded,Hacking,Reverse engineering,Security — Nate Lawson @ 1:00 am

I hope to attend a CCC event some day. While there are many great talks in the 26c3 schedule, here are some talks that look particularly interesting.

Others that may be interesting but haven’t posted slides or papers yet:

Hope everyone at 26c3 has a great time. Best wishes for a safe and secure 2010.

October 23, 2009

Just another day at the office

Filed under: Embedded,Hacking,Hardware,RFID,Security,Software protection — Nate Lawson @ 6:00 am

The following does not take place over a 24-hour period. But any one of these situations is a good example of a typical day at Root Labs.

Attack Windows device driver protection scheme

Certain drivers in Windows must implement software protection (PMP, the Protected Media Path) in order to prevent audio/video ripping attacks. Since drivers run in ring 0, they can do a lot more than just the standard SEH tricks. It’s 1992 all over again as we dig into the IDT and attempt to find out how they’re protecting their decryption process.

Whitebox cryptography is a method of combining a key and a cipher implementation into a single keyed cipher. It can only perform the one operation it was initialized with, using a hard-coded key. The entire operation becomes a series of table lookups, with the intermediate values obscured by randomness merged into the tables. But it’s not impossible to defeat.
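Here’s a toy flavor of the idea, nothing like a production whitebox design: fold the key into a lookup table and hide the intermediate value behind a random encoding, so neither table stores the key in the clear.

```python
import random

random.seed(1)
S = list(range(256)); random.shuffle(S)    # stand-in for a cipher's S-box
f = list(range(256)); random.shuffle(f)    # secret internal encoding
f_inv = [0] * 256
for x, y in enumerate(f):
    f_inv[y] = x

k = 0x5A                                   # the key being hidden
T1 = [f[x ^ k] for x in range(256)]        # key mixing, output encoded by f
T2 = [S[f_inv[y]] for y in range(256)]     # strip f, then apply the S-box

x = 0xC3
assert T2[T1[x]] == S[x ^ k]               # the tables compute the keyed cipher
# Neither table contains k directly, but the pair is still analyzable.
```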

To get at the cipher though, we’re going to need a way to bypass some of these anti-debugging traps. Since ring 0 code has direct access to all the CPU’s registers (including the debug registers, MSRs, and page tables), it is free to wreak havoc with our attempts to circumvent it. A common approach is for the protection to disable any breakpoints by overwriting the debug registers. A less common method is to use the debug registers as a local procedure call gate, so that an attacker who overwrites them breaks the protection’s main loop.

We write our own hook, patch it into the int 1 handler at a point that isn’t integrity-checked, and set the GD bit in DR7. This causes our hook to be called whenever anyone reads or writes the debug registers. The trap springs! There’s the heart of the protection code right there. Now to figure out what it’s doing.

Design self-reinforcing integrity checks

Preventing attackers from patching your code is very difficult. One approach is to insert small hash functions that verify a region of code or data has not been changed. Of course, there’s the obvious problem of “who watches the watcher?” Since it’s easy for attackers to NOP out the checksum routine, or, if you use a linear function like CRC, to modify their patch to compensate, our mission today is to design and implement a more robust approach.

First, we analyze the general problem of mutually-reinforcing checks. The code and data for each check both need to be covered. But if two checks exactly mirrored each other, you’d have a chicken-and-egg problem in building them, since a change in one would require a change in the other, and so on. It’s like standing between two mirrors: infinite recursion.

A data structure that describes the best we can do is a directed acyclic graph. Since there are no cycles, the checks (and their checksums) can be generated in reverse order. But the multiple paths through the graph provide overlapping coverage, so we can be certain that a single patch cannot bypass the protection. Of course, the roots of these paths need to be hidden throughout the code; putting them all in one big loop run by a single watchman thread would be a mistake. Now we have to come up with a way of automatically generating, randomizing, and inserting the checks into the code while hiding the root nodes in various places.
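Here’s a minimal sketch of that generation order on a toy “binary”, with regions modeled as byte strings and hashes standing in for the inline check routines. Real checks would be small randomized code sequences, not SHA-256 calls.

```python
import hashlib

def digest(parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()[:16]

# Toy "binary": named regions of code plus slots for stored checksums.
image = {
    "func_a": b"...machine code A...",
    "func_b": b"...machine code B...",
    "check1": b"\x00" * 16,            # slot, filled in below
    "check2": b"\x00" * 16,
}

# Coverage DAG (acyclic): check2 covers check1, giving overlapping paths.
coverage = [
    ("check1", ["func_a", "func_b"]),
    ("check2", ["func_b", "check1"]),
]

# Generate in dependency order: a check's slot is final before any check
# that covers it is hashed.
for name, covered in coverage:
    image[name] = digest(image[r] for r in covered)

# At runtime, each check recomputes its digest and compares to its slot.
for name, covered in coverage:
    assert image[name] == digest(image[r] for r in covered)
print("all checks pass; a patch to func_b now breaks check1 and check2")
```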

Reverse-engineer RFID transponder

What exactly is in those FasTrak transponders? Do they securely identify themselves via cryptographic challenge/response? Do they preserve your privacy? What about this rumor that they are tracked by antennas all over Bay Area freeways?

Not being an existing user, we bought one at a local supermarket and took it apart. Inside was a microcontroller, some passive electronics, and not a whole lot else. An older one turned out to still have JTAG enabled, so it was simple to dump its firmware. The newer one did have the lock bit set, and Flylogic Engineering was kind enough to decap it and zap it for us. The firmware was identical. Does IDA have an MSP430 plugin? Well, there was one but it was on Geocities and then vaporized. Time to dig around a bit. Find the source code and hack it to work with the newer SDK.

Then it’s off to drop the code into the disassembler and analyze it for switch statements (usually protocol handlers). Trace everything that starts from the IO pins that map to the receive side of the antenna, then manually walk up the call stack to see where it goes. Hey, there’s an over-the-air update function in here: just send the magic handshake, and the next packet gets written to flash. There are some checks that try to keep the writes within the area that stores the ID. But there are multiple paths to this function, and one of them is obviously untested because it pushes the wrong size argument on the stack (a 2-byte instead of a 1-byte length argument). Time to notify the agency that 1 million transponders are at risk of a permanent DoS at best and code execution at worst.

Coda

If you’ve made it this far, you might be interested to know we’re hiring. We tackle difficult security problems in environments with clever, persistent adversaries. We’re just as likely to design a system as attack it (and often do both for the same customer). If this sounds like your kind of job, please see here for more details.

December 4, 2008

More dangers of amateur crypto

Filed under: Crypto,Embedded,Security — Nate Lawson @ 1:48 pm

The Good Math blog continues to inadvertently provide examples of how subtle mistakes in cryptography are often fatal. I previously wrote about how slightly incorrect crypto examples on a reputable blog can lead engineers astray. The latest post, about RSA, is no exception, but I’ll have to write about it here since my comment was deleted. Oscar’s comments on that post are a good criticism.

The most important error in Mark’s description of RSA is that his example “encrypts” with the private key D instead of the public key E: the result of the RSA private key operation with D is labeled the ciphertext, and the decryption process is then described using the public key E.

At first glance, this seems like an awkward description but still sound, right? If you wanted to exchange RSA-encrypted messages between two systems, couldn’t you just generate an RSA key pair and keep both keys secret? The surprising result is that this is completely insecure: you cannot keep an RSA public key secret from someone who sees messages processed with the private key, even if (E, N) is never revealed.

I previously wrote a series of posts about a system that made this exact mistake. The manufacturer had burned an RSA public key (E, N) into a chip in order to verify a signature on code updates. This is perfectly fine, assuming the implementation was correct. However, they additionally wanted to use the same public key parameters to decrypt the message, keeping E and N secret. In the first article, I described this system and in the second, I discussed how to attack it given that the attacker has seen only two encrypted/signed updates. In summary, every plaintext/ciphertext pair “encrypted” with (D, N) satisfies c^E − m ≡ 0 (mod N), so each pair hands the attacker a multiple of N, and a GCD of two such values (guessing the usual small values of E) quickly recovers it.
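Here’s a minimal sketch with toy numbers (a 12-bit modulus, absurdly small, for illustration only) showing how fast N falls out.

```python
from math import gcd

# Toy RSA key -- far too small to be real.
p, q = 61, 53
N = p * q                            # 3233; this is the secret we'll recover
E = 17
D = pow(E, -1, (p - 1) * (q - 1))

# Two messages "encrypted" with the private operation, as the system did:
m1, m2 = 1234, 2001
c1, c2 = pow(m1, D, N), pow(m2, D, N)

# The attacker knows the (m, c) pairs and guesses the small exponent E.
# Since c^E = m (mod N), N divides c^E - m for every pair:
print(gcd(pow(c1, E) - m1, pow(c2, E) - m2))   # 3233 (N, up to a small factor)
```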

Subtle details matter, especially in public key crypto. The two keys in an RSA pair are indeed asymmetric and have very different properties; they cannot be substituted for each other. You cannot securely encrypt a message using the private key (D, N), and any system that tries is completely insecure.

October 29, 2008

DIY USB to TTL serial adapter

Filed under: Embedded,Hardware — Nate Lawson @ 8:17 pm

When people ask about my day job, I tell them it is “designing/reviewing embedded security and cryptography”.  I haven’t written much on this blog about my embedded work, mostly due to NDAs.  However, I recently have been working on some hobby projects in my spare time that are a good topic for some articles.

When interfacing with an external board over USB, it’s often easiest to use a USB-serial cable like the FTDI 232R.  This cable provides a virtual serial port interface to the PC host and the normal serial signals at the target.  You can then interact with your target using an ordinary terminal program.  One key difference is that the signals are LVTTL (0 – 3.3V) or TTL (0 – 5V), while ordinary RS-232 serial swings roughly -10V to +10V.  So you can’t just use an ordinary USB-serial adapter of the kind used to connect an old PDA to your computer.

On a recent project, I needed to debug some code running in a microcontroller.  Since most microcontrollers these days have a built-in UART, debug prints can be added by configuring the clock divider for the given baud rate and monitoring a single pin (the microcontroller’s TX pin, which the host receives).  With AVR libc, it’s as simple as providing a “send byte” routine to a local copy of stdout, then calling printf() as usual.  As I dug through my parts bin, I realized I didn’t have one of these FTDI cables and it was too late to buy anything.

My first thought was to use one of my scope probes to capture the serial I/O and decode it in software.  You can dump an oscilloscope’s trace buffer to a file, then walk through the samples for your given channel and recover the original data.  I decided this would take too long though, so I got a second opinion from bunnie.  I was hoping to scavenge one of the many ordinary USB-serial adapters I had lying around.  He told me that most of them are built with a separate level converter IC, so it was likely I could mod one to get what I wanted.
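For the curious, here’s roughly what that software decode would look like: a minimal 8N1 decoder over an already-thresholded capture, assuming a clean signal and a known baud rate.

```python
def decode_uart(samples, sample_rate, baud):
    # Decode 8N1 frames from a single already-thresholded channel (0/1).
    spb = sample_rate / baud                   # samples per bit
    out, i = [], 0
    while i < len(samples) - int(10 * spb):
        if samples[i] == 1:                    # line idles high
            i += 1
            continue
        # Falling edge = start bit; sample each data bit at its center.
        bits = [samples[int(i + (n + 1.5) * spb)] for n in range(8)]
        out.append(sum(b << n for n, b in enumerate(bits)))   # LSB first
        i += int(10 * spb)                     # start + 8 data + stop
    return bytes(out)

# One synthetic frame of 'A' (0x41) at 10 samples per bit:
frame = [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]        # start, 0x41 LSB-first, stop
samples = [1] * 20 + [b for b in frame for _ in range(10)] + [1] * 20
print(decode_uart(samples, sample_rate=96000, baud=9600))     # b'A'
```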

That was the motivation I needed to tear into an adapter.  It was covered with injection-molded plastic, so I had to use a knife to slice the edges and peel it back.  Inside, I found an FTDI 232BL, a Sipe SP213ECA, and an Atmel 93C46.  I think the Atmel is a serial EEPROM (8-pin, SOIC). If you search for the “Sipe” part, you will see a bunch of Chinese “part number squatters”.  That’s usually the tell-tale sign of a cloned part that isn’t sold via retail, only to OEMs.  I think the company being copied is “Sipex”, so I used that datasheet. The FTDI chip is quite common and does all the logic for the USB and serial protocols.

The Sipex chip is a level converter, similar to the MAX232.  It uses a series of capacitors in a configuration called a “charge pump” to step the voltage up or down to +/-10V, while requiring only a 3.3V source of power.

My plan was to bypass the level shifter and connect my TX, RX, and GND pins directly from my microcontroller to the FTDI chip since both run at 3.3V.  However, I needed to disconnect the RX stage of the level shifter from the FTDI; otherwise there could be contention if my microcontroller drove the line low while the level shifter drove it high.  I looked at the Sipex datasheet and did some investigation with my multimeter.  It turns out that the RX signal comes in from the serial port on pin 23 and goes out at the reduced TTL level on pin 22.

I could have cut pin 22 with a knife but that is hard to reverse if I wanted to put it back.  Instead, I used my soldering iron to release the pin and gently pry up on it.  This is called lifting a pin and is often done by techs doing rework.  I find it works best to nudge the pin sidewise with tweezers while heating it, then lift it.  You can lift a pin just a little if you only want to disconnect it, or you can lift it and bend it back over the package like I did if you plan to solder wire onto the pin and the pad.  I left it this way in case I wanted to add a switch in the future, allowing the same device to be used as a TTL or ordinary serial port.  (The ugly junk around the pins and under the chip is leftover plastic from the cover, a side-effect of the injection molding process.)

Next, I soldered ground, RX, and TX wires from the FTDI chip to a pin header.  I find it works best to put a little extra solder on the pins/pads and then apply heat above the wire until it melts into the solder on top of the pin.  Below is a picture of the header and the three wires.  The header isn’t connected to the serial port below it; it’s actually floating above it.

Finally, I used a glob of hot glue to stabilize and insulate the header from the other connections on the chip as below.

I tested for shorts between pins and between Vcc and ground (very important).  It seemed fine so I plugged it in.  I needed a cable to match the pinouts of my microcontroller and the USB-serial adapter, so I took an old ribbon cable and swapped some of the wires around to match.  I find ribbon cables are really useful for building custom cables quickly since the tabs on the connectors come apart easily with a screwdriver and press back together once you’re done.  Below is the final result: about 45 minutes of work.

By the way, if you need an overview of the basics, here is an excellent intro to surface-mount soldering.  I especially agree with its complaint against the “flood and wick” style of SMT soldering.  I hope this article encourages you to find creative solutions to your interfacing problems.

September 7, 2008

Xbox 360 security talk

Filed under: Crypto,Embedded,Hacking,Hardware,Security,Software protection — Nate Lawson @ 2:56 pm

This recent video of Michael Steil and Felix Domke talking about the Xbox 360 security scheme is the best overview I’ve seen so far.

Michael previously gave a nice talk summarizing the security flaws in the original Xbox.

The CPU itself supports hashing and/or encryption of physical pages, based on flags set in the upper word of the 64-bit virtual address.  They talk about how Felix was able to leapfrog off shader-based DMA to write to an unencrypted register save state structure, jumping through a syscall gate (sorta like return-to-libc) that was improperly validated by the hypervisor.  The end result was arbitrary code execution in the context of the hypervisor.  Quite impressive.

I’ve always wondered how long it would take for security features like encrypted RAM, long present in smart cards, to “trickle up” to more complex platforms like game consoles.  While the Xbox 360’s security is much better than the original Xbox’s, it seems like the big-systems people are reinventing techniques already tested and worked out in the microcontroller world.

For example, the 360 was vulnerable to a timing attack, where an internal secret key could be guessed by timing how long the console took to validate the submitter’s HMAC.  I’d be extremely surprised if any mainstream smart card were vulnerable to such a well-known legacy bug.
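Here’s the bug class in miniature (the key and message below are made up; Python’s hmac module supplies the standard fix):

```python
import hashlib
import hmac

key = b"console-internal-key"        # hypothetical stand-in for the 360's key
mac = hmac.new(key, b"boot params", hashlib.sha1).digest()

# The bug class: an early-exit compare returns sooner the earlier a byte
# mismatches, so response time leaks how many leading bytes of a guess are
# right, letting an attacker recover the MAC one byte at a time.
def leaky_compare(a, b):
    for x, y in zip(a, b):
        if x != y:
            return False
    return len(a) == len(b)

# The fix: a comparison whose runtime is independent of the match length.
guess = bytes(len(mac))
print(leaky_compare(mac, guess), hmac.compare_digest(mac, guess))
```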

I have yet to see anyone publish information about applying power or RF-based side channel analysis to a game console, despite smart cards adding countermeasures to these almost 10 years ago.  Even earlier attacks on encrypted RAM have still not been attempted on modern systems.

These attacks probably haven’t been needed yet since software bugs were still present. However, the push by game consoles and cellphone manufacturers to increase their resistance to software attacks means it won’t be long before side-channel resistance becomes a must-have feature.  It will be interesting to see how long it takes big-system manufacturers to add countermeasures and whether they’ll choose to learn from the hard lessons we have seen in the smart card world.

