root labs rdist

October 29, 2008

DIY USB to TTL serial adapter

Filed under: Embedded,Hardware — Nate Lawson @ 8:17 pm

When people ask about my day job, I tell them it is “designing/reviewing embedded security and cryptography”.  I haven’t written much on this blog about my embedded work, mostly due to NDAs.  However, I recently have been working on some hobby projects in my spare time that are a good topic for some articles.

When interfacing with an external board over USB, it’s often easiest to use a USB-serial cable like the FTDI 232R.  This cable provides a virtual serial port interface to the PC host and the normal serial signals at the target.  You can then interact with your target using an ordinary terminal program.  One key difference is that the signals are LVTTL (0 – 3.3V) or TTL (0 – 5V), while standard RS-232 swings from about -10V to +10V.  So you can’t just use an ordinary USB-serial adapter, the kind used to connect an old PDA to your computer.

On a recent project, I needed to debug some code running in a microcontroller.  Since most microcontrollers these days have a built-in UART, debug prints can be added by configuring the clock divider for the given baud rate and monitoring a single pin (the microcontroller’s TX pin).  With AVR libc, it’s as simple as providing a “send byte” routine to a local copy of stdout, then calling printf() as usual.  As I dug through my parts bin, I realized I didn’t have one of these FTDI cables and it was too late to buy anything.

My first thought was to use one of my scope probes to capture the serial I/O and decode it in software.  You can dump an oscilloscope’s trace buffer to a file, then walk through the samples for your given channel and recover the original data.  I decided this would take too long though, so I got a second opinion from bunnie.  I was hoping to scavenge one of the many ordinary USB-serial adapters I had lying around.  He told me that most of them are built with a separate level converter IC, so it was likely I could mod one to get what I wanted.
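For the curious, recovering bytes from a scope dump like that is straightforward.  Here is a minimal sketch in Python, under some assumptions of mine (not tied to any scope vendor’s file format): the trace is already reduced to 0/1 logic samples, the line idles high, and framing is standard 8N1.

```python
def decode_uart(samples, samples_per_bit):
    """Decode 8N1 UART frames from an oversampled logic trace.

    samples: sequence of 0/1 values, idle-high.
    samples_per_bit: sample_rate / baud_rate.
    """
    out = []
    i = 0
    n = len(samples)
    while i < n - 1:
        # Look for a falling edge: the leading edge of a start bit.
        if not (samples[i] == 1 and samples[i + 1] == 0):
            i += 1
            continue
        start = i + 1
        byte = 0
        # Sample each data bit at its center, skipping the start bit.
        for bit in range(8):
            pos = start + int((bit + 1.5) * samples_per_bit)
            if pos >= n:
                return out
            byte |= samples[pos] << bit   # UART sends LSB first
        # Accept the byte only if the stop bit reads high.
        stop = start + int((8 + 1.5) * samples_per_bit)
        if stop < n and samples[stop] == 1:
            out.append(byte)
        i = start + int(10 * samples_per_bit)  # skip past the whole frame
    return out
```

The mid-bit sampling is the same trick a hardware UART uses; it tolerates a little jitter in the captured edges.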

That was the motivation I needed to tear into an adapter.  It was covered with injection-molded plastic, so I had to use a knife to slice the edges and peel it back.  Inside, I found it had a FTDI 232BL, Sipe SP213ECA, and Atmel 93C46.  I think the Atmel is a serial EEPROM (8-pin, SOIC). If you search for the “Sipe” part, you will see a bunch of Chinese “part number squatters”.  That’s usually the tell-tale sign that a part isn’t sold through retail channels and is an OEM-only clone.  The copied manufacturer appears to be “Sipex”, so I used that datasheet. The FTDI chip is quite common and does all the logic for the USB and serial protocols.

The Sipex chip is a level converter, similar to the MAX232.  It uses a series of capacitors in a configuration called a “charge pump” to step the voltage up or down to +/-10V, while requiring only a 3.3V source of power.

My plan was to bypass the level shifter and connect my TX, RX, and GND pins directly from my microcontroller to the FTDI chip since both run at 3.3V.  However, I needed to disconnect the RX stage of the level shifter from the FTDI, otherwise there could be contention if my microcontroller was driving logic low and the FTDI was driving high.  I looked at the Sipex datasheet and did some investigation with my multimeter.  It turns out that the RX signal comes in from the serial port on pin 23 and goes out at the reduced TTL level on pin 22.

I could have cut pin 22 with a knife but that is hard to reverse if I wanted to put it back.  Instead, I used my soldering iron to release the pin and gently pry up on it.  This is called lifting a pin and is often done by techs doing rework.  I find it works best to nudge the pin sideways with tweezers while heating it, then lift it.  You can lift a pin just a little if you only want to disconnect it, or you can lift it and bend it back over the package like I did if you plan to solder wire onto the pin and the pad.  I left it this way in case I wanted to add a switch in the future, allowing the same device to be used as a TTL or ordinary serial port.  (The ugly junk around the pins and under the chip is leftover plastic from the cover, a side-effect of the injection molding process.)

Next, I soldered ground, RX, and TX wires from the FTDI chip to a pin header.  I find it works best to put a little extra solder on the pins/pads and then apply heat above the wire until it melts into the solder on top of the pin.  Below is a picture of the header and the three wires.  The header isn’t connected to the serial port below, it’s actually floating above it.

Finally, I used a glob of hot glue to stabilize and insulate the header from the other connections on the chip as below.

I tested for shorts between pins and between Vcc and ground (very important).  It seemed fine so I plugged it in.  I needed a cable to match the pinouts of my microcontroller and the USB-serial adapter, so I took an old ribbon cable and swapped some of the wires around to match.  I find ribbon cables are really useful for building custom cables quickly since the tabs on the connectors come apart easily with a screwdriver and press back together once you’re done.  Below is the final result, about 45 minutes of work.

By the way, if you need an overview of the basics, here is an excellent intro to surface-mount soldering.  I especially agree with its complaint against the “flood and wick” style of SMT soldering.  I hope this article encourages you to find creative solutions to your interfacing problems.

October 24, 2008

Quantum cryptography is useless

Filed under: Crypto,Hardware,Security — Nate Lawson @ 1:08 pm

Bruce Schneier is right on the money with this article criticizing quantum crypto.  Quantum cryptography (not quantum computing) is one of those concepts that appeals to the public and the more theoretically-minded, but is next to useless for providing actual security.  Here are some less-known problems with it that I haven’t seen discussed much elsewhere.

Quantum cryptography is focused on one narrow aspect of security: key establishment.  To put it simply, this involves exchanging photons with a particular polarization.  The transmitter randomly sets its polarizer to one of two angles and sends a photon encoding a random bit.  The receiver likewise tunes its detector to a randomly chosen polarization.  The measured value at the receiver depends on both the sending and receiving polarizer states, as well as the bit transmitted.  The sender and receiver repeat this process many times, then compare their polarizer choices over an ordinary channel.  Measurements made with mismatched settings are discarded as errors; the rest are usable as a key, shared between both but secret.

To receive a photon, an attacker also has to choose a polarization.  Measuring the photon collapses its state into the attacker’s chosen polarization, so the attacker cannot learn the bits without desynchronizing the sender and receiver.  If the error rate is too high, an attacker is present (or the fiber is bad).
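The sifting process and the eavesdropper-induced errors can be illustrated with a toy simulation.  This is a deliberately simplified model of my own (two polarization settings treated as bases, with a measurement in the wrong basis yielding a random bit); the function names are invented for illustration.

```python
import random

def bb84_round(eavesdrop, rng):
    """One photon of a simplified quantum key exchange.

    Returns (kept, error): kept is True when sender and receiver chose
    the same basis (the bit enters the sifted key); error is True when
    the receiver's bit disagrees with the sender's despite matching bases.
    """
    bit = rng.randrange(2)
    send_basis = rng.randrange(2)
    value, basis = bit, send_basis

    if eavesdrop:
        # Intercept-resend: the attacker measures in a random basis and
        # retransmits what she saw, destroying the original state.
        eve_basis = rng.randrange(2)
        if eve_basis != basis:
            value = rng.randrange(2)   # wrong basis: random outcome
        basis = eve_basis

    recv_basis = rng.randrange(2)
    measured = value if recv_basis == basis else rng.randrange(2)

    kept = recv_basis == send_basis
    return kept, kept and (measured != bit)

def error_rate(eavesdrop, trials=20000, seed=1):
    rng = random.Random(seed)
    kept = errors = 0
    for _ in range(trials):
        k, e = bb84_round(eavesdrop, rng)
        kept += k
        errors += e
    return errors / kept
```

Without an eavesdropper the sifted key has no errors; an intercept-resend attacker chooses the wrong basis half the time and then causes a wrong bit half of those times, pushing the error rate to about 25%, which is what alerts the legitimate parties.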

There are a number of well-known issues with quantum crypto.  This key exchange process requires, at both the sender and receiver, a reliable source of random bits that an attacker cannot influence.  Also, there needs to be some kind of pre-existing shared secret between both parties to authenticate themselves, and the only way to establish one is through classical crypto (e.g., public key).  Otherwise, the attacker could just splice a receiver and transmitter into the cable and perform a standard MITM attack.  Finally, the actual communication between the two parties is encrypted with a traditional cipher like AES, using the shared key.

I think this alone is enough to undermine the case for quantum crypto.  If you distrust classical crypto enough to spend lots of money and effort replacing it for the sole purpose of key exchange, why would you still consider it reliable for authentication and bulk encryption?  This is Bruce’s point, and it follows from simple threat analysis.

Recently, this paper was published describing how bright flashes of light could temporarily overwhelm the detector circuit, allowing an attacker to trick the receiver into accepting bits as unmodified.  The receiver has multiple detectors, each with a different polarization.  Normally, only a photon with the proper polarization triggers the corresponding detector.  But a bright pulse at a particular frequency can temporarily blind multiple detectors, leaving the remaining one to trigger on a subsequent pulse.  By varying the frequency of the bright pulse to select individual detectors, the attacker can manipulate the receiver into thinking the originally transmitted bits are being received correctly.

This is a clever attack because it is based on unstated assumptions.  Quantum crypto doesn’t specify how a perfect detector circuit can be designed.  That’s just a black box.  The designers assume that individual photons can go into the black box and be measured securely.  But it’s not a black box, it’s a real world circuit that depends on optoelectronics and has associated limitations.

You can draw an analogy here to side channel or differential fault analysis.  The AES algorithm might be considered computationally secure, but there are many underlying assumptions in a real-world system that implements it.  There has to be a computer running the algorithm.  It may be realized in hardware or software.  It requires memory (DRAM, disk, core, tape?) to store intermediate state.  It takes up physical space somewhere.  It gets its input over some kind of buses.  The environment may be hostile, with varying voltage, heat, clock frequency, etc.

What other attacks could there be?  Is it possible to determine which polarizer was selected in the transmitter by probing it with light while it is preparing to send the photon?  Does it consume more current when switching from one state to another?  Are there timing variations in the transmitted photons based on whether the polarizer switched state or not?

Classical crypto implementations have been continually hardened in response to these kinds of attacks.  I think the perceived theoretical invulnerability of quantum crypto has resulted in less attention to preventing side channel or fault analysis attacks.  In this sense, quantum crypto systems are less secure than those based on classical crypto.  Given its cost and narrow focus on strengthening only key exchange, I can’t see any reason for using quantum crypto.

October 17, 2008

All about ACPI

Filed under: FreeBSD,PC Architecture — Nate Lawson @ 10:54 am

Up until recently, I was the maintainer for the FreeBSD ACPI implementation.  ACPI is a standard for making features like power management, which were originally in the BIOS, available to the OS to enumerate and control.  I implemented the FreeBSD CPU frequency control framework and drivers for individual CPUs, as well as numerous bugfixes and updates to support new BIOS versions.  I wanted to share my perspective on ACPI, as well as provide resources for other open source developers interested in getting involved in this arcane area of PCs.

ACPI is a complex, 600-page standard.  It assumes the reader has a lot of low-level knowledge about PC hardware and the BIOS.  In the past, the BIOS would be the first thing that started on the PC.  First, it would configure peripherals like PCI slots and boards.  To retain control, it would hook the system management interrupt (SMI) before booting the OS.  The SMI was tied to the actual hardware by the OEM, say by linking a laptop’s lid switch to a microcontroller.  The BIOS had hard-coded event types and logic to process them (“if lid switch pressed, turn off backlight”).  The nice thing was that the OS was completely unaware of the BIOS since the SMI is mostly invisible to software, so it could run on all PCs without changes.  The downside was that the user had to go into the BIOS settings to change things and the BIOS had to be relatively complex in order to coexist with the OS.

In the ACPI model, the BIOS (or EFI now) still performs basic initialization.  After the hardware is set up, it hooks the SMI.  The difference is that it now provides a set of tables in RAM for the OS to consume.  While most of the static tables describe the hardware, the most important one (“DSDT”) provides bytecode called AML.  The OS sets up its interpreter very early in the boot process and loads the BIOS’s AML.  The AML is composed of methods and data types and is arranged in a tree that matches the logical arrangement of devices.  For example, methods related to a video device would be located under the AML Device object for its parent PCI host interface.
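All of the system description tables, including the DSDT that carries the AML, begin with the same 36-byte header defined by the spec.  Here is a small Python sketch of parsing one; on Linux the raw tables are exposed under /sys/firmware/acpi/tables, though this example just works on a byte string.

```python
import struct

# Layout of the common ACPI System Description Table header that
# starts every table, including the DSDT: 36 bytes, little-endian.
# Fields: Signature, Length, Revision, Checksum, OEMID, OEM Table ID,
# OEM Revision, Creator ID, Creator Revision.
HEADER = struct.Struct("<4sIBB6s8sI4sI")

def parse_table_header(blob):
    (sig, length, rev, checksum, oem_id,
     oem_table_id, oem_rev, creator_id, creator_rev) = HEADER.unpack_from(blob)
    return {
        "signature": sig.decode("ascii"),
        "length": length,            # total table size, header included
        "revision": rev,
        "oem_id": oem_id.decode("ascii").rstrip(),
        "oem_table_id": oem_table_id.decode("ascii").rstrip(),
    }

def checksum_ok(blob):
    # All bytes of the table, including the checksum field itself,
    # must sum to zero modulo 256.
    return sum(blob) % 256 == 0
```

For the DSDT, everything after this header is the AML bytecode that the OS interpreter loads.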

The OS walks the AML tree, initializing the various devices and assigning resources to them using AML and device-specific methods.  Usually, the OS keeps a mapping between its internal device driver tree and the handle to the AML node that shadows it.  This allows it to later call methods under that node that change the power level or configure its wake capabilities.  For example, suppose you want the OS to power down an internal modem when it’s not in use.  It does this by calling AML methods under the Device node that represents the modem.
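As a toy illustration of that driver-to-namespace handle mapping (the namespace path and modem device here are invented; _PS0 and _PS3 are the real ACPI method names for the full-power and off power states):

```python
class AcpiNode:
    """A stand-in for an AML namespace node."""
    def __init__(self, path, methods):
        self.path = path
        self.methods = methods      # method name -> callable, in place of AML

    def evaluate(self, name):
        return self.methods[name]()

class ModemDriver:
    """A stand-in for an OS driver holding a handle to its AML node."""
    def __init__(self, acpi_handle):
        self.handle = acpi_handle   # the AML node shadowing this device
        self.powered = True

    def suspend(self):
        self.handle.evaluate("_PS3")   # ask firmware to enter state D3 (off)
        self.powered = False

# Hypothetical namespace path for a modem under a PCI host bridge.
modem_node = AcpiNode("\\_SB.PCI0.MDM0",
                      {"_PS0": lambda: "D0", "_PS3": lambda: "D3"})
```

The point is only the indirection: the driver never touches hardware power registers directly, it evaluates methods under the node it holds a handle to.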

The OS interoperates with ACPI this way constantly.  The OS calls into AML (or vice versa via an interrupt) to power up/down devices, change backlight settings, change CPU frequency/voltage, or suspend/resume the system.  While this level of control is nice to have as an OS developer, there have been a lot of bumps along the way.

ACPI could be seen as an attempt by Microsoft and Intel to take control over areas formerly owned by Phoenix/AMI and the numerous Taiwan integrators that actually build systems for companies like Dell.  They had arguably good reasons for doing so, including the support cost of working around so many BIOS bugs.  The conspiracy theorists may think that this was an attempt by Bill Gates to undermine Linux, but the pain that ACPI caused has also been borne by Microsoft.  Gates never really got his proprietary extensions to ACPI, but Windows did enjoy a privileged position, with OEMs using it as the validation suite for their BIOS implementations.  Because there was no open interoperability testing, the spec was so complex, and information about the low-level BIOS and hardware was tied up in NDAs, open source kernels suffered many compatibility problems as new PCs appeared.  The process definitely should have been more open and seems to be getting better, largely due to the efforts of Intel with Linux.

If you’re interested in learning more about ACPI, I recommend the following resources.  The papers and talks are the best introduction, and I haven’t read the book yet.  I hope this helps the open-source community improve support for ACPI.

Papers and talks

Specs

Web

