RFID industry tries to spin toll findings

I was interviewed today about my findings on the Fastrak electronic toll collection system.  It seems the industry PR spin machine has already begun responding.

First, this article questions my credibility and claims that the Fastrak transponder is read-only.

‘If Lawson has not even established that FasTrak transponders are a read-only device (best called a “tag”) rather than read-write, then he’s totally unqualified to be talking about potential misuse.’

Apparently the author of the article has not even opened the cover of a Fastrak transponder.  The transponders use an MSP430F1111A microcontroller, which is flash-based.  The firmware and all the data (i.e., your unique ID) are stored in flash.  I can easily substantiate this claim by revealing that your 32-bit ID appears at address 0x1002, which is part of the full “0x0001” Title 21 response packet.
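
For the curious, here is a minimal sketch of pulling that ID out of a raw dump. The file layout is my assumption, not from the post: it assumes a dump of the MSP430's information memory starting at address 0x1000 (so the ID at 0x1002 sits at file offset 2) and little-endian byte order.

    #include <stdio.h>
    #include <stdint.h>

    /* Read the 32-bit transponder ID at address 0x1002 from a raw info-memory
     * dump.  Assumes the dump begins at 0x1000 and is little-endian. */
    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s infomem.bin\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        uint8_t b[4];
        if (fseek(f, 0x1002 - 0x1000, SEEK_SET) != 0 || fread(b, 1, 4, f) != 4) {
            fprintf(stderr, "dump too short\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        uint32_t id = (uint32_t)b[0] | (uint32_t)b[1] << 8 |
                      (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
        printf("transponder ID: 0x%08X\n", id);
        return 0;
    }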

‘Also reverse engineering this device is hardly much of an accomplishment since all the specifications and protocols of Title 21 are open source.’

The base specification for Title 21 is freely available, but the extensions to it are not.  On disassembling a firmware dump from a transponder, I found some surprising things, including messages that allow a reader to update the transponder flash in the field.  Again, I can back up this claim with the message IDs that start this update process: 0x00DE and 0x0480.  To unlock the update process, you need to provide a global key that I will not reveal.

Second, I’ve heard that a vendor plans to issue a press release.  Expect the standard claims that privacy is protected because they “encrypt” your unique ID in their database and don’t retain the data for very long.  What they mean by “encrypt” is “replace each unique ID with a different one”.  The problem is that replacing the unique ID “CAR-A” with “WXYZ” changes very little: there is still a unique ID stored that always corresponds to the same car, which enables tracking.  That information is also subject to subpoena, something few Fastrak users are aware of.  Corporations typically issue privacy policies describing exactly what information is collected and how long it is stored.  Where can I find that information for Fastrak?
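
A toy illustration of the point (the records and names are invented): substituting a different fixed token for each real ID still leaves one stable token per car, so every crossing by the same car remains linkable.

    #include <stdio.h>
    #include <string.h>

    struct crossing { const char *car_id; const char *plaza; const char *time; };

    /* The vendor's "encryption": one stable pseudonym per real ID. */
    static const char *pseudonym(const char *car_id)
    {
        return strcmp(car_id, "CAR-A") == 0 ? "WXYZ" : "OTHER";
    }

    int main(void)
    {
        struct crossing log[] = {
            { "CAR-A", "Bay Bridge", "07:55" },
            { "CAR-B", "Golden Gate", "08:10" },
            { "CAR-A", "San Mateo",  "17:40" },
        };
        /* What ends up in the database: still a per-car trail, just renamed. */
        for (size_t i = 0; i < sizeof log / sizeof log[0]; i++)
            printf("%s crossed %s at %s\n",
                   pseudonym(log[i].car_id), log[i].plaza, log[i].time);
        return 0;
    }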

Finally, I spoke last week to a consultant to Caltrans who offered to get the local MTC agency technical staff in touch with me.  I explained that I’d be happy to describe my findings and recommendations to them in advance of my Blackhat talk at no charge.  Anyone from those agencies can contact me via my company website here.

China hax0rs US

Like any mainstream article on security, this recent AP article sensationalizes China’s response to multiple accusations of state-sponsored hacking. First, the money quote:

“Is there any evidence? … Do we have such advanced technology? Even I don’t believe it.”
— Foreign Ministry spokesman Qin Gang

Is this supposed to play into some pompous Western belief that China is a backwater and thus incapable of hacking computers? Does anyone believe it takes advanced technology to break into PCs?

Next we have the meaningless numbers. The Pentagon claims its network is scanned or attacked 300 million times a day. If true, that is an average of roughly 3,500 times per second. If we consider every packet to be a scan, that works out to about 200 KB/second (assuming minimum-size packets). However, an entire port scan should really count as a single attempt, not one attempt per packet. Of course, bigger numbers sound scarier and justify a bigger budget. Perhaps each TCP option in the header of each packet could be counted as a separate attempt, since the attacker could be going after both the timestamp and window scaling implementations!
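
Here is the back-of-the-envelope arithmetic. The 300 million/day figure is from the article; the 60-byte minimum packet size is my own assumption for the bandwidth estimate.

    #include <stdio.h>

    int main(void)
    {
        const double per_day = 300e6;
        const double per_sec = per_day / 86400.0;       /* ~3,472 per second */
        const double bytes_per_sec = per_sec * 60.0;    /* one 60-byte packet each */
        printf("%.0f attempts/sec, %.0f KB/sec\n", per_sec, bytes_per_sec / 1000.0);
        return 0;
    }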

The more interesting allegations are that China copied the contents of a laptop belonging to the visiting U.S. Commerce Secretary and hacked into the office computers of two House representatives. The laptop incident seems easier to prove. Did they confiscate the laptop and take it to another room? Did the file access times change, or was it powered off? I assume he continued using the laptop during the trip, which would make this harder to tell. Was he using disk encryption? If not, why not?

The allegations regarding the two House members are much less provable. The FBI investigated their computers and said they’d been accessed by people in China. How did they first decide they should call the FBI? Porn popups? Without more evidence showing a clear intent, this is more likely a malware incident. It is surprisingly convenient that their allegations appear alongside House Intelligence committee meetings on hacking.

Chris Tarnovsky demos smart card hacking

Chris Tarnovsky spoke to Wired and demonstrated his smart card hacking techniques. Chris became famous for hacking Echostar cards, which resulted in a multi-billion dollar lawsuit against News Corp. The trial ended in a pretty big win for DirecTV smart card supplier NDS: the jury found them liable only for trying out a packaged attack that converted a DirecTV smart card to decrypt Dish Network video. The penalty for this? A whopping $46.95 (i.e., one month’s subscription fee) plus statutory damages.

In the video, Chris demonstrates some interesting techniques. It shows him decapsulating an ST16 CPU, made by ST Micro, from an IBM Java card. He uses nail polish to mask the die and rust remover (i.e., hydrofluoric acid) to etch away the top metal layer of protective mesh and expose the CPU’s bus. He then uses a sewing needle to tap each line of the 8-bit bus in turn and reassembles the data in software. He could just as easily have driven the lines, allowing him to change instructions or data at will.

This attack is pretty amazing. It is simple in that it does not require an expensive FIB or other rework tools; a microscope and careful work with some household chemicals are all it takes. While the ST16 is a bit old, it was used in Echostar, hence the relevance of Chris’s demonstration. It will be interesting to watch the evolving capabilities of home-based silicon analysis.

Anti-debugging: using up a resource versus checking it

After my last post claiming anti-debugging techniques are relied on too heavily, various people asked how I would improve them. My first recommendation is to go back to the drawing board and make sure all the components of your software protection design hang together in a mesh. In other words, each has the property that the system fails closed if any one component is circumvented in isolation. However, anti-debugging does have a place (alongside other components such as obfuscation, integrity checks, etc.), so let’s consider how to implement it better.

The first principle of implementing anti-debugging is to use up a resource instead of checking it. The first idea most novices have is to check a resource to see if a debugger is present. This might mean calling IsDebuggerPresent() or, if more advanced, using GetThreadContext() to read the values of the hardware debug registers DR0-7. The implementation is then often as simple as “if (debugged): FailNoisily()”, or at best a few inline copies of that check scattered throughout the code.
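
A minimal sketch of that naive pattern on Windows (FailNoisily() is a placeholder for whatever penalty the protection imposes):

    #include <windows.h>

    static void FailNoisily(void) { ExitProcess(1); }

    /* The "check the resource" approach: ask the OS whether a debugger is
     * present and whether any hardware debug registers are in use. */
    static void check_for_debugger(void)
    {
        if (IsDebuggerPresent())
            FailNoisily();

        CONTEXT ctx = { 0 };
        ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
        if (GetThreadContext(GetCurrentThread(), &ctx) &&
            (ctx.Dr0 | ctx.Dr1 | ctx.Dr2 | ctx.Dr3))
            FailNoisily();
    }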

The problem with this approach is that it gives the attacker a choke point: a single bottleneck which, once defeated, circumvents every instance of the check no matter where it appears in the code or how obfuscated it is. For example, IsDebuggerPresent() reads a flag from the PEB structure in local memory, so attaching a debugger and then zeroing that flag defeats every use of IsDebuggerPresent() at once. Looking for suspicious values in DR0-7 means the attacker merely has to hook GetThreadContext() and return 0 for those registers. In either case, the attacker could also patch the check itself so it always returns “OK”.
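
To see why this is a choke point, here is what the attacker's hook might look like (how the hook gets installed, via IAT patching, a detour, or a debugger plugin, is left out; the point is that every DR0-7 check in the target collapses to one function):

    #include <windows.h>

    static BOOL (WINAPI *real_GetThreadContext)(HANDLE, LPCONTEXT) = GetThreadContext;

    /* Forward to the real API, then scrub the debug registers so every
     * "suspicious DR0-7" check in the protected program sees zeros. */
    static BOOL WINAPI hooked_GetThreadContext(HANDLE thread, LPCONTEXT ctx)
    {
        BOOL ok = real_GetThreadContext(thread, ctx);
        if (ok && (ctx->ContextFlags & CONTEXT_DEBUG_REGISTERS)) {
            ctx->Dr0 = ctx->Dr1 = ctx->Dr2 = ctx->Dr3 = 0;
            ctx->Dr6 = ctx->Dr7 = 0;
        }
        return ok;
    }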

Besides the API bottleneck, anti-debugging via checking suffers from a fundamental “time of check, time of use” flaw, i.e., a race condition. An attacker can quickly swap out the values or virtualize accesses to the particular resource in order to lie about its state. Meanwhile, they can still use the debug registers for their own purposes while stubbing out GetThreadContext(), for example.

Contrast this with anti-debugging by using up a resource. For example, a timer callback could be set up to vector through PEB!BeingDebugged. (Astute readers will notice this is a byte-wide field, so it would actually have to be an offset into a jump table). This timer would fire normally until a debugger was attached. Then the OS would store a non-zero value in that location, overwriting the timer callback vector. The next time the timer fired, the process would crash. Even if an attacker poked the value to zero, it would still crash. They would have to find another route to read the magic value in that location to know what to store there after attaching the debugger. To prevent merely disabling the timer, the callback should perform some essential operation to keep the process operating.
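
Here is a simplified sketch of that idea on Windows. The magic slot value, table layout, and timer period are invented for illustration, and MSVC intrinsics are assumed for locating the PEB; the post's point is the dispatch, not these specifics.

    #include <windows.h>
    #include <winternl.h>   /* PEB, including the BeingDebugged byte */
    #include <intrin.h>

    typedef void (*work_fn)(void);

    static void essential_work(void) { /* something the program cannot run without */ }

    #define MAGIC_SLOT 0x5A
    static work_fn jump_table[256] = { 0 };   /* every slot but MAGIC_SLOT stays NULL */

    static unsigned char *being_debugged(void)
    {
    #ifdef _WIN64
        PEB *peb = (PEB *)__readgsqword(0x60);
    #else
        PEB *peb = (PEB *)__readfsdword(0x30);
    #endif
        return &peb->BeingDebugged;
    }

    /* Vector through the byte rather than testing it.  If a debugger attaches,
     * the OS overwrites the byte with 1, the next tick calls a NULL slot, and
     * the process crashes.  Poking it back to 0 crashes too, since only
     * MAGIC_SLOT holds the real callback. */
    static VOID CALLBACK tick(HWND hwnd, UINT msg, UINT_PTR id, DWORD time)
    {
        jump_table[*being_debugged()]();
    }

    static void install(void)
    {
        jump_table[MAGIC_SLOT] = essential_work;
        *being_debugged() = MAGIC_SLOT;     /* claim the byte for ourselves */
        SetTimer(NULL, 0, 1000, tick);      /* needs a message loop to fire */
    }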

Now this is a simple example and there are numerous ways around it, but it illustrates my point. If an application uses a resource that the attacker also wants to use, it’s harder to circumvent than a simple check of the resource state.

An old example of this form of anti-debugging on the C64 was storing loader code in screen memory. As soon as the attacker broke into a debugger, the prompt would overwrite the loader code. Resetting the machine would also overwrite screen memory, leaving no trace of the loader. Attackers had to realize this and jump to a separate routine elsewhere in memory that made a backup copy of screen memory first.

For x86 hardware debug registers, a program could keep an internal table of functions but do no compile-time linkage. Each function call site would be replaced with code that stores an execute watchpoint for EIP+4 in a debug register, followed by a “JMP SlowlyCrash”. Before executing the JMP, the CPU would raise a debug exception (INT1) because EIP matched the watchpoint. The exception handler would look up the calling EIP in a table, identify the real target function, set up the stack, and return directly there. The target could then return back to the caller. All four debug registers should be utilized to avoid leaving room for the attacker’s breakpoints.
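
A rough sketch of the dispatch mechanics, using a vectored exception handler. This is not a complete implementation: x64 register names are used, the 5-byte decoy JMP length and the empty dispatch table are placeholders, and loading DR0-3 at each call site (e.g., via SetThreadContext) is only mentioned in a comment.

    #include <windows.h>
    #include <stdint.h>

    typedef struct { uintptr_t site; uintptr_t target; } dispatch_entry;
    static dispatch_entry dispatch_table[] = { { 0, 0 } };  /* site -> real callee */

    static uintptr_t lookup(uintptr_t site)
    {
        for (unsigned i = 0; dispatch_table[i].site; i++)
            if (dispatch_table[i].site == site)
                return dispatch_table[i].target;
        return 0;
    }

    static LONG CALLBACK dr_dispatch(EXCEPTION_POINTERS *ep)
    {
        if (ep->ExceptionRecord->ExceptionCode != EXCEPTION_SINGLE_STEP)
            return EXCEPTION_CONTINUE_SEARCH;

        CONTEXT *c = ep->ContextRecord;
        uintptr_t real = lookup((uintptr_t)c->Rip);
        if (!real)
            return EXCEPTION_CONTINUE_SEARCH;   /* not one of our watchpoints */

        /* fake the CALL: push a return address just past the 5-byte decoy JMP */
        c->Rsp -= sizeof(uintptr_t);
        *(uintptr_t *)c->Rsp = c->Rip + 5;
        c->Rip = real;                          /* run the real callee instead */
        return EXCEPTION_CONTINUE_EXECUTION;
    }

    static void install_dispatch(void)
    {
        AddVectoredExceptionHandler(1, dr_dispatch);
        /* each call site must load one of DR0-3 with the address of its decoy
           JMP before reaching it, so the INT1 fires and reroutes execution */
    }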

This approach has a number of desirable properties. If the decoy JMP ever executes, the program goes off into the weeds, but not immediately. That is exactly what happens as soon as the attacker sets a hardware breakpoint of their own, since it overwrites the function’s watchpoint and the JMP is no longer intercepted. The attacker would have to write code that virtualizes the calls that get and set DR0-7 and performs virtual breakpoints based on what the program thinks the correct values should be. Such an approach would work, but it would slow the program down enough that timing checks could detect it.

I hope these examples make the case for anti-debugging by using up a resource rather than checking it. This is one implementation approach that can make anti-debugging more effective.

[Followup: Russ Osterlund has pointed out in the comments section why my first example is deficient.  There’s a window of time between CreateProcess(DEBUG_PROCESS) and the first instruction execution where the kernel has already set PEB!BeingDebugged but the process hasn’t been able to store its jump table offset there yet.  So this example is only valid for detecting a debugger attach after startup and another countermeasure is needed to respond to attaching beforehand.   Thanks, Russ!]

Are single-purpose devices the solution to malware?

I recently watched this fascinating talk by Jonathan Zittrain, author of The Future of the Internet — And How to Stop It. He covers everything from the Apple II to the iPhone and trusted computing. His basic premise is that malware is driving the resurgence of locked-down, single-purpose devices.

I disagree with that conclusion. I think malware will always infect the most valuable platform. If the iPhone were as widely deployed as Windows PCs, you can bet people would be targeting it with keyloggers, closed platform or not. In fact, people’s motivation to find ways around the vendor’s protection on their own phones creates a great malware channel (trojaned jailbreak apps, anyone?).

However, I like his analysis of what makes some open systems resilient (his example: Wikipedia defacers) and some susceptible to being gamed (Digg users selling votes). He claims it’s a matter of how much members consider themselves a part of the system versus outside it. I agree that designing in aspects of accountability and aggregated reputation help, whereas excessive perceived anonymity can lead to antisocial behavior.

Designing and Attacking DRM talk slides

I gave a talk this morning at RSA 2008 on “Designing and Attacking DRM” (pdf of slides). It was pretty wide-ranging, covering everything from how to design good DRM to the latest comparison of Blu-ray protection, AACS vs. BD+. Those interested in the latter should see the last few slides, especially given the news that AACS MKBv7 (appearing on retail discs later this month) has already been broken by Slysoft.

The timeline slide (page 25) is an attempt to capture the history of whether discs were released into an unbroken environment or not. You want the line to be mostly green, with a few brief red segments here and there. AACS so far has been the inverse: a long red line with one brief green segment (a couple of weeks over the past year and a half).

I also introduced two variables for characterizing the long-term success of a DRM system, L and T. That is, how long each update survives before being hacked (L), and how frequently updates appear (T).
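
As a rough way to see why both matter (my simplification, not from the slides): if each update survives for L before being broken and replacements ship every T, then the fraction of time current discs enjoy working protection is roughly

    \[ \text{protected fraction} \approx \frac{\min(L,\, T)}{T} \]

so you want L to be long relative to T: either each update holds up for a long time, or a replacement ships quickly once it falls.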

In the case of AACS, L has been extremely short (if you discard the initial 8-month adoption period). Out of three updates, two were broken before they were even widely available and one was broken a few weeks after release.

Additionally, T has been extremely long for AACS. Throwing out the initial year it took to get the first MKB update (v3), they have been following an approximate schedule of one update every 6 months. That is much too long in a software player environment. I don’t know of any vendor of a popular Win32 game who would expect it to remain uncracked for 6 months, for example.

Of course, people in glass houses should not throw rocks. As someone who had a part in developing BD+, I am biased toward thinking a different approach than mere broadcast encryption is the only thing that has a chance of success in this rough world. The first BD+ discs were cracked in mid-March, and it remains to be seen how effective future updates will be. Unfortunately, I can’t comment on any details here. We’ll just have to watch and see how things work out the rest of this year.

2008 will show whether a widely deployed scheme based on software protection is ultimately better than or merely equivalent to the AACS approach. I have a high degree of confidence it will hold up in the long run, with both a longer L and a shorter T than the alternative.