rdist

May 11, 2010

A new direction for homebrew console hackers?

A recent article on game console hacking focused on the Wii and a group of enthusiasts who hack it in order to run Linux or homebrew games. The article is very interesting and delves into the debate between those who hack consoles for fun and those who only care about piracy. The fundamental question behind all of this: is there a way to separate the efforts of those two groups, limiting one more than the other?

Michael Steil and Felix Domke, who were mentioned in the article, gave a great talk about Xbox 360 security a few years ago. Michael compared the history of Xbox 360 security to the PS3 and Wii, among other consoles. (Here’s a direct link to the relevant portion of the video). Of all the consoles, only the PS3 was not hacked at the time, although it has since been hacked. Since the PS3 had an officially supported method of booting Linux, there was less reason for the homebrew community to attack it. It was secure from piracy for about 3 years, the longest of any of the modern consoles.

Michael’s claim was that all of the consoles had been hacked to run homebrew games or Linux, but the ultimate result was piracy. This was likely due to the hobbyists having more skill than the pirates, something which has also been the case in smart phones but less so in satellite TV. The case of the PS3 also supports his theory.

Starting back in the 1980s, there has been a history of software crackers getting jobs designing new protection methods. So what if the homebrew hackers put more effort into protecting their methods from the pirates? There are two approaches they might take: software or hardware protection.

Software protection has been used for exploits before. The original Xbox save game exploit used some interesting obfuscation techniques to limit it to only booting Linux. It stored its payload encrypted in the JPEG header of a penguin image. Rather than bypassing code signature verification completely, it modified the Xbox’s RSA public key to have a trivial factor, which allowed the author to sign his own images with a different private key.
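
To make the trick concrete, here is a toy sketch of my own (textbook RSA, tiny primes, no padding; not the actual exploit code): if an attacker can tamper with the stored public modulus so that it becomes one whose factors they know, they can derive a matching private exponent and sign whatever they like.

```python
# Toy sketch of the "tampered modulus" idea: textbook RSA with tiny
# primes and no padding. Not the actual Xbox exploit code.
e = 17

# Legitimate key: the attacker does not know its factors, so they
# cannot compute the private exponent d.
n_legit = 61 * 53

# The exploit overwrites the stored modulus with one built from primes
# the attacker chose, making the private exponent easy to compute.
p2, q2 = 67, 71
n_evil = p2 * q2
d_evil = pow(e, -1, (p2 - 1) * (q2 - 1))   # modular inverse (Python 3.8+)

digest = 42                                # stand-in for an image hash
sig = pow(digest, d_evil, n_evil)          # sign with the attacker's key

# The console's verify routine, now pointed at the tampered modulus,
# accepts the forged signature.
assert pow(sig, e, n_evil) == digest
```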

Even with all this work, it took only about 3 months for someone to reverse-engineer it. At that point, the same hole could have been used to run pirated games. However, this hack didn’t directly enable piracy because there were already modchip-based methods in use. So while obfuscation can buy some time before pirates gain access to an exploit, in this case it wasn’t much.

Another approach is to embed the exploit in a modchip. These have long been used by pirates to protect their exploits from other pirates. As soon as another group clones an exploit, the price invariably goes down. Depending on the exploitation method and protection skill of the designer, reverse-engineering the modchip can be as hard as developing the exploit independently.

The homebrew community does not release many modchips because of the development cost. But if they did, it’s possible they could reduce the risk of piracy from their exploits. It would be interesting to see a homebrew-only modchip, where games were signed by a key that certified they were independently developed and not just a copy of a commercial game. The modchip could even be a platform for limiting exploitation of new holes that were only used for piracy. In effect, the homebrew hackers would be setting up their own parallel system of control to enforce their own code of ethics.

Software and hardware protection could slow down pirates acquiring exploits. However, the approach that has already proven effective is to divert the attention of homebrew hackers by giving them some sanctioned access to the hardware. Game console vendors should take into account the dynamics between homebrew hackers and pirates in order to protect their platform’s revenue.

And what about you, homebrew hackers? Can you design a survivable system for keeping your favorite console safe from piracy while enabling homebrew? Can you enforce a code of ethics within your group via technical measures? If anyone can make this happen, you can.

February 15, 2010

Reverse-engineering a smart meter

Filed under: Embedded,Hacking,Hardware,Reverse engineering,RFID,Security — Nate Lawson @ 7:00 am

In 2008, a nice man from PG&E came out to work on my house. He installed a new body for the gas meter and said someone would come by later to install the electronics module to make it a “smart meter”. Since I work on security for embedded systems, this didn’t sound very exciting. I read up on smart meters and found they not only broadcast billing information (something I consider only a small privacy risk) but also provide remote control. A software bug, a typo at the control center, or a hacker could potentially turn off my power and gas. But how vulnerable was I, actually?


I decided to look into how smart meters work. Since the electronics module was never installed, I called up various parts supply houses to try to buy one. They were quite suspicious, requesting company background info and letterhead before deciding if they could send an evaluation sample. Even though this was long before IOActive outed smart meter flaws to CNN, they had obviously gotten the message that these weren’t just ordinary valves or pipes.

Power, gas, and water meters have a long history of tampering attacks. People have drilled into them, shorted them out, slowed them down, and rewired them to run backwards. I don’t think I need to mention that doing those kinds of things is extremely dangerous and illegal. This history is probably why the parts supplier wasn’t eager to sell any smart meter boards to the public.

There’s always an easier way. From the vendor’s website, I guessed that they use the same radio module across product lines and that other markets wouldn’t be so paranoid. Sure enough, the radio module for a water meter made by the same vendor was available on eBay for $30. It arrived a few days later.

The case was hard plastic to prevent water damage. I used a bright light and careful tapping to be sure I wasn’t going to cut into anything with the Dremel. I cut a small window to see inside and identified where else to cut. I could see some of the radio circuitry and the battery connector.


After more cutting, it appeared that the battery was held against the board by the case and had spring-loaded contacts (see above). This would probably zeroize the device’s memory if the case were cut open by someone trying to cheat the system. I applied hot glue to hold the contacts to the board and then cut away the rest of the enclosure.


Inside, the board had a standard MSP430F148 microcontroller and a metal cage with the radio circuitry underneath. I was in luck. I had previously obtained all the tools for working with the MSP430 in the Fastrak transponder. These CPUs are popular in the RFID world because they are very low power. I used the datasheet to identify the JTAG pinouts on this particular model and found the vendor even provided handy pads for them.


Since the pads matched the standard 0.1″ header spacing, I soldered a section of header directly to the board. For the ground pin, I ran a small wire to an appropriate location found with my multimeter. Then I added more hot glue to stabilize the header. I connected the JTAG cable to my programmer. The moment of truth was at hand — was the lock bit set?


Not surprisingly (if you read about the Fastrak project), the lock bit was not set and I was able to dump the firmware. I loaded it into the IDA Pro disassembler via the MSP430 CPU plugin. The remainder of the work would be to trace the board’s IO pins to identify how the microcontroller interfaced with the radio and look for protocol handling routines in the firmware to find crypto or other security flaws.
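
As a crude first pass before digging into the disassembly, you can scan the dump for 16-bit words that land in the MSP430’s peripheral register space, since absolute-addressing operands to I/O registers often show up that way. Here is a rough sketch of that idea (my own heuristic, not from my actual analysis; the dump filename and the 0x01FF cutoff for the F148’s peripheral map are assumptions, and small immediate operands will show up as false positives):

```python
# Quick-and-dirty triage of a raw MSP430 firmware dump: count 16-bit
# little-endian words that fall in the peripheral register range
# (0x0000-0x01FF on the MSP430F148; RAM starts at 0x0200). This is a
# heuristic sketch; small constants will produce false positives.
import struct
from collections import Counter

def peripheral_refs(path):
    data = open(path, "rb").read()
    hits = Counter()
    # MSP430 instructions are word-aligned, so step by 2 bytes.
    for off in range(0, len(data) - 1, 2):
        (word,) = struct.unpack_from("<H", data, off)
        if 0 < word <= 0x01FF:
            hits[word] += 1
    return hits

if __name__ == "__main__":
    # "meter_dump.bin" is a hypothetical filename for the JTAG dump.
    for addr, count in peripheral_refs("meter_dump.bin").most_common(20):
        print(f"0x{addr:04x}: {count} occurrences")
```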

I haven’t had time to complete the firmware analysis yet. Given the basic crypto flaws in other smart meter firmware (such as Travis Goodspeed finding a PRNG whose design was probably drawn in crayon), I expect there will be other stomach-churning findings in this one. Failing to take even rudimentary measures such as setting the lock bit does not bode well for its security.

I am not against the concept of smart meters. The remote reading feature could save a lot of money and dog bites with relatively minimal privacy exposure, even if the crypto was weak. I would be fine if power companies offered an opt-in remote control feature in exchange for lower rates. Perhaps this feature could be limited to cutting a house’s power to 2000 watts or something.

However, something as important as turning off power completely should require a truck roll. A person driving a truck will not turn off the mayor’s power or hundreds of houses at once without asking questions. A computer will. Remote control should not be a mandatory feature bundled with remote reading.

December 28, 2009

Interesting talks at 26c3

Filed under: Crypto,Embedded,Hacking,Reverse engineering,Security — Nate Lawson @ 1:00 am

I hope to attend a CCC event some day. While there are many great talks in the 26c3 schedule, here are some talks that look particularly interesting.

Others that may be interesting but haven’t posted slides or papers yet:

Hope everyone at 26c3 has a great time. Best wishes for a safe and secure 2010.

August 31, 2009

Repeating a check sometimes makes sense

Filed under: Crypto,Reverse engineering,Security,Software protection — Nate Lawson @ 7:00 am

When reviewing source code, I often see a developer making claims like “this can’t happen here” or “to reach this point, X condition must have occurred”. These kinds of statements make a lot of assumptions about the execution environment. In some cases, the assumptions may be justified but often they are not, especially in the face of a motivated attacker.

One recent gain for disk reliability has been ZFS. This filesystem has a mode that generates SHA-256 hashes for all the data it stores on disk. This is an excellent way to make sure a read error hasn’t been silently “corrected” by the disk to the wrong value, or to catch firmware that lies about the status of the physical media. Cryptographic checksum support in filesystems is still a new feature due to the performance cost and the lingering doubt about whether it’s needed, since it ostensibly duplicates the drive’s builtin ECC.

Mirroring (RAID1) is great because if a disk error occurs, you can often fetch a copy from the other disk. However, it does not by itself tell you that an error has occurred or that the data from the other drive is still valid. ZFS gives this assurance. Mirroring could actually be worse than no RAID if you were silently switched to a corrupted disk due to a transient timeout of the other drive. This could happen if its driver had bugs that caused it to time out on read requests under heavy load, even if the data was perfectly fine. This is not a hypothetical example; it actually happened to me with an old ATA drive.
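
The combination is easy to express in code. Here is a minimal sketch of the idea (hypothetical objects and API, nothing like the real ZFS implementation): verify each copy against an independently stored digest, and fall back to the other side of the mirror on a mismatch instead of trusting whichever disk answered first.

```python
# Minimal sketch of end-to-end checksums on a mirror. The per-disk
# read() method and the separately stored digest are assumptions.
import hashlib

def read_block(mirrors, block_id, expected_digest):
    for disk in mirrors:
        data = disk.read(block_id)
        if hashlib.sha256(data).digest() == expected_digest:
            return data                  # verified-good copy
    raise IOError(f"block {block_id}: no mirror copy matches its checksum")
```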

Still, engineers resist adding any kind of redundant checks, especially of computation results. For example, you rarely see code adding the result of a subtraction back to compare against the original input value. Yet in a critical application, like calculating Bill Gates’ bank account statement, perhaps such paranoia would be warranted. In security-critical code, the developer needs even more paranoia, since the threat is active manipulation of the computing environment, not accidental bit flips due to cosmic radiation.
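
A redundant inverse-operation check might look like this (a toy illustration of mine; in Python nothing will ever trip it, but the same pattern compiled into a native binary can catch a glitched computation, a flipped bit, or a patched instruction):

```python
def checked_sub(a, b):
    result = a - b
    if result + b != a:      # redundant check: add the result back
        raise RuntimeError("computation fault detected")
    return result
```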

Software protection is one area where redundancy is crucial. A common way to bypass the checks in a protection scheme is to patch them out. For example, the conditional branch that checks for a copied disc before continuing to load a game could be inverted or bypassed. By repeating this check in various places, implemented in different ways, you can make it more difficult for an attacker to determine whether they have removed every instance.
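
As a sketch of what “implemented in different ways” means (my example, not from any real scheme), the same predicate written in two different shapes defeats an attacker who patches by pattern-matching for one of them:

```python
def disc_check_a(flags):
    return (flags & 0x4) != 0          # direct bit test

def disc_check_b(flags):
    return ((flags >> 2) & 1) == 1     # same predicate, different shape

def load_game(flags):
    if not disc_check_a(flags):
        raise SystemExit("disc check failed")
    # ... much later, in unrelated-looking code ...
    if not disc_check_b(flags):
        raise SystemExit("disc check failed")
```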

As another example, many protection schemes aren’t very introspective. If there is a function that decrypts some data, often you can just grab a common set of arguments to that function and repeatedly call it from your own code, mutating the arguments each time. But if that function had a redundant check that walked the stack to make sure its caller was the one it expected, this could be harder to do. Now, if the whole program was sprinkled with various checks that validated the stack in many different ways, it becomes even harder to hack out any one piece to embed in the attacker’s code. To an ordinary developer, such redundant checks seem wasteful and useless.
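
Here is a conceptual version of the caller check, sketched in Python with the inspect module (real protection schemes do this in native code by walking return addresses on the stack):

```python
import inspect

SECRET_KEY = 0x5A

def decrypt(blob):
    # Redundant introspective check: refuse to run for an unexpected caller.
    caller = inspect.stack()[1].function
    if caller != "load_level":
        raise RuntimeError("unexpected caller: " + caller)
    return bytes(b ^ SECRET_KEY for b in blob)   # stand-in for real crypto

def load_level(blob):
    return decrypt(blob)
```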

Besides software protection, crypto is another area that needs this kind of redundant checking. I previously wrote about how you can recover an RSA private key merely by disrupting any part of the signing operation (e.g., by injecting a fault). The solution to this is for the signer to verify the signature they just made before revealing the signature to an attacker. That is, a sign operation now performs both a sign and verify. If you weren’t concerned about this attack, this redundant operation would seem useless. But without it, an attacker can recover a private key after observing only a single faulty signature.
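
Here is a toy demonstration of both the attack and the countermeasure (textbook RSA-CRT with tiny primes and no padding, my own sketch): a single faulty CRT half lets anyone factor the modulus with a gcd, and the signer’s redundant verify is what keeps the faulty signature from ever being released.

```python
from math import gcd

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))            # Python 3.8+ modular inverse

def crt_sign(m, fault=False):
    sp = pow(m, d % (p - 1), p)              # signature mod p
    sq = pow(m, d % (q - 1), q)              # signature mod q
    if fault:
        sq ^= 1                              # simulate a glitch in one half
    h = ((sp - sq) * pow(q, -1, p)) % p      # Garner recombination
    return (sq + q * h) % n

m = 42
sig = crt_sign(m, fault=True)

# The signer's redundant check: verify before releasing the signature.
assert pow(sig, e, n) != m                   # fault detected, withhold sig

# If the faulty signature had leaked, an attacker could factor n:
print("leaked factor:", gcd((pow(sig, e, n) - m) % n, n))   # prints 61
```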

We need developers to be more paranoid about the validity of stored or computed values, especially in an environment where there could be an adversary. Both software protection and crypto are areas where a very powerful attacker is always assumed to be present. However, other areas of software development could also benefit from this attitude, increasing security and reliability by “wasting” a couple cycles.

August 11, 2009

Awesome C64 visual debugger

Filed under: C64,Hacking,Retrocomputing,Reverse engineering — Nate Lawson @ 2:46 pm

I recently ran across a new visual debugger for C64 emulators called ICU64, written by Mathfigure. It will be released later this year and provides some amazing visualizations of both memory access and memory-mapped devices like the video and sound chip. See the video below for how it works.

There is also another video showing how the classic game Boulder Dash can be expanded to span multiple screens. He implemented this by grabbing graphics from the same RAM areas the game uses and displaying them on a custom screen. On this forum thread, the author discusses his goal of allowing more graphical expandability for classic games.

Learning to program on a small machine with a full view of every memory location is something I’ve always seen as a great way to start. I got a lot of insight into how my computer worked by running a background task that continually copied pages of RAM to screen, so you could “scroll through” memory. For example, the tick counter showed up as a blur.
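
You can get a taste of this with a few lines of code today. A minimal sketch (mine, unrelated to ICU64) that renders a memory dump as a grayscale image, so hot locations like counters and buffers pop out as visual texture:

```python
# Render a 64KB memory dump as a 256x256 grayscale image, one byte per
# pixel. Requires Pillow; "memory.bin" is a hypothetical dump file.
from PIL import Image

data = open("memory.bin", "rb").read()[:65536]
img = Image.frombytes("L", (256, 256), data.ljust(65536, b"\x00"))
img.resize((512, 512), Image.NEAREST).save("memory.png")
```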

I’ve always said that my kids would have to learn to program a C64 before they got a more modern system. Wouldn’t this be a great tool for teaching anyone new to low-level computing how assembly language and memory-mapped devices work? What about adapting some of these techniques to fuzzing or modern debuggers?

[Edit: added a link to the project now that it has been released]

July 3, 2008

Dead-listing while on vacation

Filed under: Embedded,Reverse engineering,Security — Nate Lawson @ 10:21 pm

I just got back from a nice vacation with no laptop.  What do I do on long plane rides or while listening to the waves lap against the beach?  Dead-listing.

Dead-listing is analyzing the raw disassembly of some target software and figuring it out using only pen and paper.  This is great for vacations because your laptop won’t get sand in it.  You usually have a long period of time to muse about the code in question without interruptions, something hard to find at home these days.  And I’ve gotten some of my best ideas after setting aside my papers for a while and going for a long swim.

Before you leave, pick an interesting target: not too big, not too small.  Not just x86 either — ever wondered how one of your cellphone’s applications worked?  Never looked at a familiar application’s Java bytecode?  Then, get a copy of a disassembler for your target and run it.  I often use IDA but for less common CPU architectures, any one will do.  Remember that you don’t need full analysis at this point, although it’s useful to be able to separate data from instructions and get basic symbols resolved.

Search the assembly code for any external libraries that might be important.  Disassemble them too.  While you’re at it, see if you can find the source code or API reference for any of those files.  Code reuse helps you reduce the amount you have to reverse yourself.

Now take the listing file and do some cleanup.  I like to use a small font that prints well and convert all the call (subroutine) instructions to bold.  I insert some extra empty lines before each call target to allow me room to write notes.  Finally, I print the listing in landscape with two columns per page.  This leaves room for notes on the righthand side of each column and some at the bottom.  A binder clip makes it easy to keep pages in order or remove them to compare.
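
This cleanup step is easy to script. A rough sketch of my own (it assumes a plain-text IDA .lst export; the label regex and the “call” mnemonic are guesses that vary per architecture), leaving the final bold and two-column formatting to your word processor:

```python
import re
import sys

label = re.compile(r"^\S+:")                 # crude match for a code label

for line in open(sys.argv[1]):
    line = line.rstrip("\n")
    if label.match(line):
        print("\n")                          # room for notes above each target
    if " call " in line:
        line = "**" + line + "**"            # mark calls for the bold pass
    print(line)
```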

Since you won’t have access to the Internet, you’ll need to find some basic reference materials.  The key is to get the basics without carrying a ream of paper.  For an embedded system, I usually find or make a small instruction set reference sheet including status flags, I/O port table, PCB layout photos, and any integrated peripherals I find during a brief search of the data accesses.  Print these out and put a full copy of all the files on a USB flash drive.  You might stop by an Internet cafe if you find you really need to read a missing page.  I’ve never found that necessary though.  If I can’t infer the function of a particular I/O access, I’ll just look at the whole block’s general effect and take a guess.

The most important part is to stay light.  You don’t need exhaustive manuals (e.g., instruction timing).  Instead, treat these barriers as a challenge.  For example, most timing loops are relative to each other.  You can figure out that one loop is twice as slow as the other without knowing its exact delay in nanoseconds.  Doing hexadecimal conversions on paper builds character.

Once you’re away, enjoy doing a little bit at a time.  I’d often work through a subroutine or lay out a switch statement over coffee before the family gets moving, then set it aside for the rest of the day.  Before putting it down, keep a separate log of open questions and tasks, marking them off as you solve them.  I usually reserve a page in this notebook as my “symbol table”, marking down function names/addresses as I finalize them.

I mostly work through the code in two modes: looking for high-level blocks or walking through a single subroutine in detail.  Was this compiled C or C++?  How does the optimizer work?  Where did the author use inline assembly to access hardware?  Draw arrows, bracket blocks, use highlighters if that’s your style.  Since it’s more free-form than working on a computer, you’ll find new ways to annotate it.  The density of marks on a page gives a good idea of your coverage, so you can start a day by refining an existing page or taking a broad guess at what a blank one does.

I am always surprised how much I accomplish in only a little bit of time using this method.  It’s also so much fun.  Throw in reading a few interesting but unrelated books and you may wax philosophical on the meaning of life or why the compiler suddenly switched to byte instead of word operations for a single function.

I hope you have a great summer.  Be sure to stop by my Blackhat 2008 talk, “Highway to Hell: Hacking Toll Systems”.  I’ll be posting more details about it here in the coming weeks.

