rdist

June 22, 2009

Vintage Tech needs help moving

Filed under: C64,Retrocomputing — Nate Lawson @ 10:55 am

If you’re in the Bay Area and are interested in computing history, you should know about Vintage Tech. Sellam has put together a warehouse with the world’s largest private computer collection. He also put on the VCF computer fairs. However, now he is moving to a bigger warehouse in Stockton and needs help loading the truck in Livermore.

I was out at his place last week to help with the move. The sheer size of the whole thing is astounding. It feels somewhat similar to the last scene of Raiders of the Lost Ark, where the crate with the ark in it disappears into a giant warehouse full of boxes. There are shelves stacked high with all kinds of computer equipment, manuals, and disks. I saw IMSAI 8080s and a Be workstation, among thousands of others I couldn’t identify.

Sellam needs help moving. Work consists of loading computers and boxes onto pallets or disassembling shelves, so bring gloves if you have them. The heavy work is done with a forklift. If you’d like to help out and do a good deed, he is out there all day, every day. Sellam is a lot of fun to talk with. You can contact him here by phone or email.

March 15, 2009

Felten on fingerprinting blank paper

Filed under: C64,Crypto,Security,Software protection — Nate Lawson @ 9:00 pm

Ed Felten posted last week about his upcoming publication on fingerprinting techniques for blank paper. This is a great paper with practical applications. It reminded me to discuss some of the fundamental issues with fingerprinting and the potential problems when it comes to real-world forgery.

Long ago, some copy protections for floppy disks used a method known as “weak bits”. The mastering equipment would write a long string of zeros to the disk. This would cause the read head to return a varying string of bits with some pattern to it. The software would check that this region returned different values each time it was read to make sure it wasn’t a copy.
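As a toy illustration, here’s roughly what such a check might look like in Python. The read_weak_region() function is hypothetical, standing in for whatever routine returns the raw bits from the drive:

import os

def read_weak_region():
    # Hypothetical stand-in for reading the raw bits of the weak
    # region from the drive. On an original disk, each read returns
    # slightly different bits; a normal sector-level copy returns the
    # same bytes every time. os.urandom() just makes the sketch run.
    return os.urandom(16)

def looks_original(tries=4):
    # Read the weak region several times; an original disk should
    # produce at least two different results.
    reads = {read_weak_region() for _ in range(tries)}
    return len(reads) > 1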

Similar techniques have also been applied to magstripe media for credit cards. Magtek makes a reader that attempts to measure physical characteristics of the magnetic stripe and submit this as a fingerprint for a given card. The general idea is that while the data on a card can be easily duplicated with a reader, the manufacturing process for the physical media leaves behind certain random “noise” that is difficult to reproduce.

This is similar to the approach of the Felten paper. The authors attempt to create a unique fingerprint for a piece of paper based on the variations in fibers that are visible in a high-resolution scan, taking multiple scans from different angles as well.

All of these techniques have several things in common. The characteristic being measured must actually have some uniqueness, and there must be a cost-effective way to measure it. There must be a sampling mechanism that chooses different areas to examine. The fingerprint algorithm must combine the samples in a way that is resilient to natural errors, i.e., no false positives, where a genuine item is rejected as a forgery. Yet it also must be difficult for a forger to create a copy that is close enough to the original to be accepted by the verifier, i.e., no false negatives, where a forgery is accepted as genuine.

Both magstripes and paper appear to have enough inherent uniqueness, since the manufacturing process of each creates a lot of low-level variation. But once this requirement is satisfied, the fingerprint approach itself is still subject to fundamental limitations that no fingerprinting method can avoid. It needs to be resilient not only to regular use (e.g., crumpling the paper) but also to intentional, malicious manipulation. The conflicting requirements to avoid false positives and yet also be difficult to clone are always the most difficult part of any fingerprinting scheme. This is a fundamental problem with any kind of statistical decision process.
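As a toy model of that decision process, consider a verifier that compares two bit-vector fingerprints by Hamming distance. The function names and the threshold below are mine, not from any of these schemes:

def hamming(a, b):
    # Number of positions where two equal-length bit vectors differ.
    return sum(x != y for x, y in zip(a, b))

def verify(original_fp, candidate_fp, threshold=50):
    # Accept if the fingerprints differ in fewer than `threshold` bits.
    # Raising the threshold tolerates more wear (fewer genuine items
    # rejected) but gives a forger more room; lowering it does the
    # opposite. No threshold eliminates both risks at once.
    return hamming(original_fp, candidate_fp) < threshold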

There are two kinds of forgery attacks: second pre-image and collision. The former is the most obvious one, where an attacker creates a copy that matches some existing original. The latter is much harder to prevent. To create a collision, the attacker pre-processes two pieces of paper in order to create two documents that the fingerprint algorithm judges as close enough to be identical. For example, the attacker can write a sequence of small dots to both pages in a similar pattern before printing the text. He can repeat this multiple times, varying the pattern, until the verifier judges the papers as close enough. Depending on the sampling algorithm and the attacker’s printing capabilities, this may be more or less difficult in practice. Section 6 of the paper discusses this kind of attack, but it focuses mostly on preventing second pre-image attacks and leaves most of the collision analysis for future work.

The key thing to remember is that the attacker does not need to make the papers actually identical by reproducing the exact pattern of fibers on the paper. The attacker doesn’t even need a particularly fine dot resolution, as long as the position of the dots can be controlled. The idea is that the printed pattern overwhelms the fine characteristics measured by the scanner, so the two documents are judged close enough by the verifier. It would also be interesting to see how the fingerprint technique fares against darker colored paper.

This attack illustrates the fundamental limitation of this kind of fingerprint method. The verifier has to allow for some variation to prevent false positives, but an attacker can repeatedly try to exploit that tolerance by creating pairs of documents until one pair passes.
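Continuing the toy model above (again, every number and the noise model here are my own assumptions), the attacker’s search is just a loop:

import random

def scan_fingerprint(pattern):
    # Hypothetical stand-in for printing the dot pattern on a blank
    # page and then scanning it. The scan is dominated by the printed
    # dots, plus a little per-page fiber noise that flips a few bits.
    bits = [(x ^ y) & 1 for x, y in pattern]
    return [b ^ (random.random() < 0.01) for b in bits]

def find_collision(max_tries=100):
    for _ in range(max_tries):
        # Choose a random dot pattern and "print" it on two pages.
        pattern = [(random.randrange(1000), random.randrange(1000))
                   for _ in range(200)]
        fp_a = scan_fingerprint(pattern)
        fp_b = scan_fingerprint(pattern)
        # Ask the verifier whether two physically different pages now
        # look like the same document.
        if verify(fp_a, fp_b, threshold=10):
            return pattern
    return None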

All of this is based on a preliminary read of the paper, so I’m interested in what the Felten team plans to do to address this kind of problem.

January 21, 2009

Introducing xum1541: the fast C64 floppy USB adapter

Filed under: C64,Hardware — Nate Lawson @ 4:26 pm

I’ve been working on a project in my spare time that I’m now ready to announce. It is a USB interface for the C64 1541 floppy drive, which allows it to be connected to a modern PC. In conjunction with the OpenCBM software, this allows files and whole disk images to be transferred to and from the drive.

Previously, there were a number of ways to connect your PC to a 1541, but they all required a built-in printer port. These have become rare on modern systems. USB is the logical choice for a new adapter but has its own complexities.

In 2007, the xu1541 project by Till Harbaum developed a simple USB adapter. This was a nice improvement. On the plus side, the hardware was very cheap to build and offered decent compatibility. However, the device was slow because it implemented the USB protocol in software, and it required a lot of skill to set up since it had to be hand-built and bootstrapped with JTAG.

The xum1541 (pronounced “zoom”) is built from a modified version of the xu1541 firmware. It is a full-speed USB device and supports high-speed parallel cables. The hardware USB support significantly speeds up transfers. It will also support mnib (aka nibtools), which provides low-level imaging to back up copy-protected disks. I’m most excited about this feature since it is critical to archiving original floppies for the C64 Preservation Project.

The first version of the hardware is based on the AT90USBKEY development board. This board costs about $30 and comes with a preinstalled USB bootloader. To turn it into an xum1541, it just needs a small daughterboard for the IEC connectors and parallel cable. It’s easy to upload the firmware with the Atmel FLIP software; no JTAG cable is needed. I’m hoping that future versions will be a fully custom board and that someone will manufacture them for users who don’t have any hardware skills.

xum1541 USB floppy adapter

The project is currently in the alpha stage. I have working firmware that is mostly compatible with the xu1541. It works with the CVS version of OpenCBM, although it still has a few bugs. I’m currently working to implement nibbler support and to improve the transfer speed. I’m trying to do this without sacrificing too much xu1541 compatibility, to keep the OpenCBM software changes minimal.

Both Wolfgang Moser and Spiro Trikaliotis have been helpful on this project. Wolfgang has been testing my firmware on his own setup, so there are two xum1541s in existence now. He has also been prototyping various designs for both a daughterboard for the USBKEY and the second version of the xum1541, which would not be based on the USBKEY developer’s kit. Instead, it would be a fully custom board, which would make it even cheaper. Spiro has assisted with debugging some IEC problems.

Wolfgang Moser's xum1541

All of this is in the early stages, so no promises on a delivery date. The last 10% of a project is always 90% of the effort. The first step is to finish support for the nibbler protocol and improve performance. Next, we will polish the firmware and OpenCBM software to support the new device (too many #ifdefs right now). The first release would provide firmware and software for people willing to build their own daughterboard for the USBKEY. Eventually, I hope there will be custom boards available for those who don’t want to build anything.

This project has been a lot of fun, and I look forward to posting more updates soon. Here’s a video of the xum1541 in operation:

March 25, 2008

Wii hacking and the Freeloader

Filed under: C64,Crypto,Embedded,Hacking,Security,Software protection — Nate Lawson @ 9:59 am

tmbinc wrote a great post describing the history of hacking the Wii and why certain holes were not publicized. This comes on the heels of Datel releasing a loader that can be used to play copied games by exploiting an RSA signature verification bug. I last heard of Datel when they made the Action Replay debug cartridge for the C64, and it looks like they’ve stayed around, building the same kind of thing for newer platforms.

First, the hole itself is amazingly bad for a widely-deployed commercial product. I wrote a long series of articles with Thomas Ptacek a year ago on how RSA signature verification requires careful padding checks. You might want to re-read that to understand the background. However, the Wii bug is much worse. The list of flaws includes:

  1. Using strncmp() instead of memcmp() to compare the SHA-1 hash
  2. Not checking the padding at all

The first bug is fatal by itself. strncmp() returns as soon as it reaches a terminating nul byte, and as long as the hashes matched up to that point, the result is success. If the first byte of each hash is nul, the comparison stops immediately and the check passes.
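Here’s a rough Python model of that comparison (a sketch of the logic, not the actual Wii code):

def broken_compare(expected, computed, n=20):
    # Models strncmp() on two 20-byte SHA-1 digests: unlike memcmp(),
    # it stops early when it hits a nul byte.
    for i in range(n):
        if expected[i] != computed[i]:
            return False            # bytes differ: mismatch
        if expected[i] == 0:
            return True             # nul terminator: stop comparing
    return True

# Two completely different digests compare "equal" because both begin
# with a nul byte:
print(broken_compare(b'\x00' + bytes(range(1, 20)),
                     b'\x00' + bytes(range(100, 119))))   # True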

It’s easy to create a chunk of data whose hash has a leading 0x00 byte. Here’s some sample code:

a = "rdist security blog"
import binascii, hashlib
for i in range(256):
    h = hashlib.sha1(chr(i)+a).digest()
    if ord(h[0]) == 0:
        print 'Found match with pad byte', i
        print 'SHA1:', "".join([binascii.b2a_hex(x) for x in h])
        break
else:
    print 'No pre-image found, try increasing the range.'

I got the following for my choice of string:

Found match with pad byte 80
SHA1: 00d50719c58e45c485e7d497e4021b48d814df33

The second bug is more subtle to exploit, but it would still be open even if only the strncmp() were fixed. It is well-known that if only 1/3 of the modulus length is validated, forgeries can be generated. If only 2/3 of the modulus length is validated, existential forgeries can be found. It would take another series of articles to explain all this, so see the citations of the original article for more detail.
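To give a flavor of the 1/3 case, here is a toy sketch (my own code, assuming a 2048-bit modulus and public exponent 3, neither confirmed for the Wii) of the classic low-exponent forgery against a verifier that checks only the leading header and hash and ignores the rest of the block:

import hashlib

# ASN.1 DigestInfo header for SHA-1 (a standard constant)
SHA1_PREFIX = bytes.fromhex('3021300906052b0e03021a05000414')
MOD_BITS = 2048   # assumed modulus size

def cube_root_ceil(n):
    # Smallest integer r with r**3 >= n, by binary search.
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 >= n:
            hi = mid
        else:
            lo = mid + 1
    return lo

def forge(msg):
    # Build only the part of the block a lax verifier checks.
    block = (b'\x00\x01' + b'\xff' * 8 + b'\x00' + SHA1_PREFIX
             + hashlib.sha1(msg).digest())
    # Shift it to the top of the modulus; the low bits are "garbage"
    # that the verifier never examines.
    garbage_bits = MOD_BITS - 8 * len(block)
    sig = cube_root_ceil(int.from_bytes(block, 'big') << garbage_bits)
    # sig**3 overshoots the target by less than 2**garbage_bits, so
    # the checked prefix of sig**3 still matches the block exactly.
    assert (sig ** 3) >> garbage_bits == int.from_bytes(block, 'big')
    return sig

sig = forge(b'arbitrary message')

Because sig**3 stays below any 2048-bit modulus, the modular reduction never kicks in, which is why a plain integer cube root yields a “signature” this kind of verifier accepts.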

tmbinc questions Datel’s motive in releasing an exploit for this bug. He and his team had kept it secret so it would remain usable for exploring the system and finding deeper flaws. Since it was easily patchable in software, a public exploit would be quickly closed. Indeed, Nintendo fixed it two weeks after the Datel product became available.

I am still amazed at how bad this hole was. Since such an important component failed open, it’s clear that higher-assurance development techniques are needed for software protection and crypto. I continue to do research in this area and hope to publish more about it this year.

December 7, 2007

C64 25th anniversary event

Filed under: C64,Security,Software protection — Nate Lawson @ 3:31 pm

Next Monday, December 10th, I will be at the Computer History Museum to hear a panel discussing the 25th anniversary of the C64. It includes Jack Tramiel (founder and CEO of Commodore), Adam Chowaniec (manager of the Amiga), and some other guy.

There’s a lot that’s been written about retrocomputing, most recently this CNN article. I myself started with a VIC-20 and a 300 baud modem around 1983. I still have a few pages of old homework where I wrote an assembly joystick decoding routine in the margin. I later got a C64c in 1986. My Commodore era ended when I upgraded to a 486DX-33 in 1991. The 486 was my desktop for years, running DOS, Linux, and finally FreeBSD. It then served up root.org until I replaced it in 1999.

The most fascinating things about the C64 were games, demos, and copy protection. Games and demos made me ask “how do they do that?” It was easy to run a disassembler and see surprising techniques like self-modifying code and tricky raster interrupt timing. Copy protection was also a big eye-opener since it seemed to violate the principle that if bits can be read, they can also be written. (Of course, this principle is still generally true, but the skill of the protection author can greatly affect the difficulty.)

I don’t like to admit defeat, and there were some copy protection schemes I was never able to figure out. Now with the power of emulators and ways to physically connect a floppy drive to my PC, I can dust off those old disks and figure out how they worked. Most crackers didn’t need to understand the media layout or protection scheme in detail since they could often “freeze” and capture the game code from memory and then piece together a loader for it. In the race to get the first release of the latest game out, a lot of interesting details about how the protection worked would be overlooked. I think the protection code is as interesting as the game.

There is something refreshing about using a computer where every signal is 5 volts, instructions are a single byte, the clock cycle is 1 microsecond, and booting from ROM gives you reset times of a couple seconds. A mistake can’t cost you all the time spent reinstalling software, as it can with today’s hard drive-based systems. Hopefully, the advent of virtualization and good network backup software will return us to some of that carefree attitude.

As a hobby, I continue to help with the C64 Preservation Project. My next planned project is creating a USB interface to the parallel cable so that I can use nibtools with my computers that no longer have a printer port. Also, I find that loading an image of a protected floppy into an emulator on my laptop and disassembling it makes for a nice travel diversion during the holidays.

I hope you will enjoy the holidays in your own way and have a great 2008!

[Edit: the official video of the event has now been posted here and here]

October 5, 2007

C64 screen memory and anti-debugging

Filed under: C64,Hacking,Security,Software protection — Nate Lawson @ 5:00 am

I think it’s fun to stir your creativity periodically by analyzing old software protection schemes. I prefer the C64 because emulators are widely available, disks are cheap and easy to import, and it’s the system I became most familiar with as a kid.

One interesting anti-debugging trick was to load the protection code into screen memory. Just like on the PC, the data on your screen is just a series of values stored in memory accessible to the main CPU. On the C64, screen memory was typically located at 0x400 – 0x7FF. Data could be loaded into this region by setting the block addresses in the file’s directory entry (a very simple version of a shared library load address) or by explicitly storing it at that address using the serial load routines.

To keep users from seeing garbage, the foreground and background colors were set to be the same. If you tried to break into a debugger, the prompt would usually overwrite the protection code. This could be worked around by relocating actual screen memory (by reprogramming the VIC-II chip) or by manually loading the code at a different address and disassembling it.

This is an example of anti-debugging based on consuming shared resources. The logic is that a debugger needs the screen to run, so if the protection also uses that resource, the attacker disrupts the system merely by activating the debugger. Using up a shared resource is usually much more effective than just checking for signs that a debugger is present, and this approach is still important today.

