SSL design principles talk

I recently gave a talk to a networking class at Cal Poly on the design principles behind SSL. The talk was a bit basic because I had to assume the class had little security experience. My approach was to walk through how the protocol works, stopping at each phase to ask what security flaws would be introduced by changing or removing elements. This worked well to get them thinking about why SSL has certain components that appear weird at first glance but make sense on closer inspection.

Others have told me that they use a similar technique when learning a new crypto algorithm: start with the simplest primitive, identify the attacks against it, then add elements one at a time until the whole algorithm is present. If attacks still remain at that point, the algorithm is flawed.

For example, consider DSA, one of the more complex signature schemes. Use the random value k directly (instead of calculating r = (g^k mod p) mod q) and the signature operation simplifies to:

s = k^-1 * (H(m) + x*k) mod q = k^-1 * H(m) + x mod q

This introduces a fatal flaw. k^-1 can be calculated from k via the extended Euclidean algorithm, and the message (and thus H(m)) is usually known. Any recipient could therefore compute x = s - k^-1 * H(m) mod q, directly revealing the private key!
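
To make the key recovery concrete, here is a small sketch in C with toy parameters (the modulus, key, nonce, and hash values below are made up and nothing like real DSA sizes). It produces a signature with the flawed equation above and then recovers x exactly as a recipient could:

	#include <stdio.h>

	/* Modular inverse via the extended Euclidean algorithm (q prime). */
	static long modinv(long a, long q)
	{
		long t = 0, newt = 1, r = q, newr = a % q;
		while (newr != 0) {
			long quot = r / newr, tmp;
			tmp = t - quot * newt; t = newt; newt = tmp;
			tmp = r - quot * newr; r = newr; newr = tmp;
		}
		return (t % q + q) % q;
	}

	int main(void)
	{
		long q = 101;	/* toy prime, standing in for the subgroup order */
		long x = 57;	/* private key */
		long k = 10;	/* per-signature random value */
		long hm = 23;	/* H(m), known to anyone who has the message */

		/* Flawed signature: k is sent directly in place of r, so
		 * s = k^-1 * (H(m) + x*k) mod q and the pair (k, s) is public. */
		long s = (modinv(k, q) * (hm + x * k)) % q;

		/* Recipient-side recovery: x = s - k^-1 * H(m) mod q. */
		long recovered = ((s - modinv(k, q) * hm) % q + q) % q;

		printf("s = %ld, recovered x = %ld (actual x = %ld)\n",
		    s, recovered, x);
		return 0;
	}

Real DSA avoids this because the recipient only sees r = (g^k mod p) mod q, which hides k behind the discrete log problem.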

The references section at the end of the talk gives a good intro to the design principles behind SSL, especially the Wagner et al. paper. My next articles will explain some SSL attacks in more detail.

FreeBSD 7 upgrade experience

I recently upgraded my laptop to FreeBSD 7 in anticipation of the upcoming 7.0 release. This should be a major advance, with new features such as ZFS and increased scalability due to improvements in the ULE scheduler. The upgrade went OK overall, with a few important changes to note.

First, don’t run make delete-old-libs until you’ve updated all your ports. The major version numbers of all the base libraries were bumped when symbol versioning was enabled, so if you delete the old libraries before rebuilding your ports, third-party applications will fail with lots of “not found” errors. (That article also includes a script for figuring out which old libraries are no longer used.) If this happens, all is not lost. You can keep running by symlinking most libraries to their new versions:

	ln -s /usr/lib/libc.so.7 /usr/lib/libc.so.6

Next, there are some changes to the Intel wireless drivers. The iwi-firmware-kmod port is no longer needed now that the firmware is included in the base system. You just have to set a tunable to agree to the license:

	legal.intel_iwi.license_ack=1

And load the various firmware files via /boot/loader.conf:

	iwi_bss_load="YES"
	iwi_ibss_load="YES"
	iwi_monitor_load="YES"

For more information, see the iwi man page. With these changes, my system works fine again. Now I need to look into fixing some ACPI problems.

For those of you looking for more security and crypto posts, you won’t be disappointed with my next series.

C64 25th anniversary event

Next Monday, December 10th, I will be at the Computer History Museum to hear a panel discussing the 25th anniversary of the C64. It includes Jack Tramiel, founder and CEO of Commodore, Adam Chowaniec (manager of the Amiga), and some other guy.

There’s a lot that’s been written about retrocomputing, most recently this CNN article. I myself started with a VIC-20 and a 300 baud modem around 1983. I still have a few pages of old homework where I wrote an assembly joystick decoding routine in the margin. I later got a C64c in 1986. My Commodore era ended when I upgraded to a 486DX-33 in 1991. The 486 was my desktop for years, running DOS, Linux, and finally FreeBSD. It then served up root.org until I replaced it in 1999.

The most fascinating things about the C64 were games, demos, and copy protection. Games and demos made me ask “how do they do that?” It was easy to run a disassembler and see surprising techniques like self-modifying code and tricky raster interrupt timing. Copy protection was also a big eye-opener since it seemed to violate the principle that if bits can be read, they can also be written. (Of course, this principle is still generally true, but the skill of the protection author can greatly affect the difficulty.)

I don’t like to admit defeat, and there were some copy protection schemes I was never able to figure out. Now with the power of emulators and ways to physically connect a floppy drive to my PC, I can dust off those old disks and figure out how they worked. Most crackers didn’t need to understand the media layout or protection scheme in detail since they could often “freeze” and capture the game code from memory and then piece together a loader for it. In the race to get the first release of the latest game out, a lot of interesting details about how the protection worked would be overlooked. I think the protection code is as interesting as the game.

There is something refreshing about using a computer where every signal is 5 volts, opcodes are a single byte, the clock period is 1 microsecond, and booting from ROM gives you reset times of a couple seconds. You just can’t make a mistake that costs you all the time of reinstalling software, as you can with today’s hard drive-based systems. Hopefully, the advent of virtualization and good network backup software is going to return us to some of that carefree attitude.

As a hobby, I continue to help with the C64 Preservation Project. My next planned project is creating a USB interface to the parallel cable so that I can use nibtools with my computers that no longer have a printer port. Also, I find that loading an image of a protected floppy into an emulator on my laptop and disassembling it makes for a nice travel diversion during the holidays.

I hope you will enjoy the holidays in your own way and have a great 2008!

[Edit: the official video of the event has now been posted here and here]

Vintage Computer Festival 2007

This past weekend I attended the Vintage Computer Festival at the Computer History Museum (article). There were numerous highlights at the exhibits. I saw a demo of the Minskytron and Spacewar! on an original PDP-1 by Steve Russell. The Magic-1 was a complete homebrew computer made of discrete 74xx logic chips running Minix. The differential analyzer showed how analog computers worked. I also met Wesley Clark and watched team members type demo code into the LINC, using something similar to ed on a very small terminal.

One question I asked other attendees was what recent or modern laptop I could get for outdoor use. I am looking for a low-power device with a high-contrast screen for typing notes or coding while camping. Older LCD devices like the eMate met these criteria but a more modern version is preferable. Most recommended the OLPC XO-1, and in monochrome mode, it sounds like what I want. But I think I’ll wait for the second version to be sure the bugs are worked out.

After looking around at attendees, I was concerned for our future. Other than a few dads with their kids, most people were 40+ years old. While I missed out on the golden era of computer diversity (I got my first C64 in 1987), I was always fascinated with how computers were invented. I checked out books from the library and read old copies of Byte magazine found in a dumpster. Once I got on the Internet, I browsed the Lions Unix source code commentary and studied the Rainbow Books to understand supervisor design.

So where was the under-30 crowd? Shouldn’t computer history be of interest to most computer science/electrical engineering students, and especially to security folks? Many auto mechanics enjoy viewing and maintaining old hotrods. Architectural history is important to civil engineers. I appreciate the work bunnie is doing to educate people on semiconductor design, including old chips. Is this having an effect?

If you’re under 30, I’m interested in hearing your response.

Trapping access to debug registers

If you’re designing or attacking a software protection scheme, the debug registers are a great resource. Their use is mostly described in the Intel SDM Volume 3B, chapter 18. They can only be accessed by ring 0 software, but their breakpoints can be triggered by execution of unprivileged code.

The debug registers provide hardware support for setting up to four different breakpoints. They have been around since the 386, as this fascinating history describes. Each breakpoint can be set to occur on an execute, write, read/write, or IO read/write (i.e., in/out instructions). Each monitored address can cover a range of 1, 2, 4, or 8 bytes.

DR0-3 store the addresses to be monitored. DR6 provides status bits that describe which event occurred. DR7 configures the type of event to monitor for each address. DR4-5 are aliases for DR6-7 if the CR4.DE bit is clear. Otherwise, accessing these registers yields an undefined opcode exception (#UD). This behavior might be useful for obfuscation.
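
As a compilable illustration of how those pieces fit together, here is a sketch of the DR7 value you would construct to watch one address. The macro and function names are made up for this example, and actually writing DR0-7 would of course require ring 0 code:

	#include <stdio.h>
	#include <stdint.h>

	/* Break condition (R/W) encodings for breakpoint n, DR7 bits 16 + 4n. */
	#define DR7_RW_EXECUTE		0x0
	#define DR7_RW_WRITE		0x1
	#define DR7_RW_IO		0x2	/* requires CR4.DE = 1 */
	#define DR7_RW_READWRITE	0x3

	/* Length encodings for breakpoint n, DR7 bits 18 + 4n. */
	#define DR7_LEN_1		0x0
	#define DR7_LEN_2		0x1
	#define DR7_LEN_8		0x2	/* newer processors only */
	#define DR7_LEN_4		0x3

	#define DR7_GD			(1u << 13)	/* trap MOV to/from any DRx */

	/* Build a DR7 value that globally enables breakpoint n with the given
	 * condition and length. The address itself would go in DR0-3, which
	 * only ring 0 code (a driver or kernel module) can write. */
	static uint32_t dr7_breakpoint(int n, uint32_t rw, uint32_t len)
	{
		uint32_t dr7 = 0;

		dr7 |= 1u << (n * 2 + 1);	/* Gn: global enable */
		dr7 |= rw << (16 + n * 4);	/* R/Wn: break condition */
		dr7 |= len << (18 + n * 4);	/* LENn: size of monitored range */
		return dr7;
	}

	int main(void)
	{
		/* Example: break on any 4-byte write through breakpoint 0. */
		uint32_t dr7 = dr7_breakpoint(0, DR7_RW_WRITE, DR7_LEN_4);

		printf("DR7 = 0x%08x\n", (unsigned)dr7);	/* prints 0x000d0002 */
		return 0;
	}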

When a condition is met for one of the four breakpoints, INT1 is triggered. This is the same exception as for a single-step trap (EFLAGS.TF = 1). INT3 is for software breakpoints and is useful when setting more than four breakpoints. However, software breakpoints require modifying the code to insert an int3 instruction and can’t monitor reads/writes to memory.

One very useful feature of the debug registers is DR7.GD (bit 13). Setting this bit causes reads or writes to any of the debug registers to generate an INT1. This was originally intended to support ICE (In-Circuit Emulation) since some x86 processors implemented test mode by executing normal instructions. This mode was the same as SMM (System Management Mode), the feature that makes your laptop power management work. SMM has been around since the 386SL and is the original x86 hypervisor.

To analyze a protection scheme that accesses the debug registers, hook INT1 and set DR7.GD. When your handler is called, check DR6.BD (also bit 13). If it is set, the instruction at the faulting EIP was about to read or write a debug register, so you’re probably somewhere near the protection code. Since this exception is delivered as a fault, the MOV DRx instruction has not executed yet and can be skipped by updating the EIP on the stack before executing IRET.
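
Here is a rough sketch of that check in C. It compiles and runs as an ordinary program for illustration only: the trap_frame layout and the way the INT1 vector gets hooked are hypothetical placeholders that depend entirely on your OS or debugger framework.

	#include <stdio.h>
	#include <stdint.h>

	#define DR6_BD	(1u << 13)	/* trap caused by access to a debug register */

	/* Hypothetical saved state handed to our hooked INT1 (#DB) handler.
	 * The real layout depends on the OS or debugger framework in use. */
	struct trap_frame {
		uint32_t eip;
		uint32_t eflags;
		/* ... other saved registers ... */
	};

	/* Returns nonzero if a debug register access was detected and skipped. */
	static int int1_handler(struct trap_frame *tf, uint32_t dr6)
	{
		if (dr6 & DR6_BD) {
			/* The instruction at tf->eip is a MOV to/from DR0-7
			 * (0F 21 /r or 0F 23 /r, normally 3 bytes). Because this
			 * is a fault, it has not executed yet, so note the
			 * address and step over it. A real handler would also
			 * clear DR6 before IRET, since the CPU never clears it
			 * (ring 0 only, omitted here). */
			tf->eip += 3;
			return 1;
		}
		return 0;	/* some other debug event: breakpoint, single-step */
	}

	int main(void)
	{
		/* Exercise the check on a faked-up frame and DR6 value. */
		struct trap_frame tf = { .eip = 0x401000, .eflags = 0x202 };

		if (int1_handler(&tf, DR6_BD))
			printf("skipped MOV DRx, EIP now 0x%08x\n", (unsigned)tf.eip);
		return 0;
	}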

If you’re designing software protection, there are some interesting ways to use this feature to prevent attackers from having easy access to the debug registers. I’ll have to leave that for another day.