root labs rdist

August 30, 2010

Theories of how PS Jailbreak works

Filed under: Embedded,Hacking,Reverse engineering,Security — Nate Lawson @ 5:59 pm

A company recently announced a modchip for the PS3 and claims they will be shipping them soon. It plugs into the USB port and allows running backup games. Inside the device is an ATmega8U2 USB microcontroller, something I’ve worked with before. While I didn’t have access to this device and don’t know PS3 internals, I spent a few minutes looking at a hex dump from a USB trace.

An article on Gamefreax (German, alternate in English) claimed to have reverse-engineered this device. They believe that a stack overflow in USB config descriptor processing allows the device to execute code on the PS3 host. However, their page shows a useless set of packets from a USB trace and some hex code obscured with a logo, so it’s not possible to verify their claims from the info given.

In a later comment, a user named Descrambler posted a more complete dump of hex data from the USB trace. It is still not fully complete, but it’s enough to look into more of the details.

The first trace starts with a standard configuration descriptor. It is followed by a relatively standard interface descriptor, except it is class 254, subclass 1, and protocol 2. Since this is in the reserved class range, perhaps this is related to an internal Sony test tool. Following this descriptor is some data and PowerPC code.

The second trace starts with a config descriptor that is a bit weird. It claims the total length is 77 bytes (68 bytes of interface descriptors after the 9-byte config descriptor). It also claims to support 10 interfaces. With the standard interface descriptor length of 9 bytes, the total length should be 99 bytes (9 + 10 × 9).
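For reference, here is the 9-byte config descriptor header implied by those values. This is reconstructed from the fields described above, not copied from Descrambler’s dump; the fields after bNumInterfaces are typical defaults, not known values.

static const unsigned char config_hdr[9] = {
    0x09,       /* bLength: 9 */
    0x02,       /* bDescriptorType: CONFIGURATION */
    0x4d, 0x00, /* wTotalLength: 77, little-endian (should be 99) */
    0x0a,       /* bNumInterfaces: 10 */
    0x01,       /* bConfigurationValue (typical default) */
    0x00,       /* iConfiguration (typical default) */
    0x80,       /* bmAttributes: bus-powered (typical default) */
    0xfa        /* bMaxPower: 500 mA (typical default) */
};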

There are multiple ways this might affect the PS3. If it trusts the total length field when sizing a buffer, the first bytes after the initial 77 could overflow it (in this case, “00 00 fe 01 02 00 09 04” are what follows). Or it might simply copy the data into a static buffer (if that buffer is 256 bytes long, the first bytes to overflow would be “fe 01 02 00 09 04 00 00”).
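Purely as a sketch of what the second variant of this theory implies (I have no visibility into the PS3’s actual USB stack, and every name here is invented), the host-side bug might look like:

#include <string.h>

static unsigned char desc_buf[256];   /* hypothetical fixed-size buffer */

/* Hypothetical handler for the configuration data received during
 * enumeration. The bug: it copies however many bytes the device
 * actually sent, with no check against sizeof(desc_buf) or the
 * advertised wTotalLength. With the trace above, the bytes landing
 * past offset 255 would start with fe 01 02 00 09 04. */
void handle_get_configuration(const unsigned char *data, size_t actual_len)
{
    memcpy(desc_buf, data, actual_len);
}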

If it’s not an overflow, it could be related to how the PS3 parses the first 10 interface descriptors. The sequence is not regular. Taken in 9-byte chunks, it diverges after the first 6 interface descriptors, giving a next descriptor of “09 00 09 04 00 00 00 fe 01”. This is not a valid descriptor. Or, if the PS3 parses the length fields of the descriptors, it will end up with a few very short ones (“02 00”).

These are all just theories. It’s quite possible the second trace is just a decoy, meant to slow down reversing. The behavior described by Gamefreax cannot be validated from the USB traces posted by Descrambler. It appears Gamefreax may have misread the trace (the 77-byte total length in the second trace is 0x4d, but they claim a descriptor length of 0xAD). Also, Descrambler’s hex dumps are incomplete and don’t show the various phases described in the Gamefreax post.

It’s definitely too early to claim that the PS Jailbreak exploit has been reverse-engineered. However, it should be quite easy to clone since all the data needed to do so is present in a USB trace. Just paste the data into example code for an AT90USB and replay the same descriptors. You might have to add in a bus disconnect in the right place, but it should be relatively simple.
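A sketch of what that clone might look like (illustrative only: the descriptor bytes below are reconstructed from the fields discussed above, the full data would come from the complete trace, and usb_send_descriptor is a stand-in for whatever send routine your AVR USB stack provides):

/* Hypothetical stand-in for the firmware's descriptor send routine. */
extern void usb_send_descriptor(const unsigned char *data, unsigned short len);

static const unsigned char captured_config[] = {
    0x09, 0x02, 0x4d, 0x00, 0x0a, /* header: wTotalLength=77, 10 interfaces */
    /* ... remaining bytes exactly as captured in the trace ... */
};

/* Reply to GetDescriptor(Configuration) with the captured bytes. */
void on_get_config_descriptor(void)
{
    usb_send_descriptor(captured_config, sizeof(captured_config));
    /* A forced bus disconnect/reconnect may also need to be replayed
     * at the right point, as noted above. */
}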

August 18, 2010

Crypto 2010 rump session

Filed under: Crypto,Security — Nate Lawson @ 2:59 pm

One of the leading indicators of upcoming advances in cryptography is the rump session at the CRYPTO conference. Speakers are given 3 minutes to introduce works in progress or crack jokes. For example, I remember Paul Kocher’s groundbreaking presentation on extremely low exponent RSA (e = 1). There is more history available on this casual, but important, annual event.

This year’s session had some interesting talks, mostly about SHA-3 hash candidates. Dinur and Shamir announced an algebraic attack on Hamsi-256, which has had other attacks announced previously. They also attacked the Grain-128 stream cipher. Leurent spoke about distinguishing attacks and whether a hash function can remain secure even when the underlying compression function has efficient distinguishers.

Cohn and Heninger presented a survey of applications of Coppersmith’s theorem, which has many uses beyond cryptanalysis of public key systems. There was a lot of interest in making public key systems resilient in the face of leakage (e.g., via side channels). This is good since traditional (EC)DSA falls apart if the nonce is even partially predictable. A presentation on noisy Diffie-Hellman looked interesting, although the applications are unclear to me.

On the implementation front, Mroczkowski described a fast implementation of Trivium in Python. It used the CorePy library to generate SSE3 instructions. This was to optimize the cube attack previously announced by Dinur and Shamir in 2008.

And finally, the humorous CFP for the Journal of Craptology was a great way to end. What were your favorite rump session talks?

August 11, 2010

Next Baysec: August 17 at Irish Bank

Filed under: Security — Nate Lawson @ 4:27 pm

The next Baysec meeting is Tuesday, August 17, 7 pm at the Irish Bank. Come out and meet fellow security people from all over the Bay Area. As always, this is not a sponsored meeting, there is no agenda or speakers, and no RSVP is needed.

10 Mark Lane
San Francisco, CA
415.788.7152
http://www.theirishbank.com/

August 5, 2010

Optimized memcmp leaks useful timing differences

Filed under: Crypto,Hacking,Network,Protocols,Security — Nate Lawson @ 5:00 am

One of the main questions raised after our Blackhat talk on exploiting remote timing attacks was how memcmp() optimizations might affect our results.

Our talk focused on a particular timing vulnerability. If an HMAC comparison exits early when a mismatch is found, it reveals information to an attacker about the correct HMAC for the forged message. This early-exit behavior is desirable in general but can leak timing information about secret data, allowing an attacker to iteratively guess the secret.
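This pattern is usually just a one-line check. As a generic illustration (not code from any particular product):

#include <string.h>

/* Early-exit HMAC check: memcmp() returns at the first mismatched
 * byte, so each correct leading byte of a forged MAC measurably
 * lengthens the comparison. Returns nonzero if the MAC verifies. */
int check_hmac(const unsigned char *received, const unsigned char *expected,
               size_t len)
{
    return memcmp(received, expected, len) == 0;
}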

In C, memcmp() is almost always used for comparisons of binary data. Its API specifies that it compares two fixed-length buffers and returns the difference between them or zero if they are identical. In most implementations, memcmp() exits as soon as a difference is found in the two buffers.

Even if memcmp() exits early, the size of its comparison units might affect how exploitable this timing delta is. For example, an optimized memcmp() may work in units of 64-bit quadwords. Such an implementation might not be exploitable even if the timing delta of a mismatch was 1 second, since you would have to brute-force 64 bits before seeing the timing delta increase due to a match. However, things aren’t so simple, and using an optimized memcmp() does not always save you.

To prepare for our talk, we disassembled a few memcmp() implementations and found a surprising number were byte-based. This gives an attacker only 256 possibilities per guess, which is quite tractable. We decided to follow up with more detailed results for various operating systems.

We found two things:

  1. Even an optimized memcmp() usually leaks bytewise timing differences. In fact, it is often required by its API to do so.
  2. OS and compiler settings often select a bytewise compare even when a more-optimized version is available.

Anyone who is assuming particular compiler behavior is advised to review their disassembly. It’s almost always best to use a constant-time comparison instead of memcmp() when working with secret data.
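For reference, the standard constant-time construction looks like this (a minimal sketch):

#include <stddef.h>

/* Constant-time comparison: accumulates differences across the whole
 * buffer instead of exiting early, so the running time is independent
 * of where (or whether) the buffers differ. Returns 0 if equal. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff;
}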

Windows XP SP3 (32-bit)

We disassembled all .exes and DLLs in the system32 directory. All of them import memcmp() from either ntdll.dll or msvcrt.dll. All of them were compiled with some version of MSVC.

The msvcrt.dll memcmp() is 32-bit optimized. It compares 32-bit words if both pointers are 32-bit aligned (rep cmpsd). The remainder is compared with up to a 3-byte series of individual byte compares (cmp/jnz). If either input pointer is not 32-bit aligned, it falls into an unrolled loop that compares single bytes and exits early on any mismatch. ntdll.dll has the same memcmp() implementation.

Even with this optimization, the Windows memcmp() leaks bytewise timing information. When a difference is found in the 32-bit compares, it goes through each byte of the 32-bit word to find which byte differs. This means there is a 2-instruction timing delta per byte the attacker got correct, even though the original compare was in 32-bit chunks.
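In C terms, the behavior looks roughly like this (our reconstruction from the disassembly, not Microsoft’s source; it assumes both pointers are 32-bit aligned, which the real code checks for):

#include <stddef.h>
#include <stdint.h>

int msvcrt_style_memcmp(const void *s1, const void *s2, size_t n)
{
    const unsigned char *a = s1, *b = s2;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {                     /* rep cmpsd */
        if (*(const uint32_t *)(a + i) != *(const uint32_t *)(b + i)) {
            while (a[i] == b[i])                     /* bytewise scan:  */
                i++;                                 /* leaks per-byte  */
            return a[i] - b[i];                      /* timing deltas   */
        }
    }
    for (; i < n; i++)                               /* up to 3 leftover bytes */
        if (a[i] != b[i])
            return a[i] - b[i];
    return 0;
}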

Windows memcmp() doesn’t actually have to do this set of bytewise comparisons. MSDN states the following behavior:

Return value    Relationship of first count bytes of buf1 and buf2
< 0             buf1 less than buf2
  0             buf1 identical to buf2
> 0             buf1 greater than buf2

The cygwin man page for memcmp() states:

The function returns an integer greater than, equal to or less than zero according to whether the object pointed to by S1 is greater than, equal to or less than the object pointed to by S2.

Since neither specification requires identifying which byte differs, the implementation could just compare (or subtract) the two 32-bit words. Only the sign of the return value matters, not the magnitude. This is analogous to treating the two buffers as multi-precision integers, just like in an RSA implementation.
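A sketch of that alternative (ours, not from any shipping libc): compare word by word and return only the sign. On little-endian x86 the words must be byte-swapped so word order matches byte order, and comparing rather than subtracting avoids integer overflow in the return value:

#include <stddef.h>
#include <stdint.h>

static uint32_t load_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Word-at-a-time compare that never locates the differing byte: a
 * mismatch reveals only which 32-bit word differs, forcing an attacker
 * to guess 2^32 values instead of 256. Assumes len is a multiple of 4. */
int sign_memcmp(const void *s1, const void *s2, size_t len)
{
    const unsigned char *a = s1, *b = s2;
    for (size_t i = 0; i < len; i += 4) {
        uint32_t wa = load_be32(a + i), wb = load_be32(b + i);
        if (wa != wb)
            return wa < wb ? -1 : 1;   /* sign only, no bytewise scan */
    }
    return 0;
}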

We also compiled our own code using the cygwin and mingw gcc compilers. With no optimization flags, the cygwin gcc links to cygwin1.dll, which has a 32-bit optimized memcmp() similar to msvcrt.dll. It returns the bytewise difference and checks the input pointers for 32-bit alignment. The mingw compiler links to the msvcrt.dll version of memcmp() when no optimization flags are specified.

However, if an optimization level higher than default is used (-O1, -O2, -O3), both cygwin and mingw gcc use a 1-byte compare loop (rep cmpsb). This is surprising. If the -fno-builtin flag is specified, both cygwin and mingw gcc revert to the previous behavior, calling the external implementation in their respective DLLs.

We disassembled the default cygwin system utilities and libraries and found they all used a 1-byte compare loop. This is likely because they were compiled with optimization enabled but without the -fno-builtin flag.

Summary

The default Windows memcmp() (msvcrt.dll or ntdll.dll) leaks bytewise timing information even when comparing in 32-bit chunks, due to its search for the differing byte once a mismatch is found. It also leaks bytewise information if either input pointer is not 32-bit aligned. MSVC always uses an external DLL memcmp().

Both cygwin and mingw gcc use a 1-byte compare loop if optimization is enabled (-O1 or higher). If no optimization is enabled, gcc links to a DLL (cygwin1.dll or msvcrt.dll) that has a 32-bit optimized memcmp(). This memcmp() leaks the bytewise difference just like msvcrt.dll.

Windows 7 (64-bit)

The memcmp() in Windows 7’s msvcrt.dll compares buffers longer than 8 bytes in 64-bit chunks. It does not return the bytewise difference if it finds a mismatch; it returns the difference of the trailing 32 bits. The only bytewise timing leak occurs if the input length is not a multiple of 8. In that case, the remaining bytes are compared one at a time.

Linux (32 and 64-bit, Ubuntu 10.04 with glibc 2.12.1)

The x86 32-bit memcmp() is a long unrolled set of 32-bit compares. When it finds a difference, it jumps to find_diff, which, like Windows, identifies exactly which byte differs. This leaks a bytewise timing delta.

Like Windows, the Linux memcmp() does not require finding the differing byte:

The memcmp() function returns an integer less than, equal to, or greater than zero if the first n bytes of s1 is found, respectively, to be less than, to match, or be greater than the first n bytes of s2.

The 64-bit memcmp() uses a tricky combination of a bit-scan instruction (bsf) and shifts to identify the first byte difference without branching. So if the input is a multiple of 4, 8, or 16 bytes, it should not reveal a timing delta for any smaller units. If the input length is not a multiple of these values, smaller differences are visible.
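The idea looks like this in C (our reconstruction of the trick, not glibc’s actual code):

#include <stdint.h>

/* Given two 64-bit words loaded from little-endian memory that are
 * known to differ, find the first differing byte without a byte loop:
 * XOR to get the differing bits, bit-scan (bsf, here __builtin_ctzll)
 * to find the lowest one, round down to a byte boundary, then extract
 * and subtract that byte from each word. */
static int first_byte_diff(uint64_t wa, uint64_t wb)
{
    unsigned shift = __builtin_ctzll(wa ^ wb) & ~7u;
    return (int)((wa >> shift) & 0xff) - (int)((wb >> shift) & 0xff);
}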

Linux has optimized versions for other architectures as well, but we did not examine them.

Summary

Only the Linux 64-bit memcmp() does not reveal bytewise timing deltas. The 32-bit version behaves the same as Windows.

FreeBSD 8.0 (32 and 64-bit, gcc 4.2.1)

The system libc.so and libc.a have an optimized memcmp() for the most popular CPU architectures (i386, amd64, and arm). For i386, it consists of a 32-bit compare over the length in 4-byte chunks (rep cmpsd), followed by up to 3 single-byte compares (rep cmpsb) for the remainder. There is no alignment check first, so behavior with unaligned pointers is identical. The amd64 version behaves the same as i386, but works in units of 8-byte quadwords (rep cmpsq).

If a difference is found in the word comparison, all implementations of memcmp() back up and find the first differing byte using rep cmpsb. This is required by the man page:

The memcmp() function returns zero if the two strings are identical, otherwise returns the difference between the first two differing bytes (treated as unsigned char values, so that ‘\200’ is greater than ‘\0’, for example). Zero-length strings are always identical.

For all other platforms (sparc, mips, ia64), the generic implementation of memcmp() is used. It is a loop of 1-byte compares, implemented in C.

GCC does not use the libc version of memcmp() by default. It uses a built-in version that consists of a rep cmpsb over the entire input, no matter what the optimization level (-O1, -O2, -O3). If we added the -fno-builtin flag, gcc generated calls to the libc memcmp() instead; again, the optimization level made no difference.

We disassembled the default system utilities on FreeBSD 8.0 and found none of them were compiled with -fno-builtin, and thus all used the 1-byte memcmp.

Summary

By default, gcc on FreeBSD uses a completely timeable memcmp() that works in 1-byte units, regardless of optimization level. If the -fno-builtin flag is used, the libc memcmp() is linked in instead. The libc memcmp() is no more resistant to timing attacks, since it backs up to find the first differing byte. The sparc, mips, and ia64 platforms always use a 1-byte loop.

August 2, 2010

Magic numbers in Excel waste my time

Filed under: Software engineering,Windows — Nate Lawson @ 8:00 am

One of the tools I created recently output its data in CSV format. The Python CSV library is quite nice. However, opening the file in Excel gave the error “SYLK: file format is not valid” or “Excel has detected that ‘test.csv’ is a SYLK file, but cannot load it.” OpenOffice handled the file just fine.

It turns out that a CSV file with the first two bytes set to “ID” (case-sensitive) is detected as a different file format by Excel. And this is why I hate software.

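Since the detection is case-sensitive, changing the case of the first header field is enough to avoid it. A minimal illustration (in C here, though the same applies in any language):

#include <stdio.h>

/* Sketch of the workaround: Excel's SYLK sniffing triggers on the
 * literal bytes "ID" at the start of the file, so "Id" (or "id")
 * keeps the file a plain CSV. */
int main(void)
{
    FILE *f = fopen("test.csv", "w");
    if (!f)
        return 1;
    fprintf(f, "Id,value\n");   /* "ID,value\n" would trigger SYLK detection */
    fprintf(f, "1,42\n");
    fclose(f);
    return 0;
}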
