root labs rdist

August 31, 2009

Repeating a check sometimes makes sense

Filed under: Crypto,Reverse engineering,Security,Software protection — Nate Lawson @ 7:00 am

When reviewing source code, I often see a developer making claims like “this can’t happen here” or “to reach this point, X condition must have occurred”. These kinds of statements make a lot of assumptions about the execution environment. In some cases, the assumptions may be justified but often they are not, especially in the face of a motivated attacker.

One recent gain for disk reliability has been ZFS. This filesystem has a mode that generates SHA-256 hashes for all the data it stores on disk. This is an excellent way to make sure a read error hasn’t been silently corrected by the disk to the wrong value, or to catch firmware that lies about the status of the physical media. Cryptographic checksum support in filesystems is still a new feature due to the performance cost and the lingering doubt about whether it’s needed, since it ostensibly duplicates the built-in ECC in the drive.
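For example, on a system running ZFS, the stronger checksum can be turned on per dataset (the pool and dataset names here are hypothetical):

    # Use SHA-256 checksums for blocks written to this dataset from now on
    zfs set checksum=sha256 tank/data

    # Confirm the property took effect
    zfs get checksum tank/data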

Mirroring (RAID1) is great because if a disk error occurs, you can often fetch a copy from the other disk. However, it does not address the problem of being certain that an error actually occurred and that the data from the other drive is still valid. ZFS gives this assurance. Mirroring could actually be worse than no RAID if you were silently switched to a corrupted disk due to a transient timeout of the other drive. This could happen if its driver had bugs that caused it to time out on read requests under heavy load, even if the data was perfectly fine. This is not a hypothetical example. It actually happened to me with an old ATA drive.

Still, engineers resist adding any kind of redundant check, especially of computation results. For example, you rarely see code that verifies a subtraction by adding the result back and comparing it to the original input. Yet in a critical application, like calculating Bill Gates’ bank account statement, perhaps such paranoia would be warranted. In security-critical code, the developer needs even more paranoia since the threat is active manipulation of the computing environment, not accidental bit flips due to cosmic radiation.
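As a minimal sketch of what such a check might look like (the bank-balance setting and names are purely illustrative), the redundancy costs one addition and one comparison:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical example: debit an account, then verify the subtraction
     * by adding the result back and comparing against the original balance. */
    uint64_t debit(uint64_t balance, uint64_t amount)
    {
        if (amount > balance)
            abort();                      /* insufficient funds */

        uint64_t new_balance = balance - amount;

        /* Redundant check: a fault or tampering during the subtraction
         * makes this equality fail. */
        if (new_balance + amount != balance)
            abort();

        return new_balance;
    }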

Software protection is one area where redundancy is crucial. A common way to bypass various checks in a protection scheme is to patch them out. For example, the conditional branch that checks for a copied disc to determine whether the code should continue loading a game could be inverted or bypassed. Repeating this check in various places, implemented in different ways, makes it more difficult for an attacker to be sure they have removed them all.
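A rough sketch of that idea follows; the function names are hypothetical and real schemes are far more heavily obfuscated. The same test appears once as an obvious branch and once folded into data the program needs later, so flipping a single conditional is not enough:

    #include <stdlib.h>

    /* Hypothetical check supplied by the protection scheme:
     * returns 1 for an original disc, 0 for a copy. */
    extern int disc_is_original(void);

    /* First form: the obvious conditional branch an attacker looks for. */
    void check_direct(void)
    {
        if (!disc_is_original())
            exit(1);
    }

    /* Second form: no branch at all. The check result is folded into a key
     * used later to decrypt game data, so patching the call or forcing its
     * return value corrupts the decryption instead of cleanly bypassing it. */
    unsigned char derive_key_byte(unsigned char base)
    {
        return base ^ (unsigned char)(disc_is_original() * 0x5A);
    }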

As another example, many protection schemes aren’t very introspective. If there is a function that decrypts some data, often you can just grab a common set of arguments to that function and repeatedly call it from your own code, mutating the arguments each time. But if that function had a redundant check that walked the stack to make sure its caller was the one it expected, this would be harder to do. Now, if the whole program were sprinkled with various checks that validated the stack in many different ways, it becomes even harder to hack out any one piece to embed in the attacker’s code. To an ordinary developer, such redundant checks seem wasteful and useless.
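A simple version of such a caller check might look like the following sketch. It assumes GCC or Clang (for __builtin_return_address) and hypothetical symbols marking the code region that is allowed to call the function; a real scheme would repeat variations of this throughout the program:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical bounds of the only code region allowed to call us,
     * e.g. filled in at build time from the linker map. */
    extern const uintptr_t legit_caller_start, legit_caller_end;

    void decrypt_block(unsigned char *buf, size_t len, const unsigned char *key)
    {
        /* Redundant caller check: reject calls whose return address falls
         * outside the expected region, such as an attacker's injected stub. */
        uintptr_t ret = (uintptr_t)__builtin_return_address(0);
        if (ret < legit_caller_start || ret >= legit_caller_end)
            abort();

        /* ... actual decryption elided ... */
        (void)buf; (void)len; (void)key;
    }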

Besides software protection, crypto is another area that needs this kind of redundant checking. I previously wrote about how you can recover an RSA private key merely by disrupting any part of the signing operation (e.g., by injecting a fault). The solution to this is for the signer to verify the signature they just made before revealing the signature to an attacker. That is, a sign operation now performs both a sign and verify. If you weren’t concerned about this attack, this redundant operation would seem useless. But without it, an attacker can recover a private key after observing only a single faulty signature.
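In code, the countermeasure is just a verify bolted onto the end of every sign operation. The primitives below are hypothetical stand-ins for whatever RSA library is in use; only the control flow matters:

    #include <stddef.h>

    /* Hypothetical signing primitives; names and signatures are illustrative,
     * not any particular library's API. Both return 0 on success. */
    extern int rsa_sign(const unsigned char *msg, size_t msg_len,
                        unsigned char *sig, size_t *sig_len, const void *priv_key);
    extern int rsa_verify(const unsigned char *msg, size_t msg_len,
                          const unsigned char *sig, size_t sig_len, const void *pub_key);

    /* Sign, then immediately verify with the public key before releasing the
     * signature. A fault during signing (especially RSA-CRT) yields a bad
     * signature that fails this check and is never handed to the attacker. */
    int sign_checked(const unsigned char *msg, size_t msg_len,
                     unsigned char *sig, size_t *sig_len,
                     const void *priv_key, const void *pub_key)
    {
        if (rsa_sign(msg, msg_len, sig, sig_len, priv_key) != 0)
            return -1;
        if (rsa_verify(msg, msg_len, sig, *sig_len, pub_key) != 0)
            return -1;        /* possible fault: discard the signature */
        return 0;
    }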

We need developers to be more paranoid about the validity of stored or computed values, especially in an environment where there could be an adversary. Both software protection and crypto are areas where a very powerful attacker is always assumed to be present. However, other areas of software development could also benefit from this attitude, increasing security and reliability by “wasting” a couple of cycles.

August 11, 2009

Awesome C64 visual debugger

Filed under: C64,Hacking,Retrocomputing,Reverse engineering — Nate Lawson @ 2:46 pm

I recently ran across a new visual debugger for C64 emulators called ICU64, written by Mathfigure. It will be released later this year and provides some amazing visualizations of both memory access and memory-mapped devices like the video and sound chip. See the video below for how it works.

There is also another video showing how the classic game Boulder Dash can be expanded to span multiple screens. He implemented this by grabbing graphics from the same RAM areas the game uses and displaying them on a custom screen. In this forum thread, the author discusses his goal of making classic games more graphically expandable.

Learning to program on a small machine with a full view of every memory location is something I’ve always seen as a great way to start. I got a lot of insight into how my computer worked by running a background task that continually copied pages of RAM to the screen so I could “scroll through” memory. For example, the tick counter showed up as a blur.

I’ve always said that my kids would have to learn to program a C64 before they got a more modern system. Wouldn’t this be a great tool for teaching anyone new to low-level computing how assembly language and memory-mapped devices work? What about adapting some of these techniques to fuzzing or modern debuggers?

[Edit: added a link to the project now that it has been released]

August 10, 2009

Next Baysec: Aug 18 at Kate O’Briens

Filed under: Security — Nate Lawson @ 3:27 pm

The next Baysec meeting is August 18th at Kate O’Briens. Come out and meet fellow security people from all over the Bay Area. As always, this is not a sponsored meeting, there is no agenda or speakers, and no RSVP is needed.

See you Tuesday, August 18th, 7-11 pm. We’ll be towards the back.

Kate O’Briens
579 Howard St. @ 2nd, San Francisco
(415) 882-7240

August 6, 2009

Google Tech Talk on common crypto flaws

Filed under: Crypto,Network,Protocols,Security — Nate Lawson @ 10:11 pm

Yesterday, I gave a talk at Google on common crypto flaws, with a focus on web-based applications. As with my previous talk, the goal was to convince developers that the most prevalent crypto libraries are too low-level, so they should reexamine their design if it requires a custom crypto protocol.

Designing a crypto protocol is extremely costly due to the careful review that is required and the huge potential damage if a flaw is found. Instead of incurring that cost, it’s often better to keep state on the server or use well-reviewed libraries that provide SSL or GPG interfaces. There are no flaws in the crypto you avoid implementing.

I hope you enjoy the video below. Slides are also posted here.
