March 17, 2008

Apple iPhone bootloader attack

Filed under: Crypto,Embedded,Hacking,Hardware,Security,Software protection — Nate Lawson @ 1:53 pm

News and details of the first iPhone bootloader hack appeared today. My analysis of the publicly-available details released by the iPhone Dev Team is that this has nothing to do with a possible new version of the iPhone, contrary to Slashdot. It involves exploiting low-level software, not the new SDK. It is a good example of how systems built from a series of links in a chain are brittle. Small modifications (in this case, a patch of a few bytes) can compromise this kind of design, and it’s hard to verify that all links are free of such flaws.

A brief disclaimer: I don’t own an iPhone nor have I seen all the details of the attack. So my summary may be incomplete, although the basic principles should be applicable. My analysis is also completely based on published details, so I apologize for any inaccuracies.

For those who are new to the iPhone architecture, here’s a brief recap of what hackers have found and published. The iPhone has two CPUs of interest, the main ARM11 applications processor and an Infineon GSM processor. Most hacks up until now have involved compromising applications running on the main CPU to load code (aka “jailbreak”). Then, using that vantage point, the attacker will run a flash utility (say, “bbupdater”) to patch the GSM CPU to ignore the type of SIM installed and unlock the phone to run on other networks.

As holes have been found in the usermode application software, Apple has released firmware updates that patch them. This latest attack is a pretty big advance in that now a software attack can fully compromise the bootloader, which provides lower-level control and may be harder to patch.

The iPhone boot sequence, according to public docs, is as follows. The ARM CPU begins executing a secure bootloader (probably in ROM) on power-up. It then starts a low-level bootloader (“LLB”), which then runs the main bootloader, “iBoot”. The iBoot loader starts the OSX kernel, which then launches the familiar Unix usermode environment. This appears to be a traditional chain-of-trust model, where each element verifies the next element is trusted and then launches it.
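The chain-of-trust model described above can be sketched as follows. This is a minimal illustration, not Apple's implementation: the stage names come from the public docs, but the hash-based `digest` stands in for the real RSA signature scheme.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Boot stages in order; each stage knows the expected digest of the next.
STAGES = ["secure bootloader", "LLB", "iBoot", "kernel"]

def boot(images: dict, expected: dict) -> list:
    """Walk the chain: each link verifies the next before handing off."""
    launched = []
    for stage, nxt in zip(STAGES, STAGES[1:]):
        launched.append(stage)
        if digest(images[nxt]) != expected[nxt]:
            raise RuntimeError(f"{stage}: verification of {nxt} failed")
    launched.append(STAGES[-1])
    return launched

images = {name: name.encode() for name in STAGES}
expected = {name: digest(images[name]) for name in STAGES}
assert boot(images, expected) == STAGES
```

Note that the first link is never verified by anything; it is trusted because it lives in ROM. Everything downstream depends on each check along the way being intact.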

Once one link of this chain is compromised, it can fully control all the links that follow it. Additionally, since developers may assume all links of the chain are trusted, they may not protect upstream elements from potentially malicious downstream ones. For example, the secure bootloader might not protect against malicious input from iBoot if part or all of it remains active after iBoot is launched.

This new attack takes advantage of two properties of the bootloader system. The first is that NOR flash is trusted implicitly. The other is that there appears to be an unauthenticated system for patching the secure bootloader.

There are two kinds of flash in the iPhone: NOR and NAND. Each has different properties useful to embedded designers. NOR flash is byte-addressable and thus can be directly executed. However, it is more costly and so usually much smaller than NAND. NAND flash must be accessed via a complicated series of steps and only in page-size chunks. However, it is much cheaper to manufacture in bulk, and so is used as the 4 or 8 GB main storage in the iPhone. The NOR flash is apparently used as a kind of cache for applications.

The first problem is that software in the NOR flash is apparently unsigned. In fact, the associated signature is discarded as verified software is written to flash. So if an attacker can get access to the flash chip pins, he can just store unsigned applications there directly. However, this requires opening up the iPhone and so a software-only attack is more desirable. If there is some way to get an unsigned application copied to NOR flash, then it is indistinguishable from a properly verified app and will be run by trusted software.
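The verify-then-discard pattern can be illustrated like this. All function names and the signature scheme here are invented for the sketch; the real flash layout and install path are more involved.

```python
import hashlib

def sign(code: bytes, key: bytes) -> str:
    # Stand-in for the real RSA signature (hypothetical scheme).
    return hashlib.sha256(key + code).hexdigest()

NOR = {}  # name -> bare code; note that no signature is kept

def install(name: str, code: bytes, signature: str, key: bytes) -> None:
    """Verify once at install time, then discard the signature."""
    if sign(code, key) != signature:
        raise ValueError("bad signature")
    NOR[name] = code  # only the raw bytes survive

def run_from_nor(name: str) -> bytes:
    # At boot, whatever sits in NOR is trusted implicitly.
    return NOR[name]

# An attacker with access to the flash pins bypasses install() entirely:
NOR["evil"] = b"unsigned payload"
assert run_from_nor("evil") == b"unsigned payload"
```

Once the unsigned payload is in NOR, nothing distinguishes it from code that went through `install()` and passed verification.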

The second problem is that there is a way to patch parts of the secure bootloader before iBoot uses them. It seems that the secure bootloader acts as a library for iBoot, providing an API for verifying signatures on applications. During initialization, iBoot copies the secure bootloader to RAM and then performs a series of fix-ups for function pointers that redirect back into iBoot itself. This is a standard procedure for embedded systems that work with different versions of software. Much as the Windows loader fixes up a PE image’s import table and relocations at load time, iBoot has a table of offsets and byte patches it applies to the secure bootloader before calling it. This allows a single version of the secure bootloader in ROM to be used with ever-changing iBoot revisions since iBoot has the intelligence to “fix up” the library before using it.

The hackers have taken advantage of this table to add their own patches. In this case, the patch is to disable the “is RSA signature correct?” portion of the code in the bootloader library after it’s been copied to RAM. This means that the function will now always return OK, no matter what the signature actually is.
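The fix-up mechanism and its abuse might look roughly like this. Everything here is illustrative: the offsets, byte patterns, and marker strings are invented, and real fix-ups would be ARM instruction patches, not ASCII tags.

```python
# The secure bootloader "library" as copied from ROM into RAM.
rom_image = bytearray(b"\x00" * 16 + b"CHECK_RSA" + b"\x00" * 16)

# iBoot's legitimate fix-up table: (offset, replacement bytes) pairs
# that redirect library function pointers back into iBoot.
fixups = [(0, b"\x01\x02")]

# The attack: append one more entry that stomps the signature-check
# routine so it always reports success.
fixups.append((16, b"RETURN_OK"))

ram_image = bytearray(rom_image)
for offset, patch in fixups:
    ram_image[offset:offset + len(patch)] = patch

assert b"CHECK_RSA" not in ram_image
assert b"RETURN_OK" in ram_image
```

The patching loop itself is doing exactly what it was designed to do; the flaw is that nothing authenticates the table it consumes.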

There are a number of ways this attack could have been prevented. The first is to use a mesh-based design instead of a chain with a long series of links. This would be a major paradigm shift, but additional upstream and downstream integrity checks could have found that the secure bootloader had been modified and was thus untrustworthy. This would also catch attackers if they used other means to modify the bootloader execution, say by glitching the CPU as it executed.
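One way to read the mesh idea: rather than each stage checking only its immediate successor, components re-verify one another. The toy sketch below uses invented component names and contents; real mesh designs would interleave such checks throughout execution rather than run them once.

```python
import hashlib

def digest(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# Toy component images (names and contents are invented).
images = {"secure_boot": b"SB", "iBoot": b"IB", "kernel": b"K"}
expected = {name: digest(code) for name, code in images.items()}

def cross_check(images: dict, expected: dict) -> bool:
    # Every component's image is re-verified, so tampering with one
    # link (e.g. the secure bootloader copy in RAM) can be detected
    # from multiple independent vantage points, not just its parent.
    return all(digest(code) == expected[name]
               for name, code in images.items())

assert cross_check(images, expected)
images["secure_boot"] = b"patched"
assert not cross_check(images, expected)
```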

A simpler patch would be to include self-tests to be sure everything is working. For example, checking a random, known-bad signature at various times during execution would reveal that the signature verification routine had been modified. This would create multiple points that would need to be found and patched out by an attacker, reducing the likelihood that a single, well-located glitch is sufficient to bypass signature checking. This is another concrete example of applying mesh principles to security design.
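A known-bad-signature self-test of the kind suggested above could be sketched as follows. The scheme and function names are hypothetical; real firmware would run such probes at random points during execution rather than once.

```python
import hashlib

def sign(data: bytes, key: bytes) -> str:
    # Stand-in for the real RSA scheme (hypothetical).
    return hashlib.sha256(key + data).hexdigest()

def make_verifier(key: bytes):
    def verify(data: bytes, signature: str) -> bool:
        return sign(data, key) == signature
    return verify

def self_test(verify) -> bool:
    """Probe the verifier with a known-good and a known-bad signature.

    A patched-out verifier that always returns True fails the bad probe.
    """
    good = verify(b"probe", sign(b"probe", b"key"))
    bad = verify(b"probe", "0" * 64)  # deliberately wrong signature
    return good and not bad

verify = make_verifier(b"key")
assert self_test(verify)
assert not self_test(lambda data, sig: True)  # bypassed check is caught
```

The value of scattering several such probes is that a single well-placed patch no longer suffices; the attacker must find and disable every probe as well.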

Hackers are claiming there’s little or nothing Apple can do to counter this attack. It will be interesting to watch this as it develops and see if Apple comes up with a clever response.

Finally, if you find this kind of thing fascinating, be sure to come to my talk “Designing and Attacking DRM” at RSA 2008. I’ll be there all week so make sure to say “hi” if you will be also.


  1. You could have give credit to those who deserve it, instead of just saying “the hackers”. You are profiting on somebody else’s deeds and sayings (at least to promote your talk), and consciously avoiding to name them, shame on you.

    Comment by anonymous — March 19, 2008 @ 3:29 am

  2. anon, I’ve added a link to the iPhone Dev Team now, sorry to have missed that. I kept “the hackers” in the rest of the article since there are four or five of them. The original links went directly to their page also, so it should have been clear I’m not claiming credit for their work nor do I have inside information from them.

    Comment by Nate Lawson — March 19, 2008 @ 8:37 am

  3. Speaking of “designing and attacking” DRM, I saw this come across Engadget earlier today:


    Comment by PaulM — March 19, 2008 @ 11:33 am

  4. PaulM, two comments.

    First, the point of BD+ is that people will eventually break older versions of the per-disc security code. In this case, it took Slysoft 5 months but they’ll probably get faster.

    Second, the fact that some discs still can’t be ripped is evidence to support the fact that they haven’t broken BD+, just some per-disc security code. That’s what you’d expect since BD+ is merely a platform for the per-disc security code. It doesn’t make sense to say that “BD+ is broken” any more than you’d say “x86 is broken” when a particular game is cracked.

    [Edit: I can’t and won’t be commenting in any more detail than that on BD+, given that it’s part of my day job still. Thanks for understanding.]

    Comment by Nate Lawson — March 19, 2008 @ 1:21 pm

  5. Why you didn’t join to design the IPHONE? Instead of attacking those HACKERS who are tirelessly doing it free for the sake of others, like the ZIPHONE. Apple [IPHONE] is very STUPID to SIM lock the phone. Is APPLE didn’t realized that they have more profit when selling those IPHONES to outside US? Apple is the most STUPID and CAPITALISM company in the US, like the MAC, without MSwindows PC or other OS, maybe the price of MAC computer is still unreachable. Iphone must be SIM LOCK FREE even no any other programs added.

    Comment by F RUN — April 27, 2008 @ 8:23 am

  6. I like the zfone (Zimmerman’s latest effort). I don’t understand the rest of your comment but admire the range, from politics to economics in so few sentences.

    Comment by Nate Lawson — April 30, 2008 @ 3:03 pm

  7. I’m trying to find out if its at all possible to run a .kext on an unjailbroken iPhone.

    When the kernel is loaded, is the signature for the entire kernel image verified? Or just the signatures of any plugins there might be i.e. the .kext?

    If the entire kernel image signature is checked then I presume there’s no chance of a .kext running unless it was inside the kernel when signed, thus no possibility at all of running a .kext.
    If however its not the entire kernel that is checked, but .kexts’ signatures are checked individually when loaded, then there is a slim chance perhaps that I am able to sign the .kext with a specific certificate tied to the device’s IMEI or similar, and thus would be able to load the .kext on one phone only (which would suit my purposes).

    Comment by Martin Harrison — June 15, 2010 @ 8:23 am
