Well-funded and motivated attackers are typically the hardest to defend against when designing a system. Governments can attack systems in different ways and with more resources than a typical threat. Consider a recent example where a British aide lost his Blackberry after spending the night with a woman who approached him in a Chinese disco. While it’s possible he just lost it while drunk, this is a good example of how unconventional threats need to be carefully considered.
Let’s analyze the cost of two routes to getting this same information: hacker or hooker. The hacker might try to crack passwords or develop a 0-day exploit against the Blackberry server. Or, build a custom trojan and send it via a forged email that appears to come from the Prime Minister. The hooker would try to get to his hotel room and steal the phone. It would actually suffice to just borrow it for a few minutes and dump the RAM since passwords are often cached there. This has the added advantage that he might never know anything had happened.
A 0-day exploit could be in the $20,000 range. Hiring someone to develop and target a trojan at this aide would be less, but the chance of succeeding would be lower. According to the stories about Eliot Spitzer, a high-end call girl is $1,500 per hour. Assuming it takes four hours, the total cost would be $6,000. The fact that both these approaches could be done in China means the actual cost would be lower but probably still a similar ratio.
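For what it’s worth, the back-of-the-envelope arithmetic is simple enough to spell out; this little sketch just restates the estimates above (the dollar figures are the same rough guesses, nothing more):

```python
# Rough cost comparison using the estimates quoted above.
exploit_cost = 20_000        # developing or buying a 0-day, USD
call_girl_rate = 1_500       # per hour, per the Spitzer reports
hours_needed = 4

hooker_cost = call_girl_rate * hours_needed
print(f"hacker (0-day exploit): ${exploit_cost:,}")
print(f"hooker (4 hours):       ${hooker_cost:,}")
print(f"cost ratio:             {exploit_cost / hooker_cost:.1f} : 1")
```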
There are a lot of other advantages to the hooker approach besides cost. There is good deniability if the call girl gets caught. Since the call girl remains within the attacking country’s jurisdiction, the police can be used to recover the Blackberry if she makes an extortion attempt. The hacker approach has a lot more uncertainty as flaws could be patched or blocked, making the exploit useless.
I also think this gives good support to my claim that software protection techniques are on the verge of wider adoption. With cold boot attacks and growing news of governments seizing laptops or stealing cell phones, systems must remain secure even when an attacker has physical possession of a powered-up device. The only way to do this is to adopt software and hardware techniques that are already used to protect games, DRM, and satellite TV. Traditional approaches like those used in network security are no longer enough.
I’ll be speaking on this topic along with Thomas Ptacek at WOOT, co-located with USENIX on July 28th in San Jose. Since this event is invite-only, send me email if you’re a security researcher who would like to attend. Please include a brief summary of your background.
This post reminds me of a quote attributed to Bob Morris (the elder). Paraphrasing: When considering the security of information, don’t forget about the 3 B’s: Blackmail, Bribery, and Burglary. Typically, it is the cheapest of those three that ends up getting the job done.
Talking about defenses against attackers with possession of the device, you might notice that even if you steal someone’s Blackberry, you’re not home free when it comes to getting the data off of it.
It wipes the data if you try the wrong password 10 times. So even assuming the user didn’t have its native encryption turned on (articles seem to imply that Mr. Downing Street Casanova didn’t), you still need a 0-day to beat the lock, or else you have to open it up to take the flash off the board (anyone know how easy/hard this is?). I have heard from friends who play with this sort of thing that Blackberries don’t have JTAG exposed.
I have a dumb question. Why isn’t encryption built-in with no option to enable or disable it? There’s an easy way to add encryption at the factory.
I’m looking at this teardown of the Blackberry 8700. It has an Intel PXA (XScale) processor with NOR flash and SDRAM in the same package. So when the OS is installed for the first time at the factory, the device would create a random encryption key, and all data written to the external, bulk-storage flash would be encrypted with it. That key would then be encrypted with the default password (“password”?) and written to the internal flash.
When a user took delivery of the device, they could change the password and thus re-encrypt the key. As long as no one had dumped the internal NOR flash before the user set a password, the original key would just be decrypted with the default password and re-encrypted with the new passphrase. If the user was paranoid, they could request re-encryption of all the data under a new key instead.
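Here is a minimal sketch of that key-wrapping idea in Python (purely hypothetical names and parameters, not RIM’s actual design; PBKDF2 and AES-GCM from the third-party cryptography package stand in for whatever primitives the device would really use):

```python
# Hypothetical sketch of factory key provisioning and password change.
# Not the actual Blackberry design; requires the "cryptography" package.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_kek(password: str, salt: bytes) -> bytes:
    """Derive a key-encryption key (KEK) from the password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000, dklen=32)

def factory_provision(default_password: str = "password") -> dict:
    """At the factory: generate a random bulk-data key, wrap it under the default password."""
    data_key = AESGCM.generate_key(bit_length=256)  # encrypts everything on the bulk flash
    salt, nonce = os.urandom(16), os.urandom(12)
    wrapped = AESGCM(derive_kek(default_password, salt)).encrypt(nonce, data_key, None)
    # salt/nonce/wrapped_key would live in the internal NOR flash
    return {"salt": salt, "nonce": nonce, "wrapped_key": wrapped}

def change_password(record: dict, old_password: str, new_password: str) -> dict:
    """Unwrap the data key with the old password and re-wrap it under the new one.
    The bulk data itself never needs to be re-encrypted."""
    kek = derive_kek(old_password, record["salt"])
    data_key = AESGCM(kek).decrypt(record["nonce"], record["wrapped_key"], None)
    salt, nonce = os.urandom(16), os.urandom(12)
    wrapped = AESGCM(derive_kek(new_password, salt)).encrypt(nonce, data_key, None)
    return {"salt": salt, "nonce": nonce, "wrapped_key": wrapped}

# Example: record = factory_provision(); record = change_password(record, "password", "hunter2")
```

The point is that changing the password only re-wraps a 32-byte key; the paranoid option of re-encrypting all the bulk data under a fresh key is the only expensive operation.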
Some day that will be expected of manufacturers. As for attacking a Blackberry today, it’s likely that the device does not auto-lock and require a password so you can transfer data off it just like the user would. To catch dumb thieves, it would probably suffice to review Blackberry server logs for outgoing messages.
For hardware attacks, it seems the external flash is NOR as well. That is not as cheap as NAND so it’s possible they execute directly out of it. If that’s the case, pick a binary and patch it to get root.
I may be talking out of a random orifice, but I would guess encryption is not universal because it hurts performance too much on a system with a 300 MHz processor.
How easy is it to patch flash without JTAG?
Company Blackberries do have required passwords and auto-lock screens (mine locked after 1 minute of inactivity, or as soon as I put it in its holster). I think the ‘consumer’ variety doesn’t do that, but I’ve never had one so I’m not sure.
I have another explanation for why encryption is not universal: data recovery when the user has lost his passphrase or password. And he will forget it, believe me. Then either you provide a trapdoor (and we are back where we started from the hacker’s point of view) or he will definitely lose his data.
Law 6: You are the weakest link. Handling passwords is a difficult task.
By the way, nice blog. You will hear from me often. #;-)