SSL gives point-to-point, not end-to-end security

At Summercon, Jon Oberheide gave a talk on Android security. He described a trojan app called RootStrap that checks a remote server for native code updates and executes whatever it downloads. This is possible because Android places no restrictions on native code, whether it is installed with an app or downloaded later. The only limitation is that the code runs under the app’s unprivileged UID.

The more interesting part of the talk gave an overview of how the app installation process interacts with GTalkService. In a follow-up post today, Jon gave more analysis of this installation mechanism. Unlike other parts of Android, this service is not open source, so you have to disassemble the DEX file to see how it works.

When you select “install” on the Market app, the phone doesn’t download and install the app immediately. Instead, Google’s server sends an INSTALL_APP message to GTalkService on the phone, which downloads and installs the app. The message is delivered over SSL and includes a SHA-1 hash of the app’s installer (.apk). While this is better than no authentication, the link between the user’s action and the actual code installed is tenuous.
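If the hash check works the way it appears to, it amounts to something like the following sketch (the function and parameter names are mine, not Google’s): the phone hashes the downloaded installer and compares it to the value carried in the INSTALL_APP message before installing.

```python
# Sketch of the apparent integrity check (names hypothetical): compare the
# downloaded .apk against the SHA-1 hash carried in the INSTALL_APP message.
import hashlib

def apk_matches_message(apk_bytes, expected_sha1_hex):
    """True if the downloaded installer matches the hash from INSTALL_APP."""
    return hashlib.sha1(apk_bytes).hexdigest() == expected_sha1_hex.lower()
```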

SSL provides good point-to-point privacy and integrity protection. However, there is no guarantee to upper layers that SSL is indeed in use. Few, if any, programs query the SSL layer to check the state of the connection, do additional cert validation, etc. Even when active, SSL provides point-to-point, not end-to-end security.
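For contrast, here is a minimal sketch (modern Python, hypothetical pin value) of the kind of upper-layer check almost nobody does: after the handshake, pull the peer cert out of the SSL layer and validate it against a fingerprint obtained out of band.

```python
import hashlib
import socket
import ssl

# Pinned SHA-1 fingerprint of the expected server cert, obtained out of
# band (placeholder value; a real client would hardcode the known pin).
EXPECTED_SHA1 = "0000000000000000000000000000000000000000"

ctx = ssl.create_default_context()  # normal chain and hostname validation
with socket.create_connection(("www.google.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.google.com") as tls:
        der = tls.getpeercert(binary_form=True)  # cert actually presented
        if hashlib.sha1(der).hexdigest() != EXPECTED_SHA1:
            raise ssl.SSLError("peer cert does not match pinned fingerprint")
```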

In today’s computing environment, there are seldom only two systems involved in a transaction. Even if the apps were stored on a single Google server, they are still compiled and signed on other systems. Anywhere along that production chain, a compromise could lead to apps being trojaned and surreptitiously pushed to many Android phones.

Android does provide some security in its code-signing model. The developer’s signature on the .apk is basically a JAR signature. The hash of the signing cert determines which UID an app gets, and thus whether a new app can access the same data as a previous one. However, this only protects data created by existing apps from other apps not signed with the same key. It says nothing about the legitimacy of the code itself, since developers sign their own apps, often with a self-signed cert.
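In other words, the check is a same-key comparison and nothing more. A toy sketch of that rule (the function name and hash comparison are illustrative):

```python
import hashlib

def same_signer(cert_a_der, cert_b_der):
    """True only if both .apks were signed with the identical certificate,
    which is what entitles them to the same UID and each other's data."""
    return hashlib.sha1(cert_a_der).digest() == hashlib.sha1(cert_b_der).digest()
```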

Since the INSTALL_APP message does not appear to carry a signature of its own, its only protection is SSL. Could an attacker who managed to inject messages into the Google server replay this message, causing phones everywhere to install their malware? Will phones install apps without the Market service requesting it?

We’ll have to see what turns up as more info on GTalkService comes to light. The installation process should include a challenge/response value for liveness (perhaps this is the “tickle_id” field?), and the installed APK should be tied to the phone’s install challenge by a Google signature. After all, Android ships with a list of CAs. Why can’t Google include some limited CA for their own domains to enable this signing?
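A sketch of what that might look like (all names are hypothetical, and an HMAC stands in for the RSA signature a real deployment would make with Google’s CA key): the phone sends a fresh nonce with its install request, and the server returns a tag binding that nonce to the hash of the exact .apk being installed.

```python
import hashlib
import hmac
import os

def new_install_challenge():
    """Fresh nonce the phone sends along with its install request."""
    return os.urandom(16)

def phone_accepts(nonce, apk_bytes, tag, server_key):
    """Verify the server's binding of this .apk to this phone's nonce,
    so a captured INSTALL_APP message can't be replayed elsewhere."""
    expected = hmac.new(server_key,
                        nonce + hashlib.sha1(apk_bytes).digest(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```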

This is a good example of how SSL provides only point-to-point, not end-to-end, security. While SSL is great for transactions, application-level functions such as updates need additional protection, especially in today’s multi-server environment.

Don’t use the included version of IDAPython

I ran into a problem the other day with IDAPython. It’s a great tool, but the version that ships with IDA is often out of date. Lesson: always update your copy of IDAPython after installing or upgrading IDA.

The problem I saw was that the idautils.Functions() generator never returned any functions. I added lots of debugging prints and found that Segments() worked, but Functions() never found a function, no matter what range of addresses I gave it. It turned out that Functions() would never locate any functions unless the first address was the exact EA of a function.

This was in IDA 5.5 with its default IDAPython. Here’s the commit that fixed the bug. Since there are other bugs in older releases, I recommend updating to 1.3.2 (Windows binary or SVN source).
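If you can’t update for some reason, a workaround along these lines should give the intended behavior (an untested sketch using the old, pre-7.0 API names; updating is still the right fix):

```python
# Walk functions via idaapi directly so a non-function start address
# still yields results, unlike the buggy idautils.Functions().
import idaapi
import idc

def functions_between(start, end):
    func = idaapi.get_func(start)           # function at (or containing) start
    if func is None or func.startEA < start:
        func = idaapi.get_next_func(start)  # otherwise, first one after it
    while func is not None and func.startEA < end:
        yield func.startEA
        func = idaapi.get_next_func(func.startEA)

for ea in functions_between(idc.MinEA(), idc.MaxEA()):
    print("%08x: %s" % (ea, idc.GetFunctionName(ea)))
```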

A new direction for homebrew console hackers?

A recent article on game console hacking focused on the Wii and a group of enthusiasts who hack it in order to run Linux or homebrew games. The article is very interesting and delves into the debate between those who hack consoles for fun and those who only care about piracy. The fundamental question behind all this: is there a way to separate the efforts of the two groups, limiting one more than the other?

Michael Steil and Felix Domke, who were mentioned in the article, gave a great talk about Xbox 360 security a few years ago. Michael compared the history of Xbox 360 security to that of the PS3 and Wii, among other consoles. (Here’s a direct link to the relevant portion of the video.) Of all the consoles, only the PS3 had not been hacked at the time, although it has since been hacked as well. Since the PS3 had an officially supported method of booting Linux, the homebrew community had less reason to attack it. It was secure from piracy for about three years, the longest of any of the modern consoles.

Michael’s claim was that all of the consoles had been hacked to run homebrew games or Linux, but the ultimate result was piracy. This was likely due to the hobbyists having more skill than the pirates, something that has also been the case with smartphones but less so with satellite TV. The case of the PS3 also supports his theory.

Starting back in the 1980s, there has been a history of software crackers getting jobs designing new protection methods. So what if the homebrew hackers put more effort into protecting their methods from the pirates? There are two approaches they might take: software or hardware protection.

Software protection has been used for exploits before. The original Xbox save game exploit used some interesting obfuscation techniques to limit it to only booting Linux. It stored its payload encrypted in the JPEG header of a penguin image. And it didn’t bypass code signature verification completely; instead, it modified the Xbox’s RSA public key to have a trivial factor, which allowed the author to sign his own images with a different private key.
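The math behind that last trick is simple. A toy illustration (tiny numbers, nothing like the real exploit’s values): if the patched modulus has factors you know, you can derive a private exponent that makes your own signatures verify.

```python
# Patch the verifier's RSA modulus to one with known factors, then derive
# a matching private exponent and sign your own image. Toy-sized numbers.
p, q = 61, 53                    # attacker-known factors of the new modulus
n, e = p * q, 17                 # n replaces the console's real public modulus
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ inverse)

digest = 1234 % n                # stand-in for the padded hash of a homebrew image
sig = pow(digest, d, n)          # sign with the derived key
assert pow(sig, e, n) == digest  # the patched verify step now accepts it
```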

Even with all this work, it took only about three months for someone to reverse-engineer it. At that point, the same hole could have been used to run pirated games. However, this hack didn’t directly enable piracy because modchip-based methods were already in use. So while obfuscation bought some time before pirates could get access to the exploit, it wasn’t much.

Another approach is to embed the exploit in a modchip. These have long been used by pirates to protect their exploits from other pirates. As soon as another group clones an exploit, the price invariably goes down. Depending on the exploitation method and protection skill of the designer, reverse-engineering the modchip can be as hard as developing the exploit independently.

The homebrew community does not release many modchips because of the development cost. But if they did, it’s possible they could reduce the risk of piracy from their exploits. It would be interesting to see a homebrew-only modchip, where games were signed by a key that certified they were independently developed and not just a copy of a commercial game. The modchip could even be a platform for limiting exploitation of new holes that were only used for piracy. In effect, the homebrew hackers would be setting up their own parallel system of control to enforce their own code of ethics.
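A sketch of the check such a homebrew-only modchip might perform at boot (toy RSA verify with hypothetical key values; real hardware would use proper padding and key sizes):

```python
import hashlib

HOMEBREW_CA_N = 3233   # homebrew CA's public modulus (toy-sized here)
HOMEBREW_CA_E = 17     # public exponent

def allow_boot(image_bytes, signature):
    """Boot only images whose signature certifies independent development."""
    digest = int.from_bytes(hashlib.sha256(image_bytes).digest(),
                            "big") % HOMEBREW_CA_N
    return pow(signature, HOMEBREW_CA_E, HOMEBREW_CA_N) == digest
```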

Software and hardware protection could slow down pirates acquiring exploits. However, the approach that has already proven effective is to divert the attention of homebrew hackers by giving them official, if limited, access to the hardware. Game console vendors should take the dynamics between homebrew hackers and pirates into account in order to protect their platform’s revenue.

But homebrew hackers, what can you do about it? Can you design a survivable system that keeps your favorite console safe from piracy while enabling homebrew? Can you enforce a code of ethics within your group via technical measures? If anyone can make this happen, you can.