root labs rdist

June 24, 2014

Timing-safe memcmp and API parity

Filed under: Security — Nate Lawson @ 4:03 am

OpenBSD released a new API with a timing-safe bcmp and memcmp. I strongly agree with their strategy of encouraging developers to adopt “safe” APIs, even at a slight performance loss. The strlcpy/strlcat family of functions they pioneered has been immensely helpful against overflows.

Data-independent timing routines are extremely hard to get right, and the farther you are from the hardware, the harder it is to avoid unintended leakage. Your best bet, if you’re working in an embedded environment, is to use assembly and thoroughly test on the target CPU under multiple scenarios (interrupts, power management throttling the clock, etc.). Moving up to C creates a lot of pitfalls, especially if you support multiple compilers and versions. Now you are subject to micro-architectural variance such as caches, branch prediction, and bus contention. And compilers have a lot of leeway to optimize away code whose precise behavior you were counting on.

While I think the timing-safe bcmp (straightforward comparison for equality) is useful, I’m more concerned with the new memcmp variant. It is more complicated and subject to compiler and CPU quirks (because of the additional ordering requirements), may confuse developers who really want bcmp, and could encourage unsafe designs.

If you ask a C developer to implement bytewise comparison, they’ll almost always choose memcmp(). (The “b” series of functions is mostly a BSD convention and isn’t found on Windows or POSIX platforms.) This means that developers using timingsafe_memcmp() will be incorporating unnecessary features simply by picking the familiar name. If compiler or CPU variation compromised this routine, it would introduce a vulnerability. John-Mark pointed out to me a few ways the current implementation could fail due to compiler optimizations. While the bcmp routine is simpler (an XOR-accumulate loop), it too could be invalidated by optimizations such as vectorization.
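
To make the distinction concrete, here is a minimal sketch of a constant-time equality check in the spirit of OpenBSD’s timingsafe_bcmp(); ct_bcmp is a placeholder name, and this is not their exact code. The loop touches every byte and never branches on secret data, but even something this simple should be checked against the generated assembly, since a sufficiently aggressive compiler could still transform it.

    #include <stddef.h>

    /*
     * Constant-time equality check (sketch). Returns 0 if the buffers are
     * equal, nonzero otherwise. The XOR of every byte pair is accumulated
     * so the amount of work does not depend on where (or whether) the
     * buffers differ.
     */
    int
    ct_bcmp(const void *b1, const void *b2, size_t n)
    {
        const unsigned char *p1 = b1, *p2 = b2;
        unsigned char ret = 0;

        while (n-- > 0)
            ret |= *p1++ ^ *p2++;
        return (ret != 0);
    }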

The most important concern is whether this will encourage unsafe designs. I can’t come up with a crypto design that requires ordering of secret data and isn’t also a terrible idea. Sorting your AES keys? Why? Don’t do that. Database index lookups that reveal secret contents? Making the array comparison constant-time fixes nothing when the index lookup itself leaks timing through large RAM and disk reads. In any scenario that requires ordering secret data, there are much larger architectural issues to address than a comparison function.

Simple timing-independent comparison is an important primitive, but it should be used only when other measures are not available. If you’re concerned about HMAC timing leaks, you could instead hash or double-HMAC the data and compare the results with a variable-timing comparison routine. This takes a tiny bit longer but ensures any leaks are useless to an attacker. Such algorithmic changes are much safer than trying to set compiler and CPU behavior in concrete.
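
As a concrete illustration of the hash-and-compare idea, here is a minimal sketch assuming OpenSSL is available; mac_equal() is a hypothetical helper, not an existing API. Both inputs are re-MACed under a fresh random key before comparison, so even a plain variable-timing memcmp() leaks nothing useful: the attacker cannot predict or control the bytes that are actually compared.

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/rand.h>
    #include <openssl/sha.h>

    /*
     * "Double-HMAC" style comparison (sketch): re-MAC both values under a
     * random, per-comparison key, then compare the MACs. The final
     * memcmp() may leak where the two MACs first differ, but that reveals
     * nothing about where the original inputs differ.
     */
    int
    mac_equal(const unsigned char *a, const unsigned char *b, size_t len)
    {
        unsigned char key[32];
        unsigned char ha[SHA256_DIGEST_LENGTH], hb[SHA256_DIGEST_LENGTH];

        if (RAND_bytes(key, sizeof(key)) != 1)
            return 0;  /* fail closed if no randomness is available */
        HMAC(EVP_sha256(), key, sizeof(key), a, len, ha, NULL);
        HMAC(EVP_sha256(), key, sizeof(key), b, len, hb, NULL);
        return memcmp(ha, hb, sizeof(ha)) == 0;
    }

The cost is two extra HMAC computations per comparison, which is usually negligible next to the rest of the protocol.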

The justification I’ve heard from Ted Unangst is “API parity”. His argument is that developers will not use the timing-safe routines if they don’t conform to the ordering behavior of memcmp. I don’t get this argument. Developers are more likely to be annoyed with the sudden performance loss of switching to timing-safe routines, especially for comparing large blocks of data. And there’s other behavior that should intentionally be broken in a “secure memory compare” routine, such as two zero-length arrays returning success instead of an error.

Perhaps OpenBSD will reconsider offering this routine purely for the sake of API parity. There are too many drawbacks.

June 23, 2014

In Defense of JavaScript Crypto

Filed under: Security — Nate Lawson @ 4:05 am

Thai Duong wrote a great post outlining why he likes JavaScript crypto, although it’s not as strong a defense as you might guess from the title. While he makes some fair points about a few limited applications of JavaScript crypto, his post is actually a great argument against those pushing web page JS crypto.

First, he starts off with a clever Unicode attack on JS AES by Bleichenbacher. It is a great way to illustrate how the language, with its hostility to bitwise operations and loose typing, actively works against crypto implementers. Though Thai points out lots of different ways to work around these problems, I disagree that it’s clear sailing for developers once your crypto library deals with these issues. You’ll just get to pick up where your low-level library left off.

Oh, and those of you who were looking for a defense of web page crypto for your latest app? Sorry, that’s still dumb. Google’s End-to-End will only be shipped as a browser extension.

The most ironic part of Thai’s post involves avoiding PCI audit by shipping JS to the browser to encrypt credit card numbers. Back in 2009, I gave a Google Tech Talk on a variety of crypto topics. Afterwards, a Google engineer came up to me and gave exactly Thai’s example as the perfect use case for JavaScript crypto. “We avoid the web server being in scope for PCI audits by shipping JS to the user,” he said. “This way we can strip off the SSL as soon as possible at the edge and avoid the cost of full encryption to the backend.”

He went on to describe a race-condition-prone method of auditing Google’s own web servers, hashing the JS file served by each one to look for compromised copies. When I pointed out this was trivial to bypass, he said it didn’t really matter because PCI is a charade anyway.

While he’s right that PCI is more about full employment for auditors & vendors than security, news about NSA tapping the Google backbone also shows why clever ways to avoid end-to-end encryption often create unintended weaknesses. I hope Google no longer underestimates their exposure to nation-state adversaries after the Snowden revelations, but this use-case for JS crypto apparently hasn’t died yet.

May 26, 2014

Catching up on recent crypto developments

Filed under: Crypto,Security — Nate Lawson @ 6:10 am

When I started this blog, the goal was to write long-form posts that could serve as a standalone intro to security and crypto topics. Rather than write about the history of the NSA as planned, I’ll try writing a few short notes in hopes that they’ll fit better within the time I have. (Running a company and then launching a new one the past few years has limited my time.)

Heartbleed has to be the most useful SSL bug ever. It has launched not just one, but two separate rewrites of OpenSSL. I’m hoping it will also give the IETF more incentive to reject layering violations like the heartbeat extension. Security protocols are for security, not path MTU discovery.

Giving an attacker a way to ask you to say a specific phrase is never a good idea. Worse would be letting them tell you what to say under encryption.

Earlier this year, I was pleased to find out that a protocol I designed and implemented has been in use for millions (billions?) of transactions the past few years. During design, I spent days slaving over field order and dependencies in order to force implementations to be as simple as possible. “Never supply the same information twice in a protocol” was the mantra, eliminating many length fields and relying on a version bump at the start of the messages if the format ever changed. Because I had to create a variant cipher mode, I spent 5x the initial design time scrutinizing the protocol for flaws, then paid a third-party for a review.
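
To illustrate the principle (not the actual protocol, which I can’t reproduce here), a hypothetical message layout in this style might look like the following. A single version byte at the start governs the entire format, and fixed-size fields carry no redundant length prefixes, so there is nothing for an implementation to cross-check and get wrong.

    #include <stdint.h>

    /*
     * Hypothetical message layout illustrating "never supply the same
     * information twice": one version byte up front, fixed-size fields,
     * and no redundant length fields for parsers to mishandle.
     */
    struct msg_v1 {
        uint8_t version;      /* bump this if the format ever changes */
        uint8_t cmd;          /* operation requested */
        uint8_t nonce[12];    /* per-message nonce for the cipher mode */
        uint8_t payload[32];  /* fixed size, so no separate length field */
        uint8_t mac[16];      /* authentication tag over all prior fields */
    };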

As part of the implementation, I provided a full test harness and values covering all the valid and error paths. I also wrote a fuzzer and ran that for days over the final code to check for any possible variation in behavior, seeding it with the test cases. I encouraged the customer to integrate these tests into their release process to ensure changes to the surrounding code (e.g., 32/64 bit arch) didn’t break it. Finally, I helped with the key generation and production line design to be sure personalization would be secure too.

I firmly believe this kind of effort is required for creating security and crypto that is in widespread use. This shouldn’t be extraordinary, but it sadly seems to be so today. It was only through the commitment of my customer that we were able to spend so much effort on this project.

If you have the responsibility to create something protecting money or lives, I hope you’ll commit to doing the same.

January 6, 2014

Digging Into the NSA Revelations

Filed under: Crypto,Hacking,iOS,NSA,Rootkit,Security — Nate Lawson @ 5:00 am

Last year was a momentous one in revelations about the NSA, technical espionage, and exploitation. I’ve been meaning for a while to write about the information that has been revealed by Snowden and what it means for the public crypto and security world.

Part of the problem has been the slow release of documents and their high-level nature. We’ve now seen about 6 months of releases, each covering a small facet of the NSA. Each article attempts to draw broad conclusions about the purpose, intent, and implementation of complex systems, based on leaked, codeword-laden PowerPoint. I commend the journalists who have combed through this material as it is both vague and obfuscated, but I often cringe at the resulting articles.

My rule of thumb whenever a new “earth shattering” release appears is to skip the article and go straight for the backing materials. (Journalists, please post your slide deck sources to a publicly accessible location in addition to burying them in your own site’s labyrinth of links.) By doing so, I’ve found that some of the articles are accurate, but there are always a number of unwarranted conclusions as well. Because of the piecemeal release process, there often aren’t enough additional sources to interpret each slide deck properly.

I’m going to try to address the revelations we’ve seen by category: cryptanalysis, computer exploitation, software backdoors, network monitoring, etc. There have been multiple revelations in each category over the past 6 months, but examining them in isolation has resulted in reversals and loose ends.

For example, the first conclusion upon the revelation of PRISM was that the NSA could directly control equipment on a participating service’s network in order to retrieve emails or other communications. Later, the possibility of this being an electronic “drop box” system emerged. As of today, I’m unaware of any conclusive proof as to which of these vastly differing implementations (or some other) PRISM actually referred to.

However, this implementation difference has huge ramifications for what the participating services were doing. Did they provide wholesale access to their networks? Or were they providing court-ordered information via a convenient transfer method after reviewing the requests? We still don’t know for sure, but additional releases seem to confirm that at least many Internet providers did not intentionally provide wholesale access to the NSA.

Unwarranted jumping to conclusions has created a new sport, the vendor witch hunt. For example, the revelation of DROPOUTJEEP, an iPhone rootkit, was accompanied by allegations that Apple cooperated with the NSA to create it. It’s great that Jacob Appelbaum worked with the Spiegel press, applying his technical background, but he is overreaching here.

Jacob said, “either they [NSA] have a huge collection of exploits that work against Apple products … or Apple sabotaged it themselves.” This ignores a third option, which is that reliable exploitation against a limited number of product versions can be achieved with only a small collection of exploits.

The two critical pieces of information that were underplayed here are that the DROPOUTJEEP description was dated October 1, 2008 and says “the initial release will focus on installing the implant via close access methods” (i.e., physical access) and “status: in development”.

What was October 2008 like? Well, there were two iPhones, the original and the just-released 3G model. There were iOS versions 1.0 – 1.1.4 and 2.0 – 2.1 available as well. Were there public exploits for this hardware and software? Yes! The jailbreak community had reliable exploitation (Pwnage and Pwnage 2.0) on all of these combinations via physical access. In fact, these exploits were in the boot ROM and thus unpatchable and reliable. Meanwhile, ex-NSA TAO researcher Charlie Miller had publicly exploited iOS 1.x remotely in summer 2007.

So the NSA in October 2008 was in the process of porting a rootkit to iOS, with the advantage of a publicly-developed exploit in the lowest levels of all models of the hardware, and targeting physical installation. Is it any wonder that such an approach would be 100% reliable? This is a much simpler explanation and is not particularly flattering to the NSA.

One thing we should do immediately is stop the witch hunts based on incomplete information. Some vendors and service providers have assisted the NSA and some haven’t. Some had full knowledge of what they were doing, some should have known, and others were justifiably unaware. Each of their stories is unique and should be considered separately before assuming the worst.

Next time, I’ll continue with some background on the NSA that is essential to interpreting the Snowden materials.

December 27, 2013

Bypassing Sonos Updates Check

Filed under: Misc,Protocols — Nate Lawson @ 3:48 pm

Sonos has screwed up their update process a few times. What happens is that they publish an XML file showing that a new update is available but forget to publish the update itself on their servers. Since the update check is a mandatory part of the installation process, they effectively lock out new customers from using their software. This most recently happened with the 4.2.1 update for Mac.

There’s a way to bypass this: redirect the site “update.sonos.com” to 127.0.0.1 (localhost) using your system hosts file. When you launch the Sonos Controller software for the first time, it will try to connect to your own computer to get the XML file. Instead, it will fail to connect and then move forward in the setup process. After that, you can re-enable access to the updates server.

The exact entry you want to add is:

127.0.0.1    update.sonos.com

Be sure to remove this line after you’ve gotten your system set up so you can get updates in the future.

September 24, 2013

An appeal for support

Filed under: Misc — Nate Lawson @ 5:00 am

If you enjoy this blog and want to give something back, I’d like you to consider a donation to WildAid. My wife Sandra has been organizing a film event and donation drive in order to raise money for saving wild elephants from extinction.

Elephants are being killed at an alarming rate. After they experienced a comeback in the 90’s due to laws restricting the ivory trade, a few policy mistakes and recent economic growth in Asia led to such an enormous demand for ivory that elephants may become extinct in the wild in the next 10 years. President Obama recognized this crisis recently with an executive order.

WildAid tackles the demand side of the problem. With the assistance of stars like Yao Ming and Maggie Q, they create educational public awareness campaigns (video) to encourage the public not to buy ivory, rhino horn, or other wild animal products. They distribute their media in markets like China, Vietnam, and the second largest ivory consumer, the United States.

If you’re interested in attending the film, it will be shown October 9th at The New Parkway theater in Oakland. Your ticket gets you dinner with live jazz, as well as a chance to see an educational and heartwarming movie about an elephant family. (The film is from PBS and does not show any violence; children 8 years old and up should be ok.)

If you can’t come, you can help out by donating on the same page. I keep this blog ad-free and non-commercial overall, so I appreciate you reading this appeal. Thanks, and now back to your regularly scheduled crypto meanderings.
