Catching up on recent crypto developments

When I started this blog, the goal was to write long-form posts that could serve as a standalone intro to security and crypto topics. Rather than write about the history of the NSA as planned, I’ll try writing a few short notes in hopes that they’ll fit better within the time I have. (Running a company and then launching a new one over the past few years has limited that time.)

Heartbleed has to be the most useful SSL bug ever. It has launched not just one, but two separate rewrites of OpenSSL. I’m hoping it will also give the IETF more incentive to reject layering violations like the heartbeat extension. Security protocols are for security, not path MTU discovery.

Giving an attacker a way to ask you to say a specific phrase is never a good idea. Worse would be letting them tell you what to say under encryption.

Earlier this year, I was pleased to find out that a protocol I designed and implemented has been in use for millions (billions?) of transactions over the past few years. During the design, I spent days slaving over field order and dependencies in order to force implementations to be as simple as possible. “Never supply the same information twice in a protocol” was the mantra: it eliminated many length fields and relied on a version bump at the start of the messages if the format ever changed. Because I had to create a variant cipher mode, I spent 5x the initial design time scrutinizing the protocol for flaws, then paid a third party for a review.
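
To make the mantra concrete, here is a minimal sketch of my own (not the actual protocol): with a single leading version byte and fixed-width fields, no length fields are needed at all, because the version alone determines the entire layout.

    import struct

    # Hypothetical message layout, "version 1": a version byte, a 4-byte
    # big-endian counter, and a 16-byte tag. No length fields are needed
    # because the version byte alone determines the layout.
    V1_FORMAT = ">B I 16s"
    V1_LEN = struct.calcsize(V1_FORMAT)  # 21 bytes

    def pack_v1(counter, tag):
        return struct.pack(V1_FORMAT, 1, counter, tag)

    def parse(msg):
        """Accept only an exactly well-formed v1 message."""
        if len(msg) != V1_LEN:
            raise ValueError("bad length")
        version, counter, tag = struct.unpack(V1_FORMAT, msg)
        if version != 1:
            raise ValueError("unsupported version")
        return counter, tag

If the format ever changes, a “version 2” message gets its own parser; there is never a second copy of the same information for an implementation to get out of sync.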

As part of the implementation, I provided a full test harness and test vectors covering all the valid and error paths. I also wrote a fuzzer, seeded it with those test cases, and ran it for days over the final code to check for any possible variation in behavior. I encouraged the customer to integrate these tests into their release process to ensure changes to the surrounding code (e.g., a 32/64-bit architecture change) didn’t break it. Finally, I helped with the key generation and production line design to be sure personalization would be secure too.
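
The fuzzer itself doesn’t need to be fancy. Here is a rough sketch of that kind of seeded, mutation-based fuzzer (my own illustration, assuming a parse() function like the sketch above; not the actual harness):

    import random

    def mutate(seed, rng):
        """Flip, insert, or delete a few random bytes of a seed message."""
        data = bytearray(seed)
        for _ in range(rng.randint(1, 4)):
            roll = rng.random()
            if roll < 0.6 and data:                        # flip one bit
                data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
            elif roll < 0.8:                               # insert a byte
                data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
            elif data:                                     # delete a byte
                del data[rng.randrange(len(data))]
        return bytes(data)

    def fuzz(parse, seeds, iterations=1000000):
        """Feed mutated seeds to the parser. A clean ValueError is the
        expected rejection; any other exception propagates and stops the
        run, flagging a potential bug."""
        rng = random.Random(0)
        for _ in range(iterations):
            try:
                parse(mutate(rng.choice(seeds), rng))
            except ValueError:
                pass

Seeding with the known-good test vectors keeps the mutations close to valid messages, so the interesting branches get exercised far sooner than with purely random input.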

I firmly believe this kind of effort is required for creating security and crypto that is in widespread use. This shouldn’t be extraordinary, but it sadly seems to be so today. It was only through the commitment of my customer that we were able to spend so much effort on this project.

If you have the responsibility to create something protecting money or lives, I hope you’ll commit to doing the same.

Digging Into the NSA Revelations

Last year was a momentous one in revelations about the NSA, technical espionage, and exploitation. I’ve been meaning for a while to write about the information that has been revealed by Snowden and what it means for the public crypto and security world.

Part of the problem has been the slow release of documents and their high-level nature. We’ve now seen about six months of releases, each covering a small facet of the NSA. Each article attempts to draw broad conclusions about the purpose, intent, and implementation of complex systems based on leaked, codeword-laden PowerPoint decks. I commend the journalists who have combed through this material, as it is both vague and obfuscated, but I often cringe at the resulting articles.

My rule of thumb whenever a new “earth shattering” release appears is to skip the article and go straight for the backing materials. (Journalists, please post your slide deck sources to a publicly accessible location in addition to burying them in your own site’s labyrinth of links.) By doing so, I’ve found that some of the articles are accurate, but there are always a number of unwarranted conclusions as well. Because of the piecemeal release process, there often aren’t enough additional sources to interpret each slide deck properly.

I’m going to try to address the revelations we’ve seen by category: cryptanalysis, computer exploitation, software backdoors, network monitoring, etc. There have been multiple revelations in each category over the past 6 months, but examining them in isolation has resulted in reversals and loose ends.

For example, the first conclusion upon the revelation of PRISM was that the NSA could directly control equipment on a participating service’s network in order to retrieve emails or other communications. Later, the possibility of this being an electronic “drop box” system emerged. As of today, I’m unaware of any conclusive proof as to which of these vastly different implementations (or some other one) PRISM actually referred to.

However, this implementation difference has huge ramifications for what the participating services were doing. Did they provide wholesale access to their networks? Or were they providing court-ordered information via a convenient transfer method after reviewing the requests? We still don’t know for sure, but additional releases seem to confirm that at least many Internet providers did not intentionally provide wholesale access to the NSA.

Unwarranted jumping to conclusions has created a new sport, the vendor witch hunt. For example, the revelation of DROPOUTJEEP, an iPhone rootkit, was accompanied by allegations that Apple cooperated with the NSA to create it. It’s great that Jacob Appelbaum worked with Der Spiegel, applying his technical background, but he is overreaching here.

Jacob said, “either they [NSA] have a huge collection of exploits that work against Apple products … or Apple sabotaged it themselves.” This ignores a third option, which is that reliable exploitation against a limited number of product versions can be achieved with only a small collection of exploits.

The two critical pieces of information that were underplayed here are that the DROPOUTJEEP description was dated October 1, 2008 and says “the initial release will focus on installing the implant via close access methods” (i.e., physical access) and “status: in development”.

What was October 2008 like? Well, there were two iPhones, the original and the just-released 3G model. There were iOS versions 1.0 – 1.1.4 and 2.0 – 2.1 available as well. Were there public exploits for this hardware and software? Yes! The jailbreak community had reliable exploitation (Pwnage and Pwnage 2.0) on all of these combinations via physical access. In fact, these exploits were in the boot ROM and thus unpatchable and reliable. Meanwhile, ex-NSA TAO researcher Charlie Miller had publicly exploited iOS 1.x remotely in the summer of 2007.

So in October 2008, the NSA was in the process of porting a rootkit to iOS, with the advantage of a publicly developed exploit in the lowest levels of all models of the hardware, and targeting physical installation. Is it any wonder that such an approach would be 100% reliable? This is a much simpler explanation, and it is not particularly flattering to the NSA.

One thing we should do immediately is stop the witch hunts based on incomplete information. Some vendors and service providers have assisted the NSA and some haven’t. Some had full knowledge of what they were doing, some should have known, and others were justifiably unaware. Each of their stories is unique and should be considered separately before assuming the worst.

Next time, I’ll continue with some background on the NSA that is essential to interpreting the Snowden materials.

Bypassing Sonos Updates Check

Sonos has screwed up their update process a few times. What happens is that they publish an XML file showing that a new update is available but forget to publish the update itself on their servers. Since the update check is a mandatory part of the installation process, they effectively lock out new customers from using their software. This most recently happened with the 4.2.1 update for Mac.

There’s a way to bypass this: redirect the host “update.sonos.com” to 127.0.0.1 (localhost) using your system hosts file. When you launch the Sonos Controller software for the first time, it will try to connect to your own computer for the XML file, fail, and then move forward in the setup process. After that, you can re-enable access to the updates server.

The exact entry you want to add is:

127.0.0.1    update.sonos.com

Be sure to remove this line after you’ve gotten your system set up so you can get updates in the future.

An appeal for support

If you enjoy this blog and want to give something back, I’d like you to consider a donation to WildAid. My wife Sandra has been organizing a film event and donation drive in order to raise money for saving wild elephants from extinction.

Elephants are being killed at an alarming rate. After they experienced a comeback in the 1990s due to laws restricting the ivory trade, a few policy mistakes and recent economic growth in Asia led to such an enormous demand for ivory that elephants may become extinct in the wild in the next 10 years. President Obama recognized this crisis recently with an executive order.

WildAid tackles the demand side of the problem. With the assistance of stars like Yao Ming and Maggie Q, they create educational public awareness campaigns (video) to encourage the public not to buy ivory, rhino horn, or other wild animal products. They distribute their media in markets like China, Vietnam, and the second largest ivory consumer, the United States.

If you’re interested in attending the film, it will be shown October 9th at The New Parkway theater in Oakland. Your ticket gets you dinner with live jazz, as well as a chance to see an educational and heartwarming movie about an elephant family. (The film is from PBS and does not show any violence; children 8 years old and up should be ok.)

If you can’t come, you can help out by donating on the same page. I keep this blog ad-free and non-commercial overall, so I appreciate you reading this appeal. Thanks, and now back to your regularly scheduled crypto meanderings.

20 Years of Internet

This month marks my 20th anniversary of first connecting to the Internet. It seems like a good time to look back on the changes and where we can go from here.

I grew up in a rural area, suspecting but never fully realizing the isolation from the rest of the world, technology or otherwise. Computers and robots of the future lived in the ephemeral world of Sears catalogs and Byte magazines rescued from a dumpster. However, the amateur radio and remote-controlled plane hobbies of my father’s friends brought the world of computing and electronics to our house.

Still, communications were highly local. The VIC-20 could connect to a few BBS systems and to my father’s industrial controls for warehouse refrigeration systems (way before SCADA). However, anything beyond that incurred long distance charges and thus was irrelevant. Only the strange messages and terminology in cracked games, distributed from faraway places like Sweden, hinted at a much broader world out there.

Towards the end of high school, our local BBS finally got a FidoNet connection. Text files started trickling in about hacking COSMOS to change your “friend’s” phone service and building colored boxes to get free calls. One of those articles described how to use the Internet. I’d spend hours trying to remember all the protocol acronyms, TCP port numbers, etc. The Internet of my imagination was a strange amalgamation of X.25, ARPA protocols, TCP/IP, and the futuristic OSI protocols that were going to replace TCP/IP.

Once I arrived at college, I was one of the first in line to register for an Internet account. Our dorm room had an always-on serial connection to the campus terminal server and Ethernet was coming in a few weeks. It took some encouraging from my friends to make the jump to Ethernet (expensive, and 10BASE-T was barely standardized so it was hard to figure out if a given NIC would even work). Along with free cable TV, you’ve got to wonder, “what were they thinking?”

The dorm Ethernet experiment soon became a glorious free-for-all. There was a lot of Windows 3.1 and Linux, but also a few NeXTSTEP and Sun systems. The campus network admins had their hands full, bungling rushed policy changes intended to stop the flood of warez servers, IPX broadcast storms from Doom games, IRC battles, sniffing, hacking, and even a student running a commercial ISP on the side. Life on the dorm network was like a 24/7 Defcon CTF, but if you failed, you were reinstalling your OS from 25 floppies before you could do your homework.

There were three eras I got to see: Usenet (ending in 1994), early Web (1994–1997), and commercial Web (1998 to present). The Usenet era involved major changes in distributed protocols and operating systems, including the advent of Linux and other free Unixes. The early Web era transitioned to centralized servers with HTTP, with much experimentation in how to standardize access to information (remember image maps? AltaVista vs. Lycos?). The commercial Web finally gave the non-technical world a reason to get online: to buy and sell stuff. It continues to be characterized by experimentation in business models, starting with companies like eBay.

One of my constant annoyances with technological progress is when we don’t benefit from history. Oftentimes, what comes along later is not better than what came before. This leads to gaps in progress, where you spend time recapitulating the past before you can truly move on to the predicted future.

Today, I mourn the abandonment of the end-to-end principle. I don’t mean networking equipment has gotten too smart for its own good (though it has). I mean that we’re neglecting a wealth of intelligence at the endpoints and restricting them to a star-topology, client/server communication model.

Multicast is one example of how things could be different. Much of the data on the Internet today is video streams or OS updates. Multicast allows a single transmission to be received by multiple listeners, building a dynamic tree of routes so that it traverses a minimal set of networks. Now add in forward error correction (which lets you tune in to a rotating transmission at any point in time and reconstruct the data) and distributed hash tables (which let you look up information without a central directory), and you have something very powerful.
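
To make the endpoint side concrete, joining a multicast group takes only a few lines (a minimal sketch; the group address is just an example from the administratively scoped 239.0.0.0/8 range, and it assumes your local network hasn’t filtered multicast):

    import socket
    import struct

    GROUP = "239.1.2.3"   # example administratively scoped group address
    PORT = 5007

    # Join the group: every host that does this receives a copy of each
    # datagram sent to GROUP:PORT, while the sender transmits it only once.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(65535)
        print("%d bytes from %s" % (len(data), sender))

A sender just calls sendto() once with the group address; FEC and a DHT would layer on top of this, but group membership is the core primitive.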

BitTorrent is a hack that leverages an oversight in the ISP pricing model. Since upload bandwidth from home broadband was underutilized but paid for, BitTorrent could reduce the load on centralized servers by augmenting them with users’ connections. This was a clever way to improve the existing star topology of HTTP downloads, but it would have been unnecessary if proper distributed systems using multicast were available.

We have had the technology for 20 years, but a number of players have kept it from being widely deployed. Rapid growth in backbone bandwidth meant there wasn’t enough pricing pressure to reduce wastefulness. The domination of Windows and its closed TCP/IP stack meant it was difficult to innovate in a meaningful way. (I had invented a TCP NAT traversal protocol in 1999 that employed TCP simultaneous connect, but Windows had a bug that caused such connections to fail, so I had to scrap it.) And there have been bugs in core routers’ multicast stacks, so multicast is mostly disabled there.
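
For the curious, the heart of that NAT traversal trick looked roughly like the sketch below (my own illustration, not the 1999 protocol; it assumes both peers already learned each other’s public address and ports through some rendezvous channel, and that the NATs in between behave):

    import socket
    import time

    def simultaneous_open(local_port, peer_ip, peer_port, attempts=20):
        """Both peers call this at about the same time, each connecting to
        the other. If the SYNs cross in flight, TCP's simultaneous-open
        rules establish a single connection with no listener on either side."""
        for _ in range(attempts):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", local_port))
            s.settimeout(2)
            try:
                s.connect((peer_ip, peer_port))
                return s                       # crossed SYNs: connected
            except OSError:
                s.close()
                time.sleep(0.1)                # retry until the peer's SYN arrives
        raise ConnectionError("simultaneous open failed")

The Windows bug mentioned above broke exactly this crossed-SYN case, which is why the protocol had to be scrapped.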

Firewalls are another symptom of the problem. If you had a standardized way to control endpoint communications, there would be no need for firewalls. You’d simply set policies for the group of computers you controlled and the OS on each would figure out how to apply them. However, closed platforms and a lack of standardization mean that not only do we still have network firewalls, but numerous variants of host-based firewalls as well.
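
Purely as a hypothetical sketch (no such standard exists; the names below are made up to make the idea concrete), a group-wide policy that the OS on each endpoint enforces itself might look like this:

    # Hypothetical, illustrative only: a declarative policy distributed to a
    # group of machines and enforced locally by each endpoint's OS, rather
    # than by a middlebox guessing at intent from packet headers.
    POLICY = {
        "group": "engineering-workstations",
        "allow_outbound": ["https", "ssh", "dns"],
        "allow_inbound": ["mdns"],        # e.g., local service discovery
        "default_deny": True,
    }

    def permitted(direction, service, policy=POLICY):
        """Local decision: is this connection allowed under the group policy?"""
        allowed = (policy["allow_outbound"] if direction == "outbound"
                   else policy["allow_inbound"])
        return service in allowed or not policy["default_deny"]

Nothing like this is standardized across platforms today, which is why we keep bolting on network and host firewalls instead.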

Since the late 1990s, money has driven an intense focus on web-based businesses. In this latest round of tech froth, San Francisco is the epicenter instead of San Jose. Nobody cares what router they’re using, and there’s a race to be the most “meta”. Not only does EC2 mean you don’t touch the servers, but now Heroku means you don’t touch the software either. But as you build higher, the architectures get narrower. There is no HTTP multicast, and the same-origin policy means you can’t even implement BitTorrent in browser JavaScript.

It seems like decentralized protocols only appear in the presence of external pressure. Financial pressure doesn’t seem to be enough so far, but legal pressure led to Tor, magnet links, etc. Apple has done the most of anyone commercially in building distributed systems into its products (Bonjour service discovery, AirDrop direct file sharing), but these capabilities are not employed by many applications. Instead, we get simulated distributed systems like Dropbox, which are still based on the star topology.

I hope that the prevailing trend changes, and that we see more innovations in smart endpoints, chatting with each other in a diversity of decentralized, standardized, and secure protocols. Make this kind of software stack available on every popular platform, and we could see much more innovation in the next 20 years.

Keeping skills current in a changing world

I came across this article on how older tech workers are having trouble finding work. I’m sure many others have written about whether this is true, whose fault it is, and whether H-1B visas should be increased or not. I haven’t done the research, so I can’t comment on such things, but I do know a solution to out-of-date skills.

The Internet makes developing new skills extremely accessible. With a PC and free software, you can teach yourself almost any “hot skill” in a week. Take this quote, for example:

“Some areas are so new, like cloud stuff, very few people have any experience in that,” Wade said. “So whether they hire me, or a new citizen grad, or bring in an H-1B visa, they will have to train them all.”

Here’s a quick way to learn “cloud stuff”. Get a free Amazon AWS account. Download their Java-based tools or boto for Python. Launch a VM, ssh in, and you’re in the cloud. Install Ubuntu, PostgreSQL, and other free and open-source software. Read the manuals and run a couple experiments, using your current background. If you were a DBA, try SimpleDB. If a sysadmin, try EC2 and monitoring.
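
For example, launching that first VM from Python takes only a few lines with boto (a sketch against the classic boto EC2 API; the AMI ID and key pair name are placeholders for your own, and it assumes your AWS credentials are in the environment or ~/.boto):

    import time
    import boto.ec2

    # Placeholders: substitute an Ubuntu AMI ID for your region and the
    # name of an SSH key pair you created in the AWS console.
    AMI_ID = "ami-XXXXXXXX"
    KEY_NAME = "my-keypair"

    conn = boto.ec2.connect_to_region("us-east-1")
    reservation = conn.run_instances(AMI_ID, key_name=KEY_NAME,
                                     instance_type="t1.micro")

    instance = reservation.instances[0]
    while instance.state != "running":
        time.sleep(5)
        instance.update()       # poll until the VM is up

    print("ssh ubuntu@%s" % instance.public_dns_name)

From there, apt-get gets you PostgreSQL and whatever else you want to experiment with, and terminating the instance when you’re done keeps the cost near zero.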

Another quote from a different engineer:

‘If a developer has experience in Android 2.0, “the company would be hiring only someone who had at least 6 months of the 4.0 experience,” he said. “And you cannot get that experience unless you are hired. And you cannot get hired unless you provably have that experience. It is the chicken-and-the-egg situation.”’

Android is also free, and includes a usable emulator for testing out your code. Or buy a cheap, wifi-only Android phone and start working from there. Within a week, you can have experience on the latest Android version. Again, for no cost other than your time.

I suspect that it’s not lack of time that is keeping unemployed engineers from developing skills. It’s a mindset problem.

‘He may not pick the right area to focus on, he said. “The only way to know for sure is if a company will pay you to take the training,” he said. “That means it has value to them.”’

In startups, there’s an approach called customer development. Essentially, it involves putting together different, lightweight experiments based on theories about what a customer might buy. If more than one customer pays you, that’s confirmation. If not, you move on to the next one.

Compare this to the traditional monolithic startup, where you have an idea, spend millions building it into a product, and then try to get customers to buy it. You only have a certain timeline before you run out of money, so it’s better if you can try several different ideas along the way.

There’s an obvious application to job hunting. Take a variety of hypotheses (mobile, cloud, etc.) and put together a short test. Take one week to learn the skill to the point you have a “hello world” demo. Float a custom resume to your targets, leaving out irrelevant experience that would confuse a recruiter. Measure the responses and repeat as necessary. If you get a bite, spend another week learning that skill in-depth before the interview.

There are biases against older workers, and some companies are too focused on keywords on resumes. Those are definitely problems that need changing. However, when it comes to learning new skills, there’s never been a better time for being able to hunt for a job using simple experiments based on free resources. The only barrier is the mindset that skills come through a monolithic process of degrees, certification, or training instead of a self-directed, agile process.