While I haven’t written an article in a while, I’m still alive. I just got buried with work, tax prep, and using every spare moment to try to finish up the xum1541. Last week, I attended the iSec Forum and saw a talk about cookie forcing based on work by Chris Evans and Michal Zalewski. This attack involves overwriting SSL-only cookies with a cookie injected into a non-SSL connection. In other words, browsers prevent disclosure of SSL-only cookies, but not deletion or replacement by cookies from an insecure session.
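To make the mechanics concrete, here is a toy Python sketch (not any real browser’s code, just a model of the cookie-jar semantics) showing why the Secure flag doesn’t help here: it controls when a cookie is sent, not who gets to overwrite it.

```python
# Toy model of cookie forcing -- an illustration of jar semantics, not a
# reproduction of any particular browser's implementation.

class CookieJar:
    def __init__(self):
        # Cookies are keyed by (domain, path, name); the Secure flag is just
        # an attribute of the stored cookie, not part of the key.
        self.jar = {}

    def set_cookie(self, domain, path, name, value, secure=False):
        self.jar[(domain, path, name)] = {"value": value, "secure": secure}

    def cookies_for(self, domain, path, over_https):
        # Secure cookies are only *sent* over HTTPS...
        return {
            name: c["value"]
            for (d, p, name), c in self.jar.items()
            if d == domain and path.startswith(p)
            and (over_https or not c["secure"])
        }

jar = CookieJar()

# 1. The site sets a Secure session cookie over HTTPS.
jar.set_cookie("bank.example", "/", "session", "legit-token", secure=True)

# 2. A MITM injects a Set-Cookie with the same name into a plain-HTTP
#    response... and nothing stops it from replacing the stored value.
jar.set_cookie("bank.example", "/", "session", "attacker-token", secure=False)

# 3. The next HTTPS request now carries the attacker's value.
print(jar.cookies_for("bank.example", "/", over_https=True))
# {'session': 'attacker-token'}
```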
I don’t follow network security closely so this may be an older attack. However, it reminds me how the web application and browser designers treat SSL like table salt — sprinkle a little bit here and there, but be careful not to overuse it. That’s completely the wrong mentality.
WordPress recently showed their users how to enable SSL for the admin interface. While it’s admirable that they are providing more security, the attitude behind the post is a great example of this dangerous mentality. They claim SSL is only recommended when blogging from a public network, even going so far as to suggest it be disabled again when back on a “secure network”. It’s hard to believe performance is the issue, given the CPU gains of the past 13 years.
Attention: if you’re using a web application on a shared network (say, the Internet), you’re not on a secure network. This whole idea that users should pick and choose SSL based on some ephemeral security assessment of the local network is insane. How can you expect anyone, let alone regular users, to perform a security assessment before disabling SSL and then remember to re-enable it before traveling to an insecure network? (You can’t log into your blog and re-enable SSL from the insecure network because you would get compromised doing so.)
Likewise, sites such as Yahoo Mail use SSL for submitting the login password, but then provide a session cookie over plain HTTP. A session cookie is almost as good as a password. As long as the attacker refreshes their own session periodically, the cookie stays valid. (Do any web services implement an absolute session limit?) Even if the user clicks “log out”, the attacker can feed a fake logout page to them and keep the cookie active.
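For what it’s worth, an absolute lifetime is cheap to enforce server-side alongside the usual idle timeout. A minimal sketch, with made-up names and limits:

```python
import time

IDLE_TIMEOUT = 30 * 60          # invalidate after 30 minutes of inactivity
ABSOLUTE_LIMIT = 12 * 60 * 60   # invalidate 12 hours after login, no matter what

sessions = {}  # token -> {"issued_at": ..., "last_seen": ...} (in-memory stand-in)

def create_session(token):
    now = time.time()
    sessions[token] = {"issued_at": now, "last_seen": now}

def session_valid(token):
    s = sessions.get(token)
    if s is None:
        return False
    now = time.time()
    # Activity extends the idle window, but never the absolute one, so a
    # hijacked cookie can't be kept alive indefinitely by periodic refreshes.
    if now - s["issued_at"] > ABSOLUTE_LIMIT or now - s["last_seen"] > IDLE_TIMEOUT:
        del sessions[token]
        return False
    s["last_seen"] = now
    return True
```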
All cookies should definitely have their own cryptographic integrity protection and encryption, independent of SSL. But it is clear that the entire attitude toward SSL is wrong, and we will all eventually have to change it. Open wireless networks have helped session hijacking proliferate, no ARP spoofing needed. Soon, malware may contain a MITM kit to compromise any user accessing a website who shares an access point with a rooted system. As this attack becomes more common, perhaps we’ll see the end of SSL as an accessory, and it will be mandated for the entirety of every authenticated session. The prevailing attitude will have to change first.
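As a rough sketch of what per-cookie integrity protection could look like (key management and encryption omitted, names are purely illustrative), the server can HMAC the value and refuse anything that doesn’t verify:

```python
import base64
import hashlib
import hmac

SERVER_KEY = b"replace-with-a-real-random-key"  # illustrative placeholder

def protect(value: str) -> str:
    """Return 'data.tag' where tag is an HMAC-SHA256 over the cookie value."""
    data = value.encode()
    tag = hmac.new(SERVER_KEY, data, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(data).decode() + "."
            + base64.urlsafe_b64encode(tag).decode())

def verify(cookie: str):
    """Return the original value if the tag checks out, else None."""
    try:
        b64_data, b64_tag = cookie.split(".", 1)
        data = base64.urlsafe_b64decode(b64_data)
        tag = base64.urlsafe_b64decode(b64_tag)
    except (ValueError, TypeError):
        return None
    expected = hmac.new(SERVER_KEY, data, hashlib.sha256).digest()
    # Constant-time comparison to avoid leaking tag bytes via timing.
    if not hmac.compare_digest(tag, expected):
        return None
    return data.decode()

cookie = protect("user=alice;expires=1234567890")
assert verify(cookie) == "user=alice;expires=1234567890"
assert verify("tampered." + cookie.split(".", 1)[1]) is None  # forgery rejected
```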
Preach it! As you know better than me, the symmetric and asymmetric encryption gets cheaper every year, is amortizable (HTTP 1.1 keep-alive, TLS session resumption), is acceleratable, and is a reasonable cost of doing business anyway. The tougher cost is the TLS handshake: there’s no avoiding the network latency it incurs. Of course, that latency is also amortizable, AND, the majority of sites I’ve looked at are paying far worse costs than a little 100ms set-up. Sending 300KB of HTML/JS/CSS where 30KB would have the exact same meaning to the browser? Loading 40 images from 14 hostnames? Suboptimal JPEG compression level? An unoptimized AJAX client-side component? Failing to tell user agents and proxies to cache things properly? All those things cost considerably more (1 or more orders of magnitude) than the TLS handshake, yet they are the norm even on big, high-traffic web sites.
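If you want to put a number on that handshake cost for your own network path, a quick Python sketch like this (the host is just a placeholder) compares a bare TCP connect against a full TLS handshake:

```python
import socket
import ssl
import time

HOST = "www.example.com"  # any HTTPS host you want to measure against
PORT = 443

def tcp_connect_time():
    """Time just the TCP three-way handshake."""
    start = time.perf_counter()
    sock = socket.create_connection((HOST, PORT), timeout=10)
    elapsed = time.perf_counter() - start
    sock.close()
    return elapsed

def tls_handshake_time():
    """Time TCP connect plus the full TLS handshake."""
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    sock = socket.create_connection((HOST, PORT), timeout=10)
    tls = ctx.wrap_socket(sock, server_hostname=HOST)
    elapsed = time.perf_counter() - start
    tls.close()
    return elapsed

if __name__ == "__main__":
    tcp = min(tcp_connect_time() for _ in range(3))
    tls = min(tls_handshake_time() for _ in range(3))
    print(f"TCP connect:   {tcp * 1000:.1f} ms")
    print(f"TLS handshake: {tls * 1000:.1f} ms (the difference is the TLS cost)")
```

Results vary wildly with RTT and with whether the server supports session resumption, but the gap between the two numbers is the per-connection cost being discussed here.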
I’m doing a talk on this topic at Web 2.0 Expo (April 2009), to try to change the prevailing attitude. At the same time, maybe we can convince browser vendors to try HTTPS before HTTP when users type in “wellsfargo.com”. :)
Heh, thanks Chris. You’re right that session resumption and pipelining help significantly. I briefly looked at WordPress and it appears to load many small images from a lot of different servers with no pipelining. Yahoo email goes through two separate SSL connections before getting the cookie and connecting to the email server (a third name lookup).
On a related note, I’ve always wondered why there wasn’t an html-compress filter that static content was run through as part of an install step. I’m not talking gzip, I mean things like removing all whitespace, reducing all JavaScript variable and function names to shorter versions, and eliminating comments. This plus gzip compression could greatly reduce load times, even without SSL. Of course, you wouldn’t use this on the debugging server.
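Something like the following crude Python filter is the kind of thing I mean. It’s a sketch, not a tool: a real minifier has to handle pre blocks, inline scripts, and conditional comments that this would mangle, and the input file name is hypothetical.

```python
import re

def naive_minify(html: str) -> str:
    """Strip HTML comments and collapse runs of whitespace (crude sketch)."""
    html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)  # drop comments
    html = re.sub(r">\s+<", "><", html)                      # whitespace between tags
    html = re.sub(r"[ \t]{2,}", " ", html)                   # runs of spaces/tabs
    return html.strip()

with open("index.html", encoding="utf-8") as f:  # hypothetical input file
    page = f.read()

small = naive_minify(page)
print(len(page), "->", len(small), "bytes before gzip")
```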
Note that pipelining and keep-alive are different things:
http://en.wikipedia.org/wiki/HTTP_persistent_connection
http://en.wikipedia.org/wiki/HTTP_pipelining
While both definitely help performance, browsers apparently can’t assume that servers support it (even though “HTTP/1.1 conforming servers are required to support pipelining”). You can turn it on in Firefox in about:config and see if it works well.
As for minification (eliminating whitespace and comments and such), it does indeed help, even when combined with compression. One of my clients has an 80KiB front page, 40KiB of which is nothing but whitespace! They gzip the 80 down to 12, but gzipping the minified version gets you down to 9.7 — a significant savings when you get lots of hits per day. :)
There are free tools for minification, and they could be built into filters, and probably have been.
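For anyone who wants to reproduce that kind of before/after comparison on their own pages, a quick sketch (the file names are placeholders):

```python
import gzip

def gz_size(data: bytes) -> int:
    return len(gzip.compress(data, compresslevel=9))

with open("index.html", "rb") as f:        # placeholder: original page
    original = f.read()
with open("index.min.html", "rb") as f:    # placeholder: minified page
    minified = f.read()

print(f"original: {len(original)/1024:.1f} KiB -> {gz_size(original)/1024:.1f} KiB gzipped")
print(f"minified: {len(minified)/1024:.1f} KiB -> {gz_size(minified)/1024:.1f} KiB gzipped")
```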
Sorry to babble on — this is one of my favorite topics because it unites security, usability, and performance and shows that the three work together, rather than opposing each other.
Excellent info, thanks for babbling. :)