root labs rdist

December 4, 2012

Has HTML5 made us more secure?

Filed under: Hacking,Network,Security — Nate Lawson @ 4:19 am

Brad Hill recently wrote an article claiming that HTML5 has made us more secure, not less. His essential claim is that over the last 10 years, browsers have become more secure. He compares IE6, ActiveX, and Flash in 2002 (when he started in infosec) with HTML5 in order to make this point. While I think his analysis is true for general consumers, it doesn’t apply to more valuable targets, who are indeed less secure with the spread of HTML5.

HTML5 is a broad grouping of features, and there are two parts that I think matter most for increasing vulnerability. First, there is the growing flexibility in parsing elements for JavaScript, CSS, SVG, etc., including the interpretation of relationships between them. Second, there’s the exposure of complex decoders for images, video, audio, storage, 3D graphics, etc. to untrusted sources.

If you look at the vulnerability history for these two groups, both are common culprits in flaws that lead to untrusted code execution. They still regularly exhibit “game over” vulnerabilities, even in Firefox and Chrome. Displaying a PNG has been exploitable as recently as 2012. Selecting a font via CSS was exploitable in 2010. In many cases, these types of bugs are interrelated: a flaw in a codec could require heap grooming via JavaScript to be reliably exploitable. By standardizing more parsing and more complex decoders, HTML5 grants remote, untrusted sources access to the components that are still the biggest source of code execution vulnerabilities in the browser, despite attempts to audit and harden them.
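
To make the interrelation concrete, here is a schematic sketch (not any particular exploit) of what heap grooming from script looks like. The counts and sizes are arbitrary placeholders; real exploits tune them to the specific allocator and bug:

    // Schematic only: shape the heap so a later overflow in a native decoder
    // lands next to script-controlled memory.
    var spray = [];
    for (var i = 0; i < 4096; i++) {
      spray.push(new ArrayBuffer(0x1000)); // many fixed-size native allocations
    }
    for (var j = 0; j < spray.length; j += 2) {
      spray[j] = null; // free every other one to punch predictable holes
    }
    // The page would then trigger the decoder flaw (e.g. by loading a
    // malformed image) so the corrupted buffer falls into a groomed hole.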

Additionally, HTML5 exposes elements that have not had this kind of attention. WebGL hands over access to your 3D graphics stack, something that even CERT thinks is worth disabling. If you want to know the future of exploitation, keep an eye on the console and iPhone/Android hacking groups. 3D shaders were the first software exploit of the Xbox 360, a platform that is much more secure than any browser. And Windows GDI was remotely exploitable in 2009. Firefox’s WebGL is built on top of Mesa, which is software from the bad old days of 1993. How is it going to do any better than Microsoft’s most secure platform?

As an aside, a rather poor PR battle over WebGL is worth addressing here. An article by a group called Context in 2011 raised some of these same issues, but their exploit was only a DoS. Mozilla devs jumped on this right away. Their solution is a whitelist and blacklist for graphics drivers. A blacklist is great for everyone after a 0-day has been discovered, fixed, and deployed, but not so good before then.

Call me a luddite, but I measure security by what I can easily disable or route around and ignore. Flash is easily blocked and can be uninstalled. JavaScript can be disabled with a browser setting or filtered. But HTML5? Well, that’s knit into pretty much every area of the browser. You want to disable WebGL? No checkbox, but at least there’s about:config. Just make sure no one set “webgl.force-enabled” or whatever the next software update adds to your settings. Want to disable parts of CSS but not page layout? Want a no-codec browser? Get out the compiler.
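
For reference, here is a sketch of that about:config dance as a user.js in the Firefox profile directory. Pref names change across releases, so treat these as illustrative and check about:config on your version:

    // user.js: force WebGL off at startup (illustrative; these are the pref
    // names Firefox shipped around this time).
    user_pref("webgl.disabled", true);        // turn WebGL off
    user_pref("webgl.force-enabled", false);  // and keep it from being flipped back on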

Browser vendors don’t care about the individual target getting compromised; they care about the masses. The cost/benefit tradeoff for these two groups is completely opposite. Otherwise, we’d see vendors competing over who could remove the most features to produce the qmail of browsers.

Security happens in waves. If you’re an ordinary user, the work of Microsoft and Google in particular has paid off for you over the past 10 years. But woe to you if you manage high-value targets. The game of whack-a-mole with the browser vendors has been getting worse, not better. The more confident they get from their bug bounties and hardening, the more likely they are to add complex, deeply intertwined features. And so the pendulum begins swinging back the other way for everyone.

14 Comments

  1. Actually, ActiveX already gave access to DirectX and so on for games back in the days of IE5. WebGL is a very specific subset of OpenGL (similar to OpenGL ES for mobile; I think it was even based on it).

    If you care about security, it’s probably better to go into the Firefox preferences and disable hardware acceleration in general rather than just WebGL in about:config. Disabling WebGL without disabling hardware acceleration probably gives a false sense of security.

    One other thing that has improved a lot for normal users is the update cycle (some might not like it, so they might not call it an improvement): because browsers get many updates, they also get security updates faster.

    Comment by Lennie — December 4, 2012 @ 4:51 am

    • Yes, I was not claiming IE/ActiveX was at any point better than Firefox or Chrome in terms of security. So saying that ActiveX ~= WebGL in terms of access to graphics hardware supports my point rather than disproving it. Oh, and WebGL driver blacklists ~= ActiveX killbits. Funny how similar the approaches are.

      I do know about disabling hardware acceleration, but that also supports my point. Why isn’t there a “grant 3D acceleration” permission that software has to ask for in order to get access? Just because I don’t want my browser to have it shouldn’t mean my window manager can’t have it.

      Comment by Nate Lawson — December 4, 2012 @ 8:55 am

      • The reason I mentioned ActiveX is that you presented HTML5/WebGL as something new; it isn’t new.

        My point is that disabling WebGL specifically doesn’t matter: a different bug in Firefox could still cause problems as long as hardware acceleration is on, and you don’t need WebGL for that. WebGL isn’t all that special.

        If you encounter a webpage that needs WebGL, you can enable hardware acceleration, which will automatically enable (accelerated) WebGL. (I think the Firefox developers had a plan to use LLVMpipe to do software rendering of WebGL on systems with blacklisted video drivers.)

        You ask why hardware acceleration is enabled by default for all pages: because they want to render all pages as fast as possible, of course. ;-)

        Also, @Brad Hill / @Nate Lawson: the browser with all the knobs and switches already exists; it’s called an add-on.

        I still think I’d rather frequently update “one” piece of software (the browser and the OS/drivers) and not have to deal with Flash, Java, and Acrobat. I think it’s pretty clear they have a worse track record than the browser makers. Adobe has improved over the last few years, but Oracle doesn’t care.

        Anyway, what about supporting things like Content Security Policy? Would it not be better to fundamentally improve the whole web from the ground up?
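
        For reference, CSP is delivered as an HTTP response header that whitelists where content may load from. A minimal example follows; the unprefixed header name is the one standardized in CSP 1.0, and browsers at the time also used the prefixed X-Content-Security-Policy and X-WebKit-CSP variants:

            Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'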

        Comment by Lennie — December 4, 2012 @ 3:46 pm

      • I am comparing Firefox/Chrome adding WebGL to Firefox/Chrome before WebGL. That is a regression in my book.

        IE/ActiveX is not even in the scope of this article. You can make a lot of bad things look good by comparing them to ActiveX, but it doesn’t make them less bad.

        Comment by Nate Lawson — December 4, 2012 @ 5:44 pm

  2. On the other hand, since people watch videos and want active content anyway, I’d rather have it be a standardized element with a reliable mechanism for providing security updates than a random set of custom-installed plugins.

    As anecdotal evidence, I recently found that I was actually still running a years-old Flash plugin because back in the day I needed to do some hacking to get it running and had static scripts and binaries in ~/bin/.

    Comment by pepe — December 4, 2012 @ 6:31 am

    • “Fast updates” only matter if you have to frequently update your software. They’re also part of the problem.

      Frequent updates lead to more vendor complacency with pre-ship testing since features can always be improved later. They also make it harder to audit what an update actually does, leading to the possibility of backdoors slipping into customer installs quickly once a vendor is compromised. Evidence: Flame’s use of the Windows Update code signing cert to get silent access to PCs.

      The qmail of web browsers would not need frequent updates.

      Comment by Nate Lawson — December 4, 2012 @ 8:59 am

  3. “Call me a luddite, but I measure security by what I can easily disable or route around and ignore.” Nate, I won’t call you a luddite, but I will say that’s not a particularly compelling argument to content developers: “Please use technologies that are worse and less likely to be secure for you and your users (plugins vs. HTML5) because I want to be able to more easily ignore your content.” I don’t think you’re going to build a lot of momentum with that message.

    I browsed for many years with plugins and script disabled, and as I mentioned in the original piece, the Web left us behind. It became unusable. The modern Web is not linked documents anymore, it is a giant distributed application, intended to be interactive everywhere. What you’re saying is really, “I don’t want to use the Web.”

    Or, if you think there is really a need and demand for a browser with more knobs and switches, or one that lives in the past world of static “pages”, then as you suggest, get out the compiler and meet it.

    Comment by Brad Hill — December 4, 2012 @ 10:25 am

    • A well-designed architecture adds new features as configurable components. A poor one hard-codes things, hides options, and mixes its implementation throughout the codebase.

      I said “get out the compiler” to illustrate how poor the runtime configurability of these new features is, not to suggest that people write their own browsers. If “write your own” is the only solution for people managing important targets, then cooperative software development has died.

      I disagree with your straw man (“I don’t want to use the Web”). What I don’t want to use are the new features of the web as envisioned by the people who design things like w3crypto, Native Client, or WebGL. There is a lot of middle ground between that and “static pages”.

      Comment by Nate Lawson — December 4, 2012 @ 5:54 pm

      • Now that you mention it, what do you think about w3crypto?

        Comment by Lennie — December 6, 2012 @ 2:46 am

      • That would probably take a full post in itself. As it stands today, w3crypto is a terrible idea. If approved, it puts a stamp of legitimacy on browser-based JS crypto, ratifying direct access to low-level primitives without addressing the hard problems like key management.

        If you think the only problem with JS crypto is “I can’t do PKCS #1 v1.5 fast enough”, you’d probably think w3crypto is great.
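
        To make that concrete, here is a sketch of the kind of raw-primitive access being ratified: a hypothetical encryptDemo function, written in the promise-based shape the API later standardized on (the 2012 draft differs in details). Key storage, distribution, and rotation are all left to page code:

            // Sketch only: the page gets AES-GCM directly from the browser.
            async function encryptDemo(plaintext) {  // plaintext: a Uint8Array
              const key = await crypto.subtle.generateKey(
                { name: "AES-GCM", length: 256 },
                true,                   // extractable: the page can export raw key bytes
                ["encrypt", "decrypt"]
              );
              const iv = crypto.getRandomValues(new Uint8Array(12));
              const ciphertext = await crypto.subtle.encrypt(
                { name: "AES-GCM", iv: iv }, key, plaintext
              );
              return { iv: iv, ciphertext: ciphertext }; // the key lives wherever the page puts it
            }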

        Comment by Nate Lawson — December 8, 2012 @ 10:29 am

      • Well, in practice I guess that means the key is local to the browser, but you can export and import it? Kind of like client certificates? Or you could use the system that is used for synchronizing bookmarks and settings. At least the sync function in Firefox will encrypt it before storing it on the server (you can use the Mozilla one or set up your own; just download the source from the Mozilla repository).

        Anyway, I think it is gonna be browser-specific.

        Comment by Lennie — December 8, 2012 @ 11:27 am

      • Makes you wonder why the effort is being put into making a new API instead of fixing the usability of client certs.

        Comment by Nate Lawson — December 14, 2012 @ 1:29 pm

  4. Nate, I agree with your assessment… except you may not have painted the picture bleakly enough ;) I recently attended BSides Portland, and Steve Orrin did a talk there that summarized a lot of the research going on in the community on HTML5. The talk made HTML5 come across like a train wreck waiting to happen.

    The call to action Steve made was for more folks from the security side of the house to get involved with the HTML5 standard and its development. Most of the people involved there are from the browser and OS companies and not from a security background… and it shows.

    Comment by Bruce Monroe — December 4, 2012 @ 12:24 pm

    • Bruce, I just looked up Steven Orrin’s presentation, and it is full of wrong information. He clearly doesn’t understand, and hasn’t written any code to try, the “attacks” he’s putting on his slides, because most of them don’t work. I will also note that, despite his “call to action”, I’ve never seen anything from him, or from any of the people whose work he references, posted to the WebAppSec WG, where we do all our design work on a public mailing list.

      Comment by Brad Hill — December 10, 2012 @ 10:39 am

