Google Tech Talk on common crypto flaws

Yesterday, I gave a talk at Google on common crypto flaws, with a focus on web-based applications. As in my previous talk, my goal was to convince developers that the most prevalent crypto libraries are too low-level, and that they should reexamine their design if it requires a custom crypto protocol.

Designing a crypto protocol is extremely costly due to the careful review that is required and the huge potential damage if a flaw is found. Instead of incurring that cost, it’s often better to keep state on the server or use well-reviewed libraries that provide SSL or GPG interfaces. There are no flaws in the crypto you avoid implementing.

I hope you enjoy the video below. Slides are also posted here.

35 thoughts on “Google Tech Talk on common crypto flaws”

  1. Great talk, with complete coverage of many important points. However, I’ve never been a fan of the “please don’t ever do crypto yourself” line. This seems to be a popular talking point nowadays, and I think it’s a bit over the top. With some specific education and straightforward guidelines, programmers should be able to do crypto. As long as they understand that programming crypto is like arming a bomb, that is. Treat it carefully, and with respect and forethought.

    1. I respectfully disagree. If you look at cryptographers, they don’t do crypto like programmers seem to. No respectable cryptographer designs a protocol without planning right from the beginning for massive peer review. On the other hand, programming is often a solo sport. Pair programming does not compare to standing in front of hundreds of people at CRYPTO and having your multi-year thesis torn apart publicly.

      All I’m saying is that embarking on crypto design without the assurance of this kind of review is foolhardy. If you can avoid this danger by using crypto protocols that already exist, why wouldn’t you? A cryptographer would! Just because you can program well in assembly doesn’t mean you use it casually for writing large applications. Why should avoiding hand-rolled crypto be any harder than avoiding assembly?

  2. A lot of what you talked about raises a big question with me: what *is* a proper security paradigm for web applications?

    Let’s take user-password security for a web application. The common way to store passwords in the database is sha1( password + salt ), which is precisely the approach you said isn’t so good. So, if the de facto standard for 80% of open-source software isn’t secure enough, then what should it be replaced with?

    In an effort to be more secure I’ve been attempting to rewrite the technique I use to store passwords.

    My new technique is to generate an HMAC from the password, the key being a very large random number (per user), and then to regenerate the HMAC from the result and the large number four more times. This was inspired by someone else’s suggestions … on a blog. But I feel like this isn’t any more secure than the “standard”.
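    For what it’s worth, the scheme described above can be sketched roughly like this (a toy illustration only, not a reviewed design; the function and parameter names are mine, not the commenter’s):

```python
import hashlib
import hmac
import os

def iterated_hmac(password: bytes, key: bytes, rounds: int = 5) -> bytes:
    # Feed each HMAC output back in as the next message, keyed throughout
    # by the per-user random value, as the comment describes.
    digest = password
    for _ in range(rounds):
        digest = hmac.new(key, digest, hashlib.sha256).digest()
    return digest

# The per-user "very large random number", stored alongside the result.
key = os.urandom(32)
stored = iterated_hmac(b"hunter2", key)
```

As discussed below, iterating a keyed hash a handful of times adds little over established schemes like bcrypt, which were designed for exactly this job.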

    1. Your approach seems better, but why are both you and the 80% not using what’s already out there? For example, the openwall project provides a PHP version of the OpenBSD password crypt or you can use the one built into your OS if you have a Unix C library.

      Why not spend the time you would have spent designing, implementing, and testing your own crypt on surveying what’s out there and doing a security review of its implementation?

      1. I think you’re assuming that we know what to do, cryptographically. I’m a hobbyist when it comes to ciphers and understand the rudimentary physical/paper ciphers like the airplane cipher; I’ve gotten as far as writing an implementation of XTea in Pascal. But, that doesn’t mean I actually know what I’m doing … that just means I could take the code I found and translate it to another language.

        How do I evaluate cryptographic implementations to determine their relative security?

        Understand what I see when I look at the code you linked me to: it’s three years old, it reimplements the existing base64_encode in native PHP, it uses MD5 (which I know is easily broken), it falls back to DES (which should never happen, given its weakness; I do know about the weak keys, etc.), and it relies on the highly esoteric crypt() function built into PHP. My problem with crypt() is its unclear usage, and the use of several CIPHERS to “hash” something.

        All this makes me want to scream, wave my arms in panic, and run away.

        From what I understand, ciphers aren’t destructive enough for password storage when the implementation of crypt is freely readable, as would be any passwords the system uses. An attacker would simply need to glance at my database table, see the signature “$2$”, and then figure out what my salt is; then it all falls to pieces. Again, by my understanding, this makes ciphers more dangerous against a serious hacker than using hashes like SHA-512 or Whirlpool with a salt, or using their HMAC algorithms.

        In short: how can I possibly tell when there is a good cryptographic implementation, so that I can look for one; and how can I trust what appears both unverified and poorly written? Besides, I’m just a freelancer and contractor, and cannot afford to conduct a security review; I could look at the logic of an implementation, but again, I don’t know what to search for.

  3. If you want to see a horror show, browse the questions tagged “cryptography” or “encryption” on StackOverflow.

      1. Have you read the book “The Cult of the Amateur”? Though it does not talk specifically about crypto, it does talk about some of the issues with Web 2.0, some of which probably led to things like the rainbow table fiasco.

  4. WickedFlea,

    Perhaps you just had trouble finding this section of the website I linked to; maybe this will help clear up some of your questions.

    The preferred (most secure) hashing method supported by phpass is the OpenBSD-style Blowfish-based bcrypt, also supported with our public domain crypt_blowfish package (for C applications), and known in PHP as CRYPT_BLOWFISH, with a fallback to BSDI-style extended DES-based hashes, known in PHP as CRYPT_EXT_DES, and a last resort fallback to MD5-based salted and variable iteration count password hashes implemented in phpass itself (also referred to as portable hashes).

    So you as the person responsible for your site’s security would use the Blowfish mode, avoiding fallbacks to the other modes that are only there for backwards compatibility. Not too hard to use the default, is it? Apparently, this is built into PHP 5.3.0+ or the Suhosin patch (again, just reading the page).

    And no: an attacker can see your salts, and a well-designed password hashing scheme should still not fall apart. A salt is not secret. It only needs to make a given hash unique.
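    To make the “salt is not secret” point concrete, here is a minimal sketch of the salted, iterated approach using Python’s standard-library PBKDF2 as a stand-in (phpass/bcrypt itself is not part of the Python standard library; the function names and storage format are illustrative, not phpass’s):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> str:
    # The salt is random and unique per hash, but NOT secret:
    # it is stored in the clear right next to the digest.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Even with the salt and iteration count in plain view, an attacker still has to grind through the iterated hash separately for every user.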

    1. For the record, Robert K and I are synonymous; WordPress didn’t recognize me when I commented the first time.

      Okay then, to distill what I wanted from the responses you gave…

      Ideal security involves: using trusted, tested, established ciphers and hashes; presuming that your salt is known, and making it unique per record; and taking enough time to generate the hash so as to make brute force difficult (or infeasible). In web development, the greatest protection against spam and automated attacks is uniqueness, which means that customizing the algorithms, rounds, and modifiers used would prevent things like precalculated hash attacks from blindly compromising more than a handful of sites.

      1. The salt has to be unique per record, AND per instance of a record. So if a user changes their password, you need a new, unique salt again.

        I disagree that uniqueness is needed for the password hash algorithm, however. You have to assume that if the attacker has your hashes, they have the code that generates them. After all, they’re on the same server and require the same privileges to access.

      2. Nate, I never said uniqueness in the algorithm is required: I said that it would prevent blind reuse of the same precalculated dictionary. I.e., it would take more time to compromise even when in possession of the algorithm. I did forget to mention salt uniqueness per-instance, but I assumed nothing.

        And, firstly, the database and the application that I deploy run on separate machines. While yes, if my FTP account is compromised (I do change my passwords) the attacker will have my database credentials and the exact algorithm, the database is just as often compromised by itself. Should the database alone be compromised, a custom algorithm (with no indicative header like $2a$) would be very difficult to diagnose and brute-force, since they would have no clue to any procedural variations within the algorithm.


        I won’t claim it’s reviewed, secure, or even a good idea, but I put it together to try to comprehend the concepts of crypt() and phpass. Oh, and it works on virtually all installations of PHP 5.2.9, which most shared servers out there use.

      3. WickedFlea,

        I looked at your design. There’s a salt and a hashed password, separated by a $. There are no identifiers so if a user wanted to increase the number of rounds later, upgrade the algorithm, etc., there’s no way to tell which configuration goes with a particular hash. So basically, the result is equivalent to the old Unix crypt but with more rounds by default, ignoring the features added in md5-crypt (algorithm identifier) and bcrypt (number of rounds identifier). I don’t think you’ve gained any security in exchange for the usability loss.
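        The value of those identifiers can be illustrated with a small sketch: a modular-crypt-style string such as $2a$10$… carries the algorithm name and cost with each hash, so individual hashes can be verified and upgraded later. (A hypothetical parser, assuming the common $id$cost$payload layout; Python for illustration.)

```python
def parse_modular_crypt(stored: str):
    # Split a modular-crypt-style string like "$2a$10$<salt-and-hash>" into
    # (identifier, cost, payload). The identifier records which algorithm
    # and work factor produced this particular hash, so old hashes can be
    # verified (and rehashed at login) even after the site changes defaults.
    _, ident, cost, payload = stored.split("$", 3)
    return ident, int(cost), payload
```

Without this self-description, every hash in the database is locked to one hard-coded configuration forever.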

    1. You mentioned later that you figured this out. For the benefit of other readers, I’ll add some more notes here.

      No attack is being described on the hash function itself. It’s just an inherent property of the Merkle-Damgård construction that, given a hash for some data, anyone can extend the hash to cover additional data. To do this, they simply use the previous output hash value as the initial chaining value, then hash the data they want to append as normal.

      There are additional, more difficult attacks on constructions like H(key || data || key). Luckily, it’s easiest and best to just use HMAC.
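      The extension property can be demonstrated with a toy Merkle-Damgård construction (length padding is deliberately omitted for clarity; a real attack must account for the original message’s padding, and the function names here are mine):

```python
import hashlib

BLOCK = 64  # toy block size in bytes

def compress(state: bytes, block: bytes) -> bytes:
    # Stand-in compression function; a real hash uses SHA-style rounds.
    return hashlib.sha256(state + block).digest()

def toy_md(data: bytes, state: bytes = b"\x00" * 32) -> bytes:
    # Merkle-Damgard chaining: each block updates the running state.
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

secret = b"k" * BLOCK                  # unknown to the attacker
h1 = toy_md(secret + b"A" * BLOCK)     # published hash of (secret || data)

# Knowing only h1, the attacker resumes hashing to cover appended data:
extension = b"B" * BLOCK
forged = toy_md(extension, state=h1)
assert forged == toy_md(secret + b"A" * BLOCK + extension)
```

HMAC avoids this because the outer, keyed hash wraps the inner result, so the internal chaining value is never exposed to the attacker.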

  5. A great talk. I have only just seen it.
    Personally, I don’t agree with your slide #9. You assume that the keys to encrypt/decrypt the data via Javascript are on the server that provides the data itself. In a Host-proof Hosting system this isn’t true. Only the user has their keys, and all the data are sent to the server only after they have been encrypted in the browser.
    If you adopt standard algorithms (AES, SHA256, RSA…), avoid XSS problems, etc., you can create a good system that fully protects the privacy of its users.

    1. How can it be a “Host-proof” system when the user is generating the keys and then encrypting the data using Javascript code downloaded from the Host?

      Javascript has no unique root of trust. Here is a “secret keeper” application that encrypts the user’s data using a password they enter into the browser. All communications are over SSL to prevent MITM.

      Design #1:
      A. Client gets HTML form
      B. Client posts data and password
      C. Server runs code that encrypts data with password and stores it

      Attack: compromise server and trojan the code

      Design #2:
      A. Client gets HTML form and Javascript code
      B. Client runs JS code that encrypts data with password
      C. Client posts encrypted data to the server

      Attack: compromise server and trojan the JS code

      Both models have the same root of trust: the server. Everything else is just handwaving.

      1. Hi Nate, I agree that “Both models have the same root of trust: the server”.

        I didn’t point the attention at trust, but at the possibility of creating a crypto system that properly uses Javascript. I am a founder of Passpack, an online password manager. We launched our technology in December 2006 and now we have about 70,000 users. In all these years, we haven’t had security issues. This says something.

        I know that we must always trust a web system. But a well-made Host-proof Hosting system offers some advantages for the user, because it safeguards her privacy.

        To grab your data, I need to create a back door in the code. That is a clear violation of the contract with the user, and you can notice the modification.

        Instead, if the encryption is delegated to the server, during the job I can make a copy of your data and you cannot know it.

      2. I appreciate that you are getting down to the actual details, but it doesn’t change the fact that JS crypto is being oversold to the public. Just because you haven’t had known security issues in 3 years doesn’t mean the system provides the type of security that users think they’re getting.

        The one thing JS crypto gets you that server-side crypto does not is client-side auditability. That’s it. However, this benefit comes with tremendous caveats that I think far outweigh it.

        * The user has to be a cryptographer who can detect subtle cryptographic flaws. Note: this ignores all the malicious crypto research by Yung et al and assumes this is actually doable in reasonable time.

        * The user always does “View Source” each time she connects to be sure the code is identical. If it was changed for bugfixes or whatever, she has to re-review the entire thing.

        * The user always loads everything over SSL to be sure she’s getting untampered code and there is no MITM.

        * The JS code does not have cache problems where users keep running old code after you’ve fixed a flaw and tried to upgrade them.

        * You periodically audit all your servers to be sure they aren’t serving up malicious or outdated JS code. Note: this audit cannot be done via a client because an attacker can return different code to you than your users, based on src IP.

        Since no one does all this and the disadvantages are so significant, JS crypto is worse than server-side crypto.

      3. You listed some interesting points. The user has some difficulty verifying the code, and a hacker can discover holes in it. You’re right. But if the crypto is server-side, the user can verify nothing at all. So the assumption is that the server-side programmers maintain a good system and we trust them.

        If you assume that the provider of the JS crypto takes the same care as the server-side crypto, something changes. The difference:

        * a well-programmed host-proof hosting app guarantees the user’s privacy
        * a well-programmed server-side crypto app cannot guarantee it

        Maintaining an HPH system is more critical than maintaining a server-side crypto system, and it requires more attention, I know. But I think that the choice depends on your objective.

      4. I’d rather have the crypto code on a locked-down server I control than pushing it to some unknown browser environment. Since no users actually are capable or even motivated to audit the JS crypto they receive anew each session, there is no advantage. Given the disadvantages listed before, I’m still unconvinced.

      5. the obvious solution would appear to be ‘do both’. encrypt in the client with a client only key to add a layer of privacy between client and server, then encrypt again on the server with a key provided, but not stored, on the server to add a more controlled and auditable layer. the root of trust is still always the server, but the client application has an extremely good chance at maintaining privacy in a hybrid system.

      6. for a very important reason: your analysis was with respect to the relationship between client and an *attacker* while francesco’s was with respect to the relationship between client and *server*. when js crypto is used, it certainly may be true that it does not enhance the security of the client/attacker relationship but it can enhance the security/privacy between the client/server. the key difference is that in case of an attack both parties are *not* cooperating to keep information secret but in the case of js crypto both parties *are* cooperating to maintain privacy/security of the client’s data.

        imagine a hybrid system that did both, your design #2 layered on top of your design #1 (two passphrases, one secret and used on the client and one passed to the server to perform encryption there). while it may be true that the js crypto layer doesn’t enhance the security between client and the attacker (and therefore a claim that it does might be correctly called ‘handwaving’) it would not be ‘handwaving’ to say that it does enhance the security/privacy between the *client and the server*.

        i simply think you and francesco were talking about security in two different contexts, and the context in which security takes place is important for either of your comments to be understood fully: two cooperating and trusting parties trying to keep information private is a *completely* different context than two parties that do not have this agreement.

        the trust root is always the server. if i trust them to encrypt and protect my data i don’t see why i wouldn’t trust them to help me keep a secret with standard cryptography techniques employed in the browser. while it may not enhance global security it does enhance the security between myself and a trusted partner. only when this partner no longer becomes trusted does this stop being true.

      7. No, I was not drawing a distinction between client/server & client/attacker. Quoting you:

        “while it (JS crypto) may not enhance global security it does enhance the security between myself and a trusted partner. only when this partner no longer becomes trusted does this stop being true.”

        I disagree with the first sentence and agree with the second.

        There is no difference between the attacker vantage points needed to compromise JS crypto or server-side crypto. They either need control over the server or to compromise the link between client/server (e.g. SSL attacks, phishing, etc.)

        JS crypto makes no difference. However, it does have all the drawbacks I listed in my comment on 2010/1/11. Thus, it should not be used.

  6. i think we’re talking at cross purposes. i’m talking about doing BOTH server side and client side crypto. most of the ‘disadvantages’ you list surrounding having to use ssl/caching are just silly and trivial to deal with. comparing versions too: any revision tool does this. no one denies that server side crypto *MUST* be used to secure a system. the point you seem to be making, that js crypto is valueless, denies the reality that some people want to keep data, stored on a server, secret from that server in addition to keeping it safe from random attackers. and wrt this goal i think js crypto has a place/need.

    i personally refuse to consider that it is either value-less or impossible. mark my words: we’ll see more of it as the cloud, data everywhere, and js continue to be part of the technological landscape.

    at a meta-level *all* crypto code is run from some random server. in the case of a browser/js the server is known and the container designed to be a security sandbox. the sandbox is watched by many eyes. contrast this to

    sudo rpm install openssl


    sudo python setup.py install

    or, worse

    (double click something.exe)

    or, even worse

    (automatically upgrade my system while i’m at the coffee shop)

    and i think one can begin to imagine how, one day, it might be considered crazy to install and execute crypto code from some random operating system repository.

    food for thought.

    1. You advocate changing this:

      – Small set of libraries (OpenSSL, pycrypto, ?)
      – Has been in existence for 10+ years, usually at least one actual cryptographer as maintainer
      – Access to proven platform security features (/dev/urandom, ACLs, UIDs)
      – Audit library once and then use traditional modification detection (tripwire etc.)

      To this:

      – Many different JS crypto libraries
      – Around <4 years (some much less), written by web developers
      – No access to system security features such as PRNG, same origin policy only security model
      – User audits crypto code every time they connect to the site (note: no users actually do this), no way to detect trojans

      1. nate-

        i *really* appreciate your thoughtful analysis, but know that i am a developer that has developed many durable and long running systems on the basis of someone saying “that won’t work”

        hopefully you’ll appreciate that i’m sincerely trying to get a deep understanding of the limitations of client vs server systems and not simply trolling… to that end i offer my gratitude in engaging in such a discussion

        i hope your readers will find value in the discussion, even if the end result is that i end up looking like an idiot ;-)

        my commentary on your recent points follows:

        > – Many different JS crypto libraries

        i am personally aware of only a few credible js crypto libs, such as, and yet am also aware of dozens of c/fortran/misc crypto libs. of course the quality/culture of js code is a real issue… but some basic googling doesn’t seem to point to a huge proliferation of js libs. if i’ve missed some plethora of usable js crypto code please advise with urls.

        > – Around <4 years (some much less), written by web developers

        maturity != quality or security. ($ms)

        > – No access to system security features such as PRNG

        personally i’d trust this more than some random mobile phone’s prng… in fact, i think many of your arguments are quite ‘personal computer’ centric… it’s quite worthwhile considering that access to secure data is going to become increasingly mobile-based, and that the limitations of those platforms will undoubtedly become a game changer wrt determining best practices in data security.

        > same origin policy only security model

        i think this is a red herring. the browser is probably the *only* example of a ubiquitous code/security sandbox in existence – despite the efforts of sun microsystems… same origin is better than *no* security model. consider that most users have any number of applications set up for ‘auto update’ on their systems – with root access no less – and that the kernel (ignoring se-linux, etc) offers absolutely zero protection from executing arbitrary *.so’s. it’s actually very odd to me that people consider with great deliberation the merits of executing arbitrary javascript when almost no one understands nor cares how osx or ubuntu dlls are verified, signed, and trusted…

        it brings to mind a scenario i was once in: a ‘security’ guy refused to install a version of my software on our .gov systems… i pointed out that the mac he was running came out of the box with an older, flawed version of the software and that i, the author, was suggesting he update to a newer, more secure version. he ignored me, despite the fact that apple had never contacted me and had no idea if i’d released improvements to the oss package they’d included from my repos… this is a concrete example of how the magical process of pulling down loads of software from the intertubes and running it generally goes through a very lax security audit.

        summary: unless you can provide some *general* mechanism for how users should download, apply trust to, and execute software from the internet, including system and arbitrary third-party dlls – i don’t see why js code should be subject to some special treatment/consideration – it’s the only programming language in the world with a standards-based security sandbox!

        > – User audits crypto code every time they connect to the site (note: no users actually do this), no way to detect trojans

        i’m unclear how https and certs (trust chains) don’t solve this. if they do not it would seem that the ‘server side encryption is better’ argument would immediately fall down. right? consider: what version of openssl is the server running? where did they get it from? how do i know this?

      2. I appreciate your long and thoughtful comment. However, since this post is really not about JS crypto, I’m writing a new one that will go out next week specifically about JS crypto. Please post future comments there as it will address a lot of what you’ve brought up.

        I agree the current app deployment model, even for heavyweight client apps, is flawed. The “best” commonly-used approach we have today is verifying a PGP signature with a public key that was fetched directly from the same server as the signed code.

        I also agree that a server-side app offers no auditability to the client, while JS crypto at least gives some possibility of that. Server-side apps do not offer a fundamentally better or different trust model than JS crypto — both have significant drawbacks in common. But I think there are some additional drawbacks to JS crypto that make it a worse option, even though the trust model is largely the same.

        Check out the post next week and see if it helps explain why. Thanks.
