An obvious solution to the password problem

Many organizations try to solve problems by making rules. For example, they want to prevent accounts from being compromised due to weak passwords, so they institute a password policy. But any policy with specific rules gets in the way of legitimate choices and is vulnerable to being gamed by the lazy. This isn't because people are bad; it's because you didn't properly align incentives.

For example, a bank might require passwords with at least one capital letter and a number. However, passwords like "Password1" are barely more secure than "password". (You get them in the second pass of running Crack, not the first. Big deal.) A user who chose that password was just trying to get around the rule, not trying to choose something secure. Meanwhile, a much more secure password like "jnizwbier uvnqera" would fail the rule.
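To make the gap concrete, here is a rough back-of-the-envelope comparison. This is only a sketch: the charset sizes are illustrative assumptions, and real guessing resistance depends on the attacker's dictionary, not just brute-force search space.

```python
import math

def brute_force_bits(charset_size: int, length: int) -> float:
    """Bits of search space for a truly RANDOM string of this length;
    an upper bound on how hard a password is to guess."""
    return length * math.log2(charset_size)

# "Password1": 9 chars drawn from ~62 symbols looks like ~53 bits...
print(brute_force_bits(62, 9))
# ...but it's really a dictionary word plus a standard mutation, so a
# cracker tries it almost immediately; the bound above is meaningless.

# "jnizwbier uvnqera": 17 random chars from ~27 symbols (a-z plus space)
print(brute_force_bits(27, 17))  # roughly 81 bits, out of brute-force range
```

The point of the sketch is that the policy-compliant password only looks strong under the naive bound, while the rule-failing passphrase is strong under any measure.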

The solution is not more rules. It is twofold: give users a “why” and a “how”. You put the “why” up in great big red letters and then refer to the “how”. If users ignore this step, your “why” is not compelling enough. Come up with a bigger carrot or stick.

The "why" is a benefit or a penalty. You could give account holders a free coffee if their account goes a year without being compromised or requiring a password reset. Or you could make them responsible for any money spent from their account if an investigation shows it was compromised via its password.

The "how" in this case is a short tutorial on choosing a good passphrase, access to a good random password generator, and a generous maximum length (256 characters?) so arbitrary limits don't constrain users' choices.
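A diceware-style passphrase generator is only a few lines. This is a sketch with a toy word list; a real deployment would draw from a large published list, such as the EFF's 7,776-word diceware list.

```python
import secrets

# Toy word list for illustration only; use a large published list in practice.
WORDS = ["apple", "breeze", "candle", "dune", "ember", "fjord",
         "glade", "harbor", "ivory", "juniper", "kettle", "lagoon"]

def passphrase(n_words: int = 5, wordlist: list[str] = WORDS) -> str:
    # secrets.choice uses a cryptographically secure RNG,
    # unlike random.choice, which is predictable.
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())  # e.g. "kettle dune ivory breeze glade"
```

With a 7,776-word list, five words gives about 64 bits of entropy while staying easy to memorize.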

That’s it. Once you align incentives and provide the means to succeed, rules are irrelevant. This goes for any system, not just passwords.

3 thoughts on "An obvious solution to the password problem"

  1. Nate,

    As always, a very succinct and clear-thinking approach to the problem. The academic research that's come out in the past year, using massive datasets like the leaked password list from RockYou, confirms that password rules aren't effective: users will tend toward choosing a few weak things in any space they're allowed to pick from.

    But I disagree on the obviousness of the solution of rewarding users for not having their password compromised. It's hard to detect password compromise, and even when it's detected, it's difficult to tell whether the user's password was guessed or whether one of several other vectors (a keylogger, shoulder surfing, Wireshark, phishing, or a compromise at another site) was used to get it. If we could reliably detect that somebody was trying to guess a user's password, then presumably that could be stopped directly.

    I also think holding users responsible if their password is compromised is unworkable. It would require users to use a completely different password for every site, which isn't possible without tools like PwdHash, and those remain beyond most users.

    For big websites, I think a better target is for users to use passwords that are unique to the system. Given appropriate guessing limits and a large userbase, this is enough to make random guessing attacks futile. Targeted guessing might still work (if, say, every user picks their username as their password), but beyond some simple checks this is impossible to defend against in general. This still doesn't solve the problem of users doing insecure things with their passwords, but it's a start. Non-big websites should be out of the password collection business within a few years.

    1. Thanks for the comment. This is definitely easier to solve in restricted domains, such as within a small company. Then the penalty/reward can be higher and you can review individuals more carefully.

      However, I don't think you need to differentiate password guessing from keyloggers or shoulder-surfing. Using a shared computer or a public location makes you more vulnerable to both of these, and the user is responsible for protecting the password. Choosing a good one is just the first step. Your education (the "how") can explain all this, and it's up to the users to decide if your incentives make it worthwhile to learn from them.

      The reason this wouldn't work for some environments, say Facebook, is that the company's cost of punishing/rewarding some percentage of their 500M users is high, but its loss when an account is compromised is extremely low. So they needn't worry about this; just let the users deal with it.

      When you assign responsibility properly and pick the right level of incentives, the system self-corrects. How many people use a more secure password for their online bank than for Myspace? I'd guess a decent number, even without explicit incentives.

  2. So how do you determine whether you have been compromised if the person attacking you does it covertly? Passwords are just a barrier; they are like keys to your house.

    Furthermore, this kind of strategy requires a behavioral shift. If it takes users forever to create a good password, they might not come back at all and may not see value in the service. And if the passwords are complex, they will just write them down.

    Instead, why not suggest two-factor or multi-factor authentication? The user can have a semi-crappy password followed by an auto-generated numeric token.
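The auto-generated numeric token suggested in the comment above is typically a TOTP code (RFC 6238). A minimal sketch using only the Python standard library; the secret and parameters are the RFC's test values, not production choices:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = timestamp // step                  # which 30-second window we're in
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59 the
# 8-digit code from the RFC's published test vectors is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59, digits=8))  # 94287082
```

Because the code changes every 30 seconds, a captured password plus a stale token is useless to the attacker, which sidesteps the covert-compromise detection problem the commenter raises.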

Comments are closed.