9 votes

I saw a recommendation that the number of rounds be set to ($currentYear - 2000) to account for Moore's law, so that 2013 would be 13 rounds and therefore 2^13 total iterations. Of course, you need to take into account your own hardware to ensure it doesn't take too long (I saw 1 second recommended as "safe" for checking passwords/hashes, and 13 rounds falls around that mark on my current hardware).

Does that sound reasonable for a social networking type of site? Or would I be setting myself up for very slow password checking in the future by using ($currentYear - 2000)?

Also, how do you deal with changing the number of rounds from one year to the next? Won't changing the number of rounds change the hashes, therefore not allowing you to check hashes from 2013 in 2014 since the check would use an extra round? Would you have to re-calculate every single hash each year, or how would it work exactly?

First of all I think you should not make that dependent on how long it takes on your (current) hardware – if I happened to get hold of your hashed passwords, I would try cracking them on my {insert_supercomputer_of_your_choice_here}. – CBroe
I would think it depends on the resources available to you and the number of authentication requests you need to handle. One second might be the right number; it all depends. For the second part, yes, you would need to recalculate the hash, or just continue using different numbers of iterations for different hashes. – ficuscr
@CBroe: Yes, but from what I've read and understood, that needs to be balanced against how long it takes on your own server, because you wouldn't want to frustrate your users with a 20-second login. So the key is to find the "right" balance, which is what I'm trying to figure out. – ProgrammerGirl
Well, around one second for a normal user sounds reasonable. One could use more rounds for more "important" accounts like admin accounts – for them it should be acceptable to wait a little longer. – CBroe
@Programmer, yes, that is the typical convention: store the number of 'rounds' along with the salt used when creating the hash. I highly recommend this: blog.ircmaxell.com/2012/10/password-hashing-in-php-talk.html – ficuscr

3 Answers

16 votes

First off, I question that recommendation (adjusting cost based on year). The cost should be based on how fast your hardware is, not the current date. If you don't upgrade your server between now and 2015, there's no reason to increase the cost. All you do is slow an already slow process.

With that said, I also question the recommendation of 1 second for most usages. If you're dealing with highly sensitive information, 1 second (or perhaps longer) is ok. But for the average website, I typically recommend between 0.25 and 0.5 seconds. In some cases you can go lower, but I wouldn't without strong justification.
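
If you want to find the right cost for your own hardware, you can simply time password_hash() and raise the cost until you hit your budget. A rough sketch (the 0.3-second target and the starting cost are placeholders to tune):

$timeTarget = 0.3; // seconds; pick your own budget
$cost = 8;         // bcrypt's cost is a log2 work factor; valid range is 4 to 31

do {
    $cost++;
    $start = microtime(true);
    password_hash('test password', PASSWORD_BCRYPT, ['cost' => $cost]);
    $elapsed = microtime(true) - $start;
} while ($elapsed < $timeTarget && $cost < 31);

echo "Appropriate cost for this machine: " . $cost . "\n";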

Now, to the question itself. When you use crypt() or password_hash(), the iteration count is stored in the returned hash itself. In fact, so is the salt. So all the information needed to compute the hash is included in it!
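
You can even read that information back out programmatically: password_get_info() (part of the same PHP 5.5+ API) reports the algorithm and cost that were parsed out of a stored hash. The exact array shape varies a little between PHP versions:

$hash = password_hash('secret', PASSWORD_BCRYPT, ['cost' => 10]);
print_r(password_get_info($hash));
// Prints something like:
// Array ( [algo] => 1 [algoName] => bcrypt [options] => Array ( [cost] => 10 ) )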

And if you're not using either of those APIs (or the polyfill that I maintain: password-compat), then I really have to wonder why not. Don't invent your own password crypto. Don't use libraries that use native hashes (like phpass) unless you have a strong reason to (certain governmental compliance requirements, or compatibility with PHP <= 5.2).

bcrypt is generally considered the strongest hash format available today. SCrypt is stronger on paper, but it has some open issues and is still very new (and it's not available in PHP core yet). So just use bcrypt...

The password_hash() API has a mechanism for you to do exactly what you're asking: password_needs_rehash(). Basically, you pass in the hash and the options you use today, and it tells you if the hash needs to be rehashed:

if (password_verify($password, $hash)) {
    // Correct password; now check whether the stored hash was created
    // with weaker options than the ones we use today.
    if (password_needs_rehash($hash, PASSWORD_BCRYPT, ['cost' => 14])) {
        $hash = password_hash($password, PASSWORD_BCRYPT, ['cost' => 14]);
        update_password_in_database($hash);
    }
    $loggedin = true;
}

Read the RFC for password_hash() for more information about it (I collected data from a large number of sources, and included references in the RFC).

Edit – Follow-up to @AnotherParker's comment:

Criminals don't stop upgrading their cracking boxes just because you didn't upgrade your server. You do need to increase the work parameter over time to thwart offline attacks.

Sort-of true. Well, true, but misses the point of what I was talking about above.

The cost parameter of the hash function is a time-effort tradeoff: you trade off some time to add additional effort to each hash. On the same hardware, taking more time yields more work. The other way to yield more work is to get faster hardware.

But the recommendation is to test the hash function on your current hardware and make it as expensive as you can reasonably afford. If 0.5 seconds is the maximum you can afford today, then unless you upgrade your server hardware, how is increasing the cost going to help you? In short, it won't, because you'll break the maximum time limit you've already decided is important.

So you can't increase the work parameter without also increasing the server's capabilities, unless you were already producing weaker hashes than you could afford.

Also, check out this answer on the subject

5 votes

When you use bcrypt, the number of rounds is part of the hash generated:

crypt('Password', '$2a$04$thisshallbemysalt');

will result in something like

$2a$04$thisshallbemysalt.rAnd0ml0ok1ngch4rsh3re

The 2a after the first $ sign identifies the bcrypt algorithm, and the next 04 is the cost parameter, i.e. the number of rounds – so by looking at the hash you can see how many rounds were used to create it.

So when you decide it's time to up the number of rounds, you can check the number of rounds used to generate the stored hash when the user logs in – and if it's not your current number of rounds, you re-hash their password there and then and save that as the new hash (after checking that their password matched the existing hash, of course ;-))
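
As a minimal sketch (the helper is made up for illustration, $input/$row/$freshSalt are placeholders, and it assumes the standard $2a$/$2y$ format):

// Hypothetical helper: the cost factor sits in characters 4-5 of a bcrypt hash.
function bcryptCost($hash) {
    return (int) substr($hash, 4, 2);
}

$stored = $row['password_hash'];              // e.g. '$2a$04$thisshallbemysalt...'
if (crypt($input, $stored) === $stored) {     // password matches the stored hash
    if (bcryptCost($stored) < 10) {           // fewer rounds than we want today
        $newHash = crypt($input, '$2a$10$' . $freshSalt); // $freshSalt: fresh 22-char salt
        // ... write $newHash back to the database
    }
}

On PHP 5.5+, password_verify() and password_needs_rehash() do all of this bookkeeping for you.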

4 votes

The idea of key stretching is to make brute-forcing infeasible: calculating a hash hundreds or thousands of times costs the attacker the same amount of extra time per guess as it costs you per login.

It isn't really important whether it takes 1 second, 0.9 seconds, or 2.5 seconds. The idea is to make it infeasible to brute-force millions of password hashes per second. It's the slowdown factor that counts, not the exact number of rounds.

If you use, for example, an SHA256 hash, a system could do X (say 1,000,000) hashes per second. By key stretching (hashing, say, 500 times) you bring this system down to 1,000,000 / 500 = 2,000 attempts per second for each password: you effectively slow down the attacker by a factor of 500. If you use 750 rounds you... exactly! You slow down the attacker by a factor of 750. But increasing the number of rounds affects your own system/website/application as well, so you don't want to go too high "just to be sure"; users will complain about slow logins!
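
Purely to illustrate the mechanics (don't roll this yourself in production – use bcrypt, as the other answers explain), naive key stretching is just a loop:

// Illustration only: iterate a fast hash so that every guess costs $rounds hashes.
function stretch($password, $salt, $rounds)
{
    $hash = $password . $salt;
    for ($i = 0; $i < $rounds; $i++) {
        $hash = hash('sha256', $hash . $salt);
    }
    return $hash;
}

// An attacker who could test 1,000,000 plain SHA256 guesses per second
// now manages roughly 1,000,000 / 500 = 2,000 guesses per second.
$stored = stretch('secret', 'per-user-salt-goes-here', 500);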

This stems from the fact that hashes like SHA1/256/512 and MD4/MD5 are optimized for speed – and a speed-optimized algorithm is exactly what you don't want for passwords, because speed helps the attacker. So every few years you simply increase the number of rounds by some factor, chosen so that the login time for your users stays acceptable but the attacker is slowed down enough that brute-forcing the hashes isn't worthwhile (or is at least forced to focus on far fewer accounts rather than all of them).

When you up the number of rounds, you rehash, as CBroe explains.

I don't know who came up with the 2^($currentYear - 2000) recommendation (I'd love to see a source! Nevermind, found it), but if you ask me it's total bollocks. I suggest you read the answers more closely and also check this question/answer.

If your bcrypt takes 0.2 to 0.5 seconds (which is an acceptable 'delay' while logging in, if you ask me), that means an attacker can brute-force about 5 to 2 hashes per second on the same hardware – and maybe 5,000/2,000 or even 5,000,000/2,000,000 with heavy investment in hardware. That still isn't enough to brute-force the entire 160-bit (SHA1), 256-bit (SHA256), or even 448-bit (bcrypt) space in an acceptable time (or even a lifetime).