Both of the top-rated answers are wrong. Check out the MDN description of the concurrency model and the event loop, and it should become clear what's going on (that MDN resource is a real gem). And simply using setTimeout can add unexpected problems to your code, in addition to "solving" this little problem.
What's actually going on here is not that "the browser might not be quite ready yet because concurrency," or something based on "each line is an event that gets added to the back of the queue".
The jsfiddle provided by DVK indeed illustrates a problem, but his explanation for it isn't correct.
What's happening in his code is that he's first attaching an event handler to the click event on the #do button. Then, when you actually click the button, a message is created referencing the event handler function, which gets added to the message queue. When the event loop reaches this message, it creates a frame on the stack, with the function call to the click event handler in the jsfiddle.
And this is where it gets interesting. We're so used to thinking of Javascript as being asynchronous that we're prone to overlook this tiny fact: Any frame has to be executed, in full, before the next frame can be executed. No concurrency, people.
What does this mean? It means that whenever a function is invoked from the message queue, it blocks the queue until the stack it generates has been emptied. Or, in more general terms, it blocks until the function has returned. And it blocks everything, including DOM rendering operations, scrolling, and whatnot. If you want confirmation, just try to increase the duration of the long running operation in the fiddle (e.g. run the outer loop 10 more times), and you'll notice that while it runs, you cannot scroll the page. If it runs long enough, your browser will ask you if you want to kill the process, because it's making the page unresponsive. The frame is being executed, and the event loop and message queue are stuck until it finishes.
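To make that concrete, here's a minimal sketch (not DVK's actual fiddle; a busy-wait stands in for the nested loops) of a handler that keeps its frame on the stack and therefore blocks everything:

```js
// Minimal sketch (not DVK's actual fiddle): a synchronous busy-wait
// stands in for the long-running nested loops.
function longRunningOperation() {
  const start = Date.now();
  while (Date.now() - start < 5000) {
    // keep this frame on the stack for ~5 seconds
  }
}

document.querySelector('#do').addEventListener('click', function () {
  longRunningOperation(); // while this runs: no rendering, no scrolling,
                          // no other handlers - the message queue is stuck
});
```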
So why this side-effect of the text not updating? Because while you have changed the value of the element in the DOM — you can console.log() its value immediately after changing it and see that it has been changed (which shows why DVK's explanation isn't correct) — the browser is waiting for the stack to deplete (the on handler function to return) and thus the message to finish, so that it can eventually get around to executing the message that has been added by the runtime as a reaction to our mutation operation, in order to reflect that mutation in the UI.
This is because we are actually waiting for code to finish running. We haven't said "someone fetch this and then call this function with the results, thanks, and now I'm done so imma return, do whatever now," like we usually do with our event-based asynchronous Javascript. We enter a click event handler function, we update a DOM element, we call another function, the other function works for a long time and then returns, we then update the same DOM element, and then we return from the initial function, effectively emptying the stack. And then the browser can get to the next message in the queue, which might very well be a message generated by us by triggering some internal "on-DOM-mutation" type event.
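In code, that sequence looks roughly like this (a sketch; the #status element is hypothetical, and longRunningOperation is the busy-wait from the earlier sketch):

```js
// Sketch of the sequence: mutate, block, mutate again, return.
document.querySelector('#do').addEventListener('click', function () {
  const status = document.querySelector('#status');

  status.textContent = 'Working...';
  console.log(status.textContent); // already "Working..." - the DOM value has changed

  longRunningOperation();          // blocks the thread; nothing repaints

  status.textContent = 'Done';
  // Only when this handler returns does the stack empty, and only then can
  // the browser process the queued repaint - at which point the text is
  // already "Done". The intermediate "Working..." state is never painted.
});
```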
The browser cannot (or chooses not to) update the UI until the currently executing frame has completed (the function has returned). Personally, I think this is by design rather than a restriction.
Why does the setTimeout thing work, then? It does so because it effectively removes the call to the long-running function from its own frame, scheduling it to be executed later in the window context, so that it itself can return immediately and allow the message queue to process other messages. The idea is that the UI "on update" message that has been triggered by us in Javascript when changing the text in the DOM is now ahead of the message queued for the long-running function, so that the UI update happens before we block for a long time.
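Applied to the sketch above, the fix looks roughly like this (again, #status is my hypothetical element, not something from the original fiddle):

```js
document.querySelector('#do').addEventListener('click', function () {
  document.querySelector('#status').textContent = 'Working...';

  // Schedule the heavy work as a *separate* message, so this handler's
  // frame can return right away and the queued UI update can run first.
  setTimeout(function () {
    longRunningOperation(); // still blocks everything while it runs
    document.querySelector('#status').textContent = 'Done';
  }, 10); // see the caveat below about 0 vs 10 on my 2018 Chrome
});
```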
Note that a) the long-running function still blocks everything when it runs, and b) you're not guaranteed that the UI update is actually ahead of it in the message queue. On my June 2018 Chrome browser, a value of 0 does not "fix" the problem the fiddle demonstrates, but 10 does. I'm actually a bit stumped by this, because it seems logical to me that the UI update message should be queued up before it, since its trigger is executed before scheduling the long-running function to be run "later". But perhaps there are some optimisations in the V8 engine that interfere, or maybe my understanding is just lacking.
Okay, so what's the problem with using setTimeout, and what's a better solution for this particular case?
First off, the problem with using setTimeout on any event handler like this, to try to alleviate another problem, is that it's prone to mess with other code. Here's a real-life example from my work:
A colleague, with a misinformed understanding of the event loop, tried to "thread" Javascript by having some template rendering code use setTimeout 0 for its rendering. He's no longer here to ask, but I can presume that perhaps he inserted timers to gauge the rendering speed (which would be the return immediacy of the functions) and found that this approach made for blisteringly fast responses from that function.
The first problem is obvious: you cannot thread Javascript, so you gain nothing here while adding obfuscation. Secondly, you have now effectively detached the rendering of a template from the stack of possible event listeners that might expect that very template to have been rendered, while it may very well not have been. The actual behaviour of that function was now non-deterministic, as was (unknowingly so) any function that would run it, or depend on it. You can make educated guesses, but you cannot properly code for its behaviour.
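To illustrate the kind of trap this sets, here's a rough sketch of the pattern (function and element names are made up, not my colleague's actual code):

```js
// The render itself is pushed onto the message queue instead of happening
// inside the caller's frame:
function renderTemplate(html) {
  setTimeout(function () {
    document.querySelector('#container').innerHTML = html;
  }, 0);
}

renderTemplate('<button id="save">Save</button>');
// renderTemplate() returns "blisteringly fast", but the DOM hasn't been
// touched yet, so this lookup fails; any listener that expects the template
// to exist now depends entirely on *when* it happens to run:
const saveButton = document.querySelector('#save'); // null
```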
The "fix" when writing a new event handler that depended on its logic was to also use setTimeout 0
. But, that's not a fix, it is hard to understand, and it is no fun to debug errors that are caused by code like this. Sometimes there's no problem ever, other times it concistently fails, and then again, sometimes it works and breaks sporadically, depending on the current performance of the platform and whatever else happens to going on at the time. This is why I personally would advise against using this hack (it is a hack, and we should all know that it is), unless you really know what you're doing and what the consequences are.
But what can we do instead? Well, as the referenced MDN article suggests, either split the work into multiple messages (if you can) so that other messages that are queued up may be interleaved with your work and executed while it runs, or use a web worker, which can run in tandem with your page and return results when done with its calculations.
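As a sketch of the web worker route (the file name, message shape, and doTheCalculation are all illustrative, not from the question):

```js
// main.js - the page stays responsive because the heavy work happens
// on a separate thread.
const worker = new Worker('worker.js');

worker.onmessage = function (e) {
  document.querySelector('#status').textContent = 'Done: ' + e.data;
};

document.querySelector('#do').addEventListener('click', function () {
  document.querySelector('#status').textContent = 'Working...';
  worker.postMessage({ iterations: 1e9 });
});

// worker.js (no DOM access here, just computation):
//   self.onmessage = function (e) {
//     self.postMessage(doTheCalculation(e.data.iterations));
//   };
```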
Oh, and if you're thinking, "Well, couldn't I just put a callback in the long-running function to make it asynchronous?", then no. The callback doesn't make it asynchronous; it'll still have to run the long-running code before explicitly calling your callback.
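In other words, something like this is still fully synchronous (a sketch, reusing the busy-wait from above):

```js
function longRunningOperationWith(callback) {
  longRunningOperation(); // the same blocking work as before
  callback();             // only invoked after the work has already finished
}

longRunningOperationWith(function () {
  console.log('done');
});
// The caller's frame still cannot return until the busy-wait completes;
// taking a callback changed nothing about *when* the work runs.
```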
setTimeout(fn) is the same as setTimeout(fn, 0), btw. – vsync