
I am load testing a .NET 4.0 MVC application hosted on IIS 7.5 (default config, in particular processModel autoConfig=true), and am observing odd behavior in how .NET manages the threads. http://msdn.microsoft.com/en-us/library/0ka9477y(v=vs.100).aspx mentions that "When a minimum is reached, the thread pool can create additional threads or wait until some tasks complete". It seems the duration that threads are blocked for plays a role in whether it creates new threads or waits for tasks to complete, which does not necessarily result in optimal throughput.

Question: Is there any way to control that behavior, so threads are generated as needed and request queuing is minimized?

Observation: I ran two tests against a test controller action that does little besides Thread.Sleep for an arbitrary time (a minimal sketch of such an action follows the list below).

  • 50 requests/second with the page sleeping 1 second
  • 5 requests/second with the page sleeping for 10 seconds
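
The action in question is essentially just a blocking sleep; here is a minimal sketch (controller and action names are illustrative, not the actual test code):

    using System;
    using System.Threading;
    using System.Web.Mvc;

    public class LoadTestController : Controller
    {
        // GET /LoadTest/Sleep?seconds=1
        // Blocks the request's worker thread for the given duration,
        // simulating slow synchronous work.
        public ActionResult Sleep(int seconds)
        {
            Thread.Sleep(TimeSpan.FromSeconds(seconds));
            return Content("done");
        }
    }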

For both cases .NET would ideally use 50 threads to keep up with the incoming traffic. What I observe is that in the first case it does not do that; instead it chugs along executing some 20-odd requests concurrently, letting the incoming requests queue up. In the second case threads seem to be added as needed.

Both tests generated traffic for 100 seconds. Here are the corresponding perfmon screenshots. In both cases the Requests Queued counter is highlighted (note the 0.01 scaling).

50/sec test (perfmon screenshot)

For most of the test 22 requests are handled concurrently (turquoise line). As each takes about a second, that means almost 30 requests per second queue up until the test stops generating load after 100 seconds, after which the queue is slowly worked off. The concurrency briefly jumps to just above 40, but never reaches 50, the minimum needed to keep up with the traffic at hand.

It is almost as if the thread pool management algorithm determines that it doesn't make sense to create new threads, because it has a history of ~22 tasks completing (i.e. threads becoming available) per second, completely ignoring the fact that it has a queue of some 2800 requests waiting to be handled.

5/sec test (perfmon screenshot)

Conversely, in the 5/sec test threads are added at a steady rate (red line). The server falls behind initially and requests do queue up, but never more than 52, and eventually enough threads are added for the queue to be worked off, with more than 70 requests executing concurrently, even while load is still being generated.

Of course the workload is higher in the 50/sec test, as 10x the number of HTTP requests is being handled, but the server has no problem at all handling that traffic once the thread pool is primed with enough threads (e.g. by running the 5/sec test first). It just seems unable to deal with a sudden burst of traffic, because it decides not to add more threads to deal with the load (it would apparently rather throw 503 errors than add more threads in this scenario). I find this hard to believe, as a burst of 50 requests per second is surely something IIS should be able to handle on a 16-core machine. Is there some setting that would nudge the thread pool towards erring slightly more on the side of creating new threads, rather than waiting for tasks to complete?
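
For context, the pool's configured minimum, maximum and the number of busy worker threads can be inspected at runtime with the standard ThreadPool APIs; the snippet below is just a diagnostic sketch (the trace output format is made up):

    using System.Threading;

    // Diagnostic sketch: log the configured thread-pool limits and how many
    // worker threads are currently in use. Below the minimum the pool injects
    // threads immediately; above it, per the MSDN quote, it may create additional
    // threads or wait until some tasks complete.
    int minWorker, minIo, maxWorker, maxIo, availWorker, availIo;
    ThreadPool.GetMinThreads(out minWorker, out minIo);
    ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
    ThreadPool.GetAvailableThreads(out availWorker, out availIo);
    System.Diagnostics.Trace.WriteLine(string.Format(
        "min={0} max={1} busy={2}", minWorker, maxWorker, maxWorker - availWorker));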

While I cannot explain the behavior you are seeing, here's a little contribution: the thread-pool algorithm does not use the queue size as input, and it shouldn't. It tries to heuristically maximize throughput. (It fails to do that here.) – usr
@usr: what do you mean by "and it shouldn't"? It seems to me queue size is crucial in determining whether or not it makes sense to spin up more threads to maximize throughput. – turnhose
If throughput is optimal at x threads, why should a doubling in queue size change the optimal throughput? Your case is contrived because you're sleeping: you have infinite optimal throughput, whereas real systems level out at some point. (Don't get me wrong: I'm still saying that the thread pool fails to detect the optimal thread count here. It should just continue adding threads.) – usr

1 Answer


Looks like it's a known issue: "Microsoft recommends that you tune the minimum number of threads only when there is load on the Web server for only short periods (0 to 10 minutes). In these cases, the ThreadPool does not have enough time to reach the optimal level of threads to handle the load."

That exactly describes the situation at hand.

Solution: Slightly increase minWorkerThreads in machine.config to handle the expected traffic burst. The setting is per CPU, so a value of 4 gives us 64 threads on the 16-core machine.
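
As a sketch, the change amounts to something like the following on the processModel element in machine.config (the value 4 is the one discussed above; verify how the attribute interacts with autoConfig in your environment):

    <!-- machine.config, <system.web> section -->
    <!-- minWorkerThreads is per logical CPU: 4 x 16 cores = 64 worker threads -->
    <processModel autoConfig="true" minWorkerThreads="4" />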