1 vote

As far as I know, for thread scheduling, Linux implements a fair scheduler and Windows implements a round-robin (RR) scheduler: each thread has a time slice for its execution (correct me if I'm wrong).

I wonder: is CPU usage related to thread scheduling?

For example: there are 2 threads executing at the same time, the system's time slice is 15 ms, and the CPU has only 1 core.

Thread A needs 10 ms to finish its job and then sleeps for 5 ms, running in a loop.

Thread B needs 5 ms to finish its job and then sleeps for 10 ms, also in a loop.

  1. Will the CPU usage be 100%?

  2. How are the threads scheduled? Will thread A use up all of its time slice and then be scheduled out?

One more scenario: if thread A is running and is then blocked by some condition (e.g., the network), will the CPU being at 100% affect the wakeup time of this thread? For example, thread B may be running in this time window; will thread B be preempted by the OS so that thread A can run?

'Windows implements a round-robin (RR) scheduler for thread scheduling...' OK, you've already gone off the rails. - Martin James
'each thread has a time slice for its execution' Nope. The OS timer interrupt is just one of many, and it only shares out the CPU among threads when there are more ready threads than cores. It's Microsoft who started all this confusion with terms like 'quantum' and phrases like 'give up the remainder of its time-slice'. Whoever wrote that carp, I hope they got fired :( - Martin James
@MartinJames OK, he or she is already fired, and you are hired :). So, how will the OS schedule the threads in the above scenario? Any link or answer would be appreciated. Thanks in advance. - Wei Guo
See here for an explanation of how it has changed in Windows since Vista (2007): technet.microsoft.com/en-us/library/2007.02.vistakernel.aspx @MartinJames you do see the term 'quantum' everywhere in the Windows kernel source code. Yes, it's completely true that this quantum changes all the time. Google KiAdjustQuantumThread, from the file wait.c for example, written by Dave Cutler back in 1989 :-) - Simon Mourier
Yes, in some way. There are many algorithms implemented in the Windows kernel (I know nothing about Linux). IMHO, the best (recent) official article on this is here: microsoftpressstore.com/articles/… (search for 'Priority Boosts for CPU Starvation'). It also explains how you can test it yourself. - Simon Mourier

2 Answers

3 votes

As far as I know, for thread scheduling, Linux implements a fair scheduler and Windows implements a round-robin (RR) scheduler,

Both Linux and Windows use priority-based, preemptive thread schedulers. Fairness matters, but it's not, strictly speaking, the objective. Exactly how these schedulers work depends on the version and the flavor (client vs. server) of the system. Generally, thread schedulers are designed to maximize responsiveness and alleviate scheduling hazards such as priority inversion and starvation. Although some scheduling decisions are made in a round-robin fashion, there are situations in which the scheduler may insert the preempted thread at the front of the queue rather than at the back.
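
To make it concrete that the policy and priority are per-thread attributes, here is a minimal sketch using the POSIX scheduling APIs on Linux (the Windows counterparts are GetThreadPriority/SetThreadPriority). It queries the current thread's policy, then asks for round-robin real-time scheduling, which ordinarily fails without elevated privileges. Compile with gcc -pthread.

    /* Sketch: query and request a thread scheduling policy on Linux.
     * Ordinary threads run under SCHED_OTHER (the fair scheduler);
     * SCHED_RR/SCHED_FIFO normally require root or CAP_SYS_NICE. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int policy;
        struct sched_param param;

        /* Query the current thread's policy and priority. */
        pthread_getschedparam(pthread_self(), &policy, &param);
        printf("policy=%d priority=%d\n", policy, param.sched_priority);

        /* Ask for round-robin scheduling at the lowest real-time priority.
         * Expect EPERM unless running with the required privileges. */
        param.sched_priority = sched_get_priority_min(SCHED_RR);
        int err = pthread_setschedparam(pthread_self(), SCHED_RR, &param);
        if (err != 0)
            fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));

        return 0;
    }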

each thread has a time slice for its execution.

The time slice (or quantum) is really more of a guideline than a rule. It's important to understand that a time slice is divisible and that it corresponds to some variable number of clock cycles; the scheduler charges CPU usage in terms of clock cycles, not time slices. A thread may run for more than a time slice (e.g., a time slice and a half). A thread may also voluntarily relinquish the rest of its time slice. This is possible because the only way for a thread to relinquish its time slice is by performing a system call (sleep, yield, acquire a lock, request synchronous I/O). All of these are privileged operations that cannot be performed in user mode (otherwise, a thread could go to sleep without telling the OS!). The scheduler can then change the state of the thread from "ready" to "waiting" and schedule some other ready thread to run. If a thread relinquishes the rest of its time slice, it is not compensated the next time it is scheduled to run.
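
As a small illustration of the "only via a system call" point, here is a sketch (assuming Linux/POSIX) of the two most direct ways a thread gives up the processor voluntarily; acquiring a contended lock or issuing blocking I/O has the same effect of putting the thread through the kernel:

    #include <sched.h>
    #include <time.h>

    void give_up_rest_of_slice(void)
    {
        /* Stay runnable, but let the scheduler run other ready threads first. */
        sched_yield();
    }

    void sleep_five_ms(void)
    {
        /* Enter the "waiting" state for ~5 ms; the scheduler will not
         * consider this thread again until the timer expires. */
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 5 * 1000 * 1000 };
        nanosleep(&ts, NULL);
    }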

One particularly interesting case is when a hardware interrupt occurs while a thread is running. In this case, the processor automatically switches to the interrupt handler, forcibly preempting the thread even if its time slice has not finished yet. The thread is not charged for the time it takes to handle the interrupt, even though the interrupt handler is indeed utilizing the CPU. By the way, the overhead of context switching itself is also not charged to any time slice. Moreover, on Windows, whether a thread is running in user mode or kernel mode does not, by itself, have an impact on its priority or time slice. On Linux, the scheduler is invoked at specific places in the kernel to avoid starvation (kernel preemption was introduced in Linux 2.5+).

Will the CPU usage be 100%? And how are the threads scheduled? Will thread A use up all of its time slice and then be scheduled out?

It's easy to answer these questions now. When one thread goes to sleep, the other (ready) thread gets scheduled, so in your example the two threads' work and sleep periods interleave and the CPU stays busy. Note that this happens even if the threads have different priorities.
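
If you want to observe this, here is a rough sketch of the question's workload using POSIX threads (helper names like burn_cpu_ms are made up for the example). Compile with gcc -pthread, pin it to one core with taskset -c 0 ./a.out, and watch it with top: the CPU stays close to fully busy while the two threads take turns.

    #include <pthread.h>
    #include <time.h>

    /* Spin until this thread has consumed `ms` milliseconds of CPU time. */
    static void burn_cpu_ms(long ms)
    {
        struct timespec start, now;
        clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);
        do {
            clock_gettime(CLOCK_THREAD_CPUTIME_ID, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000000000L +
                 (now.tv_nsec - start.tv_nsec) < ms * 1000000L);
    }

    static void sleep_ms(long ms)
    {
        struct timespec ts = { .tv_sec = ms / 1000,
                               .tv_nsec = (ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);
    }

    static void *thread_a(void *arg)   /* 10 ms of work, then 5 ms of sleep */
    {
        (void)arg;
        for (;;) { burn_cpu_ms(10); sleep_ms(5); }
    }

    static void *thread_b(void *arg)   /* 5 ms of work, then 10 ms of sleep */
    {
        (void)arg;
        for (;;) { burn_cpu_ms(5); sleep_ms(10); }
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);   /* never returns; the threads loop forever */
        return 0;
    }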

If I have a thread running that is then blocked by some condition (e.g., the network), will the CPU being at 100% affect the wakeup time of this thread? For example, another thread may be running in that time window and might not be scheduled out by the OS?

Linux and Windows schedulers implement techniques to enable threads that are waiting on I/O operations to "wake up quickly" and get a higher chance of being scheduled soon. For example, on Windows, the priority of a thread waiting on an I/O operation may be boosted a bit when the I/O operation completes. This means it can preempt another running thread before that thread finishes its time slice, even if both threads initially had the same priority. The boost is temporary; it decays as the boosted thread runs, until the thread is back at its original priority.
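
One way to get a feel for this from user space (a sketch assuming Linux/POSIX; it demonstrates the wait-then-wake behaviour, not any particular boost policy): run one thread that spins forever and one that blocks reading from a pipe, then see how quickly the reader runs after the data is written, even though the spinner keeps the CPU at 100%.

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static int fds[2];            /* fds[0]: read end, fds[1]: write end */
    static struct timespec sent;  /* when the "I/O" completed (handoff simplified) */

    static void *hog(void *arg)   /* keeps the core busy forever */
    {
        (void)arg;
        for (;;) { }
    }

    static void *reader(void *arg)   /* blocks ("waiting") until data arrives */
    {
        (void)arg;
        char c;
        struct timespec woke;
        if (read(fds[0], &c, 1) != 1)
            return NULL;
        clock_gettime(CLOCK_MONOTONIC, &woke);
        long us = (woke.tv_sec - sent.tv_sec) * 1000000L +
                  (woke.tv_nsec - sent.tv_nsec) / 1000L;
        printf("wakeup latency: ~%ld us\n", us);
        return NULL;
    }

    int main(void)
    {
        pthread_t h, r;
        if (pipe(fds) != 0)
            return 1;
        pthread_create(&h, NULL, hog, NULL);
        pthread_create(&r, NULL, reader, NULL);

        sleep(1);                          /* let the hog saturate the core */
        clock_gettime(CLOCK_MONOTONIC, &sent);
        if (write(fds[1], "x", 1) != 1)    /* completes the reader's "I/O" */
            return 1;

        pthread_join(r, NULL);
        return 0;
    }

Run it pinned to a single core (taskset -c 0 ./a.out) so the hog and the reader actually compete for the same CPU; the reported latency is typically small despite the 100% load.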

1 vote

Will the CPU usage be 100%?

Ideally speaking, the answer would be yes, and by ideally I mean you are not counting the time wasted performing context switches. In practice, CPU utilization is maximized by keeping the CPU busy all of the time, but some amount of time is still lost to context switching (the time it takes to switch from one process or thread to another).

But I would say that in your case the time constraints of the two threads are aligned perfectly for maximum CPU utilization: thread A's 5 ms of sleep is covered by thread B's 5 ms of work, and B's 10 ms of sleep is covered by A's 10 ms of work.
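
As a back-of-the-envelope version of that (with a made-up per-switch cost of 5 microseconds and two switches per 15 ms cycle; real costs vary a lot with hardware and workload):

    #include <stdio.h>

    int main(void)
    {
        double work_us   = 15000.0;  /* 10 ms (A) + 5 ms (B) of useful work per cycle */
        double switch_us = 5.0;      /* assumed cost of one context switch            */
        double switches  = 2.0;      /* A -> B and B -> A once per cycle              */
        double total_us  = work_us + switches * switch_us;

        printf("useful fraction: %.2f%%\n", 100.0 * work_us / total_us);  /* ~99.93% */
        return 0;
    }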

And how are the threads scheduled? Will thread A use up all of its time slice and then be scheduled out?

Well, it really depends. In most modern operating system implementations, if there is another process in the ready queue, the current process is scheduled out as soon as it is done with the CPU (e.g., when it blocks or goes to sleep), regardless of whether it still has time quantum left. So yes, if you are considering a modern OS design, thread A is scheduled out right after its 10 ms of work, when it goes to sleep.