I'm studying operating systems using this book and my prof's slides. I've arrived at the "Process scheduling algorithms" chapter. Talking about the Round Robin (RR) algorithm, I found some inconsistencies. I understand that it is a preemptive version of the FCFS algorithm with a time slice (quantum). From now on, I will use the following notation:
#1 = prof's version
#2 = book's version
#3 = other version
Here's the inconsistency (suppose a quantum of 100ms):
#1
The RR uses two queues (Q1, Q2):
- Q1: queue for processes that have not yet used up their quantum;
- Q2: queue for processes that have used up their quantum;
- The scheduler takes the process from the head of Q1;
- If the process ends before the quantum expires, it releases the CPU voluntarily and the scheduler takes the next process from Q1;
- If the process doesn't end before the quantum expires, it is preempted and placed at the end of Q2;
- When a process becomes ready, it's placed at the end of Q1;
- When Q1 is empty, Q1 and Q2 are swapped;
So when a process blocks on an I/O request (e.g. after 30ms) before its quantum has expired, it is placed at the end of Q1 (I guess), and when it is scheduled again it will use the CPU for its remaining time (70ms in this case).
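To make the comparison concrete, here is my own sketch of version #1 (CPU bursts only, ignoring the I/O case above; the process names and the `bursts` mapping are made up for illustration):

```python
from collections import deque

QUANTUM = 100  # ms, as in the example

def round_robin_two_queues(bursts):
    """Two-queue RR (version #1): a process that exhausts its quantum
    goes to the end of Q2; Q1 and Q2 are swapped when Q1 empties.
    `bursts` maps a process name to its total CPU time in ms.
    Returns the processes in completion order."""
    q1 = deque(bursts.keys())   # ready processes, quantum not yet used up
    q2 = deque()                # processes that exhausted their quantum
    remaining = dict(bursts)
    finished = []
    while q1 or q2:
        if not q1:              # swap the queues when Q1 is empty
            q1, q2 = q2, q1
        p = q1.popleft()
        run = min(QUANTUM, remaining[p])
        remaining[p] -= run
        if remaining[p] == 0:   # finished before the quantum expired
            finished.append(p)
        else:                   # preempted: to the end of Q2
            q2.append(p)
    return finished

print(round_robin_two_queues({"A": 250, "B": 50, "C": 120}))
```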
#2
(The book does not mention multiple queues, so I assume it uses just one)
- The scheduler takes the process from the head of the ready queue;
- If the process ends before the quantum expires, it releases the CPU voluntarily and the scheduler takes the next process from the ready queue;
- If the process doesn't end before the quantum expires, it is preempted and placed at the end of the ready queue;
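Version #2, as I understand it, is the textbook single-queue RR. A sketch under the same assumptions as before (invented process names, CPU bursts only):

```python
from collections import deque

QUANTUM = 100  # ms

def round_robin_single_queue(bursts):
    """Single-queue RR (version #2): a preempted process simply goes
    back to the tail of the one ready queue.
    Returns the processes in completion order."""
    ready = deque(bursts.keys())
    remaining = dict(bursts)
    finished = []
    while ready:
        p = ready.popleft()
        run = min(QUANTUM, remaining[p])
        remaining[p] -= run
        if remaining[p] == 0:   # done: release the CPU voluntarily
            finished.append(p)
        else:                   # quantum expired: back of the queue
            ready.append(p)
    return finished

print(round_robin_single_queue({"A": 250, "B": 50, "C": 120}))
```

Note that with no new arrivals and no I/O, this produces the same completion order as the two-queue sketch; the two versions only diverge once processes keep arriving.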
#3
Source
- The scheduler takes the first process in the ready queue;
- If the process ends before the quantum expires, it releases the CPU voluntarily and the scheduler takes the next process from the ready queue;
- If the process doesn't end before the quantum expires, it is preempted and placed at the end of the ready queue;
- If the process blocks on an I/O request, it's placed in a waiting queue, and when it becomes ready it is placed back in the ready queue;
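Version #3 needs a notion of blocking, so my sketch models each process as a list of alternating CPU and I/O bursts (my own simplification; all names and burst values are invented):

```python
import heapq
from collections import deque

QUANTUM = 100  # ms

def round_robin_with_io(procs):
    """RR with a separate waiting queue (version #3).
    `procs` maps a name to alternating CPU/IO bursts in ms,
    starting with CPU: [cpu, io, cpu, ...].
    Returns the processes in completion order."""
    clock = 0
    ready = deque(procs.keys())
    bursts = {p: list(b) for p, b in procs.items()}
    waiting = []                # min-heap of (wakeup_time, name)
    finished = []
    while ready or waiting:
        # move processes whose I/O has completed back to the ready queue
        while waiting and (not ready or waiting[0][0] <= clock):
            wake, p = heapq.heappop(waiting)
            clock = max(clock, wake)    # CPU idles if nothing is ready
            ready.append(p)
        p = ready.popleft()
        run = min(QUANTUM, bursts[p][0])
        clock += run
        bursts[p][0] -= run
        if bursts[p][0] == 0:
            bursts[p].pop(0)            # CPU burst finished
            if bursts[p]:               # next burst is I/O: block
                io = bursts[p].pop(0)
                heapq.heappush(waiting, (clock + io, p))
            else:
                finished.append(p)
        else:
            ready.append(p)             # quantum expired: back of ready queue
    return finished

print(round_robin_with_io({"A": [150], "B": [30, 100, 40]}))
```

Here "B" blocks after 30ms of CPU, spends 100ms on I/O in the waiting queue, and re-enters the ready queue when the I/O completes, exactly as the last bullet describes.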
To me, these are 3 different implementations of the RR scheduling algorithm. I think the most sound one is #3, because #1 can cause starvation (if a process is placed in Q2 and new processes keep arriving in Q1, that process will never be scheduled again) and #2 will waste CPU time when a process blocks on an I/O request. So, my question is: which one is the right one?
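My starvation argument against #1 can be shown with a small bounded simulation (deliberately contrived: I assume a fresh short job becomes ready before every scheduling decision, and that each such job finishes within its quantum):

```python
from collections import deque

def starvation_demo(steps=50):
    """Bounded demo of the starvation argument against version #1:
    'victim' sits in Q2, but a new job enters Q1 before every
    scheduling decision, so Q1 never empties and the queues are
    never swapped. Returns True iff the victim ever got the CPU."""
    q1, q2 = deque(), deque(["victim"])
    victim_ran = False
    for i in range(steps):
        q1.append(f"job{i}")    # a newly ready process joins Q1
        if not q1:              # swap only when Q1 is empty -- never happens
            q1, q2 = q2, q1
        p = q1.popleft()        # the scheduler always finds work in Q1
        victim_ran |= (p == "victim")
    return victim_ran

print(starvation_demo())        # the victim never runs in this scenario
```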