5 votes

I am currently studying operating systems from Silberschatz's book and have come across the "Dispatch Latency" concept. The book defines it as follows:

The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

Isn't this the same as the definition of a "context switch"? Is there any difference between the two terms, or are they interchangeable?

2 Answers

4 votes

Let's try a "somewhat realistic" scenario: assume a task previously called read() to fetch data from a pipe, but there was no data at the time, so the task blocked; then something wrote data to the pipe, unblocking the task again. In this scenario:

  • the scheduler does a task switch from "previous task running kernel code" to "task that was unblocked running kernel code". This might take 40 nanoseconds.
  • the kernel (now running in the context of the unblocked task) copies data into the buffer that was provided by the original read() call and sets up the values that the read() call is supposed to return (e.g. the number of bytes read). This might take another 50 nanoseconds.
  • the kernel decides it has nothing better to do so it returns to user-space, taking another 10 nanoseconds.

Here, the context switch time would be 40 nanoseconds, but the dispatch latency (as defined by the book's author) would be 100 nanoseconds.
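
Here is a minimal C sketch of that scenario, assuming a parent/child pair sharing a pipe: the child blocks in read() on the empty pipe, and a later write() from the parent unblocks it. It only reproduces the block/unblock behaviour; the nanosecond figures above are illustrative, and this program does not measure them.

    /* Sketch of the scenario: a reader blocks in read() on an empty pipe
     * until a writer supplies data. It does not measure dispatch latency. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) == -1) {
            perror("pipe");
            return EXIT_FAILURE;
        }

        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {                   /* child: the blocked reader */
            close(fds[1]);
            char buf[64];
            /* The pipe is empty, so this read() blocks and the task sleeps. */
            ssize_t n = read(fds[0], buf, sizeof buf);
            if (n > 0)
                printf("reader woke up with %zd bytes: %.*s\n", n, (int)n, buf);
            close(fds[0]);
            return EXIT_SUCCESS;
        }

        close(fds[0]);                    /* parent: the eventual writer */
        sleep(1);                         /* the reader is blocked during this time */
        /* This write unblocks the reader; the kernel then copies the data,
         * fills in the return value of read(), and returns to user space. */
        write(fds[1], "hello", 5);
        close(fds[1]);
        wait(NULL);
        return EXIT_SUCCESS;
    }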

3 votes

"Context Switch" is a process. "Dispatch Latency" is a latency, a.k.a. time.