Let's try a "somewhat realistic" scenario and assume that a task previously used read() to fetch data from a pipe, but there was no data at the time, so the task was blocked; then something wrote data to the pipe, causing the task to be unblocked again. In this scenario:
- the scheduler does a task switch from "previous task running kernel code" to "task that was unblocked running kernel code". This might take 40 nanoseconds.
- the kernel (now running in the context of the unblocked task) copies data into the buffer that was provided by the original read() call, and sets up the values that the read() call is supposed to return (e.g. the number of bytes read). This might take another 50 nanoseconds.
- the kernel decides it has nothing better to do so it returns to user-space, taking another 10 nanoseconds.
Here, the context switch time would be 40 nanoseconds, but the dispatch latency (as defined by the book's author) would be the full 100 nanoseconds (40 + 50 + 10).
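
To make the scenario concrete, here is a minimal user-space sketch (not from the book) of that exact pattern: a parent blocks in read() on an empty pipe, and a child writes to the pipe later, which unblocks the parent. The child writes a CLOCK_MONOTONIC timestamp, so the parent can print roughly how long the write()-to-read()-returns path took. Note that this measures the whole round trip from user space (the write() syscall, the wakeup, the task switch, and the return to user space), so on a real system you will see microseconds rather than the idealized ~100 nanoseconds used above; the variable names and the 100 ms delay are arbitrary choices for the example.

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                      /* child: the writer */
        close(fds[0]);
        usleep(100000);                  /* give the parent time to block in read() */
        long long t = now_ns();          /* timestamp taken just before the write */
        if (write(fds[1], &t, sizeof t) != sizeof t)
            _exit(1);                    /* this write unblocks the parent's read() */
        close(fds[1]);
        _exit(0);
    }

    /* parent: the reader */
    close(fds[1]);
    long long sent;
    ssize_t n = read(fds[0], &sent, sizeof sent);   /* blocks until the child writes */
    long long got = now_ns();
    if (n == (ssize_t)sizeof sent)
        printf("write() -> read() returned: ~%lld ns\n", got - sent);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}
```

The kernel-side steps in the list above (switching to the unblocked task, copying the data into the reader's buffer, arranging the return value, and returning to user space) all happen between the child's write() and the moment the parent's read() returns in this program.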