This question is inspired by another question on process allocation; it is a twist on, and an enhancement of, the problem discussed there.
We have n processes, and we need to allocate them to the fewest possible processors. Each process has a scheduled start and finish time, which we will define in terms of time units, indexed from 1; a process will run for some contiguous sequence of time units. A processor can then be scheduled to run any number of non-overlapping processes.
The obvious greedy algorithm is:
At each step, schedule a maximal non-overlapping set of the remaining processes onto the next processor.
How should a maximal non-overlapping set be chosen? We will leave the algorithm non-deterministic, since that makes it easier to analyse and lets us split the problem into two sub-questions.
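To make the discussion concrete, here is a minimal Python sketch of the greedy algorithm (my own representation, not part of the question): each process is a (start, finish) pair of inclusive time units, and the non-determinism is captured by the order in which the remaining processes are scanned when building a maximal set.

    from typing import List, Tuple

    Process = Tuple[int, int]  # (start, finish) in whole time units, inclusive

    def overlaps(a: Process, b: Process) -> bool:
        """Two processes overlap if they share at least one time unit."""
        return a[0] <= b[1] and b[0] <= a[1]

    def maximal_set(processes: List[Process]) -> List[Process]:
        """Pick *a* maximal non-overlapping subset by scanning in the given order.

        The order of `processes` is the source of non-determinism: different
        orderings can yield different maximal sets.
        """
        chosen: List[Process] = []
        for p in processes:
            if all(not overlaps(p, q) for q in chosen):
                chosen.append(p)
        # Anything not chosen conflicted with something already chosen,
        # so nothing more can be added: the set is maximal.
        return chosen

    def greedy_allocate(processes: List[Process]) -> List[List[Process]]:
        """Repeatedly peel off a maximal non-overlapping set onto the next processor.

        Assumes the (start, finish) pairs are distinct.
        """
        remaining = list(processes)
        processors: List[List[Process]] = []
        while remaining:
            batch = maximal_set(remaining)
            processors.append(batch)
            remaining = [p for p in remaining if p not in batch]
        return processors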
Essentially, the previous question concerned the behaviour of the algorithm when it is unlucky: what's the smallest n for which this algorithm might produce a sub-optimal allocation (i.e., require more processors than necessary)? It turns out that the answer is n=4. There are cases where two processors are sufficient, but the greedy algorithm might require three (though if it's lucky, it will manage with two).
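For illustration, here is one 4-process instance of that kind (my own construction, not necessarily the one from the earlier question): four processes whose overlaps form a chain A-B, B-C, C-D. Feeding the sketch above two different orderings shows both the lucky and the unlucky outcome.

    # Four processes forming a chain of overlaps: A-B, B-C, C-D.
    A, B, C, D = (1, 2), (2, 3), (3, 4), (4, 5)

    # Lucky ordering: the first maximal set is {A, C}, leaving {B, D} -> 2 processors.
    print(greedy_allocate([A, C, B, D]))   # [[(1, 2), (3, 4)], [(2, 3), (4, 5)]]

    # Unlucky ordering: the first maximal set is {A, D}, leaving {B, C},
    # which overlap each other -> 3 processors.
    print(greedy_allocate([A, D, B, C]))   # [[(1, 2), (4, 5)], [(2, 3)], [(3, 4)]]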
This question concerns what happens if the non-determinism is always lucky:
What's the smallest n for which this algorithm is guaranteed to be sub-optimal? That is, what's the smallest n for which we can find a set of processes where the greedy algorithm must use more processors than necessary?
Since Stack Overflow actively encourages asking and answering your own question, I will do so here, and you can tell me whether my partial answer can be improved upon!