0 votes

I figured that finding the worst-case upper bound makes sense most of the time, because that's how we get an idea of the maximum runtime of an algorithm, and we can expect that the algorithm will never exceed that limit. But, in contrast, when do we use worst-case lower bound and best-case lower bound analysis? And why?

I have seen the answers to this question Example of algorithm which has different worst case upper bound, worst case lower bound and best case bounds? but could not understand why one would need to calculate Ω and Θ when O already gives a clear idea of the complexity.

EDIT


My question is not about where lower bounds are used; I have seen a couple of examples of that. My question is why and when one would choose a lower bound (Ω) over an upper bound (big O) to determine the worst case.

Is it because

a lower bound immediately gives some bound on the runtime even if we haven't analyzed the algorithm in depth?

For example, suppose the worst-case upper bound of an algorithm is O(n!). If I haven't analyzed the algorithm in depth but have already found the worst-case lower bound to be Ω(2^n), then instead of going further I can conclude that the worst-case runtime is Ω(2^n), i.e., it's already bad and we need to optimize if possible.

1
In my experience, it's usually to argue that a complicated recursive algorithm (where it's difficult to get an exact time bound) is slow. - David Eisenstat
O doesn't give a clear idea of the complexity. It's Θ that gives a clear idea of the complexity, because it's both an upper bound and lower bound. However, there are some algorithms that don't have a Θ. For those algorithms, you need to find a tight Ω and O to get the complete picture of the algorithm's performance. However, in most cases, only the worst case performance (given by O) is of interest, so that's why O is the most commonly cited performance metric. - user3386109
@user3386109 if Θ (the tight bound) gives a clear idea of the complexity of an algorithm, then for the worst case why don't we consider Θ(g(n)) instead of O(g(n))? For example, the time complexity of finding the max element in an unsorted array is Θ(n), not just O(n). - Arko
More precisely, why don't we use the tight bound at least where it (the Θ) is applicable? "However, in most cases, only the worst-case performance (given by O) is of interest, so that's why O is the most commonly cited performance metric." I understand that part as well; my question is then why Ω, when O is the most commonly cited performance metric. - Arko

1 Answer

2 votes

Here’s an example. Suppose someone says “I have an algorithm that lists all the subsets of an n-element set” and we want to see how long it will take to run. Since an n-element set has 2^n different subsets, the runtime of the algorithm must be at least Ω(2^n), since anything faster than that can’t list all those subsets. This is a lower bound on the worst-case runtime of the algorithm, and we don’t even need to see what the algorithm is to derive it. If our goal is to say “yeah, that’s definitely not going to be fast enough, because our input has thousands of items in it” we can probably just call it here without looking into the specifics of the algorithm.
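To make the output-size argument concrete, here is a minimal Python sketch (the helper name and the itertools approach are my own, not from the answer): it enumerates all subsets of a 3-element set and shows that the output alone already has 2^n entries, so any correct algorithm must take at least Ω(2^n) time just to emit them.

```python
from itertools import chain, combinations

def all_subsets(items):
    # Yield every subset of `items`; any correct "list all subsets"
    # algorithm must produce this many outputs, so its runtime is
    # at least proportional to 2**len(items).
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )

subsets = list(all_subsets([1, 2, 3]))
print(len(subsets))  # 8, i.e. 2**3 -- the output size alone forces Ω(2^n)
```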

Another example would be in cases where there are known lower bounds on a problem. For example, if someone invents a new comparison-based sorting algorithm, we can say that the runtime in the worst case is Ω(n log n). It might be much worse than this, but it’s definitely going to take at least that much time.
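As a rough illustration (not part of the original answer), the sketch below counts the comparisons made by a merge sort, standing in for any comparison-based sort, and compares that count against the information-theoretic lower bound log2(n!), which is Θ(n log n); no comparison sort can beat that bound in the worst case.

```python
import math, random

def merge_sort_count(xs):
    # Merge sort instrumented to count element comparisons
    # (a stand-in for any comparison-based sorting algorithm).
    if len(xs) <= 1:
        return xs, 0
    mid = len(xs) // 2
    left, cl = merge_sort_count(xs[:mid])
    right, cr = merge_sort_count(xs[mid:])
    merged, comps, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comps

n = 1000
_, used = merge_sort_count(random.sample(range(n), n))
# Any comparison sort needs at least log2(n!) comparisons in the worst
# case, and log2(n!) is Θ(n log n).
lower = sum(math.log2(k) for k in range(2, n + 1))
print(f"comparisons used: {used}, lower bound log2(n!): {lower:.0f}")
```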