O and Ω
What about big \( \mathrm{O} \)?
Yeah, a lot of this seems similar to CS 42/60, but back then we would have said \( \mathrm{O} \) instead of \( \Theta \).
Now is the perfect time to talk about it!
Big \( \mathrm{O} \) is also a way to say something about asymptotic complexity, but what it means is different from what \( \Theta \) means.
Big \( \mathrm{O} \)
For a function \( g(n) \), \( \mathrm{O}(g(n)) \) is a set of functions: all the functions \( f \) that grow more slowly than, or at a similar rate to, \( g \) as \( n \to \infty \).
More specifically,
$$ f(n) \in \mathrm{O}(g(n)) \mathrm{\quad{}if\ and\ only\ if\quad{}} \lim_{n \to \infty} \frac{ f(n) }{ g(n) } < \infty. $$
Whereas \( f(n) \in \Theta(g(n)) \) sort of means "\( f \) is similar to \( g \)", \( f(n) \in \mathrm{O}(g(n)) \) sort of means "\( f \) is no worse than \( g \)".
Saying it's "worse" is placing a value judgement on a function, which is weird.
Sure, but in Computer Science we are usually talking about costs and generally agree that costs growing more quickly is worse.
Example 1
Is \( 2n + 1 \in \mathrm{O}(n) \)? Let's take the limit:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n } \\ = \lim_{n \to \infty} \frac{ 2n }{ n } + \lim_{n \to \infty} \frac{1}{n} \\ = 2. $$
Since \( 2 < \infty \), we can say that \( 2n + 1 \in \mathrm{O}(n) \).
Wouldn't we say that \( 2n + 1 \in \Theta(n) \)?
Sure, we could say that. That's also true!
These aren't mutually exclusive statements. They can both be true.
Just like \( x \leq 5 \) and \( x = 5 \) can both be true.
Of course, like \( x = 5 \), \( 2n + 1 \in \Theta(n) \) gives us more information, so if we know that's true, it's probably better to just say that!
Example 2
Is \( 2n + 1 \in \mathrm{O}(n^2) \)? Let's take the limit:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n^2 } \\ = \lim_{n \to \infty} \frac{ 2n }{ n^2 } + \lim_{n \to \infty} \frac{ 1 }{ n^2 } \\ = 0 $$
Since \( 0 < \infty \), we can say that \( 2n + 1 \in \mathrm{O}(n^2) \).
Hey, wait. In CS 60 we would never have said that \( 2n + 1 \in \mathrm{O}(n^2) \)!
Sure. That's because it's not a very useful thing to say.
Still, now that we understand more fully what \( \mathrm{O} \) means, we can see that it is true!
Of course, because the limit is \( 0 \), we know that \( 2n + 1 \notin \Theta(n^2) \), but it is safe to say that it is "no worse than" quadratic.
It's like saying that paying for an all-you-can-eat buffet is "no worse than" paying for every side dish individually. I mean, it's better, you can eat MORE for the same price, but it's also no worse.
Example 3
Is \( 2n + 1 \in \mathrm{O}(1) \)? Let's take the limit:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{1} \\ = \lim_{n \to \infty} (2n + 1) \\ = \infty. $$
Since that limit is not finite, we can say that \( 2n + 1 \notin \mathrm{O}(1) \).
That makes sense because linear time is asymptotically worse than constant time. And \( \mathrm{O} \) means "no worse than".
Now you're getting it!
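If you like, you can sanity-check limits like these numerically. Here's a minimal sketch in Python (just an illustration, not a proof): it evaluates the ratio \( f(n)/g(n) \) at a few large values of \( n \) and watches where each ratio is heading.

```python
# A rough numerical sanity check of the limits from Examples 1-3 (an
# illustration, not a proof): evaluate f(n)/g(n) at a few large values
# of n and watch where each ratio is heading.

def f(n):
    return 2 * n + 1

checks = [
    ("g(n) = n",   lambda n: n),       # Example 1: ratio heads to 2, so f(n) is in O(n)
    ("g(n) = n^2", lambda n: n ** 2),  # Example 2: ratio heads to 0, so f(n) is in O(n^2)
    ("g(n) = 1",   lambda n: 1),       # Example 3: ratio blows up,   so f(n) is NOT in O(1)
]

for label, g in checks:
    ratios = [f(n) / g(n) for n in (10, 1_000, 100_000)]
    print(label, ratios)

# Prints something like:
#   g(n) = n   [2.1, 2.001, 2.00001]
#   g(n) = n^2 [0.21, 0.002001, 2.00001e-05]
#   g(n) = 1   [21.0, 2001.0, 200001.0]
```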
What is \( \mathrm{O} \) For?
I feel like I understand how to use \( \Theta \), but when do we use \( \mathrm{O} \)?
Well, we use it whenever we want to say that the complexity is no worse than something else!
For example, let's revisit linear search...
On the previous page, we concluded that linear search has
- \( \Theta(1) \) best-case complexity and
- \( \Theta(n) \) worst-case complexity.
Depending on the input, the complexity could be of completely different orders!
So we can't just assign an order of complexity to linear search overall.
But one thing we could reasonably say is that the complexity of linear search is never worse than linear.
Big \( \mathrm{O} \) is how we say that with asymptotic notation: the complexity of linear search is \( \mathrm{O}(n) \).
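To make that concrete, here's a minimal linear search sketch in Python (just an illustration of the analysis above, not anyone's official implementation), with the cases labeled where they happen:

```python
def linear_search(items, target):
    """Return the index of target in items, or None if it's absent.

    Best-case complexity:  Theta(1)  (target is the very first element)
    Worst-case complexity: Theta(n)  (target is last, or absent entirely)
    Overall: never worse than linear, i.e. O(n).
    """
    for i, item in enumerate(items):
        if item == target:
            return i       # may return early: this is why the best case is constant
    return None            # examined every element: this is the worst case

# The same function can hit either extreme, depending only on its input:
print(linear_search([7, 3, 9, 4], 7))   # best case:  returns 0 after one comparison
print(linear_search([7, 3, 9, 4], 5))   # worst case: returns None after n comparisons
```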
We use \( \Theta \) when we want to say that two cost functions are asymptotically similar.
For example, "the worst-case complexity of linear search is \( \Theta(n) \)".
And we use \( \mathrm{O} \) when we want to say that one cost function is upper bounded by another in an asymptotic sense.
For example, "the complexity of linear search is \( \mathrm{O}(n) \)".
The second statement leaves room for the possibility that the complexity overall might sometimes be better than linear, because it might actually be!
Whereas we know that the complexity is definitely linear in the worst case; it can't be better or worse.
Quick Practice: Using \( \mathrm{O} \)
There is a very widely used sorting algorithm named Quicksort.
Oddly enough, despite its name, in the worst case Quicksort takes quadratic time! (And, even weirder, the worst case happens when the list comes already sorted.)
In the best case, however, it takes linearithmic (\( n\log{n} \)) time.
So… if it has such a lousy worst case, why do people use it so much and call it "Quicksort"?
For one thing, if you shuffle the list before sorting it, the expected complexity becomes \( \Theta(n \log{n}) \), and the worst case becomes vanishingly unlikely for large \( n \).
Actually, you don't have to shuffle the array. If you're familiar with the algorithm, you'll know that it has to choose a “pivot” element each time it recurses. If you choose the pivot randomly, then the worst case is also vanishingly unlikely.
For another, its constant factors and overhead are small compared with many sorting algorithms that have \( \Theta(n \log{n}) \) complexity.
Always remember: asymptotic analysis is useful, but it doesn't tell the whole story!
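If you're curious what choosing a random pivot looks like, here's a minimal Quicksort sketch in Python. It's just an illustration: it's not the in-place version you may have seen, and it isn't tuned the way production implementations are.

```python
import random

def quicksort(items):
    """Return a sorted copy of items using Quicksort with a random pivot.

    Expected complexity: Theta(n log n).
    Worst case: Theta(n^2), but choosing the pivot at random makes the
    worst case vanishingly unlikely for large n, even on sorted input.
    """
    if len(items) <= 1:
        return list(items)                 # base case: nothing to sort

    pivot = random.choice(items)           # random pivot choice
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]
```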
Big \( \Omega \)
So, if \( \mathrm{O} \) means "no worse than", is there a way to say "no better than"?
Yep! That's Big \( \Omega \) (the Greek capital letter omega).
\( \Omega \) is just the mirror image of \( \mathrm{O} \).
For a function \( g(n) \), \( \Omega(g(n)) \) is a set of functions. It is all the functions \( f \) that grow faster than or at a similar rate to \( g \) as \( n \to \infty \).
More specifically,
$$ f(n) \in \Omega(g(n)) \mathrm{\quad{}if\ and\ only\ if\quad{}} \lim_{n \to \infty} \frac{ f(n) }{ g(n) } > 0. $$
Whereas \( f(n) \in \mathrm{O}(g(n)) \) sort of means "\( f \) is no worse than \( g \)", \( f(n) \in \Omega(g(n)) \) sort of means "\( f \) is no better than \( g \)".
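For example, flipping Example 3 around:
$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ 1 } = \infty > 0, $$
so \( 2n + 1 \in \Omega(1) \): a linear-time cost is "no better than" a constant-time one.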
Useful facts about \( \mathrm{O} \) and \( \Omega \)
- If \( f(n) \in \mathrm{O}(g(n)) \), then \( g(n) \in \Omega(f(n)) \) and vice versa.
- If \( f(n) \in \mathrm{O}(g(n)) \) and \( f(n) \in \Omega(g(n)) \), then \( f(n) \in \Theta(g(n)) \), and vice versa.
So, if you can work with \( \mathrm{O} \), then you can work with \( \Omega \).
You just have to flip things around!
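For example, back in Example 1 we found that \( \lim_{n \to \infty} \frac{2n + 1}{n} = 2 \). Since \( 2 < \infty \), that limit tells us \( 2n + 1 \in \mathrm{O}(n) \); since \( 2 > 0 \), it also tells us \( 2n + 1 \in \Omega(n) \). Putting the two together with the second fact above gives \( 2n + 1 \in \Theta(n) \), which matches what we said back in Example 1.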
We're usually more interested in the question "how bad can things get?" than "how good can things be?"
For that reason, \( \Omega \) doesn't get as much use as \( \mathrm{O} \) or \( \Theta \), but it comes in handy sometimes.