CS 70

O and Ω

  • Duck speaking

    What about big \( \mathrm{O} \)?

  • Cat speaking

    Yeah, a lot of this seems similar to CS 42/60, but back then we would have said \( \mathrm{O} \) instead of \( \Theta \).

  • LHS Cow speaking

    Now is the perfect time to talk about it!

  • RHS Cow speaking

    Big \( \mathrm{O} \) is also a way to say something about asymptotic complexity, but what it means is different from what \( \Theta \) means.

Big \( \mathrm{O} \)

For a function \( g(n) \), \( \mathrm{O}(g(n)) \) is a set of functions: all the functions \( f \) that grow more slowly than, or at a similar rate to, \( g \) as \( n \to \infty \).

More specifically,

$$ f(n) \in \mathrm{O}(g(n)) \mathrm{\quad{}if\ and\ only\ if\quad{}} \lim_{n \to \infty} \frac{ f(n) }{ g(n) } < \infty. $$

Whereas \( f(n) \in \Theta(g(n)) \) sort of means "\( f \) is similar to \( g \)", \( f(n) \in \mathrm{O}(g(n)) \) sort of means "\( f \) is no worse than \( g \)".

  • Owl speaking

    Saying it's "worse" is placing a value judgement on a function, which is weird.

  • LHS Cow speaking

    Sure, but in Computer Science we are usually talking about costs and generally agree that costs growing more quickly is worse.

Example 1

Is \( 2n + 1 \in \mathrm{O}(n) \)?

Remember, we need to consider \( \lim_{n \to \infty} \frac{ 2n + 1 }{ n } \).

Let's take the limit:

$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n } \\ = \lim_{n \to \infty} \frac{ 2n }{ n } + \lim_{n \to \infty} \frac{1}{n} \\ = 2. $$

Since \( 2 < \infty \), we can say that \( 2n + 1 \in \mathrm{O}(n) \).

  • Duck speaking

    Wouldn't we say that \( 2n + 1 \in \Theta(n) \)?

  • LHS Cow speaking

    Sure, we could say that. That's also true!

  • RHS Cow speaking

    These aren't mutually exclusive statements. They can both be true.

  • LHS Cow speaking

    Just like \( x \leq 5 \) and \( x = 5 \) can both be true.

  • RHS Cow speaking

    Of course, like \( x = 5 \), \( 2n + 1 \in \Theta(n) \) gives us more information, so if we know that's true, it's probably better to just say that!

Example 2

Is \( 2n + 1 \in \mathrm{O}(n^2) \)?

Remember, we need to consider \( \lim_{n \to \infty} \frac{ 2n + 1 }{ n^2 } \).

Let's take the limit:

$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n^2 } \\ = \lim_{n \to \infty} \frac{ 2n }{ n^2 } + \lim_{n \to \infty} \frac{ 1 }{ n^2 } \\ = 0 $$

Since \( 0 < \infty \), we can say that \( 2n + 1 \in \mathrm{O}(n^2) \).

  • Horse speaking

    Hay, wait. In CS 60 we would never have said that \( 2n + 1 \in \mathrm{O}(n^2) \)!

  • LHS Cow speaking

    Sure. That's because it's not a very useful thing to say.

  • RHS Cow speaking

    Still, now that we understand more fully what \( \mathrm{O} \) means, we can see that it is true!

  • LHS Cow speaking

    Of course, because the limit is \( 0 \), we know that \( 2n + 1 \notin \Theta(n^2) \), but it is safe to say that it is "no worse than" quadratic.

  • Pig speaking

    It's like saying that paying for an all-you-can-eat buffet is "no worse than" paying for every side dish individually. I mean, it's better, because you can eat MORE for the same price, but it's also no worse.

Example 3

Is \( 2n + 1 \in \mathrm{O}(1) \)?

Remember, we need to consider \( \lim_{n \to \infty} \frac{ 2n + 1 }{ 1 } \).

Let's take the limit:

$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ 1 } \\ = \lim_{n \to \infty} (2n + 1) \\ = \infty. $$

Since the limit is \( \infty \), which is not less than \( \infty \), we can say that \( 2n + 1 \notin \mathrm{O}(1) \).

  • Cat speaking

    That makes sense because linear time is asymptotically worse than constant time. And \( \mathrm{O} \) means "no worse than".

  • LHS Cow speaking

    Now you're getting it!
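By the way, if these limits feel abstract, you can watch them converge numerically. Here's a small C++ sketch (our own illustration, not part of the definition) that evaluates \( f(n)/g(n) \) from Examples 1-3 for increasingly large \( n \):

```cpp
#include <cstdio>

// Evaluate the ratios from Examples 1-3 for ever-larger n,
// where f(n) = 2n + 1 and g(n) is n, n^2, or 1.
int main() {
    for (double n = 1e1; n <= 1e9; n *= 100) {
        double f = 2.0 * n + 1.0;
        std::printf("n = %.0e   f(n)/n = %.6f   f(n)/n^2 = %.2e   f(n)/1 = %.2e\n",
                    n, f / n, f / (n * n), f);
    }
    // As n grows: f(n)/n settles near 2 (finite, so 2n+1 is in O(n)),
    // f(n)/n^2 falls toward 0 (finite, so 2n+1 is in O(n^2)),
    // and f(n)/1 grows without bound (so 2n+1 is not in O(1)).
    return 0;
}
```

The first ratio settles near \( 2 \), the second falls toward \( 0 \), and the third blows up, matching the three limits we just computed by hand.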

What is \( \mathrm{O} \) For?

  • Duck speaking

    I feel like I understand how to use \( \Theta \), but when do we use \( \mathrm{O} \)?

  • LHS Cow speaking

    Well, we use it whenever we want to say that the complexity is no worse than something else!

  • RHS Cow speaking

    For example, let's revisit linear search...

On the previous page, we concluded that linear search has

  • \( \Theta(1) \) best-case complexity and
  • \( \Theta(n) \) worst-case complexity.

Depending on the input, the complexity could be of completely different orders!

So we can't just assign an order of complexity to linear search overall.

But one thing we could reasonably say is that the complexity of linear search is never worse than linear.

Big \( \mathrm{O} \) is how we say that with asymptotic notation: the complexity of linear search is \( \mathrm{O}(n) \).
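To make that concrete, here's a minimal C++ linear search (a sketch; the exact interface is our own choice for illustration):

```cpp
#include <cstddef>

// Linear search: return the index of key in the array arr of length n,
// or n if key is not present.
std::size_t linearSearch(const int* arr, std::size_t n, int key) {
    for (std::size_t i = 0; i < n; ++i) {   // runs at most n times: O(n)
        if (arr[i] == key) {
            return i;   // best case: key is at index 0 (constant time)
        }
    }
    return n;           // worst case: key is absent (all n iterations run)
}
```

Whether the loop exits on the first iteration or runs all \( n \) times, it never runs more than \( n \) times, which is exactly the claim \( \mathrm{O}(n) \) makes.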

We use \( \Theta \) when we want to say that two cost functions are asymptotically similar.

  • LHS Cow speaking

    For example, "the worst-case complexity of linear search is \( \Theta(n) \)".

And we use \( \mathrm{O} \) when we want to say that one cost function is upper bounded by another in an asymptotic sense.

  • RHS Cow speaking

    For example, "the complexity of linear search is \( \mathrm{O}(n) \)".

The second statement leaves room for the possibility that the complexity overall might sometimes be better than linear, because it might actually be!

In contrast, we know that the worst-case complexity is definitely linear; it can't be asymptotically better or worse than that.

Quick Practice: Using \( \mathrm{O} \)

There is a very widely used sorting algorithm named Quicksort.

Oddly enough, despite its name, in the worst case Quicksort takes quadratic time! (And, even weirder, the worst case happens when the list comes already sorted.)

In the best case, however, it takes linearithmic (\( n\log{n} \)) time.

What is the most precise and accurate characterization of the worst-case complexity of Quicksort? Since the worst case takes quadratic time (no better, no worse), it is \( \Theta(n^2) \).

And what is the most precise and accurate characterization of the complexity of Quicksort overall? Since the complexity ranges from linearithmic in the best case up to quadratic in the worst case, the most we can say is that it is \( \mathrm{O}(n^2) \).

  • Duck speaking

    So… if it has such a lousy worst case, why do people use it so much and call it "Quicksort"?

  • LHS Cow speaking

    For one thing, the worst case is very unlikely if you shuffle the list before sorting it. That makes its expected complexity \( \Theta(n \log{n} ) \), and the worst case becomes vanishingly unlikely for large \( n \).

  • Rabbit speaking

    Actually, you don't have to shuffle the array. If you're familiar with the algorithm, you'll know that it has to choose a "pivot" element each time it recurses. If you choose the pivot randomly, then the worst case is also vanishingly unlikely. (There's a sketch of this idea below.)

  • LHS Cow speaking

    For another, its constant factors and overhead are small compared with many sorting algorithms that have \( \Theta(n \log{n}) \) complexity.

  • RHS Cow speaking

    Always remember: asymptotic analysis is useful, but it doesn't tell the whole story!
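In case you're curious, here's a sketch of Quicksort with Rabbit's random-pivot idea, written in C++. (This is one simple way to write it, not the definitive version; production implementations differ in many details.)

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Quicksort with a randomly chosen pivot, sorting a[lo..hi] in place.
// Random pivots make the quadratic worst case vanishingly unlikely,
// giving an expected Theta(n log n) running time.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) {
        return;                              // 0 or 1 elements: already sorted
    }
    int pivotIndex = lo + std::rand() % (hi - lo + 1);
    std::swap(a[pivotIndex], a[hi]);         // stash the pivot at the end
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; ++j) {          // partition around the pivot
        if (a[j] < pivot) {
            std::swap(a[i], a[j]);
            ++i;
        }
    }
    std::swap(a[i], a[hi]);                  // pivot lands in its final spot
    quicksort(a, lo, i - 1);                 // recursively sort both sides
    quicksort(a, i + 1, hi);
}
```

To sort a whole vector `v`, you'd call `quicksort(v, 0, static_cast<int>(v.size()) - 1)`. (We use `std::rand` only for brevity; real code would prefer the `<random>` facilities.)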

Big \( \Omega \)

  • Duck speaking

    So, if \( \mathrm{O} \) means "no worse than", is there a way to say "no better than"?

  • LHS Cow speaking

    Yep! That's Big \( \Omega \) (the Greek capital letter omega).

  • RHS Cow speaking

    \( \Omega \) is just the mirror image of \( \mathrm{O} \).

For a function \( g(n) \), \( \Omega(g(n)) \) is a set of functions: all the functions \( f \) that grow faster than, or at a similar rate to, \( g \) as \( n \to \infty \).

More specifically,

$$ f(n) \in \Omega(g(n)) \mathrm{\quad{}if\ and\ only\ if\quad{}} \lim_{n \to \infty} \frac{ f(n) }{ g(n) } > 0. $$

Whereas \( f(n) \in \mathrm{O}(g(n)) \) sort of means "\( f \) is no worse than \( g \)", \( f(n) \in \Omega(g(n)) \) sort of means "\( f \) is no better than \( g \)".
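For example, let's redo Example 1 with \( \Omega \): is \( 2n + 1 \in \Omega(n) \)? We already computed the limit:

$$ \lim_{n \to \infty} \frac{ 2n + 1 }{ n } = 2. $$

Since \( 2 > 0 \), we can say that \( 2n + 1 \in \Omega(n) \): it is "no better than" linear.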

Useful facts about \( \mathrm{O} \) and \( \Omega \)

  • If \( f(n) \in \mathrm{O}(g(n)) \), then \( g(n) \in \Omega(f(n)) \) and vice versa.
  • If \( f(n) \in \mathrm{O}(g(n)) \), and \( f(n) \in \Omega(g(n)) \), then \( f(n) \in \Theta(g(n)) \) and vice versa.
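For example, in Example 1 we found that \( \lim_{n \to \infty} \frac{ 2n + 1 }{ n } = 2 \), which is both finite and positive. That gives us

$$ 2n + 1 \in \mathrm{O}(n) \mathrm{\quad{}and\quad{}} 2n + 1 \in \Omega(n), $$

so by the second fact, \( 2n + 1 \in \Theta(n) \), which is exactly what Duck suspected back in Example 1.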

  • LHS Cow speaking

    So, if you can work with \( \mathrm{O} \), then you can work with \( \Omega \).

  • RHS Cow speaking

    You just have to flip things around!

  • LHS Cow speaking

    We're usually more interested in the question "how bad can things get?" than "how good can things be?"

  • RHS Cow speaking

    For that reason, \( \Omega \) doesn't get as much use as \( \mathrm{O} \) or \( \Theta \), but it comes in handy sometimes.
