Big-O Notation - StudyPulse

Big-O Notation

Algorithmics (HESS)
StudyPulse

01 May 2026

Big-O Notation and Worst-Case Complexity Analysis

What Is Big-O Notation?

Big-O notation provides a formal way to describe the upper bound on an algorithm’s resource usage (time or space) as a function of input size $n$.

Formal definition:
$f(n) = O(g(n))$ if there exist constants $c > 0$ and $n_0 > 0$ such that:
$$f(n) \leq c \cdot g(n) \quad \text{for all } n \geq n_0$$

In plain English: $f(n)$ grows no faster than $g(n)$ for large enough $n$.

KEY TAKEAWAY: Big-O describes the worst-case growth rate. It ignores constants and lower-order terms, focusing on the dominant behaviour for large inputs.
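The formal definition can be checked numerically. A minimal sketch in Python, using illustrative witnesses (the choices $f(n) = 3n + 7$, $c = 4$, $n_0 = 7$ are examples, not from the notes):

```python
# Check that f(n) = 3n + 7 is O(n): find constants c and n0 with
# f(n) <= c * g(n) for all n >= n0.  Here c = 4 and n0 = 7 work,
# since 3n + 7 <= 4n exactly when n >= 7.

def f(n):
    return 3 * n + 7

def g(n):
    return n

c, n0 = 4, 7

# Verify the inequality over a large range of n (not a proof, but
# consistent with the algebra above).
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
```

A finite check like this is not a proof, but it is a good sanity test when you have proposed constants $c$ and $n_0$ by hand.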


Worst-Case Analysis

Worst-case complexity is the maximum number of operations the algorithm performs over all inputs of size $n$.

Example: Linear search on array $A$ of size $n$:
- Best case: target is $A[0]$ → 1 comparison → $O(1)$
- Average case: target found after about $n/2$ comparisons on average → $O(n)$
- Worst case: target absent → $n$ comparisons → $O(n)$

VCE Algorithmics focuses on worst-case unless stated otherwise.
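The three cases above can be observed directly by instrumenting linear search to count comparisons (a sketch; the comparison counter is added for illustration):

```python
def linear_search(A, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, x in enumerate(A):
        comparisons += 1          # one comparison per element examined
        if x == target:
            return i, comparisons
    return -1, comparisons

A = list(range(10))               # n = 10
print(linear_search(A, 0))        # best case:  (0, 1)  -- one comparison
print(linear_search(A, 99))       # worst case: (-1, 10) -- n comparisons
```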


Simplification Rules

When computing Big-O, apply these rules:

  1. Drop constants: $3n^2 = O(n^2)$, not $O(3n^2)$
  2. Drop lower-order terms: $n^3 + 5n^2 + 100 = O(n^3)$
  3. Dominant term wins: $n! + 2^n + n^3 = O(n!)$

Ordering of common growth rates (slowest to fastest):
$$O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)$$
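Rule 2 can be seen numerically: for $T(n) = n^3 + 5n^2 + 100$, the ratio $T(n)/n^3$ approaches 1 as $n$ grows, so the lower-order terms become negligible. A quick sketch:

```python
def T(n):
    """Example cost function from rule 2: n^3 + 5n^2 + 100."""
    return n**3 + 5 * n**2 + 100

# The ratio T(n) / n^3 tends to 1, showing n^3 dominates.
for n in (10, 100, 1000):
    print(n, T(n) / n**3)
```

At $n = 1000$ the ratio is already within about 0.5% of 1, which is why Big-O keeps only the dominant term.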


Examples

| Algorithm | Time complexity | Why |
|---|---|---|
| Array access `A[i]` | $O(1)$ | Direct memory access |
| Binary search | $O(\log n)$ | Halves search space each step |
| Linear search | $O(n)$ | May scan every element |
| Mergesort | $O(n \log n)$ | $\log n$ levels, $O(n)$ work per level |
| Bubble sort | $O(n^2)$ | Two nested loops |
| Floyd-Warshall | $O(n^3)$ | Three nested loops |
| Brute-force TSP | $O(n!)$ | Enumerate all permutations |
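The $O(\log n)$ entry for binary search can be verified by counting loop iterations: an unsuccessful search on a sorted array of size $n$ takes at most $\lfloor \log_2 n \rfloor + 1$ iterations. A sketch (the step counter is added for illustration):

```python
import math

def binary_search(A, target):
    """Return (index or -1, number of loop iterations)."""
    lo, hi, steps = 0, len(A) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if A[mid] == target:
            return mid, steps
        elif A[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

n = 1_000_000
A = list(range(n))
_, steps = binary_search(A, -1)        # worst case: target absent
# steps is at most floor(log2(n)) + 1 = 20 for n = 10^6
print(steps, math.floor(math.log2(n)) + 1)
```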

Big-O in Practice

Estimating feasibility

With a modern computer doing $10^8$ operations/second:

| Complexity | $n = 100$ | $n = 10^4$ | $n = 10^6$ |
|---|---|---|---|
| $O(n)$ | instant | instant | instant |
| $O(n \log n)$ | instant | instant | $\approx$ 0.2 s |
| $O(n^2)$ | instant | $\approx$ 1 s | $\approx$ 167 min |
| $O(2^n)$ | astronomical | astronomical | astronomical |

Note that $O(2^n)$ is already hopeless at $n = 100$: $2^{100}/10^8$ seconds is around $10^{14}$ years.
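The entries in this table follow from dividing the operation count by the assumed rate of $10^8$ operations per second. A sketch of the arithmetic:

```python
import math

OPS_PER_SEC = 1e8   # assumed machine speed, as in the table above

def seconds(ops):
    """Estimated wall-clock time for a given operation count."""
    return ops / OPS_PER_SEC

n = 10**6
print(f"O(n log n), n = 10^6: {seconds(n * math.log2(n)):.2f} s")   # 0.20 s

n = 10**4
print(f"O(n^2),     n = 10^4: {seconds(n**2):.2f} s")               # 1.00 s
```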

Worked Analysis

ALGORITHM Mystery(A, n)
    result ← 0
    for i ← 0 to n-1 do           // n iterations
        for j ← i to n-1 do       // n - i iterations
            result ← result + A[i] * A[j]
        end for
    end for
    return result

Iterations: $(n) + (n-1) + \ldots + 1 = n(n+1)/2$

Time complexity: $O(n^2)$
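The iteration count can be confirmed empirically. A Python sketch of the Mystery pseudocode, with an iteration counter added for illustration:

```python
def mystery(A):
    """Python version of Mystery, counting inner-loop iterations."""
    n = len(A)
    result = iterations = 0
    for i in range(n):                 # n iterations
        for j in range(i, n):          # n - i iterations
            result += A[i] * A[j]
            iterations += 1
    return result, iterations

# The inner loop runs n + (n-1) + ... + 1 = n(n+1)/2 times in total.
for n in (1, 5, 10):
    _, iters = mystery(list(range(n)))
    assert iters == n * (n + 1) // 2   # matches the closed form above
```

Since $n(n+1)/2 = \frac{1}{2}n^2 + \frac{1}{2}n$, the dominant term is $n^2$ and the complexity is $O(n^2)$, as stated.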

EXAM TIP: In exams, show your working when determining Big-O. Count loop iterations explicitly, state the dominant term, and drop constants. Simply writing $O(n^2)$ without justification may not receive full marks.

COMMON MISTAKE: Confusing Big-O (upper bound / worst case) with the exact running time. An algorithm with $T(n) = 3n + 7$ is $O(n)$ — not $O(3n)$ or $O(n + 7)$.
