Master complexity analysis and recursion fundamentals
What: Big-O describes the asymptotic upper bound on how an algorithm's cost grows with input size n. It hides constant factors and lower-order terms.
Why: Lets you compare scalability independent of hardware and implementation details.
Origin (one-liner): The notation (often called Bachmann–Landau notation) comes from number theory: introduced by Paul Bachmann (1894) and popularized by Edmund Landau. The "O" stands for Order (of growth).
Formally: f(n) = O(g(n)) means there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀. "Eventually" = there exists an n₀ after which the inequality holds.
Unit-cost RAM model (interview default): arithmetic/compare/assign on fixed-width ints → O(1). Array index/ptr hop → O(1).
Bit complexity (theory): cost depends on operand bit-length (e.g., big integers).
Data-structure assumptions: Hash map ops often assumed O(1) amortized (but can degrade).
Unless a problem says otherwise, assume unit-cost RAM and fixed-width integers.
Step A — Count operations structurally: loops become sums, sequential blocks add, nested constructs multiply.
Step B — Use dominance rules: drop constants and lower-order terms; keep only the fastest-growing term (1 ≺ log n ≺ n ≺ n log n ≺ n² ≺ 2ⁿ ≺ n!).
Common series & bounds: ∑ᵢ₌₁ⁿ i = n(n+1)/2 = Θ(n²); geometric ∑ᵢ₌₀ᵏ 2ⁱ = 2ᵏ⁺¹ − 1 (dominated by the last term); harmonic ∑ᵢ₌₁ⁿ 1/i = Θ(log n).
Mini-examples
Single loop:
for i in 1..n: cost += 1 → Θ(n)
Nested loops (triangular):
for i in 1..n:
for j in 1..i: work
→ ∑ᵢ₌₁ⁿ i = n(n+1)/2 = Θ(n²)
"Halving" loop:
i = n
while i > 1: i //= 2 → number of iterations = ⌊log₂ n⌋ = Θ(log n)
Doubling index in loop:
for (i=1; i<=n; i*=2) → Θ(log n)
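The halving and doubling patterns can be checked empirically; a minimal Python sketch (function names are illustrative), counting iterations and comparing against the log bound:

```python
import math

def halving_iterations(n):
    """Count iterations of: i = n; while i > 1: i //= 2."""
    i, count = n, 0
    while i > 1:
        i //= 2
        count += 1
    return count

def doubling_iterations(n):
    """Count iterations of: for (i = 1; i <= n; i *= 2)."""
    i, count = 1, 0
    while i <= n:
        i *= 2
        count += 1
    return count

for n in (8, 1000, 10**6):
    assert halving_iterations(n) == math.floor(math.log2(n))
    assert doubling_iterations(n) == math.floor(math.log2(n)) + 1
```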
A. Recursion tree / iteration method — expand a few levels, spot the pattern, sum the per-level costs.
B. Substitution (a.k.a. induction) — guess the bound, prove it by induction, solve for the constants.
C. Master Theorem (divide-and-conquer of the form T(n)=a T(n/b)+f(n))
Compare f(n) against n^(log_b a) (the cost contributed by the leaves of the recursion tree):
Case 1: f(n) = O(n^(log_b a − ε)) for some ε > 0 → T(n) = Θ(n^(log_b a))
Case 2: f(n) = Θ(n^(log_b a)) → T(n) = Θ(n^(log_b a) · log n)
Case 3: f(n) = Ω(n^(log_b a + ε)) and a·f(n/b) ≤ c·f(n) for some c < 1 → T(n) = Θ(f(n))
Canonical worked examples
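A few standard recurrences worked through the Master Theorem (each line compares f(n) with n^{log_b a}):

```latex
\begin{align*}
T(n) &= 2T(n/2) + \Theta(n)   & f(n) = n = n^{\log_2 2}            &\Rightarrow \Theta(n \log n)          && \text{(merge sort)}\\
T(n) &= T(n/2) + \Theta(1)    & f(n) = 1 = n^{\log_2 1}            &\Rightarrow \Theta(\log n)            && \text{(binary search)}\\
T(n) &= 3T(n/2) + \Theta(n)   & n^{\log_2 3} \approx n^{1.585} \gg n &\Rightarrow \Theta(n^{\log_2 3})    && \text{(Karatsuba)}\\
T(n) &= 2T(n/2) + \Theta(n^2) & f(n) = n^2 \gg n^{\log_2 2}        &\Rightarrow \Theta(n^2)
\end{align*}
```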
When Master Theorem doesn't fit (e.g., T(n)=T(n/2)+T(n/3)+n), try recursion tree or Akra–Bazzi (advanced).
Dynamic array push (double capacity on full): n pushes cost n writes plus 1 + 2 + 4 + … (≤ n) < 2n copies during resizes → total < 3n → O(1) amortized per push.
Stack with occasional multi-pop: each element is pushed and popped once → O(1) amortized per op.
Potential method: define a potential Φ(state) so that amortized_cost = actual_cost + ΔΦ and show the series telescopes.
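The doubling argument can be verified by counting element copies directly; a minimal sketch (the class name `CountingArray` is made up for illustration):

```python
class CountingArray:
    """Dynamic array that doubles capacity when full, counting element copies."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0  # total elements moved across all resizes

    def push(self, _x):
        if self.size == self.capacity:
            self.capacity *= 2
            self.copies += self.size  # cost of moving the existing elements
        self.size += 1

n = 10**5
arr = CountingArray()
for i in range(n):
    arr.push(i)

# Total work = n writes + copies; copies = 1 + 2 + 4 + ... < 2n,
# so the amortized cost per push is bounded by a small constant.
assert arr.copies < 2 * n
```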
Base, progress, combine
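A minimal recursive sketch showing the three ingredients (the function name is illustrative):

```python
def list_sum(xs):
    # Base: the empty list sums to 0 (no recursive call).
    if not xs:
        return 0
    # Progress: recurse on a strictly smaller input (the tail).
    rest = list_sum(xs[1:])
    # Combine: merge the head with the subproblem's answer.
    return xs[0] + rest

assert list_sum([1, 2, 3, 4]) == 10
```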
Tail recursion vs iteration
A function is tail-recursive if the recursive call is the last thing it does. Some languages optimize tail calls; for interview safety, convert deep tail recursion to a loop.
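For example, a tail-recursive sum converts mechanically to a loop: the accumulator parameter becomes a loop variable (Python does not optimize tail calls, so only the loop form is safe for deep inputs):

```python
def sum_tail(n, acc=0):
    """Tail-recursive: the recursive call is the last thing the function does."""
    if n == 0:
        return acc
    return sum_tail(n - 1, acc + n)  # still limited by Python's recursion depth

def sum_loop(n):
    """Same logic with the call stack replaced by reassignment."""
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

assert sum_tail(100) == sum_loop(100) == 5050
assert sum_loop(10**6) == 10**6 * (10**6 + 1) // 2  # deep input: loop form is fine
```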
Memoization vs tabulation (DP)
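The contrast in one sketch, using Fibonacci as the usual toy example:

```python
from functools import lru_cache

# Memoization (top-down): write the natural recursion, cache results on the way.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Tabulation (bottom-up): fill values in dependency order, no recursion,
# here with the table compressed to the last two entries.
def fib_tab(n):
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(n - 1):
        prev, cur = cur, prev + cur
    return cur

assert fib_memo(30) == fib_tab(30) == 832040
```

Memoization keeps the recursive structure (easy to write, stack-limited); tabulation avoids recursion entirely and often allows space compression as above.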
Recursion pitfalls: missing or unreachable base case (infinite recursion), no progress toward the base, exponential recomputation of overlapping subproblems (memoize), and stack-depth limits on deep inputs.
| Input size n | Safe time target |
|--------------|------------------|
| ~10⁵ – 2·10⁵ | O(n), O(n log n) |
| ~10⁴ – 5·10⁴ | O(n log n), some O(n√n), light O(n²) |
| ≤ 3·10³ | O(n²) fine; avoid O(n³) unless tiny |
| ≤ 20–22 | 2ⁿ / n! backtracking feasible with pruning |
1. for i in 1..n: for j in 1..n: for (k=1; k<n; k*=2) → n × n × log n = Θ(n² log n)
2. T(n)=3T(n/2)+n n^(log₂ 3) ≈ n^1.585 dominates n → Θ(n^(log₂ 3))
3. i=1; s=0; while i<=n: s+=i; i+=i i doubles → iterations ≈ ⌊log₂ n⌋+1 → Θ(log n)
4. Grid n×n, BFS starting from one corner Each cell visited at most once, each edge processed O(1) → Θ(V+E)=Θ(n²).
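The grid-BFS bound can be sketched directly; a minimal version on an open n×n grid (names are illustrative):

```python
from collections import deque

def bfs_distances(n):
    """BFS from corner (0, 0) on an open n x n grid; each cell is enqueued once."""
    dist = [[-1] * n for _ in range(n)]  # -1 marks unvisited
    dist[0][0] = 0
    q = deque([(0, 0)])
    visited = 0
    while q:
        r, c = q.popleft()
        visited += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist, visited

dist, visited = bfs_distances(5)
assert visited == 25       # Θ(n²): every cell is dequeued exactly once
assert dist[4][4] == 8     # Manhattan distance to the far corner
```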