CodeMosa

Master LeetCode Patterns

Primer

Dynamic Programming

Master the art of smart recursion with memory

What is DP (dynamic programming)?


DP is "smart recursion with memory."

You solve a problem by splitting it into smaller subproblems, but you never solve the same subproblem twice—you cache (memoize) results or fill a table (tabulation). This turns many exponential brute-force searches into polynomial-time solutions.

Two properties make DP work:

  • Overlapping subproblems: the same questions get asked repeatedly in a naive recursion.
  • Optimal substructure: an optimal solution is composed of optimal solutions to subparts.

If both hold, DP is a strong candidate.
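To see overlapping subproblems concretely, compare a naive Fibonacci recursion with a memoized one. This is a minimal sketch; the function names are just illustrative:

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Recomputes the same subproblems exponentially many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Same recurrence, but each state n is solved exactly once.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(10))  # 55; fib_naive(10) agrees but does far more work
```

Both functions compute the same answer; caching is the only difference, and it is what collapses the exponential call tree into one call per state.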

Why use DP instead of greedy or plain recursion?

  • Greedy makes locally best choices, which is great when it's provably correct, but it can be fooled by problems where a local choice ruins the global optimum.
  • Plain recursion/backtracking may recompute the same states exponentially many times.
  • DP systematically explores all relevant choices once each, guaranteeing correctness (under the two properties) and cutting time from exponential to polynomial:

    Time ≈ #states × avg transitions, Space ≈ #states.

When to use DP (recognition checklist)


If you see any of these, think DP:

  1. Min/Max/Count with constraints: "minimum cost/maximum value/number of ways."
  2. Sequential decisions: prefix/suffix, positions i, pairs (i, j), or ranges [l, r].
  3. Edit/align strings: LCS, edit distance, wildcard/regex matching.
  4. Capacity/limits: knapsack/coin change ("choose items under capacity").
  5. Grid paths / step-limited moves: right/down, obstacles, costs.
  6. Trees/DAGs: compute from children/parents (rerooting) or in topo order.
  7. Small sets (n ≤ ~20): "visit/assign all" → bitmask DP.
  8. Digit restrictions up to N: count numbers ≤ N with per-digit rules → digit DP.
  9. Two players optimal play: game DP / minimax with memoization.

If none of these fit and a correct greedy exists, DP might be overkill.

The DP approach (6-step recipe)

  1. Define the state. What minimal info makes the next decision local? e.g., i (index), (i, j) (substring), mask (picked set), node (tree), plus any small "mode" flags.

  2. Write the recurrence (transitions). Express the answer for a state from smaller states. Decide whether you min, max, or sum results.

  3. Base cases. Trivial starts (empty prefix, zero capacity, single char, etc.).

  4. Order of evaluation.

    • Top-down (memoization): recursion + cache. Fast to write; follows dependencies automatically.
    • Bottom-up (tabulation): loop in a dependency-respecting order (e.g., increasing length).
  5. Compute answer location. Is it dp[n], dp[n][m], dp[0][n-1], or an aggregate over states?

  6. Complexity & space tweaks.

    • Shrink to rolling arrays if only "previous layer" is needed.
    • Use monotonic queue / CHT / divide-and-conquer / Knuth optimizations when the structure allows.
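The six steps above can be walked through on coin change (minimum coins for a target amount). A hedged Python sketch; the helper names are mine, not the site's:

```python
from functools import lru_cache

def min_coins(coins, amount):
    # Step 1, state: remaining amount x.
    # Step 2, recurrence: best(x) = 1 + min(best(x - c) for each coin c).
    # Step 3, base case: amount 0 needs 0 coins.
    # Step 4, order: top-down memoization follows dependencies automatically.
    @lru_cache(maxsize=None)
    def best(x):
        if x == 0:
            return 0
        if x < 0:
            return float("inf")  # invalid branch: cannot overshoot
        return 1 + min(best(x - c) for c in coins)

    # Step 5, answer location: best(amount); inf means unreachable.
    ans = best(amount)
    return -1 if ans == float("inf") else ans

print(min_coins([1, 2, 5], 11))  # 3  (5 + 5 + 1)
```

Step 6 here: there are O(amount) states and O(len(coins)) transitions each, so time is O(amount × #coins) and space is O(amount) for the cache.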

Tiny, concrete mental pictures

  • Climbing stairs (count ways): Ways to reach step i = ways to reach i-1 + ways to reach i-2. (Overlapping subproblems, sum transition, base: step 0 → 1 way.)
  • Coin change (min coins): Best for amount x = 1 + min(best[x - coin] for coin). (Min transition; base: amount 0 → 0 coins.)
  • Edit distance (two strings): dp[i][j] = min of delete, insert, replace from smaller prefixes, or carry dp[i-1][j-1] if chars match. (2D state, min transition.)
  • Burst balloons / matrix chain (interval DP): dp[l][r] = best split over k in (l..r-1) plus local cost. (Range state; try all split points.)

How DP relates to graphs


Think of each state as a node, each transition as a directed edge to smaller states.

DP = longest/shortest path/count of paths on this implicit DAG computed in a topological order (or via memoized DFS).
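For instance, counting paths in an explicit DAG is exactly a memoized DFS over states. A tiny sketch with a hand-made graph (the node names are mine):

```python
from functools import lru_cache

# Each node is a DP state; each edge is a transition toward the target.
edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

@lru_cache(maxsize=None)
def count_paths(node):
    # Number of distinct paths from `node` to the sink "D".
    if node == "D":
        return 1
    return sum(count_paths(nxt) for nxt in edges[node])

print(count_paths("A"))  # 2  (A->B->D and A->C->D)
```

Because the graph is acyclic, the memoized DFS visits each node once, which is the same guarantee a topological-order fill would give.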

Common pitfalls (and fixes)

  • State too big / too small: include exactly the info that affects future choices; nothing extra.
  • Double counting in counting problems: ensure transitions partition cases cleanly.
  • Wrong iteration order: bottom-up must respect dependencies; otherwise you read unfilled states.
  • Mixing goals: don't combine "count ways" with "min cost" in one table unless you track both explicitly.
  • Space blow-ups: check if you can roll arrays or compress to 1D.

Quick "should I DP?" flow

  1. Naive recursion/backtracking repeats subproblems → Yes.
  2. Greedy is tempting: can you prove it always works? If not → DP.
  3. Graph/DAG with acyclic deps? → Topo DP.
  4. Small set/all subsets? → Bitmask DP.
  5. Per-digit constraints up to N? → Digit DP.
  6. Splitting intervals? → Interval DP.

Top‑down vs bottom‑up


Top‑down (memoization):

  • Pros: fast to write, only visits reachable states, easier to prune and add constraints.
  • Cons: recursion limits/stack, overhead of function calls, tricky to order iterative post-processing.

Bottom‑up (tabulation):

  • Pros: no recursion, predictable memory access, easier to space‑optimize (rolling arrays), good for iterative orders (length, sum, masks).
  • Cons: must know a valid fill order; may fill unreachable states.

Rule of thumb: start top‑down to validate the recurrence; convert to bottom‑up for performance/space when the order is clear.
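The two styles on one recurrence (climbing stairs), sketched side by side; the bottom-up version also shows the rolling-array space trick mentioned above:

```python
from functools import lru_cache

# Top-down: recursion + cache; dependencies are followed automatically.
@lru_cache(maxsize=None)
def ways_td(n):
    if n <= 1:
        return 1
    return ways_td(n - 1) + ways_td(n - 2)

# Bottom-up: fill in increasing n; only the previous two values are
# needed, so a rolling pair replaces the whole table (O(1) space).
def ways_bu(n):
    prev, cur = 1, 1   # ways(0), ways(1)
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur

assert ways_td(10) == ways_bu(10) == 89
```

Same recurrence, same answers; the top-down version is quicker to write, the bottom-up one is easier to space-optimize.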

1D vs 2D vs bitmask DP


1D DP: linear index (i) or value (sum); examples: climbing stairs, house robber, LIS (tails).

2D DP: pairs like (i, j) for substrings or two sequences; examples: edit distance, LCS, grid paths.

Bitmask DP: small n (≤20–22) with subset state mask; examples: TSP-style pathing, assignment, subset convolution. Complexity ~O(n·2^n) or O(2^n·n^2).

Pick the smallest state that makes transitions local; then consider rolling arrays (1D from 2D) when only prior row/diagonal is needed.
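A small bitmask-DP sketch for the assignment problem (the cost matrix is made up for illustration): the state is the set of tasks already taken, and the worker to place next is recovered from the popcount.

```python
from functools import lru_cache

# Assign each of n workers a distinct task at minimum total cost.
cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
n = len(cost)

@lru_cache(maxsize=None)
def best(mask):
    # mask = set of tasks already assigned; next worker = popcount(mask).
    worker = bin(mask).count("1")
    if worker == n:
        return 0
    return min(cost[worker][t] + best(mask | (1 << t))
               for t in range(n) if not mask & (1 << t))

print(best(0))  # 9  (tasks 1, 0, 2 -> costs 2 + 6 + 1)
```

There are 2^n states and up to n transitions each, matching the O(n·2^n) bound above.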

Knapsack variants


0/1 knapsack: each item once; dp[w] = max(dp[w], dp[w-w_i] + v_i), iterating w downward.

Unbounded knapsack / coin change: infinite copies; iterate w upward or loop items outside.

Bounded knapsack: limited copies; split counts by binary lifting (1, 2, 4, ...), then treat as 0/1.

Multi-dimensional knapsack: multiple capacities (weight, volume); state grows to 2D/3D.

Space tips: 1D rolling for 0/1 and unbounded; be mindful of iteration direction (down for 0/1, up for unbounded) to avoid reuse bugs.
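The direction rule is easiest to see in code. Both variants below share one 1D array; only the loop direction differs (a minimal sketch with made-up item data):

```python
def knapsack_01(weights, values, cap):
    # Iterate capacity DOWNWARD: dp[w - wi] still holds the previous
    # item layer, so each item is used at most once.
    dp = [0] * (cap + 1)
    for wi, vi in zip(weights, values):
        for w in range(cap, wi - 1, -1):
            dp[w] = max(dp[w], dp[w - wi] + vi)
    return dp[cap]

def knapsack_unbounded(weights, values, cap):
    # Iterate capacity UPWARD: dp[w - wi] may already include this
    # item, so items can be reused any number of times.
    dp = [0] * (cap + 1)
    for wi, vi in zip(weights, values):
        for w in range(wi, cap + 1):
            dp[w] = max(dp[w], dp[w - wi] + vi)
    return dp[cap]

w, v = [2, 3], [3, 4]
print(knapsack_01(w, v, 4))         # 4: one item fits, not both
print(knapsack_unbounded(w, v, 4))  # 6: two copies of the weight-2 item
```

Flipping the direction is exactly the "reuse bug": running the 0/1 loop upward silently turns it into the unbounded variant.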


Ready to practice?

Test your understanding with a quiz or explore Dynamic Programming patterns