Chapter 25 Key Takeaways: Algorithm Design Strategies
- Three strategies cover most optimization problems: divide and conquer for independent sub-problems, greedy for locally optimal choices, dynamic programming for overlapping sub-problems.
- Divide and conquer splits, solves, and combines. It produces efficient algorithms like merge sort (O(n log n)) and binary search (O(log n)). The sub-problems must be independent.
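A minimal merge sort sketch showing the split/solve/combine pattern (the function name and list-based style are illustrative choices, not from the chapter):

```python
def merge_sort(xs):
    """Divide and conquer: split the list, sort each half independently, merge."""
    if len(xs) <= 1:          # base case: already sorted
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Combine: merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The two recursive calls never look at each other's data, which is exactly the independence the bullet above requires.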
- Greedy is fast but fragile. It works beautifully when the greedy choice property holds (activity selection, fractional knapsack) and fails silently when it does not (0/1 knapsack, some coin change problems). Always prove or test correctness.
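Activity selection is the textbook case where the greedy choice property does hold: always take the activity that finishes earliest. A sketch (function name and tuple format are illustrative assumptions):

```python
def select_activities(intervals):
    """Greedy: repeatedly pick the compatible activity that finishes earliest."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda a: a[1]):
        if start >= last_finish:      # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (8, 11), (12, 16)]
print(len(select_activities(acts)))  # 4
```

Here the earliest-finishing activity can be proven to belong to some optimal solution, so the greedy choice is safe; the 0/1 knapsack offers no analogous guarantee.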
- Dynamic programming trades memory for time. By storing solutions to overlapping sub-problems, DP transforms exponential brute-force solutions into polynomial-time algorithms.
- Memoization and tabulation are two faces of DP. Memoization (top-down) is easier to write. Tabulation (bottom-up) avoids recursion overhead. Both produce the same results.
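Fibonacci is the standard illustration of both faces; naive recursion recomputes the same sub-problems exponentially often, while either DP style answers in linear time. A sketch (function names are illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: recurse naturally; the cache stores each sub-result once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Bottom-up: fill the table from the base cases upward, no recursion."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(40), fib_tab(40))  # 102334155 102334155
```

Same recurrence, same answers; the only difference is whether the table is filled on demand or in a fixed order.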
- Optimal substructure is the key DP property. A problem has optimal substructure if its optimal solution contains optimal solutions to its sub-problems. Without this property, DP cannot guarantee optimality.
- The 0/1 knapsack is the canonical DP problem. It demonstrates why greedy fails on discrete choices and how DP's table-building approach finds the true optimum.
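A sketch of the table-building approach, using the space-optimized 1D form (function name and the sample instance are illustrative, not from the chapter):

```python
def knapsack_01(weights, values, capacity):
    """dp[c] = best value achievable within capacity c.
    Capacities are scanned high-to-low so each item is used at most once."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Greedy by value density takes the first two items for 160;
# the DP table finds the true optimum, items 2 and 3 for 220.
print(knapsack_01([10, 20, 30], [60, 100, 120], 50))  # 220
```

The instance above is exactly the kind of discrete choice that defeats greedy: the densest item belongs to no optimal solution.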
- LCS (longest common subsequence) demonstrates DP on strings. The 2D table fills in O(m × n) time, and the subsequence is recovered by tracing back through the table.
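A sketch of the table fill and the traceback (function name and tie-breaking order are illustrative choices; ties can yield different but equally long subsequences):

```python
def lcs(a, b):
    """Fill an (m+1) x (n+1) table in O(m*n), then trace back to recover one LCS."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Trace back from the bottom-right corner to recover the subsequence.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # BCBA
```

The table answers "how long?"; only the traceback answers "which characters?", which is why both steps appear in the bullet above.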
- Always verify greedy solutions against brute force or DP on small inputs. A greedy algorithm that produces the right answer for one input may fail on another.
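One way to run such a check, using coin change with the denominations {1, 3, 4} where greedy looks right on many amounts but is wrong on some (the helper names and the cross-check loop are illustrative):

```python
def greedy_coins(coins, amount):
    """Greedy: repeatedly take the largest coin that still fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """DP reference: dp[a] = fewest coins summing to a."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        dp[a] = min((dp[a - c] + 1 for c in coins if c <= a), default=INF)
    return dp[amount] if dp[amount] < INF else None

# Cross-check on small inputs: mismatches expose the greedy failure.
for amount in range(1, 20):
    g, d = greedy_coins([1, 3, 4], amount), dp_coins([1, 3, 4], amount)
    if g != d:
        print(amount, g, d)  # e.g. amount 6: greedy uses 3 coins, DP uses 2
```

Amount 6 is the counterexample: greedy takes 4 + 1 + 1 (three coins) while the optimum is 3 + 3 (two). Greedy agrees with DP on many other amounts, which is exactly why spot-checking one input is not enough.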
- Choosing the right strategy is itself a skill. It requires recognizing the problem's structure: are sub-problems independent or overlapping? Is there a greedy property? Does the problem have optimal substructure? This recognition comes with practice.