Chapter 25 Exercises: Algorithm Design Strategies


Part A: Conceptual Understanding

A.1. In your own words, explain the difference between divide-and-conquer and dynamic programming. Both break problems into sub-problems — what makes them different?

Guidance Divide and conquer breaks problems into independent sub-problems that are solved separately and combined. Dynamic programming breaks problems into overlapping sub-problems — the same sub-problem is needed by multiple larger problems, so results are cached to avoid recomputation. If sub-problems do not overlap, D&C is appropriate; if they do, DP is needed.

A.2. Why does the greedy strategy work for the fractional knapsack but not for the 0/1 knapsack? Give a specific example where greedy fails for 0/1 knapsack.

Guidance In the fractional knapsack, you can take part of an item, so the greedy strategy of taking the highest value-per-weight first is always optimal. In the 0/1 version, you must take all or nothing. Example: capacity 10, items (weight=6, value=7), (weight=5, value=5), (weight=5, value=5). Greedy takes the first item (ratio ≈ 1.17, strictly the highest) for value 7, leaving only 4 units of capacity, so neither remaining item fits. The optimal solution takes items 2 and 3 for value 10.
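This failure can be checked mechanically. A minimal Python sketch (Python is assumed; the chapter shows no code, and the function names are illustrative) comparing greedy selection against exhaustive search on an instance whose first item has the strictly highest value-per-weight ratio:

```python
from itertools import combinations

def greedy_01(capacity, items):
    # Sort by value-per-weight ratio, highest first; take each whole item that fits.
    total, remaining = 0, capacity
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= remaining:
            remaining -= w
            total += v
    return total

def brute_force_01(capacity, items):
    # Try every subset; guaranteed optimal but exponential in len(items).
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for w, _ in subset) <= capacity:
                best = max(best, sum(v for _, v in subset))
    return best

items = [(6, 7), (5, 5), (5, 5)]   # (weight, value)
print(greedy_01(10, items))        # 7: greedy grabs the ratio-1.17 item, then nothing fits
print(brute_force_01(10, items))   # 10: the two 5s together beat the "best" single item
```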

A.3. Explain the concepts of "overlapping subproblems" and "optimal substructure." Give an example of a problem that has both.

Guidance Overlapping subproblems: the same smaller problem appears multiple times when solving the larger problem (e.g., Fibonacci: F(5) needs F(3), and F(4) also needs F(3)). Optimal substructure: the optimal solution to the whole problem contains optimal solutions to sub-problems (e.g., the shortest path from A to C through B contains the shortest path from A to B). The 0/1 knapsack has both properties.

A.4. Compare memoization and tabulation. What are the advantages of each approach?

Guidance Memoization (top-down): easier to implement (add cache to recursive solution), computes only needed sub-problems, preserves the recursive structure. Tabulation (bottom-up): no recursion overhead, explicit computation order, can sometimes optimize space by discarding old table entries. Tabulation is typically slightly faster due to no function call overhead; memoization is easier to write for complex problems.

A.5. The activity selection problem can be solved greedily. Could it also be solved with dynamic programming? If so, how? Would the DP solution be better or worse than greedy?

Guidance Yes, DP can solve activity selection. Sort activities by finish time and define DP[i] = maximum number of activities selectable from the first i activities; for each activity, decide to include it (jumping back to the latest compatible activity) or exclude it, much like the knapsack. The DP solution is O(n²), or O(n log n) with binary search, while greedy is O(n log n) including the sort. Greedy is simpler and equally correct, so DP is overkill here, but the exercise illustrates that a problem with a correct greedy solution also has the optimal substructure DP needs.

A.6. Trace the DP table for the 0/1 knapsack problem with: capacity = 7, items = [(weight=1, value=1), (weight=3, value=4), (weight=4, value=5), (weight=5, value=7)]. What is the optimal value? Which items are selected?

Guidance Build a 5×8 table (rows for 4 items plus the base case, columns for capacities 0-7). Optimal value: 9 (items 2 and 3: weight 3+4=7, value 4+5=9). Tracing back: Table[4][7]=9 equals Table[3][7]=9, so item 4 is not taken. Table[3][7]=9 differs from Table[2][7]=5, so item 3 is taken (weight 4, remaining capacity 7-4=3). Table[2][3]=4 differs from Table[1][3]=1, so item 2 is taken (weight 3, remaining capacity 0). Table[1][0]=0, done.
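The table construction and trace-back can be sketched in Python (Python assumed; the function names are illustrative, not from the chapter):

```python
def knapsack_table(capacity, items):
    # items: list of (weight, value); table[i][c] = best value using the first i items
    # with capacity c. Row 0 is the all-zero base case.
    n = len(items)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = items[i - 1]
        for c in range(capacity + 1):
            table[i][c] = table[i - 1][c]              # skip item i
            if w <= c:                                  # or take it, if it fits
                table[i][c] = max(table[i][c], table[i - 1][c - w] + v)
    return table

def trace_back(table, capacity, items):
    # Walk from table[n][capacity]: if the value changed when item i arrived, item i
    # was taken, so subtract its weight and continue from the smaller capacity.
    chosen, c = [], capacity
    for i in range(len(items), 0, -1):
        if table[i][c] != table[i - 1][c]:
            chosen.append(i)                            # 1-based item index
            c -= items[i - 1][0]
    return sorted(chosen)

items = [(1, 1), (3, 4), (4, 5), (5, 7)]
table = knapsack_table(7, items)
print(table[4][7])                  # 9
print(trace_back(table, 7, items))  # [2, 3]
```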

Part B: Applied Analysis

B.1. Rosa wants to decide how to split a $2,000 tax refund across debt payments. Each debt has a remaining balance and an interest rate. She can make partial payments. Should she use greedy (pay highest interest first) or DP? Justify your answer.

Guidance Greedy works here. This is essentially the fractional knapsack: the "value" of each payment is the interest saved, and higher interest rates give more value per dollar. Since she can make partial payments (fractional), greedy is optimal. If she could only pay in fixed installments (all-or-nothing), DP would be needed.

B.2. Tomás needs to fit textbooks into his backpack (capacity: 10 kg). He has 6 textbooks with weights [2, 3, 4, 5, 6, 7] kg and importance values [3, 4, 5, 8, 9, 10]. Use the 0/1 knapsack DP to find the optimal selection.

Guidance Build a DP table with 7 rows (6 items + base) and 11 columns (capacities 0-10). Optimal value: 15, from the items with weights 2, 3, and 5 (values 3+4+8=15). The next-best combination, weights 3 and 7, gives only 4+10=14. Trace the table to confirm.

B.3. A ride-sharing app needs to match drivers to passengers. Each match has a "quality score." The goal is to maximize total quality across all matches, where each driver and passenger can be matched to at most one partner. Is this a greedy, DP, or neither problem? What is the correct approach?

Guidance This is a bipartite matching problem, which is neither purely greedy nor DP (though DP can help for special cases). The correct approach is the Hungarian algorithm or a network flow algorithm. Greedy (match highest-quality pairs first) can produce suboptimal results because matching two high-quality pairs might prevent three medium-quality matches that would score higher overall.
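A tiny Python sketch (Python assumed; the 2-driver, 2-passenger score matrix is invented for illustration) showing greedy losing to the optimal assignment:

```python
from itertools import permutations

# quality[d][p] = score of matching driver d with passenger p (made-up numbers)
quality = [[10, 9],
           [8, 1]]

def greedy_match(q):
    # Repeatedly take the highest-scoring pair whose driver and passenger are both free.
    pairs = sorted(((q[d][p], d, p) for d in range(len(q))
                    for p in range(len(q[0]))), reverse=True)
    used_d, used_p, total = set(), set(), 0
    for score, d, p in pairs:
        if d not in used_d and p not in used_p:
            used_d.add(d)
            used_p.add(p)
            total += score
    return total

def optimal_match(q):
    # Brute force over all assignments; only viable for tiny instances.
    n = len(q)
    return max(sum(q[d][perm[d]] for d in range(n))
               for perm in permutations(range(n)))

print(greedy_match(quality))   # 11: takes the 10, then is stuck with the 1
print(optimal_match(quality))  # 17: 9 + 8 beats 10 + 1
```

For realistic sizes, a library implementation of the optimal assignment, e.g. scipy.optimize.linear_sum_assignment, replaces the brute force.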

Part C: Code Exercises

C.1. Implement the Fibonacci function three ways: naive recursion, memoization, and tabulation. Time all three for N = 40. Report the results.

Hint Naive: O(2^n), will take several seconds. Memoization: O(n), near-instant. Tabulation: O(n), near-instant. The difference between naive and memoized Fibonacci is one of the most dramatic performance improvements in all of CS.
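A Python sketch of the three versions (Python assumed). The naive version is run at n = 30 so the demo finishes quickly; the exercise's n = 40 would take it far longer, which is the point of the comparison:

```python
import time
from functools import lru_cache

def fib_naive(n):
    # O(2^n): recomputes the same sub-problems exponentially many times.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # O(n): identical recursion, but each sub-problem is computed once and cached.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    # O(n) time, O(1) space: bottom-up, keeping only the last two values.
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr

for f, n in ((fib_naive, 30), (fib_memo, 40), (fib_tab, 40)):
    start = time.perf_counter()
    result = f(n)
    elapsed = time.perf_counter() - start
    print(f"{f.__name__}({n}) = {result} in {elapsed:.4f}s")
```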

C.2. Implement the coin change problem using dynamic programming for an arbitrary denomination set. Find the minimum number of coins to make 47 cents using denominations {1, 5, 10, 21, 25}.

Hint DP[amount] = minimum coins for that amount. DP[0] = 0. For each amount, try all denominations: DP[amount] = min(DP[amount - coin] + 1) for all valid coins. Answer for 47 with {1,5,10,21,25}: 3 coins (21+21+5).
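The recurrence in the hint translates directly to Python (Python assumed):

```python
def min_coins(amount, coins):
    # dp[a] = fewest coins that sum to a; infinity marks unreachable amounts.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount]

print(min_coins(47, [1, 5, 10, 21, 25]))  # 3  (e.g., 21 + 21 + 5)
```

Note that with the unusual 21-cent denomination, DP is the safe choice: greedy (always take the largest coin) is not guaranteed optimal for arbitrary denomination sets.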

C.3. Implement the LCS function from Section 25.5 and extend it to print the actual subsequence (not just its length). Test with "ABCBDAB" and "BDCAB".

Hint To recover the LCS string, trace back through the table: if S1[i]=S2[j], include that character and move diagonally. If Table[i-1][j] > Table[i][j-1], move up. Otherwise move left.
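One possible Python implementation of the table plus the trace-back described in the hint (Python assumed; when the lengths above and to the left tie, this version moves up, which selects one of possibly several valid LCSs):

```python
def lcs(s1, s2):
    m, n = len(s1), len(s2)
    # table[i][j] = LCS length of the first i chars of s1 and the first j chars of s2
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    # Trace back: matches go diagonally; otherwise follow the larger neighbor.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if s1[i - 1] == s2[j - 1]:
            out.append(s1[i - 1])
            i -= 1
            j -= 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCAB"))  # a length-4 LCS such as "BCAB" ("BDAB" is also valid)
```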

C.4. Implement the activity selection problem. Test with activities: [(1,3), (2,5), (0,7), (4,8), (6,9), (5,10), (8,11), (12,14)]. Which activities are selected?

Hint Sort by finish time. Selected: (1,3), (4,8), (8,11), (12,14) — 4 activities.
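The greedy selection is a few lines of Python (Python assumed):

```python
def select_activities(activities):
    # Greedy: sort by finish time; take each activity that starts no earlier than
    # the finish of the last activity chosen.
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 3), (2, 5), (0, 7), (4, 8), (6, 9), (5, 10), (8, 11), (12, 14)]
print(select_activities(acts))  # [(1, 3), (4, 8), (8, 11), (12, 14)]
```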

C.5. Implement the fractional knapsack. Capacity: 50 kg. Items: [(10, 60), (20, 100), (30, 120)] where each pair is (weight, value). What is the maximum value?

Hint Ratios: 6, 5, 4. Take all of item 1 (10kg, $60), all of item 2 (20kg, $100), and 20/30 of item 3 (20kg, $80). Total: $240, weight: 50kg.
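A Python sketch of the fractional greedy (Python assumed):

```python
def fractional_knapsack(capacity, items):
    # items: (weight, value). Take whole items in order of descending value/weight,
    # then a fraction of the next item to fill the remaining capacity.
    total, remaining = 0.0, capacity
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if remaining <= 0:
            break
        take = min(w, remaining)       # the whole item, or whatever still fits
        total += v * (take / w)
        remaining -= take
    return total

print(fractional_knapsack(50, [(10, 60), (20, 100), (30, 120)]))  # 240.0
```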

Part D: Challenge Exercises

D.1. Edit Distance. Implement the DP solution for edit distance (minimum number of insert/delete/replace operations to transform one string into another). Test with "kitten" → "sitting".

Hint DP[i][j] = edit distance between first i chars of S1 and first j chars of S2. If S1[i]=S2[j], DP[i][j]=DP[i-1][j-1]. Otherwise, DP[i][j] = 1 + min(DP[i-1][j], DP[i][j-1], DP[i-1][j-1]). Answer: 3 (kitten → sitten → sittin → sitting).
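The recurrence from the hint in Python (Python assumed):

```python
def edit_distance(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = edits to turn the first i chars of s1 into the first j chars of s2
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                          # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j                          # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]   # characters match: no edit needed
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete from s1
                                   dp[i][j - 1],      # insert into s1
                                   dp[i - 1][j - 1])  # replace
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```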

D.2. Matrix Chain Multiplication. Given matrices A1(10×30), A2(30×5), A3(5×60), find the parenthesization that minimizes the total number of scalar multiplications.

Hint This is a classic DP problem. DP[i][j] = minimum multiplications to compute Ai through Aj. Try all split points k: DP[i][j] = min(DP[i][k] + DP[k+1][j] + dims[i-1]*dims[k]*dims[j]). Answer: (A1 × A2) × A3 with 4500 multiplications.
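A Python sketch of the chain DP (Python assumed; matrices are numbered 1..n, and matrix i has shape dims[i-1] × dims[i]):

```python
def matrix_chain(dims):
    # dims has length n + 1 for n matrices; dp[i][j] = minimum scalar
    # multiplications to compute the product of matrices i through j.
    n = len(dims) - 1
    INF = float("inf")
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = INF
            for k in range(i, j):             # split point: (i..k)(k+1..j)
                cost = dp[i][k] + dp[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                dp[i][j] = min(dp[i][j], cost)
    return dp[1][n]

print(matrix_chain([10, 30, 5, 60]))  # 4500, via (A1 x A2) x A3
```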

D.3. The Knapsack Problem Three Ways. Solve the same knapsack instance with (a) brute force (try all subsets), (b) greedy, and (c) DP. Compare the results and running times. When does greedy produce a suboptimal answer?

Hint Use items where greedy fails: e.g., capacity 10, items [(6,7), (5,5), (5,5)] — greedy takes the ratio-1.17 item for value 7, while the optimal subset is the two 5s for value 10. Brute force: 2^n subsets, slow but guaranteed optimal. Greedy by value/weight ratio: may miss the optimal combination. DP: O(n*W), guaranteed optimal.

D.4. Longest Increasing Subsequence (LIS). Given a sequence of numbers, find the length of the longest subsequence that is strictly increasing. Use DP. Test with [10, 9, 2, 5, 3, 7, 101, 18].

Hint DP[i] = length of LIS ending at index i. DP[i] = 1 + max(DP[j]) for all j < i where Arr[j] < Arr[i]. Answer: 4 (e.g., [2, 3, 7, 101] or [2, 5, 7, 101]).
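The O(n²) DP in Python (Python assumed):

```python
def lis_length(arr):
    # dp[i] = length of the longest strictly increasing subsequence ending at i
    if not arr:
        return 0
    dp = [1] * len(arr)
    for i in range(1, len(arr)):
        for j in range(i):
            if arr[j] < arr[i]:               # arr[i] can extend the LIS ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```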

D.5. PennyWise Optimal Month. Tomás has 10 optional expenses for the month, each with a cost and a "happiness score." His budget is $300. Use DP to find the optimal set of expenses. Then compare with the greedy approach (highest happiness-per-dollar first). Do they produce the same result?

Hint Create 10 items with costs between $20 and $100 and happiness scores between 10 and 50. The greedy and DP solutions may differ; DP is guaranteed to find at least as much total happiness as greedy. Present both solutions to Tomás and explain why DP is optimal.

Part M: Interleaving Review (Spaced Practice)

M.1. (From Ch 22) Naive recursive Fibonacci is exponentially slow because of overlapping subproblems. Yet merge sort (also recursive with two calls) is O(n log n). Why the difference?

Guidance Merge sort's sub-problems are independent — MergeSort(left) and MergeSort(right) operate on disjoint halves of the array. No sub-problem is computed twice. Fibonacci's sub-problems overlap — Fib(n-1) and Fib(n-2) both need Fib(n-3), leading to exponential redundancy. The presence or absence of overlap determines whether DP is needed.

M.2. (From Ch 23) Quicksort is a divide-and-conquer algorithm. What are its "divide," "conquer," and "combine" steps? Compare with merge sort's steps.

Guidance Quicksort: Divide = partition array around pivot. Conquer = recursively sort sub-arrays. Combine = nothing (sub-arrays are already in place). Merge sort: Divide = split array in half. Conquer = recursively sort halves. Combine = merge sorted halves. The key difference: quicksort does work before recursion (partitioning), merge sort does work after (merging).

M.3. (From Ch 24) Dijkstra's algorithm is described as "greedy." In what sense is it greedy? How does it differ from other greedy algorithms?

Guidance Dijkstra greedily picks the unvisited vertex with the smallest tentative distance at each step. Unlike some greedy algorithms that fail, Dijkstra's greedy choice is provably optimal for non-negative edge weights. It differs from typical greedy algorithms because it also maintains and updates a "frontier" of tentative distances, making it more structured than simple greedy approaches.

M.4. (From Ch 9) The 0/1 knapsack DP table is a 2D array of size (N+1) × (W+1). If N = 100 items and W = 10,000, how much memory does this require (assuming 4-byte integers)? Is this practical?

Guidance 101 × 10,001 × 4 bytes ≈ 4,040,000 bytes ≈ 4 MB. This is very practical on modern hardware. But if W = 1,000,000, the table would be ~400 MB, which might be problematic. The 1D optimization (using only a single row of size W+1) reduces this to ~4 MB regardless of N.
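The 1D optimization mentioned above can be sketched in Python (Python assumed). Iterating capacities from high to low is what preserves the 0/1 constraint: each row[c - w] read still refers to the previous item's row:

```python
def knapsack_1d(capacity, items):
    # One row of capacity + 1 entries instead of an (N+1) x (W+1) table.
    # Iterating c downward ensures each item is counted at most once.
    row = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):
            row[c] = max(row[c], row[c - w] + v)
    return row[capacity]

print(knapsack_1d(7, [(1, 1), (3, 4), (4, 5), (5, 7)]))  # 9, matching exercise A.6
```

The trade-off: the 1D version gives the optimal value but discards the information needed to trace back which items were selected.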