Chapter 26 Key Takeaways: Complexity Analysis

  1. Big-O notation describes growth rate, not speed. O(n²) does not mean "slow": it means the running time roughly quadruples when the input size doubles. A program that is fast on 100 elements may be impractical on 100,000 if it is O(n²).
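
One way to make the growth rate concrete is to count basic operations instead of timing anything. A minimal sketch (the function name here is invented for illustration):

```python
def count_pair_operations(n):
    """Count the iterations of a quadratic all-pairs loop over n items."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

# Doubling the input quadruples the operation count: that is what
# O(n^2) growth means, independent of how fast each operation runs.
assert count_pair_operations(200) == 4 * count_pair_operations(100)
```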

  2. Constants are dropped for good reason. Constants depend on hardware, language, and implementation. The asymptotic behavior (growth rate) is what determines scalability across all these variables.
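
To see why dropping constants is safe, compare a hypothetical O(n) operation count carrying a 100x constant factor against an O(n²) count with a constant of 1; past a crossover point the growth rate always wins. The constants below are invented for illustration:

```python
def linear_big_constant(n):
    """Operation count for an O(n) algorithm with a costly constant factor."""
    return 100 * n

def quadratic_small_constant(n):
    """Operation count for an O(n^2) algorithm with constant factor 1."""
    return n * n

# Small inputs can favor the asymptotically worse algorithm...
assert quadratic_small_constant(50) < linear_big_constant(50)
# ...but beyond the crossover (n = 100 here) the growth rate dominates.
assert linear_big_constant(10_000) < quadratic_small_constant(10_000)
```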

  3. The complexity hierarchy determines practical limits. O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ). For n = 1,000,000, the difference between O(n log n) and O(n²) is a factor of 50,000.
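
The factor of 50,000 falls out of the ratio n² / (n log₂ n) = n / log₂ n; a quick check:

```python
import math

n = 1_000_000
ratio = (n * n) / (n * math.log2(n))  # simplifies to n / log2(n)
print(round(ratio))                   # roughly 50,000
```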

  4. Analyze loops by counting iterations. A single loop over the input is O(n). Nested loops multiply their iteration counts: two nested O(n) loops give O(n²). A loop that halves its range each pass is O(log n), and so is a loop whose counter doubles until it reaches n.
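
Each loop shape can be verified by counting its iterations directly; a sketch:

```python
def single_loop(n):        # O(n): one iteration per element
    return sum(1 for _ in range(n))

def nested_loops(n):       # O(n^2): inner count times outer count
    return sum(1 for _ in range(n) for _ in range(n))

def halving_loop(n):       # O(log n): n is cut in half each pass
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

def doubling_loop(n):      # O(log n): i doubles until it reaches n
    count, i = 0, 1
    while i < n:
        i *= 2
        count += 1
    return count

assert single_loop(1000) == 1000
assert nested_loops(30) == 900
assert halving_loop(1024) == doubling_loop(1024) == 10
```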

  5. Analyze recursion with recurrence relations. T(n) = 2T(n/2) + O(n) gives O(n log n). T(n) = T(n-1) + O(1) gives O(n). T(n) = 2T(n-1) gives O(2ⁿ).
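
Each recurrence can be checked by translating it into a recursive function that returns the total work (function names here are invented for illustration):

```python
def work_divide(n):        # T(n) = 2T(n/2) + n  ->  O(n log n)
    if n <= 1:
        return 0
    return 2 * work_divide(n // 2) + n

def work_peel(n):          # T(n) = T(n-1) + 1   ->  O(n)
    if n == 0:
        return 0
    return work_peel(n - 1) + 1

def work_branch(n):        # T(n) = 2T(n-1)      ->  O(2^n)
    if n == 0:
        return 1
    return 2 * work_branch(n - 1)

assert work_divide(1024) == 1024 * 10   # n * log2(n) for a power of two
assert work_peel(500) == 500
assert work_branch(20) == 2 ** 20
```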

  6. Space complexity matters too. Memory is finite. An O(n²) space algorithm may be impractical even if its time complexity is acceptable. Always consider time-space tradeoffs.

  7. Best/worst/average cases tell different stories. Worst case gives guarantees. Average case predicts typical behavior. Best case is rarely useful for decision-making.
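
Linear search is the classic illustration: the same algorithm tells three different stories depending on where the target sits. A sketch that counts comparisons:

```python
def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))
assert linear_search(data, 0)[1] == 1      # best case: one comparison
assert linear_search(data, 99)[1] == 100   # worst case: n comparisons
assert linear_search(data, -1)[1] == 100   # absent target is also worst case
```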

  8. Amortized analysis captures average cost over a sequence of operations. Dynamic array insertion is O(1) amortized even though an individual insertion can be O(n) when the array resizes.
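
The resize accounting can be simulated directly: with capacity doubling, the total number of element copies across n appends stays under 2n, so the average cost per append is constant. A sketch:

```python
def total_copies(n):
    """Simulate n appends to a doubling dynamic array; count element copies."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:     # full: allocate double, copy everything over
            copies += size
            capacity *= 2
        size += 1
    return copies

# A single append can copy O(n) elements, but the total stays below 2n,
# which is exactly what "O(1) amortized" means.
assert total_copies(1000) < 2 * 1000
assert total_copies(1_000_000) < 2 * 1_000_000
```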

  9. Profile before optimizing. Measure execution time to find the actual bottleneck. The slowest operation is often not what you expect.
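
The standard-library cProfile module does the measurement; in the sketch below, the function names and the deliberately quadratic hot spot are invented for illustration:

```python
import cProfile
import pstats

def suspected_hot_spot(data):   # looks expensive, but runs in fast C code
    return sorted(data)

def actual_hot_spot(data):      # quadratic pure-Python scan
    return [x for x in data for y in data if x == y]

def pipeline(data):
    suspected_hot_spot(data)
    actual_hot_spot(data)

profiler = cProfile.Profile()
profiler.enable()
pipeline(list(range(300)))
profiler.disable()

# Sort by cumulative time; the top entries reveal the real bottleneck.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```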

  10. Improve the algorithm first, optimize the code last. An algorithmic improvement (O(n²) to O(n log n)) gives orders-of-magnitude speedup. Micro-optimization gives at most 2-3x. Premature optimization is the root of all evil.
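
A typical instance is duplicate detection: replacing pairwise comparison with a hash set changes the complexity class, which no amount of tuning the inner loop could match. A sketch:

```python
def has_duplicate_slow(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_fast(items):
    """O(n): a set gives expected O(1) membership tests."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

assert has_duplicate_slow([1, 2, 3, 2]) == has_duplicate_fast([1, 2, 3, 2]) == True
assert has_duplicate_slow([1, 2, 3]) == has_duplicate_fast([1, 2, 3]) == False
```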