Chapter 26 Key Takeaways: Complexity Analysis
- Big-O notation describes growth rate, not speed. O(n²) does not mean "slow" — it means "the work quadruples when the input doubles." A program that is fast on 100 elements may be impractical on 100,000 if it is O(n²).
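The quadrupling behavior can be checked by counting steps directly. A minimal sketch (the `count_pairs` helper is illustrative, not from the text):

```python
# A hypothetical quadratic routine: visit every ordered pair, O(n^2) steps.
def count_pairs(n):
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

# Doubling the input quadruples the work, regardless of how fast each step is.
print(count_pairs(100))   # 10000
print(count_pairs(200))   # 40000 -- 4x the work for 2x the input
```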
- Constants are dropped for good reason. Constants depend on hardware, language, and implementation. The asymptotic behavior (growth rate) is what determines scalability across all these variables.
- The complexity hierarchy determines practical limits. O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ). For n = 1,000,000, the difference between O(n log n) and O(n²) is a factor of 50,000.
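The factor of 50,000 is a back-of-the-envelope calculation that can be verified directly:

```python
import math

n = 1_000_000
n_log_n = n * math.log2(n)   # ~2.0e7 operations
n_squared = n ** 2           # 1.0e12 operations

print(round(n_squared / n_log_n))  # roughly 50,000
```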
- Analyze loops by counting iterations. Single loops are O(n). Nested loops multiply. Halving loops are O(log n). Loops that double are also O(log n).
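The three loop shapes can be sketched as step-counting functions (the names `linear`, `halving`, and `doubling` are illustrative):

```python
def linear(n):     # O(n): one step per element
    steps, i = 0, 0
    while i < n:
        steps += 1
        i += 1
    return steps

def halving(n):    # O(log n): i is cut in half each iteration
    steps, i = 0, n
    while i > 1:
        steps += 1
        i //= 2
    return steps

def doubling(n):   # O(log n): i doubles until it reaches n
    steps, i = 0, 1
    while i < n:
        steps += 1
        i *= 2
    return steps

print(linear(1024), halving(1024), doubling(1024))  # 1024 10 10
```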
- Analyze recursion with recurrence relations. T(n) = 2T(n/2) + O(n) gives O(n log n). T(n) = T(n-1) + O(1) gives O(n). T(n) = 2T(n-1) gives O(2ⁿ).
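Each recurrence shape can be checked empirically by counting units of work. The `t_*` functions below are illustrative counters, not real algorithms:

```python
def t_divide(n):        # T(n) = 2T(n/2) + n  ->  O(n log n)
    if n <= 1:
        return 1
    return t_divide(n // 2) + t_divide(n // 2) + n

def t_linear(n):        # T(n) = T(n-1) + 1   ->  O(n)
    if n == 0:
        return 1
    return t_linear(n - 1) + 1

def t_exponential(n):   # T(n) = 2T(n-1)      ->  O(2^n)
    if n == 0:
        return 1
    return t_exponential(n - 1) + t_exponential(n - 1)

print(t_exponential(10))  # 1024 units of work for n = 10
```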
- Space complexity matters too. Memory is finite. An O(n²) space algorithm may be impractical even if its time complexity is acceptable. Always consider time-space tradeoffs.
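A classic time-space tradeoff is memoization: spend O(n) extra memory to avoid exponential recomputation. A minimal sketch using the standard-library `functools.lru_cache`:

```python
from functools import lru_cache

# Caching trades O(n) memory for an exponential-to-linear speedup:
# naive recursive Fibonacci is O(2^n) time; memoized, it is O(n).
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # instant with the cache; hopeless without it
```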
- Best/worst/average cases tell different stories. Worst case gives guarantees. Average case predicts typical behavior. Best case is rarely useful for decision-making.
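Linear search is a standard illustration of the three cases:

```python
def linear_search(items, target):
    """Best case O(1): target is first.
    Worst case O(n): target is last or absent.
    Average case O(n): ~n/2 comparisons for a random position."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

print(linear_search([7, 3, 9], 7))   # best case: found at index 0
print(linear_search([7, 3, 9], 5))   # worst case: scans everything, -1
```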
- Amortized analysis captures average behavior over sequences. Dynamic array insertion is O(1) amortized even though individual insertions can be O(n).
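A toy capacity-doubling array (the `DynArray` class below is a hypothetical sketch) makes the amortized claim concrete by counting element copies:

```python
# Doubling dynamic array: most appends are O(1); a resize copies all
# existing elements. Total copies over n appends stay below n, so the
# amortized cost per append is O(1).
class DynArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.copies = 0

    def append(self, x):
        if self.size == self.capacity:
            self.capacity *= 2        # occasional O(n) resize
            self.copies += self.size  # copy every existing element over
        self.size += 1

a = DynArray()
for i in range(1000):
    a.append(i)
print(a.copies)  # 1023 total copies for 1000 appends: ~1 copy per append
```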
- Profile before optimizing. Measure execution time to find the actual bottleneck. The slowest operation is often not what you expect.
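A minimal profiling sketch using the standard-library `cProfile` module (`slow_concat` is just a stand-in workload, not code from the text):

```python
import cProfile
import io
import pstats

def slow_concat(n):
    s = ""
    for i in range(n):
        s += str(i)  # repeated string copies may dominate the runtime
    return s

# Measure where the time actually goes instead of guessing.
pr = cProfile.Profile()
pr.enable()
slow_concat(10_000)
pr.disable()

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```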
- Improve the algorithm first, optimize the code last. An algorithmic improvement (O(n²) to O(n log n)) gives orders-of-magnitude speedup. Micro-optimization gives at most 2-3x. Premature optimization is the root of all evil.
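A sketch of the kind of algorithmic improvement meant here, using duplicate detection as a hypothetical example: the all-pairs version is O(n²), while sorting first brings it down to O(n log n) with no micro-optimization at all.

```python
def has_duplicate_quadratic(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_sorted(items):
    """O(n log n): sort once, then scan adjacent neighbors."""
    s = sorted(items)
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))

print(has_duplicate_sorted([3, 1, 4, 1, 5]))  # True
```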