Chapter 26 Further Reading: Complexity Analysis
Books
- Cormen, T. H., et al. (2022). Introduction to Algorithms, 4th ed. MIT Press. Part I covers growth of functions (including formal Big-O, Big-Omega, Big-Theta), recurrences, and amortized analysis. The definitive treatment.
- Sedgewick, R. & Flajolet, P. (2013). An Introduction to the Analysis of Algorithms, 2nd ed. Addison-Wesley. A comprehensive guide to algorithm analysis, covering both theory and practice. More mathematically rigorous than most algorithms textbooks.
- Skiena, S. S. (2020). The Algorithm Design Manual, 3rd ed. Springer. Chapter 2 provides an excellent practical introduction to algorithm analysis with a focus on intuition over formalism.
- McConnell, S. (2004). Code Complete, 2nd ed. Microsoft Press. Chapters 25 and 26 cover code-level performance optimization — the techniques you use after you have chosen the right algorithm. An essential complement to algorithm-level analysis.
Online Resources
- Big-O Cheat Sheet (bigocheatsheet.com). A one-page reference for the time and space complexity of all major data structures and algorithms.
- The Complexity Zoo (complexityzoo.net). For the mathematically curious: a catalog of hundreds of known complexity classes, from P and NP to exotic classes like BQP and #P.
Historical Context
- Knuth, D. E. (1976). "Big Omicron and Big Omega and Big Theta." SIGACT News 8(2), 18-24. Knuth's paper that standardized the notation we use today.
- Hartmanis, J. & Stearns, R. E. (1965). "On the Computational Complexity of Algorithms." Transactions of the AMS 117, 285-306. The paper that founded computational complexity theory as a field.
Topics for Further Exploration
- P vs. NP: The most famous open problem in computer science. Is there a polynomial-time algorithm for every problem whose solution can be verified in polynomial time? A $1 million prize awaits the answer.
- NP-Completeness: Some problems (traveling salesman, graph coloring, Boolean satisfiability) are provably as hard as the hardest problems in NP. Understanding NP-completeness tells you when to stop looking for an efficient algorithm and start looking for approximations.
- Parallel Complexity: How fast can problems be solved with multiple processors? Some problems (such as matrix multiplication) parallelize beautifully; others (such as linked list traversal) are inherently sequential.
- Cache Complexity: Modern computers have memory hierarchies (registers, L1/L2/L3 cache, RAM, disk). Algorithms that access memory sequentially (cache-friendly) can be 10-100x faster than algorithms with random access patterns, even with the same Big-O complexity.
- Compiler Optimization: Free Pascal's -O2 and -O3 flags enable optimizations like loop unrolling, constant propagation, dead code elimination, and register allocation. Understanding what the compiler can (and cannot) do helps you write code that optimizes well.
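The cache-complexity point above is easy to see for yourself. The sketch below (program and array names are illustrative, not from this book) traverses the same matrix twice — both passes do exactly N*N additions, so their Big-O complexity is identical — but Free Pascal lays out static two-dimensional arrays in row-major order, so only the first pass reads consecutive memory addresses:

```pascal
program CacheDemo;
{ Sketch: identical O(N^2) work, very different cache behaviour. }
uses
  SysUtils;
const
  N = 4000;
var
  A: array[0..N - 1, 0..N - 1] of LongInt;  { global array, zero-initialized }
  i, j: Integer;
  Sum: Int64;
  T0: QWord;
begin
  Sum := 0;

  { Row-major traversal: consecutive addresses, each cache line fully used }
  T0 := GetTickCount64;
  for i := 0 to N - 1 do
    for j := 0 to N - 1 do
      Inc(Sum, A[i, j]);
  WriteLn('row-major:    ', GetTickCount64 - T0, ' ms');

  { Column-major traversal: a stride of N * SizeOf(LongInt) bytes between
    consecutive accesses, so most reads miss the cache }
  T0 := GetTickCount64;
  for j := 0 to N - 1 do
    for i := 0 to N - 1 do
      Inc(Sum, A[i, j]);
  WriteLn('column-major: ', GetTickCount64 - T0, ' ms');

  if Sum <> 0 then
    WriteLn(Sum);  { keeps the loops from being optimized away }
end.
```

On typical hardware the column-major pass runs several times slower, despite both loops performing the same number of additions.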
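As a quick reference for trying the optimization flags mentioned above (the file name sieve.pas is a placeholder for your own program):

```shell
fpc sieve.pas          # no optimization: fastest compile, slowest code
fpc -O2 sieve.pas      # standard optimizations (the usual choice)
fpc -O3 sieve.pas      # adds slower, more aggressive optimizations
fpc -a -O2 sieve.pas   # -a keeps the generated assembler file, so you
                       # can inspect what the optimizer actually did
```

Timing the same program at different optimization levels is an instructive exercise: constant-factor improvements from the compiler never change an algorithm's growth rate.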