Case Study 1: Evolution and Chess Engines -- Two Satisficers, No Optimizer
"Natural selection is not a mechanism of optimization. It is a mechanism of elimination." -- Attributed to various evolutionary biologists
The Myth of the Optimized Organism
Walk into any introductory biology class and you will hear language that implies optimization. The cheetah is "perfectly adapted" for speed. The eagle's eye is "optimally designed" for detecting prey. The orchid's flower is "precisely engineered" to attract pollinators. This language is seductive and, in its strong form, wrong.
Natural selection does not optimize. It cannot optimize, because optimization requires three things that evolution lacks: foresight, global comparison, and a fixed objective function.
Foresight: Optimization requires the ability to evaluate future consequences of present choices. A chess player thinking five moves ahead is exercising foresight. Evolution has none. Natural selection operates on the current generation's ability to survive and reproduce in the current environment. A mutation that helps an organism today will be selected for, even if it creates a developmental constraint that will handicap the lineage millions of years from now. The recurrent laryngeal nerve is today's problem because of a routing decision made in ancient fish -- a decision that was fine for fish but absurd for giraffes. Evolution could not see the giraffe coming.
Global comparison: Optimization requires comparing all available options and selecting the best. Evolution cannot compare. It can only select from the options that happen to arise through random mutation and recombination. If a better design exists but no mutation has ever produced it, evolution will never find it. The vertebrate retina is backwards because the ancestral eye developed in a way that placed the photoreceptors behind the nerve layer, and no single mutation could reverse the architecture without destroying the eye's function in the intermediate steps. The octopus eye, which developed independently, happened to place the photoreceptors in front, producing a superior design. But vertebrate evolution cannot access the octopus solution because it cannot redesign from scratch.
A fixed objective function: Optimization requires a single, stable metric to maximize. Evolution has no fixed metric. "Fitness" is not a number that can be computed in advance -- it is a retrospective description of which organisms survived and which did not. And the environment that defines fitness is constantly changing. A trait that increased fitness in the Ice Age may decrease it in a warming world. A trait that helps in a forest may hinder on a plain. Evolution is optimizing (if we use the word loosely) for a moving target, in a landscape that shifts under its feet with every step.
Given these constraints, what evolution actually does is satisfice. It retains any variant that is good enough to survive and reproduce. It does not compare that variant against all possible alternatives. It does not ask whether a better design exists. It does not evaluate long-term consequences. It applies a single, brutal, binary filter: survive, or don't. Pass the threshold, or vanish. This is satisficing in its most elemental form.
The Evidence of Imperfection
The evidence that evolution satisfices rather than optimizes is written in every body. Some examples:
The human spine. The human spine evolved in quadrupeds -- animals that walk on four legs, distributing their weight across a horizontal beam. When our ancestors began walking upright, the spine was repurposed as a vertical column bearing the full weight of the torso, head, and arms. It was never redesigned for this purpose. The result: the epidemic of lower back pain that afflicts a significant fraction of the human population. An engineer designing a load-bearing vertical column from scratch would never produce the human spine. But evolution did not design from scratch. It modified what it had, and the modification was good enough -- good enough for our ancestors to survive and reproduce, even if it meant a lifetime of backaches for their descendants.
The panda's thumb. The giant panda has a sixth "finger" that it uses to grip bamboo stalks. This thumb is not a true thumb -- it is an enlarged wrist bone (the radial sesamoid) that has been co-opted for a gripping function. A true opposable thumb, like the one humans have, would be far more effective. But the panda's evolutionary lineage did not have an available path to a true thumb, because the existing five digits were already committed to other functions. Evolution satisficed: it found a structure that was good enough for bamboo gripping, even though it was not the optimal gripping solution. Stephen Jay Gould wrote a famous essay on this example, arguing that the panda's thumb is evidence of evolutionary tinkering rather than engineering.
Photosynthesis. The enzyme RuBisCO, which catalyzes the first step of carbon fixation in photosynthesis, is arguably the most important enzyme on Earth -- it is responsible for capturing carbon dioxide from the atmosphere and converting it into organic molecules. It is also remarkably inefficient. RuBisCO frequently binds oxygen instead of carbon dioxide, leading to a wasteful side reaction called photorespiration that consumes energy without producing useful products. Plants have evolved various workarounds (C4 photosynthesis, CAM photosynthesis), but the fundamental enzyme remains the same slow, error-prone molecule that evolved billions of years ago when Earth's atmosphere had far more carbon dioxide and far less oxygen. RuBisCO was good enough in the ancient atmosphere. It remains good enough today, barely, with extensive compensatory mechanisms. An engineer would have replaced it long ago. Evolution cannot.
The human eye's blind spot. As discussed in the chapter, the vertebrate retina is wired backwards, with nerve fibers lying in front of the photoreceptors. These fibers must exit the eye somewhere, creating the optic disc -- a patch of retina with no photoreceptors at all. This is the blind spot. You do not normally notice it because your brain fills in the gap using information from the surrounding retina and from the other eye. The brain's filling-in is itself a satisficing strategy -- it produces a "good enough" visual field rather than an accurate one.
Each of these examples tells the same story: evolution produces organisms that are good enough, not optimal. The gap between "good enough" and "optimal" is filled with historical accident, developmental constraint, and the impossibility of redesigning existing systems from the ground up.
Deep Blue and the Art of Not Thinking Too Hard
Now consider a system that could not be more different in its surface features: a chess engine.
When IBM's Deep Blue faced Garry Kasparov in their famous 1997 match, the machine could evaluate approximately 200 million positions per second. This sounds like brute-force optimization -- just look at every possible continuation and pick the best one. But 200 million positions per second, while staggering by human standards, is a pittance compared to the size of the chess game tree.
A typical chess position offers roughly 35 legal moves. If we look just 10 moves ahead (10 moves for each side, or 20 half-moves in total), the number of possible continuations is approximately 35 to the power of 20, which is roughly 7 times 10 to the 30th power. At 200 million positions per second, it would take Deep Blue on the order of a million billion years -- about 10 to the 15th power -- to evaluate all possible continuations for just ten moves of look-ahead. The universe is about 14 billion years old.
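The arithmetic can be checked in a few lines (the branching factor and throughput figures are the rough estimates from the text, not measured values):

```python
# Back-of-the-envelope game-tree arithmetic from the text.
BRANCHING = 35                     # typical legal moves per position
PLIES = 20                         # 10 moves per side = 20 half-moves
POSITIONS_PER_SEC = 200_000_000    # Deep Blue's rough throughput
SECONDS_PER_YEAR = 365.25 * 24 * 3600

continuations = BRANCHING ** PLIES
years = continuations / POSITIONS_PER_SEC / SECONDS_PER_YEAR

print(f"continuations ~ {continuations:.1e}")   # ~7.6e+30
print(f"years to enumerate ~ {years:.1e}")      # ~1.2e+15
```

Even granting Deep Blue its full speed, exhaustive evaluation of a mere 10-move horizon takes roughly a hundred thousand times the current age of the universe.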
Chess is finite. Every game must end. In principle, it could be solved -- every possible game could be evaluated, and the optimal move in every position could be determined. In practice, this is as impossible as counting every grain of sand on every beach on every planet in the observable universe. Chess is a finite game that might as well be infinite for any computational purpose.
So Deep Blue did what evolution does: it satisficed. It used several strategies to make the problem tractable:
Alpha-beta pruning. This technique eliminates branches of the search tree that cannot possibly lead to a result better than one already found. If you have already found a move that leads to winning a piece, you do not need to evaluate a move that demonstrably leads to losing a piece. Pruning reduces the effective branching factor dramatically -- from roughly 35 to roughly 6, making deep searches feasible. But pruning is satisficing, not optimization. It discards possibilities without evaluating them, accepting the risk that a discarded branch might have contained a better move.
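The pruning idea fits in a dozen lines. This is a minimal negamax sketch, not Deep Blue's actual code; `moves`, `apply_move`, and `evaluate` are stand-ins for a real game interface:

```python
# Minimal alpha-beta (negamax) sketch. The game interface is assumed:
# moves(state) -> list of legal moves, apply_move(state, m) -> new state,
# evaluate(state) -> heuristic score from the side to move's perspective.
def alphabeta(state, depth, alpha, beta, moves, apply_move, evaluate):
    if depth == 0 or not moves(state):
        return evaluate(state)        # heuristic guess, not ground truth
    for m in moves(state):
        # Negamax: a child's score is the negation of the opponent's score.
        score = -alphabeta(apply_move(state, m), depth - 1,
                           -beta, -alpha, moves, apply_move, evaluate)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                     # prune: the opponent will avoid this line
    return alpha
```

The `break` is the satisficing step: once a line is provably no better than one already in hand, the remaining moves in that branch are discarded unexamined.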
Heuristic evaluation. At the end of its search (the "leaf nodes" of the game tree), Deep Blue did not determine whether the position was a win, loss, or draw. It could not, because the game was far from over. Instead, it evaluated the position using a heuristic function -- a formula that estimated how good the position was based on features like material balance, king safety, piece mobility, and pawn structure. This evaluation was a guess, an approximation, a judgment of "roughly how good." It was satisficing: the evaluation function said "this position is good enough to consider further" or "this position is bad enough to abandon," without determining the objective truth.
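A toy version of such a heuristic counts only material, using the conventional point values -- a deliberate simplification, since real evaluation functions also weigh king safety, mobility, and pawn structure:

```python
# Toy heuristic evaluation: material balance only. The piece values are
# the conventional textbook ones; real engines use far richer features.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_eval(white_pieces, black_pieces):
    """Positive favors White, negative favors Black. A guess, not truth."""
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black
```

Even this crude score illustrates the point: it renders a verdict of "roughly how good" in constant time, which is all a leaf node of the search can afford.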
Opening books and endgame tablebases. For the opening and late endgame, Deep Blue used pre-computed databases rather than real-time search. The opening book contained established moves from grandmaster practice. The endgame tablebases contained solved positions for all combinations of a few remaining pieces. These are forms of recognition-primed decision making -- the engine recognizes a pattern and plays the known good move without searching.
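Structurally, both databases are just lookup tables consulted before any search begins. The sketch below is illustrative only -- the keys and moves are invented, not real book or tablebase entries:

```python
# Sketch of recognition-primed lookup: if the position is known,
# play the stored move with zero search. Entries are hypothetical.
OPENING_BOOK = {"startpos": "e2e4"}
ENDGAME_TABLEBASE = {"K+Q vs K, pattern 17": "Qb6"}

def lookup_move(position_key):
    """Return a precomputed move if the position is recognized, else None."""
    return OPENING_BOOK.get(position_key) or ENDGAME_TABLEBASE.get(position_key)
```

A hit here replaces millions of evaluated positions with one dictionary access -- pattern recognition substituting for search.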
The parallels to evolution are striking:
| Feature | Evolution | Chess Engine |
|---|---|---|
| Search space | All possible organisms | All possible game continuations |
| Size of search space | Effectively infinite | Effectively infinite (10^120) |
| Search method | Random variation + selection | Tree search with pruning |
| Evaluation criterion | Survive and reproduce | Heuristic position evaluation |
| Global optimum found? | No | No |
| Strategy | Satisfice (good enough to survive) | Satisfice (good enough to win) |
| Constraint | No foresight, no redesign | Limited computation, limited time |
Modern Engines: Smarter Satisficing, Not Less Satisficing
One might think that modern chess engines, which are far stronger than Deep Blue, have moved closer to optimization. They have not. They have become better satisficers.
Stockfish, the strongest traditional chess engine as of the 2020s, uses a more sophisticated evaluation function and a more efficient search algorithm than Deep Blue, but its fundamental approach is the same: search a subset of the game tree, evaluate positions heuristically, and select the move that looks best. Stockfish's superiority over Deep Blue comes not from searching more deeply (though it does search deeper, thanks to faster hardware and better pruning) but from evaluating positions more accurately -- a better satisficing heuristic, not a closer approach to optimization.
AlphaZero, the neural-network-based engine developed by DeepMind, took a different approach. Rather than using a hand-coded evaluation function, AlphaZero learned its evaluation function through millions of games of self-play. Its search algorithm, Monte Carlo Tree Search (MCTS), explores promising moves more deeply and unpromising moves more shallowly, guided by the neural network's learned intuition about which positions are worth investigating.
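At each step of its search, AlphaZero-style MCTS picks the child move maximizing a score that combines the learned value estimate with an exploration bonus shaped by the network's prior. The sketch below follows the published PUCT selection rule; the variable names and the `c_puct` constant are illustrative choices:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """AlphaZero-style selection score: exploitation (q, the mean value
    of simulations through this move) plus an exploration bonus that is
    large for moves the network likes (high prior) and decays as the
    move accumulates visits."""
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u
```

The prior steers effort toward moves that "look right" before any of them has been deeply examined -- exactly the role experience plays for a human expert.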
AlphaZero's approach is remarkably similar to expert human satisficing. Like Klein's fireground commanders, AlphaZero does not systematically evaluate all options. It generates candidates based on pattern recognition (the neural network's learned patterns, analogous to the expert's experience), explores the most promising ones, and stops when it has found a move that is good enough by its standards. The neural network serves as a rich "prior" (in the Bayesian sense from Chapter 10) that focuses the search on the most promising regions of the game tree.
The result is play of superhuman quality. AlphaZero defeats Stockfish decisively. But it achieves this not by getting closer to optimal play -- no engine is close to optimal play, because optimal play in chess is unknown -- but by satisficing more intelligently. It wastes less search effort on unpromising lines, evaluates positions more accurately, and finds good-enough moves faster.
The Shared Structure
Evolution and chess engines appear to have nothing in common. One is a blind, purposeless process operating over billions of years across the entire biosphere. The other is a deterministic algorithm running on silicon for a few minutes. One has no designer, no goal, no memory. The other is the product of decades of human engineering, with explicit objectives and sophisticated architecture.
Yet they share the same deep structure: both are search processes operating in spaces too vast to explore exhaustively, using heuristic evaluation instead of exact computation, accepting "good enough" solutions because optimal solutions are unattainable.
This convergence is not coincidental. It reflects a fundamental constraint on any search process in a large space. When the space is too big to search exhaustively -- and the interesting spaces almost always are -- the only viable strategy is satisficing. The specific form of satisficing varies: random variation and selection in evolution, tree search with pruning in chess engines, pattern matching in human experts. But the structure is the same: generate candidates, evaluate them approximately, keep the good-enough ones, discard the rest.
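The shared loop -- generate candidates, evaluate approximately, stop at good enough -- can be written down once, abstractly. All names here are illustrative; this is the skeleton common to the three strategies, not any one of them:

```python
def satisfice(generate, evaluate, threshold, budget):
    """Generic satisficing search: return the first candidate whose
    approximate score clears the threshold, or the best candidate seen
    if the budget runs out first."""
    best, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = generate()
        score = evaluate(candidate)   # approximate evaluation, not exact
        if score >= threshold:
            return candidate          # good enough -- stop searching
        if score > best_score:
            best, best_score = candidate, score
    return best                       # budget exhausted: best so far
```

Evolution instantiates `generate` as mutation and `evaluate` as survival; a chess engine instantiates them as move generation and heuristic scoring; a human expert, as pattern-cued recall and mental simulation. The loop is the same.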
The view from everywhere reveals that satisficing is not a compromise forced on inadequate systems. It is the universal strategy of intelligence -- natural or artificial -- confronting the irreducible complexity of the world.
Questions for Discussion
1. The chapter argues that evolution satisfices rather than optimizes. Can you think of any feature of any organism that appears to be genuinely optimal -- not just good, but the best possible design for its function? If so, what would it take to demonstrate that it is truly optimal rather than merely very good?
2. AlphaZero's approach to chess is often described as "more human-like" than traditional engines. In what specific ways does AlphaZero's satisficing resemble the recognition-primed decision making described in Section 12.9 of the chapter?
3. Both evolution and chess engines use evaluation functions -- criteria for judging how good a solution is. In evolution, the evaluation function is survival and reproduction. In chess, it is a heuristic assessment of position quality. How reliable is each evaluation function? What happens when the evaluation function is wrong?
4. The parallel between evolution and chess engines suggests a general pattern: any search process in a vast space must satisfice. Can you identify other examples of search processes that follow this same pattern? (Hint: consider how you search for information on the internet, how a scientist searches for a theory, or how a startup searches for a viable business model.)
5. If evolution could start over -- if it could redesign the vertebrate eye from scratch, or reroute the recurrent laryngeal nerve, or rebuild the human spine for bipedalism -- would the results be optimal? Or would a clean-sheet redesign still satisfice, just with different constraints? What does your answer imply about the relationship between satisficing and optimization?