Chapter 11 Exercises

How to use these exercises: Work through the parts in order. Part A builds recognition skills, Part B develops analysis, Part C applies concepts to your own domain, Part D requires synthesis across multiple ideas, Part E stretches into advanced territory, and Part M provides interleaved practice that mixes skills from all levels.

For self-study, aim to complete at least Parts A and B. For a course, your instructor will assign specific sections. For the Deep Dive path, do everything.


Part A: Pattern Recognition

These exercises develop the fundamental skill of recognizing cooperation and defection dynamics across domains.

A1. For each of the following scenarios, identify whether the situation is best modeled as a one-shot prisoner's dilemma, an iterated prisoner's dilemma, a tragedy of the commons, or a free rider problem. Explain your reasoning.

a) Two gas stations on the same highway intersection, each deciding whether to lower prices to attract customers.

b) Residents of an apartment building deciding whether to clean the shared hallway.

c) A tourist bargaining with a street vendor in a city the tourist will never revisit.

d) Two neighboring countries sharing a river, each deciding how much water to divert for irrigation.

e) Members of a group project at university, each deciding how much effort to contribute.

f) Two rival companies deciding whether to collude on prices or compete aggressively.

g) Commuters deciding whether to drive alone or use public transit, knowing that each additional car makes everyone's commute worse.

A2. For each of Axelrod's four tit-for-tat properties (nice, retaliatory, forgiving, clear), give one real-world example of a person, organization, or system that embodies that property and one that violates it. Explain the consequences of the violation.

A3. Identify the cooperation mechanism (direct reciprocity, indirect reciprocity, kin selection, group selection, or network reciprocity) most likely at work in each of the following:

a) Vampire bats sharing regurgitated blood with roost-mates who previously shared with them.

b) A business executive donating to charity and publicizing the donation.

c) Worker bees sacrificing their own reproduction to serve the queen.

d) A tight-knit rural village where neighbors help each other during harvest season.

e) Meerkats taking turns as sentinels, watching for predators while others forage.

f) An online marketplace where buyers and sellers rate each other after each transaction.

g) Slime mold cells aggregating to form a fruiting body, with some cells sacrificing themselves to form the stalk.

A4. The chapter describes several mechanisms by which cooperation can break down. For each of the following real-world breakdowns, identify which mechanism failed:

a) Overfishing of Atlantic cod in the 1990s, despite scientific warnings.

b) The collapse of a Ponzi scheme.

c) A formerly thriving open source project that dies after its lead maintainer burns out.

d) The escalation of trench warfare in World War I (before informal Christmas truces emerged).

e) The 2008 financial crisis, in which individual banks took risks that collectively destabilized the system.

A5. Match each of Ostrom's eight design principles to one of the following real-world commons management examples. Some principles may apply to more than one example.

a) Swiss alpine villages that limit the number of cows each family can graze on shared meadows.

b) The Maine lobster fishing community, where lobstermen enforce territorial boundaries and report violations to each other.

c) A community garden where plot holders vote on the garden's rules at annual meetings.

d) A neighborhood watch program that gives first-time trespassers a warning, second-time offenders a visit from the police, and third-time offenders a restraining order.

e) An irrigation cooperative in the Philippines that divides water allocation based on each farmer's field size and crop type.


Part B: Analysis

These exercises require deeper analysis of cooperation dynamics.

B1. The Evolution of Trust. Consider an iterated prisoner's dilemma with the following payoff matrix (per round):

  • Both cooperate: each gets 3 points
  • Both defect: each gets 1 point
  • One cooperates, one defects: cooperator gets 0, defector gets 5

a) Why is (Defect, Defect) the Nash equilibrium in the one-shot game? Show that neither player can improve by unilaterally changing their strategy.

b) Now suppose the game is iterated, with a 90% probability of another round after each one. Calculate the expected total payoff of playing "Always Cooperate" against a tit-for-tat player versus playing "Always Defect" against a tit-for-tat player. Which yields the higher expected payoff? (The simulation sketch after this exercise can help you check your arithmetic.)

c) What happens to the viability of cooperation if the probability of another round drops to 20%? Explain intuitively why the shadow of the future matters.

d) A strategy called "grim trigger" cooperates until the opponent defects once, then defects forever after. Compare grim trigger to tit-for-tat. Under what conditions is each strategy preferable? Why might tit-for-tat be better in noisy environments (where players sometimes defect by accident)?
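
A simulation can serve as a check on parts (b) through (d). The following Python sketch is scaffolding of our own devising -- the strategy representation, the noise parameter, and all function names are assumptions, not anything specified in the chapter. It plays matches whose length is governed by the continuation probability:

    import random

    # Per-round payoffs from the exercise, keyed by (my move, opponent's move).
    PAYOFF = {('C', 'C'): 3, ('D', 'D'): 1, ('C', 'D'): 0, ('D', 'C'): 5}

    def tit_for_tat(opp_moves):
        """Cooperate on the first round, then copy the opponent's last move."""
        return opp_moves[-1] if opp_moves else 'C'

    def grim_trigger(opp_moves):
        """Cooperate until the opponent defects once, then defect forever."""
        return 'D' if 'D' in opp_moves else 'C'

    def always_cooperate(opp_moves):
        return 'C'

    def always_defect(opp_moves):
        return 'D'

    def match(me, opponent, continue_prob, noise=0.0, rng=None):
        """Play one match. After every round, another follows with probability
        continue_prob. With probability noise, a player's intended move flips,
        which is useful for exploring part (d)."""
        rng = rng or random.Random()
        seen_by_me, seen_by_opp = [], []  # each side's record of the other's moves
        score = 0
        while True:
            a, b = me(seen_by_me), opponent(seen_by_opp)
            if rng.random() < noise:
                a = 'D' if a == 'C' else 'C'
            if rng.random() < noise:
                b = 'D' if b == 'C' else 'C'
            score += PAYOFF[(a, b)]
            seen_by_me.append(b)
            seen_by_opp.append(a)
            if rng.random() >= continue_prob:
                return score

    def expected_score(me, opponent, continue_prob, noise=0.0, trials=50000):
        rng = random.Random(0)
        return sum(match(me, opponent, continue_prob, noise, rng)
                   for _ in range(trials)) / trials

    # Part (b): both strategies against tit-for-tat with continue_prob = 0.9.
    # Part (c): rerun with continue_prob = 0.2. Part (d): add noise=0.05 and
    # compare tit_for_tat and grim_trigger against a noisy opponent.
    print(expected_score(always_cooperate, tit_for_tat, 0.9))
    print(expected_score(always_defect, tit_for_tat, 0.9))

For the hand calculation in part (b), note that with continuation probability p the expected number of rounds is 1/(1 - p).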

B2. Blockchain Incentive Analysis. Consider a simplified blockchain network with 1,000 miners.

a) Explain why a miner who includes a fraudulent transaction in a block is likely to lose the mining reward. What makes honesty the Nash equilibrium? (A back-of-the-envelope sketch follows this exercise.)

b) What happens to the incentive structure if a single entity controls 51% of the mining power? Why does this undermine cooperation?

c) Compare the cooperation mechanism in blockchain (mechanism design / incentive compatibility) to the cooperation mechanism in open source (indirect reciprocity / reputation). What structural differences explain why blockchain can work with anonymous participants while open source typically relies on known identities?

d) Some critics argue that blockchain replaces "trust in institutions" with "trust in code." Is this a fair characterization? What must participants still trust in a blockchain system?
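
For part (a), a deliberately oversimplified expected-value comparison can anchor the analysis. Every number below is a hypothetical placeholder; the reward, cost, and acceptance probabilities are assumptions for illustration, not parameters of any real network:

    def expected_profit(p_win, reward, cost, p_accepted=1.0):
        """Expected profit from attempting one block. The miner wins the race
        with probability p_win; the network accepts the block with probability
        p_accepted (near 1 for honest blocks, near 0 for blocks containing
        fraudulent transactions, which honest nodes reject)."""
        return p_win * p_accepted * reward - cost

    # Hypothetical numbers: one of 1,000 equal miners, reward 6.25, cost 0.004.
    print(expected_profit(p_win=0.001, reward=6.25, cost=0.004))                  # honest: +0.00225
    print(expected_profit(p_win=0.001, reward=6.25, cost=0.004, p_accepted=0.0))  # fraud: -0.004

The point is structural: because honest nodes refuse to build on an invalid block, fraud converts the mining cost into a pure loss, which is what makes honesty individually rational.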

B3. Bacterial Cooperation and Cheater Dynamics. In a population of cooperating bacteria producing a public good:

a) Explain why a cheater mutant (one that does not produce the public good but still benefits from it) has a reproductive advantage. This is the biological analogue of what game-theoretic concept?

b) In a well-mixed liquid culture, cheaters tend to take over and eventually destroy the public good. In a spatially structured environment (like a biofilm), cooperators can persist. Explain why spatial structure changes the outcome, using the concept of network reciprocity. (A toy model follows this exercise.)

c) Some bacterial species produce toxins that specifically harm non-cooperating cells. How does this policing mechanism parallel Ostrom's design principles? Which principle does it most closely resemble?
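
For part (b), a toy model makes network reciprocity concrete. The sketch below contrasts a well-mixed replicator model with an imitate-your-best-neighbor process on a ring of cells. The benefit and cost values, the ring topology, and the update rule are all illustrative assumptions (one simple way to model spatial structure, not the chapter's model):

    B, C = 3.6, 1.0   # benefit a cooperator produces, cost of producing it
                      # (illustrative values chosen so spatial clusters can expand)

    # Well-mixed culture: every cell enjoys the average public good B*x, but
    # only cooperators pay C, so defectors are fitter by C at every frequency.
    def well_mixed(x, steps=200, dt=0.1):
        for _ in range(steps):
            x += dt * x * (1 - x) * (-C)   # replicator: dx/dt = x(1-x)(f_C - f_D)
        return x

    # Spatial structure: a ring where each cooperator shares its benefit
    # equally with itself and its two neighbors.
    def fitness(ring):
        n, share = len(ring), B / 3
        return [sum(share for j in (i - 1, i, (i + 1) % n) if ring[j] == 'C')
                - (C if ring[i] == 'C' else 0)
                for i in range(n)]

    def spatial_step(ring):
        n, f = len(ring), fitness(ring)
        # Each cell imitates the highest-scoring member of {left, self, right};
        # ties are resolved in favor of keeping the current strategy.
        return [ring[max((i - 1, i, (i + 1) % n), key=lambda j: (f[j], j == i))]
                for i in range(n)]

    ring = ['D'] * 20 + ['C'] * 10 + ['D'] * 20   # one cluster of cooperators
    for _ in range(40):
        ring = spatial_step(ring)

    print(f"well-mixed cooperator fraction: {well_mixed(0.5):.3f}")              # ~0.000
    print(f"spatial cooperator fraction:    {ring.count('C') / len(ring):.3f}")  # expands

In the ring, boundary cooperators are propped up by cooperative neighbors and out-earn the defectors beside them, so clusters persist (here they expand to fixation); in the well-mixed culture, defectors out-earn cooperators at every frequency, so cheaters always win.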

B4. Applying Ostrom's Principles. Choose a common pool resource problem you are personally familiar with (e.g., shared office supplies, a neighborhood park, a shared computing cluster, a communal fridge). Evaluate it against Ostrom's eight design principles:

a) Which principles are currently in place?

b) Which are absent?

c) For each absent principle, propose a specific institutional change that would implement it.

d) Predict whether your proposed changes would improve cooperation. What obstacles would you face in implementing them?


Part C: Application to Your Own Domain

These exercises connect cooperation concepts to your area of expertise.

C1. Identify a cooperation problem in your field of study or professional domain. Describe it in terms of the prisoner's dilemma framework:

a) Who are the "players"?

b) What does "cooperate" and "defect" mean in this context?

c) What are the payoffs for each combination of strategies?

d) Is the game one-shot or iterated? What determines the shadow of the future?

e) What mechanisms currently support cooperation? Which of Nowak's five mechanisms are at work?

f) What threatens to undermine cooperation? What would a defection cascade look like?

C2. Design a mechanism (in the spirit of mechanism design) that would improve cooperation in the situation you described in C1. Your mechanism should:

a) Make cooperation the Nash equilibrium (the individually rational strategy)

b) Include monitoring and graduated sanctions

c) Allow for forgiveness and recovery from mistakes

d) Be implementable with available resources

C3. Identify a "tragedy of the commons" in your domain. Evaluate it against Ostrom's eight design principles. Which principles are present? Which are missing? Would implementing the missing principles help?


Part D: Synthesis

These exercises require integrating ideas across multiple chapters.

D1. Cooperation and Emergence. Chapter 3 introduced emergence -- system-level properties arising from local interactions among simple agents. Chapter 11 argues that cooperation is an emergent equilibrium.

a) In what sense is cooperation in a bacterial colony an emergent property? What are the "simple local rules" and what is the "system-level behavior"?

b) How does the emergence of cooperation in open source differ from the emergence of cooperation in bacteria? What role does intentionality play, and does it change the structural analysis?

c) Can cooperation "emerge" in a one-shot game? Why or why not? What structural feature of iterated games makes emergence possible?

D2. Cooperation and Distributed Systems. Chapter 9 examined centralized versus distributed architectures. Chapter 11 examines cooperation among distributed agents.

a) Both chapters discuss blockchain. What question does Chapter 9 ask about blockchain, and what question does Chapter 11 ask? How do the answers complement each other?

b) Ostrom's design principles are a hybrid of centralized and distributed elements. Identify which principles are centralized (require collective agreement or enforcement) and which are distributed (operate through local observation and response).

c) The internet was designed to survive nuclear attack (Chapter 9). MAD (mutual assured destruction) was designed to prevent nuclear attack (Chapter 11). Compare these two design philosophies. What structural principle do they share?

D3. The Bayesian Cooperator. Chapter 10 introduced Bayesian reasoning as optimal belief updating.

a) Explain how a tit-for-tat player is performing Bayesian updating. What is the prior? What is the evidence? What is the posterior?

b) In indirect reciprocity, cooperation depends on reputation -- on the community's collective assessment of a player's cooperativeness. How is this collective assessment a Bayesian inference? (A minimal formalization follows this exercise.)

c) Ostrom's graduated sanctions can be interpreted as a Bayesian response to evidence of defection: mild sanctions for ambiguous signals, severe sanctions for unambiguous defection. Develop this interpretation.
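
For part (b), one concrete formalization (our choice, not the chapter's) treats reputation as the posterior mean of a Beta-Binomial model:

    def reputation(prior_coop=1, prior_defect=1, cooperations=0, defections=0):
        """Beta-Binomial reputation: start from a Beta(prior_coop, prior_defect)
        prior over an agent's probability of cooperating, then fold in each
        observed interaction. The posterior mean is the community's current
        estimate that the agent will cooperate next time."""
        a = prior_coop + cooperations
        b = prior_defect + defections
        return a / (a + b)

    # A newcomer under a uniform prior, versus an agent seen to cooperate
    # in 9 of 10 observed interactions:
    print(reputation())                              # 0.5
    print(reputation(cooperations=9, defections=1))  # ~0.833

The same machinery suggests a reading of part (c): a single ambiguous defection barely moves the posterior of an agent with a long cooperative history, which is exactly when a mild sanction is appropriate, while repeated unambiguous defections swamp the prior and justify severe sanctions.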

D4. Cooperation and Gradient Descent. Chapter 7 introduced the fitness landscape and local optima.

a) Describe the "cooperation landscape" for a group of agents playing the iterated prisoner's dilemma. Where are the local optima? Where is the global optimum?

b) The tragedy of the commons is a social local optimum. Explain why individual agents are trapped at this local optimum -- what prevents them from escaping it and reaching the better cooperative equilibrium?

c) Ostrom's design principles can be understood as mechanisms for reshaping the payoff landscape so that individual incentives lead to the cooperative outcome rather than the tragedy. Explain this interpretation.


Part E: Advanced Challenges

These exercises push beyond the chapter's material into deeper or more speculative territory.

E1. Robert Axelrod showed that tit-for-tat won both of his iterated prisoner's dilemma tournaments. But subsequent research has shown that two alternatives can outperform strict tit-for-tat in noisy environments: "generous tit-for-tat," a variant that occasionally cooperates even after the opponent defects, and "win-stay, lose-shift," a distinct strategy that repeats its previous move after a good payoff and switches after a bad one. Research one of these strategies and explain why it outperforms tit-for-tat when there is some probability of accidental defection (noise).

E2. The chapter discusses five mechanisms for the evolution of cooperation. A sixth mechanism, sometimes called "tag-based cooperation," has been proposed: agents cooperate with others who share a visible marker (a "tag") and defect against those with different tags. Analyze the strengths and weaknesses of this mechanism. Under what conditions does it promote cooperation? Under what conditions does it lead to discrimination and intergroup conflict?

E3. Mechanism design is described as "reverse game theory" -- designing the game so that the desired behavior is the Nash equilibrium. Choose a real-world institutional design problem (carbon pricing, organ donation policy, traffic management, online content moderation) and design a mechanism that would align individual incentives with socially desirable outcomes. Evaluate your mechanism against Ostrom's principles.

E4. The chapter notes that Hardin proposed only two solutions to the tragedy of the commons: privatization and government regulation. Ostrom showed a third way: community self-governance. Analyze whether a fourth way exists: algorithmic governance, where smart contracts or automated monitoring systems enforce cooperation rules. What are the advantages and disadvantages of algorithmic governance compared to Ostrom's community-based approach?


Part M: Mixed Practice (Interleaved Review)

These exercises mix concepts from across the book (primarily Chapters 7-11) to build integrated understanding.

M1. A population of organisms is evolving on a fitness landscape (Ch. 7). Some organisms cooperate (share food, warn others of predators) and some defect (hoard food, stay silent). The fitness landscape changes depending on how many cooperators are in the population. Explain how this creates a feedback loop (Ch. 2) between individual behavior and the fitness landscape.

M2. An online marketplace uses a reputation system (indirect reciprocity, Ch. 11) where buyers and sellers rate each other. A seller's rating is a Bayesian posterior (Ch. 10) based on accumulated evidence from past transactions. The platform distributes trust assessment across all users rather than centralizing it in a single authority (Ch. 9). Explain how all four of these concepts interact to create a functioning cooperation system.

M3. Consider the signal detection problem (Ch. 6) in monitoring a commons. The monitoring system must detect defectors (signal) against a background of normal use (noise). The base rate of defection is low. Using Bayesian reasoning (Ch. 10), explain why a monitoring system with high sensitivity but moderate specificity might generate too many false accusations, undermining the cooperative equilibrium.
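
The numbers below are hypothetical, but a one-line application of Bayes' rule shows how a low base rate inflates false accusations:

    def p_defector_given_flag(base_rate, sensitivity, specificity):
        """P(actual defector | the monitor flags them), by Bayes' rule."""
        true_pos = sensitivity * base_rate
        false_pos = (1 - specificity) * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    # Hypothetical values: 2% of use is defection, 95% sensitivity, 90% specificity.
    print(p_defector_given_flag(base_rate=0.02, sensitivity=0.95, specificity=0.90))
    # -> ~0.16: about five of every six accusations would land on a cooperator.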

M4. During the Cold War, each side explored new weapons technologies (exploration, Ch. 8) while maintaining existing deterrence systems (exploitation, Ch. 8). Explain the explore/exploit tradeoff in the context of arms race dynamics. How did arms control treaties attempt to constrain exploration while preserving the cooperative equilibrium described in Chapter 11?

M5. Compare the governance of the Linux kernel (open source, Ch. 11) with the governance of a coral reef ecosystem (mutualism, Ch. 11) and the governance of a blockchain network (mechanism design, Ch. 11). For each, identify whether the governance architecture is primarily centralized, distributed, or hybrid (Ch. 9), and explain which of Nowak's five cooperation mechanisms is most important.