Exercises: Adversarial Collaboration and Other Tools

Part A: Comprehension and Application

A.1. For each of the nine tools, identify the primary failure mode(s) it addresses and one limitation that prevents it from being a complete solution. Use the Tool-Failure Mode Matrix to check your answers.

A.2. Explain the difference between pre-registration and registered reports. What additional problem does the registered report format solve that pre-registration alone cannot?

A.3. The chapter argues that registered reports are "the single most effective institutional innovation for reducing publication bias." Evaluate this claim using the evidence presented. What would need to be true for this claim to be wrong?

A.4. Red teams have been used extensively in the U.S. military, yet Chapter 28 documented repeated military failures. Explain this paradox using the distinction between tool effectiveness and institutional culture.

A.5. The overcorrection warning identifies specific risks for each tool. For three tools of your choice, describe (a) the overcorrection that could occur and (b) a mechanism for detecting the overcorrection early.

Part B: Analysis

B.1. Using the Epistemic Health Checklist scores for your field (Chapter 32), identify the three lowest-scoring dimensions. For each, recommend a specific tool from this chapter that would address the vulnerability. Justify your recommendations.

B.2. Adversarial collaboration requires both parties to participate voluntarily. Apply the incentive framework (Chapter 11): under what conditions would a defender of the consensus agree to an adversarial collaboration? Under what conditions would they refuse? What does this tell you about when the tool is and isn't useful?

B.3. Compare independent replication funding to prize-based science. Both change what is incentivized. How do they differ in what they reward? Which is more likely to be adopted in your field, and why?

B.4. Post-publication peer review (PubPeer) has caught numerous errors but also enables anonymous harassment. Design a post-publication review system that preserves the error-catching capability while mitigating the harassment risk. What trade-offs are unavoidable?

Part C: Synthesis and Evaluation

C.1. If you could implement only two of the nine tools in your field, which two would you choose? Justify your selection using: (a) which failure modes are most active in your field, (b) which tools address those failure modes most effectively, and (c) which tools are most feasible to implement given your field's current institutional structure.

C.2. The chapter argues that "no single tool addresses all failure modes." Evaluate whether any combination of tools could produce a field that is structurally immune to error. Is error-free knowledge production achievable, or is "less wrong" the best we can hope for?

C.3. Design a tenth tool not on this list — an institutional mechanism for reducing error that addresses a failure mode that none of the nine tools fully covers. Describe what it does, what failure mode it targets, where it could be tried, and what its overcorrection risk would be.

Part D: Mixed Practice (Interleaved)

D.1. A medical research institution wants to reduce false-positive findings in its clinical research program. Create a comprehensive reform proposal: use the Red Flag Scorecard (Chapter 31) to identify the most common problems, the Epistemic Health Checklist (Chapter 32) to diagnose the institutional vulnerabilities, and the tools from this chapter to design solutions.

D.2. A technology company is making predictions about when its product will achieve a capability milestone (similar to autonomous vehicle timelines from Chapter 29). Design a correction mechanism — drawing from the nine tools in this chapter — that would prevent the company from repeating the tech industry's timeline prediction failures. Which tools apply, and which would need adaptation for a corporate context?