Case Study: Registered Reports in Action — Psychology's Most Effective Reform

The Problem

By 2012, psychology faced a credibility crisis. The replication rate of published findings would later be estimated at 36-39% (Open Science Collaboration, 2015). Publication bias was rampant: journals published almost exclusively positive results, creating a literature that looked far more consistent and reliable than the underlying reality.

The structural problem was clear: journals made publication decisions based on results. This created three cascading incentives:

  1. Researchers were incentivized to find positive results (p-hacking, researcher degrees of freedom).

  2. Negative results were not submitted because they would not be published (the file drawer effect).

  3. The published literature was systematically biased toward positives, creating a false picture of the evidence base, as the simulation sketched below illustrates.
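The cascade can be made concrete with a toy simulation. This is a minimal sketch under assumed parameters: the share of true hypotheses, the statistical power, and the effective false-positive rate under flexible analysis are all stipulated for illustration, not taken from the case study.

```python
import random

random.seed(1)

# Toy parameters, assumed for illustration (not measured values).
N = 10_000          # hypotheses tested field-wide
P_TRUE = 0.5        # share of tested hypotheses that are actually true
POWER = 0.8         # chance a true effect reaches p < .05
ALPHA = 0.05        # chance a null effect reaches p < .05 with honest analysis
ALPHA_HACKED = 0.30 # assumed effective false-positive rate with flexible analysis

def study(alpha: float) -> tuple[bool, bool]:
    """One study: (hypothesis is true, result came out 'significant')."""
    is_true = random.random() < P_TRUE
    significant = random.random() < (POWER if is_true else alpha)
    return is_true, significant

honest = [study(ALPHA) for _ in range(N)]
hacked = [study(ALPHA_HACKED) for _ in range(N)]

def describe(label: str, results: list, publish_all: bool) -> None:
    # Traditional journals publish only significant results;
    # registered reports publish everything accepted at Stage 1.
    published = results if publish_all else [r for r in results if r[1]]
    positives = [r for r in published if r[1]]
    false_pos = sum(1 for is_true, _ in positives if not is_true)
    print(f"{label}: {len(published):>5} papers, "
          f"{len(positives) / len(published):4.0%} positive, "
          f"{false_pos / len(positives):4.0%} of positives are false")

describe("Traditional, honest analysis  ", honest, publish_all=False)
describe("Traditional, flexible analysis", hacked, publish_all=False)
describe("Registered reports            ", honest, publish_all=True)
```

Under the results-based filter, the published literature is all positives and flexible analysis increases a researcher's paper count; under the registered-report filter, the same underlying science yields a complete literature, and flexible analysis no longer improves the odds of publication, because those odds were fixed at Stage 1.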

The Innovation

Chris Chambers, a psychologist at Cardiff University, championed the registered report format: a two-stage review process where the journal decides to accept or reject the study before data are collected — based on the importance of the question and the rigor of the methodology. If accepted at Stage 1, the journal commits to publishing regardless of the results.

The innovation is structurally elegant because it addresses the root cause rather than a symptom:

| Problem | Traditional Publishing | Registered Reports |
|---|---|---|
| Publication bias | Publication depends on results → bias toward positives | Publication depends on question and method → no results bias |
| P-hacking | Analysis can be adjusted after seeing data → inflated findings | Analysis is pre-registered and reviewed before data exist → no flexibility |
| File drawer | Negative results aren't submitted → literature is incomplete | All accepted studies are published → literature is complete |
| Reviewer bias | Reviewers judge results → biased toward exciting findings | Reviewers judge method → biased toward rigorous methods |
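The contrast in the table reduces to two decision functions with different inputs. The following is an illustrative sketch, not any journal's actual workflow; the field names and thresholds are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    question_importance: float  # reviewer rating of the question, 0-1 (illustrative)
    method_rigor: float         # reviewer rating of the design, 0-1 (illustrative)
    results_exciting: bool      # knowable only after data collection

def traditional_decision(s: Submission) -> bool:
    # Results enter the decision, so the filter selects for excitement.
    return s.results_exciting and s.method_rigor > 0.5

def registered_report_stage1(s: Submission) -> bool:
    # Results do not exist yet; the filter can only select on question and method.
    return s.question_importance > 0.5 and s.method_rigor > 0.7

def registered_report_stage2(accepted_at_stage1: bool, followed_protocol: bool) -> bool:
    # Stage 2 checks adherence to the accepted plan, never the results.
    return accepted_at_stage1 and followed_protocol

# The same rigorous null study, judged both ways:
s = Submission(question_importance=0.9, method_rigor=0.8, results_exciting=False)
print(traditional_decision(s))                  # False: filtered out for dull results
print(registered_report_stage2(
    registered_report_stage1(s), True))         # True: published regardless of results
```

The structural point is visible in the function signatures alone: `results_exciting` never appears in either registered-report function, so no bias toward exciting findings can enter the decision.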

The Evidence

The data on registered reports is striking:

Null result rate: Approximately 55-60% of registered reports yield null or mixed results, compared with roughly 5-10% of traditional publications. This doesn't reflect worse science; it reflects a more honest distribution of findings.
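That null rate is close to what honest reporting would predict. Here is a back-of-envelope check, assuming (purely for illustration) that half of tested hypotheses are true and studies run at 80% power:

```python
p_true = 0.5   # assumed share of tested hypotheses that are true
power = 0.8    # assumed statistical power
alpha = 0.05   # conventional significance threshold

p_positive = p_true * power + (1 - p_true) * alpha  # 0.425
p_null = 1 - p_positive                             # 0.575

print(f"Expected positive-result rate: {p_positive:.1%}")  # 42.5%
print(f"Expected null-result rate:     {p_null:.1%}")      # 57.5%
```

Under these assumptions the expected null rate of 57.5% lands inside the observed 55-60% band, consistent with the claim that registered reports surface the honest distribution of findings rather than worse science.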

Quality indicators: Registered reports show higher methodological quality on multiple dimensions: larger sample sizes, more rigorous designs, clearer analysis plans. The pre-commitment to methodology improves the research itself, not just the publication filter.

Adoption: Over 300 journals across multiple disciplines have adopted the registered report format since its introduction in 2013. Adoption is growing but remains a small fraction of total publications.

What It Teaches About Institutional Design

Registered reports succeed because they change the incentive structure rather than relying on individual behavior change:

  1. They don't ask researchers to be more honest — they make honesty the easiest path. When the publication decision is made before results exist, there is no incentive to p-hack, no incentive to suppress null results, and no incentive to engage in questionable research practices.

  2. They don't ask journals to be more fair — they make fairness structural. When reviewers evaluate the question and method without seeing results, their biases toward exciting findings are eliminated.

  3. They don't ask the field to value replication — they make replication valuable. When null results are published alongside positive results, the literature naturally becomes more accurate.

This is the key lesson: the most effective institutional reforms change structures, not cultures. Asking people to behave differently within the same incentive structure rarely works. Changing the incentive structure so that good behavior is the path of least resistance works reliably.

Limitations and Ongoing Challenges

Adoption remains limited. Most journals still use traditional review. Most researchers still publish traditionally. The registered report format has proven effective where it's been adopted, but widespread adoption faces institutional resistance — from journals that prefer the editorial flexibility of results-based review, from researchers who prefer the analytical flexibility of traditional publishing, and from fields where exploratory research dominates.

Question selection bias. There is early evidence that editors may be more likely to accept registered reports on "safe" questions — topics where the methodology is well-established and the question is clearly important. This could create a bias against novel or risky research questions, potentially suppressing exactly the kind of research that produces breakthroughs.

The overcorrection risk. If registered reports became the only accepted format — if exploratory research could not be published unless pre-registered — the format would suppress the open-ended exploration that often produces the most important discoveries. The right balance is registered reports for confirmatory research and traditional formats for exploratory research, with clear labeling of which is which.

Analysis Questions

1. The registered report format changes the incentive structure of publishing. Using the incentive structures framework (Chapter 11), identify three new incentives that registered reports create. Could any of these new incentives produce their own failure modes?

2. Registered reports have been adopted by over 300 journals in ~10 years. Using the Correction Speed Model (Chapter 22), predict the timeline for registered reports becoming the standard (not just available) format in psychology. What variables determine the speed of adoption?

3. Could the registered report format be adapted for non-scientific fields? Design a "registered report" equivalent for policy analysis, journalism, or business strategy. What would need to change? What principles would remain the same?