Chapter 4: Case Study 1 — The Hot Hand Debate
How Even Scientists Can Be Fooled by Patterns in Noise
Overview
Few findings in behavioral economics have generated as much scholarly drama as the hot hand — the intuitive belief, held by athletes, coaches, gamblers, and sports fans worldwide, that a person who has succeeded recently is more likely to succeed again. The scientific story of the hot hand is not simply a story about a cognitive illusion. It is a story about what happens when brilliant researchers study human bias without fully accounting for their own, and what it means for the scientific enterprise when a landmark finding turns out to be more complicated than anyone realized.
This case study traces the arc from the original 1985 study to the 2016 reanalysis that changed everything — and draws lessons about pattern detection in noise that apply far beyond sports.
Part I: The Original Study (1985)
Setting
In the early 1980s, Thomas Gilovich was a graduate student at Stanford, working under Amos Tversky — one of the most influential cognitive psychologists in history, co-developer with Daniel Kahneman of prospect theory and the research program that became behavioral economics. Tversky was deeply interested in how humans misperceive random processes. He had spent years documenting that human beings are systematically bad at generating and recognizing truly random sequences: they produce sequences with too much alternation (too few streaks), and they judge genuinely random sequences, whose natural streaks look suspicious, to be non-random.
The hot hand seemed like a natural application: here was a widely held belief, pervasive in professional sports, that a player in a "zone" shoots better than their average. If Tversky's intuition was right, this belief should evaporate when exposed to rigorous data.
The Study Design
Gilovich, Vallone, and Tversky obtained shooting records for the Philadelphia 76ers across a full NBA season. For each player, they calculated:
- The player's overall shooting percentage (their base rate)
- Their shooting percentage immediately after making one, two, or three consecutive shots
- Their shooting percentage immediately after missing one, two, or three consecutive shots
If a hot hand were real, shooting percentage following a streak of makes should be higher than the base rate. If a cold hand were real, shooting percentage following a streak of misses should be lower.
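In code, the core of this comparison is a handful of conditional frequencies. The sketch below is an illustrative reconstruction, not the authors' code, and the shot record shown is invented for the example:

```python
def conditional_rates(shots, max_run=3):
    """Base rate, plus hit rate immediately after runs of 1..max_run
    consecutive makes (1) or misses (0), in the style of the 1985 analysis."""
    rates = {"base": sum(shots) / len(shots)}
    for outcome in (1, 0):
        for run in range(1, max_run + 1):
            hits = total = 0
            for i in range(run, len(shots)):
                # Does a run of `run` identical outcomes end just before shot i?
                if all(shots[i - j] == outcome for j in range(1, run + 1)):
                    total += 1
                    hits += shots[i]
            label = f"after_{run}_{'makes' if outcome else 'misses'}"
            rates[label] = hits / total if total else None  # None: no such run
    return rates

# A made-up 20-shot record (1 = make, 0 = miss), purely for illustration.
record = [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1]
print(conditional_rates(record))
```

If shooting really were an independent process at the player's base rate, each conditional entry should hover near that base rate in a long enough record. As Part II shows, that is exactly the comparison that turns out to be subtly biased.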
They also examined free throw data from the Boston Celtics — a cleaner test because free throws are taken under standardized conditions. And they ran controlled shooting experiments with Cornell University basketball players.
Finally, they surveyed professional basketball players and fans about their beliefs: did players get "hot"? Players and fans overwhelmingly said yes.
The Results
The data, as Gilovich and colleagues analyzed it, showed essentially no hot hand effect. Player shooting percentages after streaks of makes were not significantly higher than their base rates. In several cases, shooting percentages were slightly lower following streaks, though not significantly so. The free throw data was similarly flat.
The controlled shooting experiments with Cornell players also failed to find a hot hand effect.
The conclusion was stark: "Basketball players and fans alike believe that a player's chances of hitting a shot are greater following a hit than following a miss (the 'hot hand'). However, an analysis of the shooting records of the Philadelphia 76ers... provided no evidence for a positive correlation between the outcomes of successive shots."
The belief in the hot hand, they concluded, arises from the "misperception of random sequences" — specifically, the human tendency to expect short sequences to reflect the properties of long-run probabilities (what Tversky called the "law of small numbers"). Streaks are normal features of random processes, but human pattern detection reads them as meaningful signals.
The Impact
The Gilovich-Vallone-Tversky paper became one of the most-cited papers in all of behavioral economics and cognitive psychology. It was widely taught as a demonstration of how intuition fails in probabilistic environments. It appeared in introductory psychology and economics textbooks. It was cited by journalists writing about sports strategy, financial markets, and gambling.
For thirty years, it was received wisdom: the hot hand is an illusion.
Part II: The Statistical Error Nobody Caught
A Subtle Problem in the Data
In 2015, Joshua Miller — a behavioral economist at Bocconi University in Italy — was working with Adam Sanjurjo, also an economist, on a different problem: understanding the law of small numbers more precisely. They were exploring what happens mathematically when you analyze sequential data in finite sequences.
And they noticed something unexpected.
The problem is subtle, so it is worth explaining carefully.
Suppose you flip a fair coin ten times and record each result. Now suppose you want to know: "What is the probability of getting heads, given that the previous flip was heads?" The intuitive answer is 50% — coin flips are independent, so the previous flip shouldn't matter.
But here is where the mathematics gets tricky. The natural procedure is to compute, within each finite sequence, the fraction of flips immediately following a heads that are themselves heads, and then to average that fraction across sequences (or, in the basketball case, across players). The flips are independent, yet this average comes out below 50%. The reason is a selection effect in how successes and failures get weighted: heads-after-heads outcomes cluster in streaky sequences, where several of them share the weight of a single sequence average, while tails-after-heads outcomes are spread across many sequences that each count in full. The shortest case makes this vivid: take all three-flip sequences with at least one flip that follows a heads, and the within-sequence fraction of heads-after-heads averages to 5/12, not 1/2. The same effect, smaller but still real, pulls the estimate below the truth in a finite sequence of any length.
Miller and Sanjurjo calculated this bias and found it was not trivially small. For the kind of sample sizes used in the original hot hand study — 100 shots per player, per season — the bias was enough to make a real hot hand effect look like no effect at all, or even a slight negative correlation.
In plain terms: the statistical procedure Gilovich and colleagues used was biased against finding a hot hand, even if one existed.
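The bias is easy to check by brute force. The following sketch (illustration code written for this chapter, not Miller and Sanjurjo's own) first enumerates all eight three-flip sequences to recover the exact 5/12 figure, then simulates 100-flip sequences, roughly the scale of one player-season of shots, to show the estimate after streaks of three heads settling near 0.46 rather than 0.50:

```python
import itertools
import random

def rate_after_streak(seq, k=1):
    """Within one sequence: fraction of flips that are heads (1),
    among flips immediately preceded by k consecutive heads.
    Returns None if no flip qualifies."""
    hits = total = 0
    for i in range(k, len(seq)):
        if all(seq[i - j] == 1 for j in range(1, k + 1)):
            total += 1
            hits += seq[i]
    return hits / total if total else None

# Exact enumeration: all three-flip sequences, streaks of one.
rates = [rate_after_streak(s) for s in itertools.product([0, 1], repeat=3)]
rates = [r for r in rates if r is not None]
print(sum(rates) / len(rates))  # 0.4166... = 5/12, not 0.5

# Monte Carlo at the study's scale: 100 fair flips, streaks of three.
random.seed(0)
sims = []
for _ in range(20_000):
    seq = [random.randint(0, 1) for _ in range(100)]
    r = rate_after_streak(seq, k=3)
    if r is not None:
        sims.append(r)
print(sum(sims) / len(sims))  # about 0.46: a fair coin looks "cold"
```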
The Reanalysis
Miller and Sanjurjo corrected for this bias and reanalyzed the original data from the 1985 paper. When they did, the hot hand reappeared.
Shooting percentages following a streak of three made shots were meaningfully higher than base rates — on the order of 10 to 13 percentage points higher in some players' data. This was not a trivial effect. It was statistically significant once the sampling bias was corrected.
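What does "correcting for the bias" look like in practice? One bias-robust approach in the spirit of Miller and Sanjurjo's analysis is a permutation test: shuffling a player's own shot record preserves both the number of makes and the finite-sample streak bias, so the observed after-streak rate is compared against shuffled versions of the same record rather than against the raw base rate. A minimal sketch, with hypothetical helper names, assuming a 0/1 shot record:

```python
import random

def rate_after_streak(shots, k=3):
    """Hit rate on attempts immediately following k consecutive hits."""
    hits = total = 0
    for i in range(k, len(shots)):
        if all(shots[i - j] == 1 for j in range(1, k + 1)):
            total += 1
            hits += shots[i]
    return hits / total if total else None

def hot_hand_pvalue(shots, k=3, n_perms=10_000, seed=0):
    """One-sided permutation p-value for a hot hand. Each shuffle keeps
    the same makes and misses, so the null distribution carries the same
    streak-selection bias as the observed statistic; the comparison is
    bias-corrected by construction."""
    observed = rate_after_streak(shots, k)
    if observed is None:
        return None  # the record contains no streak of length k
    rng = random.Random(seed)
    pool = list(shots)
    extreme = valid = 0
    for _ in range(n_perms):
        rng.shuffle(pool)
        r = rate_after_streak(pool, k)
        if r is not None:
            valid += 1
            extreme += r >= observed
    return extreme / valid
```

A small p-value then says the player's after-streak shooting beats what lucky arrangements of the same makes and misses would produce, which is the question the naive comparison got wrong.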
They published their reanalysis in 2016 as a working paper titled "Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers." It eventually appeared, in 2018, in Econometrica, one of the most rigorous journals in economics.
The behavioral economics community responded with considerable attention and, in some quarters, defensiveness. Amos Tversky had died in 1996 and could not respond. Daniel Kahneman wrote publicly about the reanalysis, acknowledging it as an important finding that complicated the original conclusion. Gilovich and his collaborators also responded, largely accepting the statistical point while debating its magnitude.
Part III: What the Research Now Says
The Revised Picture
Subsequent research has examined hot hand effects in multiple contexts, with larger datasets and more careful methodology:
- NBA data (Bocskocsky, Ezekowitz, and Stein, 2014): Found that recent makes change shot selection — players who have made recent shots tend to attempt more difficult shots, which may partly explain why raw percentages often show no effect. Once shot difficulty is controlled for, a hot hand in shot-making accuracy appears.
- Bowling (Dorsey-Palmateer and Smith, 2004): Found evidence for a hot hand effect in professional bowling, where each attempt is more standardized than basketball.
- Volleyball (Raab et al., 2012): Found evidence for hot hand effects in serve and attack sequences.
- Baseball (Albright, 1993): Found limited evidence for hot hand effects in hitting — results were mixed across players and periods.
The picture that emerges is neither "the hot hand is real everywhere and large" nor "the hot hand is a complete illusion." It is this: there may be small, variable, context-dependent hot hand effects in some skill tasks; the effects are much smaller than athletes and fans believe; and the original confidence that the hot hand was entirely illusory was too strong.
What Caused the Variation?
Several candidate mechanisms have been proposed for why a small hot hand effect might exist:
Confidence effects: Making shots may increase a player's confidence and thus their concentration and effort on the next shot. The psychological effect of recent success might subtly improve performance.
Opponent adjustments: Paradoxically, opponents who perceive a player as "hot" may double-team them or deny them the ball. A player who gets fewer touches may end up attempting only the shots that are genuinely open, so the attempts they do take are of higher quality.
Genuine physiological momentum: In fine motor skill tasks, there may be real warm-up effects, where the neural patterns controlling the motor sequences become more precisely calibrated through repetition in a session. This is a real phenomenon in practice conditions; how much it transfers to game conditions is unclear.
Selective attention: Players who feel hot may focus more carefully on each shot, while players in a slump may tighten up or rush.
Part IV: What the Debate Teaches Us
Lesson 1: The Biases We Study Can Affect Our Research
The most striking meta-lesson from the hot hand debate is that the researchers studying bias about random sequences were themselves subject to a subtle form of it — not in the naive sense of "they were looking for patterns in noise," but in the subtler sense that their statistical tools were not adequate for the question they were asking, and the inadequacy happened to confirm their hypothesis.
This is not fraud. It is not even carelessness by the standards of the time. The small-sample bias in finite sequences that Miller and Sanjurjo identified was genuinely non-obvious. But it illustrates a principle that runs through the history of science: our tools and methods embed assumptions, and those assumptions can produce systematic errors that confirm our pre-existing beliefs.
The hot hand researchers believed, on strong prior grounds, that hot hands were illusions. They built a methodology to test this. The methodology had a flaw that biased it against finding hot hands. The flaw went unnoticed for 31 years.
Lesson 2: The Scientific Corrective Works — Slowly
The good news in this story is that the error was eventually caught. Peer review, replication, and the accumulation of independent datasets eventually produced a corrective finding. The bad news is that it took 31 years, and the original conclusion had already been embedded in textbooks, taught to generations of students, and cited in thousands of papers.
Science is self-correcting. But the correction happens over decades, not weeks. In the meantime, "settled" science can embed errors that shape how a generation thinks about a problem.
Lesson 3: The Truth Is Usually More Complicated Than the Headline
The original finding was clean and dramatic: hot hands are an illusion. The reanalysis produced a more complicated, less dramatic truth: there may be a small real effect, much smaller than intuitively believed, that varies by context and athlete. The revised story is harder to tell, harder to remember, and less satisfying. This is typically how it goes when science progresses — away from clean stories toward complicated, nuanced, probability-laden accounts of messy reality.
Lesson 4: The Right Response to Uncertainty
What should athletes, coaches, and fans do with this revised understanding? The chapter's recommendation — informed by Dr. Yuki's conversation with Marcus — is instructive:
- Do not bet your strategy on a dramatic hot hand effect you cannot measure in real time
- Do not dismiss performance variation as entirely random, because some of it is real
- Keep records; trust data over feeling
- Hold beliefs about streaks loosely, updating with more data rather than with more intensity
Discussion Questions
- The researchers who produced the original hot hand study were among the most sophisticated thinkers in behavioral economics, including Amos Tversky, widely considered one of the greatest psychologists of the 20th century. What does it say about the nature of cognitive bias that even experts in bias research can be subject to methodological blind spots?
- The hot hand reanalysis took 31 years to appear. What institutional features of academic science slowed down the corrective process? What changes to scientific practice might speed up corrections in the future?
- If you were advising a basketball coach who believes strongly in the hot hand, how would you communicate the research findings in a way that is accurate, actionable, and does not require the coach to abandon everything they believe?
- The chapter argues that the feeling of being "in a zone" may be tracking something real (small genuine performance variation) even if the belief that the zone is dramatic and predictable is wrong. How would you design a tool that helped an athlete distinguish real performance elevation from confirmation bias about a random streak?
See also: Further Reading — Chapter 4 for the full Miller and Sanjurjo paper citation and related research.