Case Study 1: Are We More Depressed, or More Diagnosed?
The Two Cities Thought Experiment
Imagine two cities, each with 100,000 residents and identical actual rates of clinical depression (let's say 7%).
City A has low mental health awareness. Stigma is high. Most people with depression don't seek help. Those who do see a general practitioner who may or may not screen for depression. The city's depression diagnosis rate: 3%.
City B has high mental health awareness. Stigma is low. Mental health campaigns are everywhere. Screening is universal in primary care. People are encouraged to seek help at the earliest signs of distress. The city's depression diagnosis rate: 10%.
Same actual depression rate (7%). Very different diagnosis rates (3% vs. 10%).
Now imagine that City A adopts City B's approach over a decade. What would the data show? A dramatic increase in depression diagnoses, from 3% to 10% — more than a tripling. Headlines would announce an "epidemic."
But nothing changed about the actual depression rate. What changed was detection. (Note that City B's 10% diagnosis rate exceeds the 7% true rate, so detection alone cannot account for all of it: some diagnoses must be false positives or expanded labeling.)
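The arithmetic behind the thought experiment is worth making explicit. A minimal sketch using only the numbers above (100,000 residents, 7% true prevalence, 3% and 10% diagnosis rates):

```python
# Numbers taken from the thought experiment: 100,000 residents,
# a true depression prevalence of 7%, and diagnosis rates of 3%
# (City A) and 10% (City B).
POPULATION = 100_000
TRUE_PREVALENCE = 0.07

def diagnosed_from_detection(detection_fraction: float) -> int:
    """True cases that get diagnosed if this fraction is detected."""
    return round(POPULATION * TRUE_PREVALENCE * detection_fraction)

true_cases = round(POPULATION * TRUE_PREVALENCE)       # 7,000 in both cities
city_a = diagnosed_from_detection(3 / 7)               # 3,000: a 3% diagnosis rate
detection_ceiling = diagnosed_from_detection(1.0)      # 7,000: perfect detection
city_b = round(POPULATION * 0.10)                      # 10,000 observed diagnoses

# City B's rate exceeds the true prevalence, so detection alone cannot
# produce it: at least 3,000 diagnoses must come from false positives
# or diagnostic expansion, not from finding missed true cases.
excess_beyond_true_cases = city_b - detection_ceiling  # 3,000
```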
Real-World Evidence for the Detection Effect
Several lines of evidence suggest the detection effect is real:
Primary care screening adoption. In the 2000s–2010s, the U.S. Preventive Services Task Force began recommending universal depression screening in primary care. As screening expanded, diagnosis rates rose. This is exactly what you'd expect if existing depression was being detected more efficiently.
Stigma reduction. Pew Research and Gallup data show that attitudes toward mental health treatment have become substantially more positive over the past 20 years. More people are willing to acknowledge depression, seek help, and receive a diagnosis.
Insurance coverage changes. The Mental Health Parity and Addiction Equity Act (2008) and the Affordable Care Act (2010) expanded insurance coverage for mental health services. More coverage → more help-seeking → more diagnoses.
Telehealth expansion. COVID-19 dramatically expanded telehealth availability. Access to mental health services increased, which increased diagnosis rates — particularly in rural and underserved areas.
Real-World Evidence Against Pure Detection
If rising numbers were purely about better detection, you'd expect:
- The increase to flatten as detection approaches 100% — but it hasn't flattened yet
- The increase to be largest in populations where detection was previously worst — but the increase is largest among young people, who were already the group most willing to discuss mental health
- ER visits for self-harm to remain stable (since ER visits don't depend on voluntary help-seeking) — but they've increased substantially
These observations suggest that at least some of the increase reflects genuine new cases, not just better detection.
The Middle Path
The most likely truth: both effects are operating simultaneously.
- Some of the increase is genuine new depression, particularly among youth, possibly related to social media, economic stress, academic pressure, and pandemic effects
- Some of the increase is better detection of pre-existing depression, driven by screening, stigma reduction, and improved access
- Some of the increase is diagnostic expansion, where conditions that would previously have been called sadness, burnout, or adjustment difficulties are now being labeled as depression
The proportions are debated. But the answer is almost certainly not "it's all real increase" or "it's all better detection." It's a mix — and the mix makes the "epidemic" framing problematic because it implies that 100% of the increase represents new illness, when much of it may represent awareness.
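The mix can be made concrete with back-of-the-envelope bookkeeping. A minimal sketch, where the 40/40/20 split is a purely hypothetical assumption chosen to illustrate the decomposition, not an empirical estimate:

```python
# Diagnosis rates from the thought experiment; the rise to be explained
# is 7 percentage points.
old_rate, new_rate = 0.03, 0.10
rise = new_rate - old_rate

# Assumed shares, for illustration only -- the true proportions are debated.
assumed_shares = {
    "genuine new cases": 0.40,
    "better detection": 0.40,
    "diagnostic expansion": 0.20,
}

# Percentage-point contribution of each component to the observed rise.
contributions = {k: round(rise * v, 4) for k, v in assumed_shares.items()}
```

Whatever the real shares turn out to be, only the first component supports the "epidemic" framing; the other two reflect measurement, not illness.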
Discussion Questions
- In the two-cities thought experiment, City B has a higher diagnosis rate but the same actual depression rate. Is City B better off than City A? What are the costs and benefits of higher detection?
- If the increase is partly genuine and partly detection, how should public health messaging be framed? "Depression is increasing" may cause alarm; "We're detecting more" may cause complacency.
- The youth ER data suggest a genuine increase. What distinguishes ER data from survey data in their ability to detect real changes?
- If universal screening increases diagnoses, is there a risk of over-diagnosis? How should screening programs balance sensitivity (catching real cases) against specificity (not false-alarming people who aren't depressed)?
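The trade-off in the last question can be quantified with Bayes' rule. A minimal sketch, where the 7% prevalence comes from the thought experiment and the sensitivity and specificity figures are assumed for illustration:

```python
# Positive predictive value: the fraction of positive screens that are
# true cases, given prevalence, sensitivity, and specificity.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity              # true cases flagged
    false_pos = (1 - prevalence) * (1 - specificity) # non-cases flagged
    return true_pos / (true_pos + false_pos)

# With 7% prevalence, an assumed 85% sensitivity and 90% specificity:
ppv = positive_predictive_value(0.07, 0.85, 0.90)
# ppv is roughly 0.39: most positive screens are NOT true cases,
# which is why screening is a first step, not a diagnosis.
```

Because prevalence is modest, even a fairly accurate screen flags more non-cases than cases; this is the mechanism by which universal screening can inflate diagnosis counts if positives are not confirmed.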