Case Study 26.1: How Press Releases Became Fake Science News — The Case of Nutrition Research
Overview
Nutrition science occupies a unique position in the landscape of scientific misinformation. It produces a disproportionate share of science news, it has a poor replication record, its findings are regularly amplified far beyond what the evidence supports, and it has generated numerous "miracle food" and "dangerous food" panics over the past half-century that have subsequently been revised, contradicted, or quietly abandoned.
This case study examines the institutional and methodological pathways by which weak nutritional science becomes sensational science news — with particular attention to the role of press releases, the structural features of nutritional epidemiology, and the well-documented critiques of the field's methodology.
Part 1: The Institutional Pathway from Research to News
The Chain of Translation
Scientific findings travel from laboratory to public knowledge through a specific institutional chain:
Stage 1 — The research paper: Researchers submit findings to a journal. Peer review catches some errors but misses many others (as discussed in Section 26.3). The paper uses careful, qualified language: "we found an association," "after adjustment for these covariates," "further research is needed."
Stage 2 — The institutional press release: University communications offices write press releases designed to attract media coverage. Press release writers are skilled at making findings sound dramatic, important, and certain. Careful qualifications are stripped out. "We found an association" becomes "X prevents Y." Studies in mice become evidence relevant to humans. Effect sizes are rarely mentioned.
A landmark 2014 study by Sumner et al. in the BMJ analyzed 462 press releases from 20 UK research universities. They found that 40% of press releases contained exaggerated health advice, 33% contained exaggerated causal claims beyond what the paper stated, and 36% contained exaggerated inference to humans from animal or cell research. Crucially, when a press release contained exaggeration, the majority of corresponding news stories repeated it; when the press release was accurate, only a small minority of stories exaggerated. The press release, not the journalist, was the primary point of distortion.
Stage 3 — Journalist coverage: Journalists working under time pressure and without scientific training receive dozens of press releases per week. The most dramatically written releases are selected for coverage. Even conscientious journalists rarely read the full paper — they read the abstract and the press release. The story is then written for the publication's style: accessible, engaging, lacking nuance.
Stage 4 — Headline: Often written by an editor who has not read the article. The headline maximizes engagement and shares with the least qualification possible.
Stage 5 — Social sharing: Readers share headlines rather than articles. Analyses of link-sharing on social media have found that a majority of shared links are never clicked by the people who share them; most people who share science news stories have not read past the headline.
Part 2: The Methodological Problems of Nutritional Epidemiology
The Food Frequency Questionnaire Problem
Most large nutritional cohort studies rely on food frequency questionnaires (FFQs) — asking participants to recall and estimate their average consumption of dozens or hundreds of food items over the past year. This measurement method is afflicted by:
Recall bias: People do not accurately remember what they ate last week, let alone last year. Portions are difficult to estimate. Reporting is influenced by social desirability (people overreport "healthy" foods, underreport "unhealthy" ones).
Measurement imprecision: FFQs are known to have poor test-retest reliability. The same person filling out the same questionnaire weeks apart gives substantially different answers. This measurement error attenuates true effects and creates spurious ones.
Dietary pattern co-linearity: Foods are not eaten in isolation. People who eat more vegetables also tend to eat more fruit, less processed food, less red meat, and have other healthier dietary patterns. Disentangling the effect of one food from the entire dietary pattern it comes embedded in is extremely difficult statistically.
The statistician David Allison and his colleagues have documented that even under favorable assumptions, food frequency questionnaires introduce sufficient measurement error to make detection of true dietary effects on health outcomes essentially impossible for most effect sizes that would be clinically meaningful.
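The attenuation argument can be sketched with a toy simulation (a minimal illustration, not code from any study; the effect size, noise levels, and sample size are arbitrary assumptions). When reported intake equals true intake plus recall noise, the estimated association shrinks toward zero as the noise grows, even though the true effect never changes:

```python
import random
import statistics

random.seed(0)
n = 10_000          # cohort size (arbitrary)
true_beta = 0.3     # hypothetical true effect of intake on the outcome

intake = [random.gauss(0, 1) for _ in range(n)]
outcome = [true_beta * x + random.gauss(0, 1) for x in intake]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sum((x - mx) ** 2 for x in xs)

# FFQ-style measurement: reported intake = true intake + recall noise.
# As the noise grows, the estimated slope shrinks toward zero
# (classical regression dilution), even though the true effect is fixed.
for noise_sd in (0.0, 1.0, 2.0):
    reported = [x + random.gauss(0, noise_sd) for x in intake]
    print(noise_sd, round(ols_slope(reported, outcome), 3))
```

The expected attenuation factor is var(true intake) / (var(true intake) + var(noise)), so with noise twice as large as the true signal the estimated slope falls to roughly one fifth of the true value.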
The Confounding Problem in Nutritional Epidemiology
Even controlling for standard covariates (age, sex, BMI, smoking, exercise), nutritional epidemiology is chronically unable to control for the full range of factors that distinguish people who eat certain foods from those who do not.
Healthy user bias: People who voluntarily consume foods marketed as healthy tend to be health-conscious in many other ways. They exercise more, sleep better, drink less alcohol, use seatbelts, have regular medical checkups. These behaviors collectively reduce disease risk in ways that are extremely difficult to separate from dietary effects.
Socioeconomic confounding: Access to high-quality food is correlated with income, education, and the built environment. Higher socioeconomic status is itself independently protective against almost every chronic disease, primarily through mechanisms unrelated to food choice.
Reverse causation: People with existing health conditions — early-stage cardiovascular disease, pre-diabetes, undiagnosed cancer — change their diets. These condition-driven dietary changes produce spurious correlations between dietary choices and health outcomes.
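Healthy user bias can likewise be demonstrated with a toy simulation (all parameters are invented for illustration). A latent health-consciousness trait drives both intake of a "healthy" food and disease risk, so the food appears protective even though, by construction, it has zero direct effect:

```python
import random
import statistics

random.seed(2)
n = 20_000  # cohort size (arbitrary)

# A latent "health consciousness" trait drives BOTH the diet and the
# outcome; the food itself has ZERO direct effect by construction.
rows = []
for _ in range(n):
    hc = random.gauss(0, 1)                # unmeasured confounder
    veg = hc + random.gauss(0, 1)          # intake of a "healthy" food
    risk = -0.5 * hc + random.gauss(0, 1)  # disease risk depends on hc only
    rows.append((veg, risk))

def slope(pairs):
    """Ordinary least-squares slope of risk on intake."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    return cov / sum((x - mx) ** 2 for x in xs)

# The naive regression finds a clearly negative slope: the food
# "protects", purely because health-conscious people eat more of it.
print(round(slope(rows), 3))
</```

Adding the confounder to the regression would recover the null result; the problem in real cohorts is that traits like health consciousness are never fully measured, so they cannot be fully adjusted away.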
John Ioannidis (the epidemiologist best known for his 2005 paper "Why Most Published Research Findings Are False") has been particularly critical of nutritional epidemiology. He has argued that most findings from this field are likely false or grossly inflated due to the combined effects of weak effect sizes, massive multiple testing, extensive confounding, unreliable measurement, and publication bias.
The Willett Epidemiology Critiques
Walter Willett of Harvard's T.H. Chan School of Public Health is among the most prolific and influential nutritional epidemiologists, having led the Nurses' Health Study and Health Professionals Follow-up Study. These large prospective cohort studies have generated hundreds of publications on dietary factors and chronic disease.
Willett's work has come under sustained methodological criticism:
Criticism from Ioannidis: In a widely cited 2018 paper, Ioannidis argued that even the large observational studies from top research groups (implicitly including Willett's) suffer from insufficient control for confounding, over-reliance on FFQs, and excessive willingness to make causal claims from correlational data. He noted that almost every food tested in large epidemiological studies has shown significant associations with multiple outcomes — often contradictory associations — suggesting pervasive multiple testing and false discovery.
Criticism from randomized trial results: When findings from nutritional epidemiology are tested in randomized trials, they often fail. The dietary fat-heart disease hypothesis, promoted for decades based on epidemiological data, was tested in the Women's Health Initiative (WHI) — a massive RCT. The WHI found no significant cardiovascular benefit from a low-fat diet in the primary analysis. Antioxidant vitamins (vitamin E, beta-carotene) that showed protective effects in observational studies actually increased mortality in some randomized trials.
Criticism from effect size analysis: Pimpin et al. (2018) analyzed 41 meta-analyses of dietary factors and mortality. They found that nearly every food tested was associated with significantly higher or lower mortality — meat, dairy, vegetables, eggs, legumes, coffee, tea, chocolate — with typical relative risks in the range of 1.10-1.30. They argued this pattern is inconsistent with genuine dietary effects on mortality and more consistent with systematic confounding and false positives across a massively multiple-tested literature.
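The multiple-testing pattern these critics describe is easy to reproduce in simulation. The sketch below (arbitrary cohort size and food count, pure-noise data) tests 60 fictitious foods against a single outcome; by chance alone, roughly 5% clear the conventional p < 0.05 threshold:

```python
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

n = 1_000        # cohort size (arbitrary)
n_foods = 60     # foods tested against one outcome (arbitrary)
outcome = [random.gauss(0, 1) for _ in range(n)]

# Approximate two-sided p < 0.05 threshold for |r| at large n: 1.96 / sqrt(n)
crit = 1.96 / math.sqrt(n)

hits = 0
for _ in range(n_foods):
    food = [random.gauss(0, 1) for _ in range(n)]  # no real effect at all
    if abs(pearson_r(food, outcome)) > crit:
        hits += 1

# On average ~5% of the null foods cross the threshold (about 3 of 60).
print(f"{hits} of {n_foods} null foods are 'significantly' associated")
```

With enough foods and outcomes, "significant" associations are guaranteed even when nothing real is happening; publication bias then selects exactly those chance hits for the literature.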
Part 3: Case Studies in Nutrition Misinformation
The Fat and Heart Disease Hypothesis
From the 1960s through the 1990s, the prevailing dietary guidance advised reducing dietary fat — particularly saturated fat — to prevent cardiovascular disease. This guidance was based primarily on epidemiological observations by Ancel Keys and subsequent cohort studies.
The hypothesis generated decades of "low-fat" dietary recommendations, a massive low-fat food industry, and the implicit promotion of carbohydrate-rich diets as heart-healthy alternatives. Butter became vilified; margarine (rich in trans fats, later shown to be genuinely harmful) was promoted as a healthier alternative.
When this hypothesis was rigorously tested:
- The WHI RCT (n = 49,000 women, 8 years) found no significant cardiovascular benefit from low-fat diets.
- Multiple systematic reviews found that replacing saturated fat with carbohydrates did not reduce cardiovascular risk.
- Meta-analyses found that trans fats (found in partially hydrogenated vegetable oils promoted as saturated fat alternatives) were significantly more harmful than saturated fat.
The "fat is bad" recommendation had been disseminated for decades based on epidemiological data before randomized evidence revealed its limitations.
Red Wine and the French Paradox
The "French Paradox" — the observation that France had relatively low cardiovascular disease rates despite high saturated fat intake — generated enormous interest in the 1980s-1990s. Red wine consumption was proposed as a protective factor, leading to resveratrol research and a wave of media coverage suggesting red wine was heart-healthy.
Problems:
- The "paradox" was based on epidemiological correlations susceptible to all the standard confounders.
- The "Mediterranean diet" confounding was not adequately addressed (French dietary patterns differ in many ways beyond wine).
- Resveratrol at the doses found in wine showed no significant cardiovascular effects in clinical trials.
- The researcher who pioneered resveratrol research, Dipak Das, was found to have fabricated data in multiple papers.
- Subsequent large cohort studies and Mendelian randomization studies (a method that uses genetic variants as proxies for exposures) found no significant cardiovascular benefit from moderate alcohol consumption after careful confounding adjustment.
The wave of "red wine is good for you" media coverage contributed to public confusion about alcohol and health.
The Dietary Supplement Industry and Weak Science
The dietary supplement industry generates billions in revenue on the basis of nutrition research findings that often cannot withstand scrutiny:
Antioxidant vitamins: Observational studies consistently found that people with high antioxidant vitamin status had lower disease rates. This helped fuel a multibillion-dollar segment of the supplement industry. Multiple large RCTs subsequently found that beta-carotene increased lung cancer risk in smokers (the CARET and ATBC trials), that high-dose vitamin E supplementation was associated with increased heart failure risk in the HOPE-TOO trial and with increased all-cause mortality in meta-analyses, and that high-dose vitamin C had no significant disease prevention effects.
Omega-3 fatty acids: Observational data suggested strong cardiovascular protection. Early RCTs showed modest benefits. Subsequent large RCTs found more mixed results, with later high-dose omega-3 RCTs showing some benefit for cardiovascular outcomes but only at pharmaceutical doses far above what supplements provide.
Part 4: The Structural Incentives That Maintain the Problem
Why does weak nutritional science continue to be amplified into sensational media coverage?
University incentive structures: Research universities compete for media coverage as evidence of public impact. Press offices are rewarded for generating news stories. Studies with dramatic findings generate more media coverage and, indirectly, enhance the university's reputation for cutting-edge research.
Journal incentive structures: High-impact journals select for surprising, counterintuitive findings. A paper finding that a common food dramatically reduces disease rates is more publishable than a paper finding no association. This selection pressure favors dramatic but unreliable findings.
Industry interests: The food industry funds nutritional research selectively. Studies finding benefits for products are publicized; studies finding harms are suppressed or contested. Coca-Cola, sugar industry associations, and various food manufacturers have documented histories of funding favorable research and suppressing unfavorable research.
Media business models: Health and nutrition content is among the most widely shared content on social media. "Miracle food" stories generate page views; "this study has methodological limitations" stories do not.
Part 5: What Responsible Nutrition Science Communication Would Look Like
Qualify study designs clearly: News stories should state explicitly and prominently whether findings come from animal studies, small human trials, or large observational studies — not simply "scientists have found."
Report effect sizes and confidence intervals: The size of an association matters. A relative risk of 1.05 (a 5% relative increase) is not the same as 1.50 (a 50% relative increase), and even a large relative increase can correspond to a small absolute risk difference when the baseline risk is low.
Acknowledge confounding: Observational nutritional studies cannot establish causation. This limitation belongs in the story itself, not buried in the final paragraph.
Wait for replication: Single studies — even large ones — should not generate dietary recommendations.
Distinguish mechanism from outcome: Demonstrating that a compound has a plausible mechanism (e.g., antioxidant activity in cell cultures) is not the same as demonstrating clinical benefit.
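The effect-size recommendation above can be made concrete with a short sketch. The baseline risk and relative risks below are made-up illustrative numbers, not figures from any study:

```python
# Hypothetical illustration: the same relative risk can mean very
# different absolute risks depending on the baseline.
baseline_risk = 0.04  # assumed 10-year baseline risk (invented number)

def absolute_effect(rr, baseline):
    """Return (risk among exposed, absolute risk difference)."""
    exposed = baseline * rr
    return exposed, exposed - baseline

for rr in (1.05, 1.50):
    exposed, ard = absolute_effect(rr, baseline_risk)
    print(f"RR {rr}: exposed risk {exposed:.3f}, "
          f"absolute difference {ard:.3f} "
          f"(about {round(1 / ard)} exposed people per extra case)")
```

With a 4% baseline, a relative risk of 1.50 means roughly one extra case per 50 exposed people, while a relative risk of 1.05 means roughly one per 500, a difference well within the range that confounding or measurement error alone can produce.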
Discussion Questions
- University communications offices that overstate research findings are primarily trying to attract media coverage for their institution. Does this institutional incentive create ethical obligations for communications staff? For researchers who approve the press releases?
- If the methodological problems with nutritional epidemiology are as serious as critics suggest, what follows for public health dietary guidance? Should dietary guidelines be suspended until better evidence is available? Or is imperfect evidence better than no guidance?
- The fat-and-heart-disease hypothesis was promoted for decades based on weak evidence. When it was not fully supported by RCT evidence, how should this history affect our confidence in current nutritional guidelines?
- Scientists like Willett have defended their work against Ioannidis's critique, arguing that waiting for RCT evidence on dietary questions would mean decades of inaction on potentially preventable disease. How should the tension between acting on imperfect evidence and the risks of acting on false evidence be managed?
- What responsibility do individual scientists have for how their findings are represented in press releases? Should researchers be able to override or correct press releases that misrepresent their work?