Appendix E: Research Methods Primer

How to read a psychology paper and evaluate its claims — in plain language.


Types of Studies

Cross-Sectional Survey

What it is: Measures variables at one time point in a sample. "We surveyed 1,000 people about their social media use and depression."
What it can show: Correlation — whether two variables are associated.
What it can't show: Causation — whether one variable causes the other.
Watch for: Self-report bias, WEIRD samples (Western, Educated, Industrialized, Rich, and Democratic), confounding variables.

Longitudinal Study

What it is: Follows the same people over time. "We measured social media use at Time 1 and depression at Time 2."
What it can show: Temporal order — whether changes in one variable precede changes in another.
What it can't show: Definitive causation (confounders may still explain both variables).
Watch for: Attrition (people dropping out), measurement changes over time.

Randomized Controlled Trial (RCT)

What it is: Randomly assigns participants to conditions. "We randomly assigned half to a growth mindset intervention and half to a control."
What it can show: Causation — if the groups differ on the outcome, the intervention likely caused the difference.
Watch for: Small samples, short duration, demand characteristics, generalizability.

Meta-Analysis

What it is: Statistical synthesis of many studies on the same question. "We combined 50 studies of meditation and depression."
What it can show: The best estimate of the true effect size across studies.
Watch for: Publication bias (if the included studies are biased, the meta-analysis inherits the bias). Look for publication-bias corrections (trim-and-fill, funnel plots, p-curve analysis).
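To make "statistical synthesis" concrete, here is a minimal sketch of the simplest variant, a fixed-effect meta-analysis: each study's effect size is weighted by its precision (the inverse of its variance), so large, precise studies count more. The effect sizes and standard errors below are invented for illustration; real meta-analyses often use random-effects models instead.

```python
# Hypothetical per-study results: (effect size d, standard error).
# These numbers are made up for the example.
studies = [
    (0.40, 0.15),
    (0.25, 0.10),
    (0.55, 0.20),
    (0.10, 0.08),
]

# Inverse-variance weights: precise studies (small SE) get large weights.
weights = [1 / se**2 for _, se in studies]

# Pooled estimate = precision-weighted average of the effect sizes.
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled d = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Notice that the pooled estimate sits closest to the most precise study, not to the simple average of the four effect sizes.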

Pre-Registered Study

What it is: The hypotheses, methods, and analyses are publicly registered before data collection.
Why it matters: Prevents HARKing (hypothesizing after the results are known) and reduces p-hacking. Pre-registered findings are more trustworthy.
How to check: Look for an OSF or AsPredicted registration link in the paper.


Key Statistical Concepts (in Plain Language)

P-Value

What it means: The probability of getting results this extreme (or more extreme) if the null hypothesis (no effect) were true.
p < .05: Means that if there were truly no effect, results this extreme would occur less than 5% of the time.
What it doesn't mean: It does NOT mean there's a 95% chance the finding is true. It does NOT tell you the effect is large or important.

Effect Size

What it means: How big the effect is — independent of sample size.
Cohen's d: The difference between group means in standard deviation units.
  • d = 0.2: Small (you'd barely notice)
  • d = 0.5: Medium (noticeable)
  • d = 0.8: Large (obvious)
Correlation (r): Strength of association.
  • r = 0.1: Small (explains 1% of variance)
  • r = 0.3: Medium (explains 9%)
  • r = 0.5: Large (explains 25%)

Why it matters: A p-value below .05 with a sample of 100,000 can detect a tiny, trivial effect. Effect size tells you whether the finding matters practically, not just statistically.
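For the curious, Cohen's d takes only a few lines to compute: the difference in means divided by the pooled standard deviation (one common variant of the formula). The depression scores below are invented for illustration.

```python
import statistics

def cohens_d(group1, group2):
    """Difference between group means, in pooled-standard-deviation units."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical depression scores (lower = better) for two groups.
control   = [14, 13, 15, 12, 16, 14, 13, 15]
treatment = [10, 12, 9, 11, 13, 8, 10, 12]

d = cohens_d(control, treatment)
print(f"Cohen's d = {d:.2f}")
```

Because d is expressed in standard-deviation units, the same function gives comparable answers whether the scale runs 0-60 or 0-100.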

Confidence Interval

What it means: A range of plausible values for the true effect; if the study were repeated many times, 95% of such intervals would contain it. "The effect was d = 0.30, 95% CI [0.15, 0.45]."
Why it matters: If the CI includes zero, the effect may not exist. If it's very wide, the estimate is imprecise.
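A 95% CI can be estimated without any formulas using the bootstrap: resample the data with replacement many times and take the middle 95% of the resulting estimates. This is a minimal sketch on simulated data, not the only (or most precise) way to build an interval.

```python
import random
import statistics

random.seed(1)

# Simulated data: a true effect of 0.3 SD between two groups of 40.
control = [random.gauss(0.0, 1.0) for _ in range(40)]
treated = [random.gauss(0.3, 1.0) for _ in range(40)]

# Bootstrap: resample each group with replacement, recompute the
# mean difference, and repeat to see how much the estimate wobbles.
n_boot = 5_000
boot_diffs = []
for _ in range(n_boot):
    c = [random.choice(control) for _ in range(len(control))]
    t = [random.choice(treated) for _ in range(len(treated))]
    boot_diffs.append(statistics.mean(t) - statistics.mean(c))

# The middle 95% of the bootstrap estimates is the percentile CI.
boot_diffs.sort()
lo = boot_diffs[int(0.025 * n_boot)]
hi = boot_diffs[int(0.975 * n_boot)]
print(f"95% CI for the difference: [{lo:.2f}, {hi:.2f}]")
if lo <= 0 <= hi:
    print("CI includes zero: the effect may not exist")
```

A wide interval here would mean the 40-per-group sample pins down the effect only loosely, exactly the "imprecise estimate" case described above.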

Statistical Significance vs. Clinical Significance

Statistical significance: Results this extreme would rarely occur by chance alone (p < .05).
Clinical significance: The result is large enough to matter in practice.
A study can be statistically significant but clinically meaningless if the effect is tiny (e.g., a drug that reduces depression scores by 0.5 points on a 60-point scale with p < .001 in a sample of 50,000).
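A quick back-of-the-envelope calculation shows how a huge sample makes a trivial effect "significant." The numbers mirror the hypothetical drug example above; the standard deviation of 10 points is an added assumption for the sketch.

```python
import math

# Hypothetical: a 0.5-point drop on a 60-point depression scale,
# assumed SD of 10 points, 50,000 participants per group.
diff, sd, n = 0.5, 10.0, 50_000

# Standard error of a difference between two independent group means.
se = sd * math.sqrt(2 / n)
z = diff / se

# Two-sided p-value from the normal distribution.
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Effect size in SD units: trivially small despite the tiny p-value.
d = diff / sd
print(f"z = {z:.1f}, p = {p:.2e}, Cohen's d = {d:.2f}")
```

The p-value is astronomically small, yet d = 0.05 is a quarter of what the benchmarks above call a "small" effect: statistically significant, clinically meaningless.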


How to Read a Psychology Paper (5-Minute Version)

  1. Read the Abstract. Get the main finding and the conclusions.
  2. Check the Sample. How many participants? Who were they? WEIRD?
  3. Check the Design. RCT, longitudinal, cross-sectional, or meta-analysis?
  4. Find the Effect Size. Look for d, r, or odds ratios. Is it small, medium, or large?
  5. Check if Pre-Registered. Is there an OSF or AsPredicted link?
  6. Read the Limitations. Every good paper discusses its limitations. If there's no limitations section, be skeptical.
  7. Check the Funding Source. Who paid for the study? Conflicts of interest?

Red Flags in Research Papers

  • No effect sizes reported (only p-values) — may be hiding trivially small effects
  • Small sample with large effect — winner's curse; the effect may be inflated
  • "Exploratory" analyses presented as confirmatory — possible HARKing
  • Many outcome measures, only some significant — possible selective reporting
  • No limitations discussed — lack of self-awareness
  • Funded by parties with a stake in the outcome — conflict of interest