Chapter 3 Key Takeaways
The Research Methods Toolkit
- Attraction research uses a range of designs: experimental, correlational, observational, survey-based, qualitative, and neuroimaging. Each has characteristic strengths and limitations; no single method is sufficient on its own.
- Experimental designs allow causal inference through random assignment but typically sacrifice ecological validity: the controls that make studies internally valid often make them less like the real world (a short simulation after this list shows why random assignment carries the causal weight).
- Qualitative methods are not merely preliminary to "real" research; they generate kinds of understanding that quantitative methods cannot capture, particularly regarding how people make meaning of their own attraction experiences.
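To make the logic of random assignment concrete, here is a minimal sketch, not from the chapter; the variable names, trait, and sample size are illustrative. It shows that a potential confounder ends up balanced across randomly assigned conditions, which is what licenses causal inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 hypothetical participants with a pre-existing trait (here called
# "sociability") that could confound attraction outcomes.
n = 10_000
sociability = rng.normal(loc=0.0, scale=1.0, size=n)

# Random assignment: condition is independent of sociability by construction.
condition = rng.integers(0, 2, size=n)

# The confounder is balanced across groups in expectation, so any later
# difference in outcomes can be attributed to the manipulation itself.
print(sociability[condition == 0].mean())  # close to 0
print(sociability[condition == 1].mean())  # close to 0
```

In an observational design, by contrast, sociability could differ systematically between groups, and no statistical adjustment fully recovers the experiment's guarantee.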
The WEIRD Problem
- The acronym WEIRD (Western, Educated, Industrialized, Rich, Democratic) captures the dramatic unrepresentativeness of the populations on which most psychological research is based.
- Approximately 90% of published attraction studies have drawn samples exclusively from North America or Western Europe. This is not a minor sampling detail — it shapes what the field thinks it knows about human desire.
- The solution is not merely to replicate studies in different countries, but to ask whether the concepts and instruments developed in Western research contexts are appropriate starting points for non-Western populations.
Effect Sizes and Statistical Significance
- Statistical significance (p < .05) and practical significance are not the same thing. With large samples, trivially small effects can be statistically significant; with small samples, genuine effects may fail to reach significance.
- Cohen's d provides a sample-size-independent estimate of effect magnitude: 0.2 = small, 0.5 = medium, 0.8 = large (rough guidelines, not absolute thresholds). A worked sketch after this list computes d and shows how it dissociates from the p-value.
- Most published effects in attraction research are small to medium in size, meaning they explain a modest proportion of the variance in attraction outcomes. This is important context for interpreting popular-science headlines.
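The following sketch uses simulated data with illustrative numbers to compute Cohen's d from its defining formula (mean difference divided by the pooled standard deviation) and to demonstrate the dissociation: a trivially small true effect reaches p < .05 with a huge sample, while a genuine medium effect often fails to with a small one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def cohens_d(a, b):
    """Cohen's d: the mean difference scaled by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Trivially small true effect (d = 0.05) with an enormous sample:
# statistically significant, practically negligible.
big_a = rng.normal(0.05, 1.0, 50_000)
big_b = rng.normal(0.00, 1.0, 50_000)
t, p = stats.ttest_ind(big_a, big_b)
print(f"n=50,000/group: d={cohens_d(big_a, big_b):.3f}, p={p:.2g}")

# Genuine medium effect (d = 0.5) with a small sample:
# often fails to reach p < .05 despite being real.
small_a = rng.normal(0.5, 1.0, 15)
small_b = rng.normal(0.0, 1.0, 15)
t, p = stats.ttest_ind(small_a, small_b)
print(f"n=15/group:     d={cohens_d(small_a, small_b):.3f}, p={p:.2g}")
```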
The Replication Crisis and Publication Bias
- The Open Science Collaboration (2015) replicated 100 published psychology studies and found that only about 39% were judged to have replicated successfully (just 36% of replications produced statistically significant results), a result that prompted widespread re-evaluation of the field's confidence in its findings.
- The mechanisms behind replication failures include underpowered samples, flexible analysis practices (p-hacking), and HARKing (Hypothesizing After Results are Known).
- Publication bias, the tendency to publish significant results and consign null results to the file drawer, systematically inflates the effect sizes in the published literature; meta-analyses that do not correct for it will also overestimate effects. The simulation after this list makes the inflation concrete.
- Pre-registration, in which researchers publicly specify hypotheses and analysis plans before collecting data, is a partial but meaningful remedy for these problems.
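A small simulation, again with hypothetical parameters and simulated data rather than figures from the chapter, makes the file-drawer inflation visible: when only significant results from underpowered studies are "published", the average published effect substantially exceeds the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# 1,000 hypothetical underpowered studies of a true effect of d = 0.2,
# each with n = 30 per group; only significant results get "published".
true_d, n_per_group, n_studies = 0.2, 30, 1_000
published = []
for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    # Pooled-SD effect size (equal group sizes, so variances simply average).
    d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2
    )
    if p < 0.05:  # the file drawer: null results never appear
        published.append(d)

print(f"true d = {true_d}")
print(f"mean published d = {np.mean(published):.2f}")  # markedly inflated
print(f"studies published: {len(published)} of {n_studies}")
```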
Measurement and Methodology
- Self-report, behavioral, and physiological measures do not always agree. Each offers a different window onto the psychology of attraction, and each has characteristic limitations.
- The appeal to physiological measures as "objective truth" behind subjective experience is misleading: physiological responses are imperfect proxies for psychological states and require careful interpretation.
Ethical Practice
- IRB review exists to protect participants from harm, but ethical research involves more than compliance. Culturally sensitive, collaborative research design — like the approach Okafor and Reyes developed for the Global Attraction Project — is an ethical commitment, not a bureaucratic requirement.
Reading Research Critically
- When encountering any attraction research claim, ask: Who was the sample? What was the design? What is the effect size? Has it replicated? What does it not tell us? These questions are not expressions of cynicism — they are how good science works.