# Chapter 7 Key Takeaways

## The Foundational Principle
Survey questions do not discover pre-existing opinions — they help construct the opinions they measure. Every wording choice, every scale design, every sequencing decision shapes the distribution of responses you receive.
## The Taxonomy of Bad Questions
| Problem | Definition | Example | Fix |
|---|---|---|---|
| Leading question | Signals the expected/correct answer | "Don't you agree that taxes are too high?" | Remove evaluative language; balance the question |
| Loaded question | Embeds contestable assumptions | "How much has the failed policy hurt you?" | Remove embedded judgments; test assumptions separately |
| Double-barreled | Asks about two things simultaneously | "Do you support education and healthcare spending?" | Split into two separate questions |
| Vague | Key terms mean different things to different respondents | "Do you support stronger action on crime?" | Define terms; ask about specific policies |
| Acquiescence-prone | Structure biases toward agreement | Long series of agree/disagree statements pointing the same direction | Mix directions; use forced-choice or other formats |
## Response Scale Design Guide
| Situation | Recommended Scale | Reason |
|---|---|---|
| Candidate horse-race | Forced choice between named candidates | Matches the actual voting choice |
| Candidate job approval | Branching (approve/disapprove + strength probe) | Standard and efficient for telephone; captures intensity |
| Attitudes toward groups/figures | Feeling thermometer (0-100) | Fine-grained affect measurement with intuitive metaphor |
| Policy opinion intensity | 5-point Likert (Strongly Agree to Strongly Disagree) | Standard; allows midpoint for non-opinion |
| Issue prioritization | Forced choice or ranking | Captures relative priority, not just absolute support |
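The branching approval item in the table above collects two answers (a direction, then a strength probe) and combines them into a single intensity measure. A minimal sketch of that combination step, assuming a conventional 1-4 coding that is not specified in the text:

```python
# Sketch of coding a branching approval item into one 4-point score.
# The (direction, strength) -> score mapping is an assumed convention,
# not taken from the chapter.

def approval_score(direction: str, strength: str) -> int:
    """Combine the approve/disapprove branch with the strength probe."""
    scores = {
        ("disapprove", "strongly"): 1,
        ("disapprove", "somewhat"): 2,
        ("approve", "somewhat"): 3,
        ("approve", "strongly"): 4,
    }
    return scores[(direction, strength)]

print(approval_score("approve", "strongly"))  # strongest approval maps to 4
```

Two short questions are easier to answer on the phone than one long scale read aloud, which is why the branching format is standard for telephone interviewing.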
## Order Effects
- Question order effects: Earlier questions prime accessible considerations that shape later responses. Related topics should be grouped; sensitive topics should come after engagement is established.
- Response order effects: First options attract primacy effects in visual modes (self-administered and online surveys); last options attract recency effects in aural modes (telephone). Randomize or rotate response order where appropriate.
- Remedy: Split-sample experiments that vary order provide direct evidence of order effects; rotate candidate order on horse-race questions.
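The remedies above are both randomization steps, and can be sketched in a few lines. This is a hypothetical illustration: the candidate names come from the chapter's running example, while the form labels "A"/"B" and the sample size are assumptions:

```python
import random

# Candidate names from the chapter's running example; form labels are assumed.
CANDIDATES = ["Garza", "Whitfield"]

def assign_form(respondent_id: int, rng: random.Random) -> dict:
    """Give each respondent a split-sample form and a rotated ballot order.

    - 'form' A/B varies question order, so comparing the two half-samples
      gives direct evidence of any order effect.
    - 'ballot_order' rotates candidate names to offset primacy/recency effects.
    """
    return {
        "id": respondent_id,
        "form": rng.choice(["A", "B"]),
        "ballot_order": rng.sample(CANDIDATES, k=len(CANDIDATES)),
    }

rng = random.Random(42)  # fixed seed keeps the assignment reproducible/auditable
assignments = [assign_form(i, rng) for i in range(1000)]

form_a = sum(a["form"] == "A" for a in assignments)
garza_first = sum(a["ballot_order"][0] == "Garza" for a in assignments)
print(form_a, garza_first)  # each should land near 500 by chance
```

In practice the randomization is done by the survey platform or CATI software; the point of the sketch is that both remedies reduce to random assignment recorded per respondent.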
## Sensitive Question Techniques
### List Experiment (Item Count Technique)
- Randomly assign respondents to control (N items) or treatment (N+1 items, including the sensitive item)
- Ask how many items apply — not which ones
- Estimate sensitive-item prevalence from the difference in group means
- Requires larger samples due to increased variance
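The difference-in-means estimator is simple enough to show directly. The item counts below are fabricated for illustration only:

```python
# Difference-in-means estimator for a list experiment.
# `control` holds item counts from respondents shown N baseline items;
# `treatment` holds counts from respondents shown N+1 items (baseline + sensitive).

def list_experiment_estimate(control: list[int], treatment: list[int]) -> float:
    """Estimated prevalence of the sensitive item: mean(treatment) - mean(control)."""
    mean_control = sum(control) / len(control)
    mean_treatment = sum(treatment) / len(treatment)
    return mean_treatment - mean_control

# Illustrative (made-up) counts: control mean 2.0, treatment mean 2.4,
# implying roughly 40% of respondents endorse the sensitive item.
control = [2, 1, 3, 2, 2]
treatment = [2, 2, 3, 3, 2]
estimate = list_experiment_estimate(control, treatment)
print(estimate)
```

Because each respondent reports only a count, no individual answer reveals the sensitive item; the cost is the extra variance noted above, which is why larger samples are needed.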
### Randomized Response Technique (RRT)
- Introduce a random element (e.g., a coin flip) that gives respondents plausible deniability
- Individual answers are uninterpretable; aggregate proportions are estimable from the known probability structure
- Best for high-stigma behaviors; adds complexity to interviewing
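One common RRT variant is the forced-response design, sketched below as an assumed illustration (the chapter describes RRT generally, not this specific variant): the coin tells the respondent either to answer truthfully or to say "yes" automatically, and the known coin probability lets the analyst back out the true prevalence:

```python
def rrt_estimate(observed_yes_rate: float, p_truth: float = 0.5) -> float:
    """Forced-response RRT estimator (one common variant, assumed here).

    With probability p_truth the respondent answers truthfully; otherwise the
    coin forces an automatic 'yes'. So the observed rate is
        observed_yes_rate = p_truth * pi + (1 - p_truth)
    and solving for the true prevalence pi gives:
        pi = (observed_yes_rate - (1 - p_truth)) / p_truth
    """
    return (observed_yes_rate - (1 - p_truth)) / p_truth

# If 62% say 'yes' under a fair coin, the implied true prevalence is
# (0.62 - 0.5) / 0.5 = 0.24, i.e. about 24%.
pi_hat = rrt_estimate(0.62)
print(pi_hat)
```

No interviewer can tell whether any individual "yes" was forced or truthful, which is the plausible deniability the bullet above describes; only the aggregate is interpretable.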
## Questionnaire Architecture Rules
- Open with engaging, non-threatening questions — right/wrong track, local issues before sensitive national issues
- Group related questions — don't jump between topics; coherent topical flow reduces cognitive load
- Place demographics at the end — avoids identity priming of substantive questions; reduces early dropout
- Use skip logic thoughtfully — branch to follow-ups only for relevant respondents
- Keep it as short as possible — every question must earn its place
- Pretest everything — cognitive interviews, then a pilot sample
## The Meridian Questionnaire Lessons
From the Garza-Whitfield design process:
- Rotate candidate order on horse-race questions
- Include lean probes for all undecideds
- Use forced-choice issue frames that acknowledge multiple policy positions
- For sensitive topics (immigration, racial attitudes), avoid language that appears in partisan advertising
- Distinguish between neutral measurement questions and explicit message-testing questions — label each clearly
## The Questionnaire Design Checklist (Quick Reference)
- **Question wording:** No leading language / No loaded assumptions / One thing per question / Specific, not vague / Accessible vocabulary
- **Response scales:** Appropriate number of points / Midpoint where opinion may be absent / Exhaustive and mutually exclusive options / Order randomized where appropriate
- **Order and flow:** No unintended priming / Related questions grouped / Sensitive items late / Appropriate length for mode
- **Sensitive topics:** Social desirability risks identified / Appropriate technique selected / "Refused" option available
- **Pretesting:** Internal review complete / Cognitive interviews conducted / Pilot fielded
## The Core Professional Standard
> "If someone asks me to defend any question in this questionnaire, I can tell them exactly why it's worded the way it is and what tradeoffs we made. That's the job. You don't have to be neutral. You have to be transparent." — Trish McGovern