Chapter 7 Exercises: Survey Design: From Questions to Questionnaires

Tier 1: Foundational

1. Question Problem Identification. Classify each of the following questions as one or more of: (a) leading, (b) loaded, (c) double-barreled, (d) vague, or (e) acquiescence-prone. Explain your diagnosis for each.

  i. "Do you support the freedom-destroying gun control measures being pushed by Washington elites?"
  ii. "Have you stopped engaging in wasteful consumer behavior since learning about climate change?"
  iii. "Do you think education and job training are the most important investments we can make in America's future?"
  iv. "Do you support or oppose healthcare reform?"
  v. "Don't you agree that politicians should be held to higher ethical standards?"
  vi. "Do you think the president is doing a good job handling foreign and domestic policy?"
  vii. "How important is it to you that the government protects our borders?"


2. Question Repair. Rewrite each of the following questions to eliminate the identified problem. Show the original and your corrected version, and briefly explain what you changed and why.

  a. (Loaded) "Given the skyrocketing crime caused by illegal immigration, do you support stricter border enforcement?"
  b. (Double-barreled) "Do you support investing in renewable energy and reducing our dependence on foreign oil?"
  c. (Leading) "Experts agree that taxes are too high — do you agree that Congress should cut taxes?"
  d. (Vague) "Do you think the government is doing enough about drugs?"


3. Response Scale Selection. For each of the following measurement purposes, select the most appropriate of these response formats and justify your choice: (a) Likert scale, (b) feeling thermometer, (c) forced choice between candidates, (d) branching approve/disapprove.

  i. Measuring voter preference in a two-candidate election
  ii. Measuring intensity of support for the incumbent senator
  iii. Measuring attitudes toward undocumented immigrants as a group
  iv. Measuring agreement with a policy statement


4. Questionnaire Architecture. You are designing a survey about mayoral approval and budget priorities in a mid-sized city. Arrange the following question types in the order you would present them in the questionnaire, and briefly explain your reasoning for any item you place far from its position in the list below:

  • Mayor's overall job approval
  • Respondent's party identification
  • Rating of specific city services (parks, roads, police)
  • Right/wrong track for the city
  • Respondent's household income
  • Priority of budget issues (most important issue for city)
  • Awareness of current budget debate
  • Respondent's age and education

5. Likert vs. Thermometer. A colleague argues: "Likert scales are better because they have clear interpretive labels. Feeling thermometers have too much ambiguity — what does '67 degrees' mean?" Write a 150-200 word response that (a) acknowledges the legitimate concern in your colleague's critique and (b) explains when feeling thermometers provide valuable information that Likert scales cannot.


6. Social Desirability Identification. For the Garza-Whitfield race, identify three specific survey questions where you would expect elevated social desirability bias. For each, explain who would be biased in which direction, and why.


7. Mode Effects. You are polling about immigration attitudes in a state with a significant immigrant population, including some non-citizen residents who might fear government institutions. Describe how you would expect results to differ across: (a) live interviewer telephone, (b) anonymous web survey, and (c) mail self-completion. Which would you recommend, and why?


Tier 2: Analytical

8. The Welfare/Assistance Gap — Analysis. The "welfare" vs. "assistance to the poor" wording effect produces a 10-20 point gap in expressed support. Using Zaller's RAS model, explain this gap: what considerations does each framing activate, and how does the model predict that different accessible considerations lead to different survey responses? Then argue: which framing is "more valid" as a measure of public preferences? Is there a meaningful answer to that question?


9. Designing a Sensitive Question. You want to measure respondents' racial resentment (a psychological construct that combines anti-Black affect with a work-ethic narrative) in the context of a poll about affirmative action. You know that direct questions will be subject to heavy social desirability bias. Design either a list experiment or a randomized response question that could measure this construct more honestly. Be specific about: question wording, the non-sensitive items (for a list experiment), how you would analyze the results, and the limitations of your approach.
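As a reference point for the analysis step, the standard list-experiment estimator is a simple difference in mean item counts between the treatment and control groups. A minimal sketch, using invented counts purely for illustration:

```python
# Difference-in-means estimator for a list experiment (hypothetical counts).
# Control respondents report how many of 4 non-sensitive items apply to them;
# treatment respondents see the same 4 items plus the sensitive item.
from statistics import mean

control_counts = [2, 1, 3, 2, 1, 2, 3, 2]    # control group item counts
treatment_counts = [2, 2, 3, 2, 1, 3, 3, 2]  # treatment group item counts

# Under random assignment, the mean difference estimates the share of
# respondents for whom the sensitive item applies.
estimate = mean(treatment_counts) - mean(control_counts)
print(f"Estimated sensitive-item prevalence: {estimate:.2f}")  # → 0.25
```

A full analysis would also report uncertainty (e.g., a standard error for the difference) and check for design problems such as floor or ceiling effects in the item counts.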


10. Order Effects — Research Design. You suspect that asking about candidate favorability before the horse-race question inflates the favorability of the leading candidate (by priming positive considerations about them). Design a simple experiment to test this hypothesis. What is your treatment condition, what is your control condition, and what results would confirm your hypothesis? What alternative explanations would you need to rule out?
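Once such a split-ballot experiment is fielded, the basic readout is a comparison of mean favorability across the two question orders. A minimal sketch with invented ratings (the scores and scale are assumptions for illustration only):

```python
# Split-ballot order experiment (hypothetical data): respondents are randomly
# assigned to answer the favorability question before (treatment) or after
# (control) the horse-race question.
from statistics import mean

def order_effect(fav_first, fav_after):
    """Difference in mean favorability between the two question orders."""
    return mean(fav_first) - mean(fav_after)

# Invented 0-10 favorability ratings for the leading candidate.
treatment_scores = [7, 8, 6, 9, 7]  # favorability asked first
control_scores = [6, 7, 5, 7, 6]    # favorability asked after horse race

print(round(order_effect(treatment_scores, control_scores), 2))  # → 1.2
```

A positive difference is consistent with the hypothesized order effect, but the exercise's question about alternative explanations still applies: the comparison is only informative if assignment to order conditions was truly random.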


11. Questionnaire Critique. Obtain a real survey instrument from a publicly available source (ANES, Pew, Gallup's public questionnaires, or a campaign poll disclosed in a news report). Select five consecutive questions and conduct a thorough methodological critique using the questionnaire design checklist from the chapter. What problems do you identify? What improvements would you recommend?


12. The Campaign vs. Academic Tension. Nadia Osei (Garza's analytics director) wants the Meridian poll to use Garza's exact message language to test how voters respond to her ads. Vivian Park objects, arguing this would compromise measurement validity. Write a 300-word memo that proposes a compromise approach — one that serves both the campaign's practical need for message testing and the methodological standard of valid measurement. Be specific about which questions would use which approach.


13. The Cognitive Interview Report. After conducting cognitive interviews on a draft questionnaire about tax policy, you discover the following:

  • Six of ten respondents interpreted "income taxes" as including state income taxes, not just federal
  • Four of ten respondents didn't know what "capital gains" were and guessed the term meant corporate profits
  • Three of ten respondents gave the same answer to two different questions that were intended to measure different constructs (general tax burden vs. personal tax burden)

Write a revision memo that addresses each of these findings with specific question rewording recommendations.


Tier 3: Advanced

14. Survey as Political Technology. Frank Luntz's advice to use "death tax" rather than "estate tax" was explicitly intended to shift policy opinion, not merely to measure it. Using the chapter's frameworks, analyze the following claim: "There is no meaningful distinction between a survey that tests messaging and a survey that shapes opinion, because any survey that makes certain considerations accessible is also priming those considerations." Do you agree? Where do you draw the line between legitimate measurement and manipulation?


15. Full Questionnaire Design. Design a complete 15-question survey instrument for the following scenario:

A statewide advocacy organization wants to assess public support for a ballot initiative that would require employers to provide 12 weeks of paid family leave, funded through a small payroll tax.

Your questionnaire should:

  • Open with appropriate warm-up questions
  • Measure baseline attitudes toward employer regulation and family policy
  • Measure awareness of the specific proposal
  • Test the proposal with balanced, neutral wording
  • Test the proposal after providing arguments for and against (message testing)
  • Measure demographic characteristics
  • Include screening criteria and branching logic where appropriate

Include the full question wording, response options, and brief justification notes for key design decisions.


16. The Anchoring Effect. Survey research has documented an "anchoring" effect: when respondents are given a numeric anchor before answering a quantitative question, they adjust from that anchor. For example, if asked "Is the unemployment rate above or below 30%?" before estimating unemployment, respondents give higher estimates than if asked "above or below 5%?" Design a study that tests whether anchoring affects political opinions — not just factual estimates — and discuss the implications if it does.


17. Cross-National Equivalence. Meridian has been asked to design a poll that will be conducted simultaneously in the United States, Mexico, and Colombia, comparing attitudes toward immigration. Identify five specific survey design challenges that arise from the need for cross-national equivalence, and propose specific solutions for each. Your answer should address language, conceptual equivalence, scale interpretation, social desirability across cultural contexts, and sampling frame comparability.


18. Advanced: The Questionnaire as Data. Some researchers argue that questionnaire response patterns — not just individual answers, but patterns like item nonresponse, scale use, response time, and straight-lining — contain valuable information about respondent engagement and data quality. Design a data quality protocol for a 20-minute online survey that uses response behavior (not just content) to flag low-quality responses. What behaviors would you flag? How would you handle flagged responses in your analysis?
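As a concrete starting point, behavioral flags such as straight-lining and speeding can be computed directly from response paradata. A minimal sketch; the all-identical rule and the 300-second threshold are illustrative assumptions, not field standards:

```python
# Illustrative response-behavior flags for one online-survey respondent.
# Thresholds here are assumptions chosen for the example, not conventions.

def quality_flags(grid_answers, duration_seconds, min_seconds=300):
    """Return a list of data-quality flags based on response behavior."""
    flags = []
    if len(set(grid_answers)) == 1:      # identical answer to every grid item
        flags.append("straight-lining")
    if duration_seconds < min_seconds:   # completed implausibly fast
        flags.append("speeding")
    return flags

print(quality_flags([3, 3, 3, 3, 3], 180))  # flagged on both criteria
print(quality_flags([3, 2, 4, 3, 1], 900))  # no flags
```

In practice the thresholds would be calibrated to the instrument's length and to pretest data; deciding what to do with flagged cases is the substance of the exercise.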


19. Replication and Comparability. You have been asked to replicate a poll conducted in 2004 on attitudes toward military intervention abroad. You have the original questionnaire, but several of the question wordings use now-dated references (specific countries, specific political figures). You must update the questionnaire while maintaining comparability to the 2004 results. Write a 400-word methodology note explaining how you would approach this task and what caveats you would include when comparing 2024 results to 2004 results.


20. The Questionnaire as Narrative. Some survey methodologists argue that questionnaires should be designed to create a coherent narrative flow that helps respondents enter a reflective mindset about the topic, rather than jumping abruptly between unrelated items. Others argue this "coherence" creates order effects that bias later responses. Write a 500-word essay taking a position on this tension. Under what conditions is narrative coherence in questionnaire design a feature rather than a bug?