Chapter 7 Quiz: Survey Design: From Questions to Questionnaires
Multiple Choice (10 questions)
1. The classic finding that support for "assistance to the poor" runs 10-20 percentage points higher than support for "welfare" is best explained by:
a) Sampling bias — different populations are being reached by the two versions
b) Acquiescence bias — respondents agree more readily with the second version
c) Question wording effects — the two framings activate different accessible considerations
d) Social desirability bias — "welfare" is more politically stigmatized so fewer respondents admit opposition
Answer: c. This is a canonical demonstration of question wording effects. The two phrases describe the same policy but activate different sets of considerations — "welfare" activates associations with dependency, whereas "assistance to the poor" activates associations with charitable obligation.
2. "Do you support investing in public schools and reducing class sizes?" is an example of a:
a) Leading question
b) Loaded question
c) Double-barreled question
d) Vague question
Answer: c. The question asks about two distinct policy elements — overall school investment and class size reduction specifically — making it impossible to interpret a respondent's agreement or disagreement as applying to either element in particular.
3. The primary advantage of feeling thermometers over standard Likert scales for measuring political attitudes is:
a) They eliminate social desirability bias by using an indirect measure
b) They are easier to administer by telephone because they have fewer response options
c) They capture fine-grained variation in affect over a 0-100 range, with an intuitive warm/cold metaphor
d) They are validated against behavioral outcomes such as donation and volunteering
Answer: c. Feeling thermometers sacrifice the verbal labels of Likert scales in exchange for greater numeric resolution. They are particularly useful for measuring the degree of positive or negative affect toward political figures and groups.
4. A survey asks respondents about their views on healthcare reform before asking about their candidate preference. This introduces what type of potential problem?
a) Sampling frame bias
b) Social desirability bias
c) Question order effects — earlier questions may prime considerations that influence later responses
d) Acquiescence bias in the earlier question set
Answer: c. Question order effects arise when earlier questions make certain considerations more accessible in ways that influence responses to later questions. Asking healthcare questions before candidate preference may advantage the candidate whose healthcare position is more popular.
5. The list experiment (item count technique) is designed to reduce:
a) Nonresponse bias in telephone surveys
b) Social desirability bias by allowing respondents to endorse sensitive items without individually identifying themselves
c) Order effects in questionnaires with many items
d) Acquiescence bias in agree/disagree questions
Answer: b. The list experiment gives respondents plausible deniability by asking how many items they agree with (not which ones). This allows estimation of sensitive attitude prevalence without respondents having to individually disclose stigmatized views.
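The estimation logic behind the list experiment can be made concrete with a short simulation. This is a minimal sketch under invented assumptions (the true prevalence, the baseline endorsement rate, and the sample sizes are all hypothetical): the difference between the treatment group's mean item count and the control group's mean item count recovers the share of respondents holding the sensitive view, without any individual respondent disclosing it.

```python
import random
import statistics

# Hypothetical list-experiment simulation. Control respondents see 4
# innocuous items; treatment respondents see the same 4 plus one
# sensitive item. Each reports only HOW MANY items apply to them.
random.seed(0)

TRUE_PREVALENCE = 0.30   # assumed share holding the sensitive view
BASELINE_RATE = 0.50     # assumed chance of endorsing each innocuous item

def simulate_count(treated: bool) -> int:
    # Count of innocuous items endorsed by this respondent.
    count = sum(random.random() < BASELINE_RATE for _ in range(4))
    # Treated respondents holding the sensitive view add one to the count.
    if treated and random.random() < TRUE_PREVALENCE:
        count += 1
    return count

control = [simulate_count(False) for _ in range(5000)]
treatment = [simulate_count(True) for _ in range(5000)]

# Difference in mean counts estimates the prevalence of the sensitive item.
estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated prevalence of sensitive item: {estimate:.2f}")
```

With samples this large, the difference-in-means estimate lands close to the assumed 30% prevalence, illustrating why the technique requires group-level comparison rather than individual-level inference.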
6. Which of the following is a "primacy effect" in survey response?
a) Respondents tend to choose the first response option they see on visual (web/paper) surveys
b) Respondents tend to choose the last response option they heard on telephone surveys
c) Respondents' first answers to a questionnaire are more valid than their later answers
d) The first candidate listed in a horse-race question receives a disproportionate share of support
Answer: a. Primacy effects refer to the tendency to disproportionately select earlier options in a visually presented list. The corresponding telephone phenomenon — selecting the last option heard — is the recency effect.
7. "Branching" or "skip logic" in a questionnaire means:
a) Presenting respondents with multiple related questions in rapid succession to reduce fatigue
b) Routing respondents to different subsequent questions based on their answers to earlier ones
c) Alternating between easy and difficult questions to maintain respondent engagement
d) Separating double-barreled questions into their component parts
Answer: b. Branching logic directs respondents to different subsequent questions based on their responses, allowing more efficient questionnaires that only ask relevant follow-ups. For example, follow-up favorability questions about candidates only reach respondents who have heard of the candidates.
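The routing described above can be sketched as a small graph of questions, where each answer determines the next question. This is an illustrative toy, not a real survey platform's API; the question IDs and wording are hypothetical (the Garza favorability follow-up echoes the chapter's example).

```python
# Minimal sketch of branching (skip) logic with hypothetical question IDs.
# Each question's "route" function maps an answer to the next question ID,
# or to None to end the questionnaire.
QUESTIONS = {
    "heard_of_garza": {
        "text": "Have you heard of candidate Garza?",
        "route": lambda ans: "garza_favorability" if ans == "yes" else "party_id",
    },
    "garza_favorability": {
        "text": "Is your opinion of Garza favorable or unfavorable?",
        "route": lambda ans: "party_id",
    },
    "party_id": {
        "text": "Do you think of yourself as a Democrat, Republican, or independent?",
        "route": lambda ans: None,  # end of questionnaire
    },
}

def administer(answers: dict) -> list:
    """Walk the routing graph given canned answers; return the IDs asked."""
    asked = []
    qid = "heard_of_garza"
    while qid is not None:
        asked.append(qid)
        qid = QUESTIONS[qid]["route"](answers.get(qid))
    return asked

# Respondents who have not heard of the candidate skip the favorability item.
print(administer({"heard_of_garza": "no"}))
print(administer({"heard_of_garza": "yes", "garza_favorability": "favorable"}))
```

The design choice worth noting: because routing lives with each question rather than in one central script, adding a new follow-up only touches the questions adjacent to it.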
8. Acquiescence bias in a survey would most seriously distort results in which of the following questionnaire designs?
a) A forced-choice horse-race question between two named candidates
b) A feeling thermometer series rating five political figures from 0 to 100
c) A series of agree/disagree statements all phrased so that agreement indicates a conservative position
d) A demographic question battery at the end of the questionnaire
Answer: c. Acquiescence bias — the tendency to agree with any proposition — most severely distorts questionnaires where all agree/disagree items are directionally aligned. A questionnaire where "agree" always means "conservative" will overstate conservatism due to acquiescence.
9. When the Meridian team debates whether to use Garza's exact campaign message language versus neutral academic phrasing, the core tension is between:
a) Sampling validity and measurement validity
b) Message testing (practical relevance) and construct validity (accurate measurement of underlying opinion)
c) Telephone and online survey modes
d) Question order effects and social desirability bias
Answer: b. The tension is between using language that tests how voters respond to actual campaign communications (message testing) versus using neutral language that provides a less contaminated measure of underlying opinion. The chapter's compromise is to use neutral language for baseline measurement and campaign language for explicit message testing, clearly labeled.
10. Which of the following best describes the purpose of cognitive interviewing in survey design?
a) Assessing whether respondents are cognitively capable of understanding the survey
b) Using think-aloud protocols and verbal probing to identify how respondents interpret questions and map responses to scales
c) Comparing results across different educational and literacy levels
d) Testing whether the questionnaire can be administered by interviewers with varying levels of training
Answer: b. Cognitive interviews ask respondents to think aloud while answering, revealing how they interpret questions, what information they draw on, and how they map their answer onto the response scale. The goal is to identify interpretation problems before the survey is fielded broadly.
Short Answer (5 questions)
11. Define "question wording effects" and give one example from political polling that illustrates their practical importance.
Model answer: Question wording effects occur when different phrasings of what is nominally the same question produce systematically different distributions of responses, because different words activate different accessible considerations in respondents' minds. A classic example: surveys asking about support for "affirmative action" versus "preferential treatment for minorities" show roughly 20-30 percentage point gaps in expressed support, even though both phrases are meant to describe the same category of policy. The phrase "affirmative action" has become relatively neutral; "preferential treatment" activates associations with unfairness. This gap means that pollsters who choose one phrase over the other are not measuring the same construct, and results from different polls using different phrasing cannot be meaningfully compared without accounting for this source of variance.
12. Explain the difference between primacy effects and recency effects in survey response order. Under what conditions does each occur?
Model answer: Primacy and recency effects both describe the tendency of response option position to influence selection, but they operate through different cognitive mechanisms. Primacy effects — disproportionate selection of the first option presented — tend to occur in visually administered surveys (web, paper) where respondents scan a list and make an early choice without fully reading all options. The first option catches attention and may be selected before alternatives are fully considered. Recency effects — disproportionate selection of the last option heard — tend to occur in aurally administered surveys (telephone) where the last item heard is most recently in working memory when the respondent makes a choice. The standard remedy for both is to randomize response option order across respondents when the options have no inherent ordering, so that any order advantage is distributed randomly rather than systematically favoring a particular response.
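The randomization remedy described in the model answer amounts to giving each respondent an independently shuffled option list. A minimal sketch, with hypothetical option labels:

```python
import random

# Unordered response options for a hypothetical "most important issue" item.
OPTIONS = ["Economy", "Healthcare", "Immigration", "Education"]

def presented_order(rng: random.Random) -> list:
    """Return an independently shuffled copy of the options for one respondent."""
    order = OPTIONS[:]   # copy so the canonical list is untouched
    rng.shuffle(order)
    return order

# Each respondent sees a different order, so any primacy or recency
# advantage is spread evenly across options rather than systematically
# favoring one response.
rng = random.Random(42)
for respondent in range(3):
    print(presented_order(rng))
```

Note the caveat from the model answer: this applies only when options have no inherent ordering; an ordered scale (e.g., strongly agree to strongly disagree) should not be shuffled.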
13. What is the difference between a leading question and a loaded question? Give an original example of each.
Model answer: A leading question signals to the respondent what the "correct" or expected answer is, creating social pressure toward agreement. Example: "Don't you think it's time for Congress to address the healthcare crisis?" — the framing "don't you think" signals that the expected response is agreement. A loaded question embeds an assumption that respondents may not share, forcing them to accept the assumption if they answer the question as posed. Example: "How much has the government's failed immigration policy cost American taxpayers?" — this assumes the policy has failed and that there is a measurable cost, neither of which respondents necessarily accept. Leading questions manipulate through social expectation; loaded questions manipulate through embedded premise.
14. Why is it a best practice to place demographic questions at the end of a questionnaire rather than at the beginning?
Model answer: Placing demographic questions at the end serves two functions. First, it protects data quality by establishing a cooperative relationship before asking personal questions — respondents who have been engaged for several minutes are more likely to complete demographic items than those who encounter them as the first questions, before any engagement is established. Second, demographic questions (particularly income, race, and education) can prime identity-based considerations that influence subsequent opinion questions. If a respondent answers questions about race and income before answering questions about immigration policy, racial and class considerations may be made more accessible than they would otherwise be, distorting the policy responses. By placing demographics at the end, you capture all substantive opinion data before these identity primes are introduced.
15. What is a "push poll" and why do professional polling organizations consider it an ethical violation?
Model answer: A push poll is a political operation that disguises itself as a survey while actually functioning as a negative advertising vehicle. The calls typically ask respondents whether they would be more or less likely to vote for a candidate "if you knew that [damaging claim]" — delivering negative opposition research messages to large numbers of voters under the guise of conducting research. Push polls are considered an ethical violation by professional polling organizations (including AAPOR) for several reasons: they deceive respondents about the purpose of the call; they do not measure opinion but rather try to change it; they inflate reported "sample sizes" because they attempt to reach as many voters as possible rather than a representative sample; and they poison public understanding of legitimate survey research by associating the survey format with political manipulation. Legitimate polls can test the impact of negative information on candidate support, but only with small sample sizes, with disclosed sponsorship, and with balanced treatment of the candidates.