Chapter 21 Quiz: Research, Synthesis, and Information Gathering

Test your understanding of AI-assisted research workflows, trust calibration for research tools, and citation verification requirements. After answering each question, check the answer and explanation that follow it.


Question 1

Which of the following is described as a "genuine research value" that AI adds?

A) Producing accurate citations for academic papers
B) Replacing primary source reading with AI summaries
C) Landscape mapping — quickly orienting you to a new domain's major questions and thinkers
D) Providing current, real-time information on fast-moving topics

Answer: **C) Landscape mapping — quickly orienting you to a new domain's major questions and thinkers**

AI's landscape mapping capability — generating a quick orientation to a domain's major questions, significant thinkers, dominant frameworks, and ongoing debates — is one of its genuine research strengths. It compresses the orientation phase of research significantly. The other options describe areas where AI is unreliable: citation generation has high hallucination rates, primary source replacement degrades understanding, and AI's knowledge cutoff means it cannot reliably provide current information on fast-moving topics.

Question 2

What is the correct rule regarding AI-generated citations?

A) AI citations from reputable models are generally reliable and can be used without verification
B) Verify AI citations only when using them for academic publications; professional documents have lower standards
C) Verify AI citations when the topic is specialized, but general knowledge citations can be trusted
D) Never use an AI-generated citation until it has been verified in an authoritative database

Answer: **D) Never use an AI-generated citation until it has been verified in an authoritative database**

The chapter states this rule as absolute: never use an AI-generated citation until verified. AI citation hallucination rates are documented across studies at 15-40% depending on model and domain. The hallucinated citations look real — plausible authors, realistic journal names, appropriate years — and the only way to distinguish a fabricated citation from a real one is to search for it in a database like Google Scholar, Semantic Scholar, or PubMed. No qualifications about document type or model reliability change this fundamental requirement.
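The database lookup can be partly automated. As a minimal sketch, assuming network access and the public Semantic Scholar Graph API (the function names here are illustrative, not from the chapter), a script can check whether a citation's title corresponds to a real indexed paper:

```python
# Sketch only: check an AI-generated citation title against the
# Semantic Scholar Graph API. Assumes network access is available.
import json
import re
import urllib.parse
import urllib.request

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles compare equal."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def citation_exists(title: str) -> bool:
    """Return True if a paper with a matching title is found in Semantic Scholar."""
    query = urllib.parse.urlencode({"query": title, "fields": "title", "limit": 5})
    url = f"https://api.semanticscholar.org/graph/v1/paper/search?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        results = json.load(resp)
    return any(normalize(p["title"]) == normalize(title)
               for p in results.get("data", []))
```

A title match is only the first pass: a fabricated citation can borrow a real paper's title, so authors, venue, and year still need to be confirmed against the database record.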

Question 3

Which phase of the research workflow described in the chapter requires the most human effort with the least AI involvement?

A) Orientation
B) Source reading
C) Synthesis
D) Gap analysis

Answer: **B) Source reading**

The chapter is explicit: "The reading is not optional and cannot be delegated to AI." Source reading — actually reading primary sources and taking notes in your own words — is the phase where AI has the most limited and carefully bounded role. AI can help clarify passages you find confusing and generate discussion questions to improve active reading. It cannot summarize papers as a substitute for reading them. Every other phase has higher AI involvement; source reading is the irreducibly human core of research.

Question 4

What makes the "teach me" research technique different from asking AI to "research a topic for you"?

A) "Teach me" produces more comprehensive results; "research for me" produces shorter outputs
B) "Teach me" uses a different AI model than "research for me"
C) "Teach me" involves active engagement, follow-up questions, and naturally surfaces AI's limits; "research for me" produces passive synthesis you receive without engagement
D) They are equivalent — both result in the same quality of orientation

Answer: **C) "Teach me" involves active engagement, follow-up questions, and naturally surfaces AI's limits; "research for me" produces passive synthesis you receive without engagement**

The distinction is about the mode of engagement, not the output length. When you ask AI to research a topic, you receive a synthesized output passively. When you ask AI to teach you, you engage in structured dialogue — asking for elaboration, testing your understanding with application questions, probing the boundaries and uncertainties. This active engagement produces better learning outcomes and naturally reveals where AI's knowledge becomes uncertain or contradictory, which the passive mode does not.

Question 5

What is the critical qualifier for using AI to synthesize material across multiple sources?

A) You must have read all sources yourself before submitting your notes for synthesis
B) You must use at least five sources for AI synthesis to be reliable
C) The sources must be from the same field or discipline
D) You must use a specialized research tool like Elicit rather than a general AI model

Answer: **A) You must have read all sources yourself before submitting your notes for synthesis**

The chapter states this explicitly: you must have read the sources yourself. The synthesis prompt uses "your own notes" — not AI summaries of sources, and not the original source texts submitted for AI to summarize. When you ask AI to synthesize sources it described to you without you having read them, you are layering AI-generated content on top of AI-generated content. Errors compound and confidence grows without justification. The value of AI synthesis comes when it works with material you understand and can verify.

Question 6

What distinguishes Elicit from asking a general AI model for research citations?

A) Elicit uses more advanced language models than general AI
B) Elicit searches real academic databases and returns verifiable papers; general AI models generate citations from pattern completion that may be fabricated
C) Elicit is more expensive and therefore more accurate
D) Elicit only works for medical research; general AI models work across all domains

Answer: **B) Elicit searches real academic databases and returns verifiable papers; general AI models generate citations from pattern completion that may be fabricated**

Elicit is specifically designed for literature review and searches real academic databases (primarily Semantic Scholar) to return actual papers with AI-generated summaries of their findings. It is a search tool, not a generative AI. General AI models produce citations through language pattern completion — generating plausible-looking citation strings that may not correspond to real papers. Elicit is significantly more reliable for source discovery precisely because it is not generating citations; it is retrieving them from databases.

Question 7

Why is AI's knowledge cutoff particularly risky for research in fast-moving domains?

A) AI models become slower as their training data ages
B) Fast-moving domains have more specialized vocabulary that AI cannot process
C) AI may confidently describe the regulatory environment, market conditions, or best practices as of its training cutoff without flagging that the information is potentially outdated
D) AI's citations for fast-moving domains are less accurate than for stable domains

Answer: **C) AI may confidently describe the regulatory environment, market conditions, or best practices as of its training cutoff without flagging that the information is potentially outdated**

The knowledge cutoff risk is non-obvious: AI does not always signal when information may be outdated. In fast-moving domains — technology, policy, markets, medicine — significant changes can occur within months, and AI will describe the pre-change state with the same confident tone as post-change information. Perplexity, which pulls from current web sources and provides citations, is recommended for topics where currency matters.

Question 8

In the six-phase research workflow, at which phase does AI have the "highest value" role?

A) Phase 1 (Orientation)
B) Phase 3 (Source reading)
C) Phase 4 (Synthesis)
D) Phase 2 (Source finding)

Answer: **C) Phase 4 (Synthesis)**

The chapter labels the synthesis phase as offering "high value" AI assistance — specifically for helping identify patterns, themes, tensions, and conclusions across sources you have already read and taken notes on. This is where the critical qualifier applies: AI synthesizes your own notes from Phase 3, not AI-generated summaries. At Phase 4, AI's ability to rapidly traverse material and identify connections adds genuine value. Phase 1 (Orientation) also has significant AI value. Phase 3 (Source reading) has the most limited AI role.

Question 9

What does "steelmanning" mean in the context of research stress-testing?

A) Verifying that research conclusions are supported by steel-quality (very strong) evidence
B) Presenting the strongest possible version of the opposing argument, as a proponent would make it
C) Identifying the weakest links in your own research chain
D) Using AI to generate additional supporting evidence for your conclusion

Answer: **B) Presenting the strongest possible version of the opposing argument, as a proponent would make it**

Steelmanning is the opposite of strawmanning. Where a strawman presents the weakest or most easily refuted version of an opposing argument, a steelman presents the opposing argument at its most rigorous and compelling. In research stress-testing, prompting AI to steelman opposing views helps ensure you are genuinely engaging with the best version of the counterargument rather than a weakened version that is easy to dismiss. This produces stronger, more credible research conclusions.

Question 10

What does the research evidence suggest about AI-assisted systematic review screening?

A) AI systematic reviews are less accurate than manual reviews and should not be used
B) AI systematic review screening has reduced time-to-completion by 30-65% in multiple studies, with acceleration concentrated in abstract screening
C) AI performs well for full-text screening but poorly for abstract screening
D) The evidence is insufficient to draw any conclusions about AI's role in systematic reviews

Answer: **B) AI systematic review screening has reduced time-to-completion by 30-65% in multiple studies, with acceleration concentrated in abstract screening**

The chapter cites a 2023 study in Research Synthesis Methods finding 30-65% time reduction across multiple medical and public health reviews. The acceleration is concentrated in abstract screening — a tedious, high-volume task where AI performs reliably when given clear inclusion/exclusion criteria. This is one of the strongest evidence-based use cases for AI in research workflows: it is a well-defined classification task with clear criteria, which is where AI performs most reliably.
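Explicit criteria are what turn screening into a well-defined classification task. A hypothetical sketch of how criteria might be packaged into a screening prompt (the criteria, wording, and function names below are illustrative, not taken from the chapter or any cited study):

```python
# Illustrative only: bake fixed inclusion/exclusion criteria into every
# screening prompt, so each abstract is judged against the same rules.
INCLUSION = [
    "randomized controlled trial",
    "adult human participants",
]
EXCLUSION = [
    "animal or in-vitro study",
    "case report or editorial",
]

def build_screening_prompt(abstract: str) -> str:
    """Return a prompt asking for a strict INCLUDE/EXCLUDE/UNSURE decision."""
    criteria = "\n".join(
        [f"- INCLUDE if: {c}" for c in INCLUSION]
        + [f"- EXCLUDE if: {c}" for c in EXCLUSION]
    )
    return (
        "Screen this abstract for a systematic review.\n"
        f"Criteria:\n{criteria}\n\n"
        f"Abstract:\n{abstract}\n\n"
        "Answer with exactly one word: INCLUDE, EXCLUDE, or UNSURE.\n"
        "Answer UNSURE if the abstract does not give enough information to decide."
    )
```

Routing UNSURE items to a human reviewer keeps the time savings in the high-volume clear-cut cases while preserving human judgment at the margins.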

Question 11

What critical instruction should always be included when asking AI to synthesize material from a defined set of sources?

A) "Summarize each source separately before synthesizing"
B) "Work only from the material I have provided; do not add claims from outside this material"
C) "Provide citations for every claim in the synthesis"
D) "Use no more than 500 words in your synthesis"

Answer: **B) "Work only from the material I have provided; do not add claims from outside this material"**

Without this constraint, AI will supplement your provided material with knowledge from its training data, which you cannot distinguish from your verified sources. The synthesis then contains a mixture of material from your research and AI-generated material from its general knowledge — and you have no way to tell which is which. The instruction "work only from the material I have provided" constrains AI to the boundaries of your verified research, making the synthesis result verifiable and attributable.
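One way to make the constraint hard to forget is to bake it into a reusable prompt template. A minimal sketch, with wording and names of my own rather than the chapter's exact prompt:

```python
# Illustrative sketch: a synthesis prompt that pins the model to your notes.
BOUNDARY = (
    "Work only from the material I have provided; "
    "do not add claims from outside this material."
)

def build_synthesis_prompt(notes: list[str]) -> str:
    """Combine your own reading notes with the boundary instruction."""
    body = "\n\n---\n\n".join(notes)  # separate notes so sources stay distinct
    return (
        f"{BOUNDARY}\n\n"
        "Identify the patterns, themes, and tensions across these notes:\n\n"
        f"{body}"
    )
```

Leading with the boundary instruction, rather than appending it after a long block of notes, makes it less likely to be diluted by the surrounding material.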

Question 12

What does Raj's scenario illustrate about the relationship between domain expertise and AI research assistance?

A) Domain experts should not use AI for research because they already know enough
B) Domain expertise creates a trap: experts may be more confident in AI descriptions that confirm their existing beliefs and less likely to verify claims that align with their priors
C) Domain expertise makes AI research assistance unnecessary; AI is only useful for novices
D) Domain experts always catch AI errors more reliably than novices

Answer: **B) Domain expertise creates a trap: experts may be more confident in AI descriptions that confirm their existing beliefs and less likely to verify claims that align with their priors**

Raj's scenario explicitly calls out this trap: "His technical depth is an asset, but it also creates a trap: he may be more confident in AI descriptions that align with his existing beliefs, and less likely to verify claims that confirm his priors." Confirmation bias interacts with AI research assistance in a specific way for experts — they are likely to catch errors that conflict with their knowledge, but may accept errors that confirm it. This is why structured verification is important even for domain experts.

Question 13

What distinguishes NotebookLM from a general AI model for research purposes?

A) NotebookLM is faster than general AI models
B) NotebookLM can only analyze documents shorter than 10,000 words
C) NotebookLM works from documents you upload, reducing the risk of AI adding material from outside your specified sources
D) NotebookLM does not require internet access

Answer: **C) NotebookLM works from documents you upload, reducing the risk of AI adding material from outside your specified sources**

NotebookLM's defining characteristic is that it works within the boundaries of documents you upload, not from its general training knowledge. When you ask NotebookLM about your uploaded documents, it draws from those documents — reducing (though not eliminating) the risk of AI supplementing your specified research with outside material. This makes it useful for ensuring that synthesis remains grounded in the specific source set you have curated.

Question 14

The chapter describes a finding that expert researchers and novice researchers use AI differently. What does the expert pattern look like, and why does it produce better outcomes?

A) Experts use more sophisticated prompts; novices use simple prompts — better prompts produce better outcomes
B) Experts use AI primarily for writing assistance and orientation in new areas; novices use AI for substance (summarizing papers, generating literature reviews). The expert pattern maintains the human research layer that produces genuine understanding
C) Experts use AI more frequently than novices; higher frequency of use produces better outcomes
D) Experts verify AI outputs more often than novices; verification is the only relevant variable

Answer: **B) Experts use AI primarily for writing assistance and orientation in new areas; novices use AI for substance (summarizing papers, generating literature reviews). The expert pattern maintains the human research layer that produces genuine understanding**

The chapter cites a 2024 survey finding that expert researchers primarily use AI for writing assistance and domain orientation, while novice researchers are more likely to use AI for substance — summarizing papers they have not read, generating literature reviews from scratch. The expert pattern preserves the irreducibly human reading layer that produces genuine domain understanding and the expertise to catch AI errors. The novice pattern skips this layer, producing faster but less reliable research with compounding error risk.

Question 15

Elena's domain sprint scenario demonstrates a specific sequencing principle. What is it?

A) Expert interviews should always come before any AI-assisted research
B) AI orientation should come before reading; reading should come before expert interviews; AI should not be used after expert interviews
C) AI conversations scaffold the reading (helping you read efficiently and specifically) rather than replacing it; expert interviews fill gaps that neither AI nor reading can fill
D) The order of research phases does not matter as long as verification is conducted at the end

Answer: **C) AI conversations scaffold the reading (helping you read efficiently and specifically) rather than replacing it; expert interviews fill gaps that neither AI nor reading can fill**

Elena's approach is explicitly described: "AI orientation helps her read efficiently — instead of reading broadly, she reads specifically, targeting the areas where her AI orientation flagged genuine complexity or uncertainty." AI conversations are preparation for reading, not a substitute for it. Expert interviews come after both AI orientation and reading, filling gaps in documented knowledge that only direct practitioner experience can address. The sequencing is: AI orientation → targeted reading → expert interviews for what reading cannot answer.