Chapter 21 Exercises: Research, Synthesis, and Information Gathering

These exercises are designed to build both the skills and the habits of rigorous AI-assisted research. Several exercises deliberately expose you to AI's failure modes so that you develop calibrated trust through direct experience.


Part A: Orientation and Landscape Mapping

Exercise 1: The Domain Orientation Drill

Choose a professional domain that is adjacent to your current expertise — related enough to be relevant, unfamiliar enough to require genuine research. Submit this prompt to an AI model:

"I am beginning research on [domain]. I have professional background in [your field] but limited knowledge of this specific area. Provide: (1) the three to five most important questions currently being debated, (2) the dominant theoretical frameworks or schools of thought, (3) the five to ten most significant researchers or practitioners, (4) the major publications or journals, and (5) the most important unresolved disagreements."

After receiving the response, check three specific claims from the landscape map against independent sources (Wikipedia for basic orientation, Google Scholar for academic claims). How accurate is the landscape map? Where are the errors?

Exercise 2: Landscape Map Verification Audit

Take the landscape map from Exercise 1. Identify every verifiable specific claim: named researchers, publication titles, specific theoretical frameworks. For each one:

  1. Search for independent confirmation.
  2. Mark it as: Verified | Approximately Correct | Incorrect | Cannot Verify.

Summarize your audit: what percentage of specific claims could you verify? What percentage were incorrect or unverifiable? Reflect on what this means for using AI landscape maps as a research starting point.
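The percentage tally in the audit summary can be automated with a few lines of Python. The claims and statuses below are purely illustrative placeholders; substitute your own audit results.

```python
from collections import Counter

STATUSES = ("Verified", "Approximately Correct", "Incorrect", "Cannot Verify")

# Hypothetical audit results: (claim, status) pairs from your landscape map.
audit = [
    ("Researcher X leads the field", "Verified"),
    ("Journal Y is the flagship publication", "Approximately Correct"),
    ("Framework Z dates from 1998", "Incorrect"),
    ("Debate over method Q is ongoing", "Cannot Verify"),
]

def summarize(results):
    """Return the share of claims falling into each audit status."""
    counts = Counter(status for _, status in results)
    total = len(results)
    return {status: counts.get(status, 0) / total for status in STATUSES}

for status, share in summarize(audit).items():
    print(f"{status}: {share:.0%}")
```

Running this over your real audit gives the verified and incorrect percentages the exercise asks you to report.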

Exercise 3: The "Teach Me" Technique

Run a "teach me" session on a topic relevant to your work. Structure it as described in Section 5: 1. Opening orientation prompt. 2. Three follow-up depth prompts on specific concepts. 3. Two application-testing prompts. 4. Two boundary-probing prompts on controversy and uncertainty. 5. A closing prompt asking which claims are most important to verify.

After the session, write a one-page summary of what you learned. Compare it to a summary you could have written from your prior knowledge. What did the AI session add? What would you need to verify before relying on this knowledge professionally?


Part B: Source Finding and Verification

Exercise 4: Citation Verification Challenge

Ask a general AI model to provide five citations for research on a topic of your choice. For each citation:

  1. Search for the paper in Google Scholar.
  2. Search for it in Semantic Scholar.
  3. If found, verify that the paper actually makes the claim AI attributed to it.

Document your results: how many of the five citations existed? Of those that existed, how many accurately represented the paper's actual content? Reflect: does this exercise change your approach to AI-generated citations?
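Step 2 of the verification loop can be scripted against Semantic Scholar's public Graph API search endpoint. This is a minimal sketch: the endpoint and field names follow the public API documentation, while the loose title matcher is an illustrative heuristic of my own, not part of the exercise.

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def titles_match(claimed, found):
    """Loose comparison: lowercase, drop punctuation, compare word lists."""
    norm = lambda s: "".join(c for c in s.lower() if c.isalnum() or c.isspace()).split()
    return norm(claimed) == norm(found)

def search_papers(title, limit=5):
    """Query Semantic Scholar for papers whose titles resemble an AI-supplied citation."""
    params = urllib.parse.urlencode({"query": title, "fields": "title,year", "limit": limit})
    with urllib.request.urlopen(f"{SEARCH_URL}?{params}", timeout=10) as resp:
        return json.load(resp).get("data", [])

def citation_exists(title):
    """True if any search hit's title matches the claimed title (makes a live API call)."""
    return any(titles_match(title, p.get("title", "")) for p in search_papers(title))
```

Calling `citation_exists("...")` on each of the five AI-provided titles performs a live query; a True result only confirms the paper exists, so step 3 (does it actually make the attributed claim?) still requires reading it.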

Exercise 5: Elicit vs. General AI Comparison

Choose a research question you genuinely want to answer. Submit it to both a general AI model (asking for relevant papers) and to Elicit (elicit.org). Compare the results:

  - How many papers from each source actually exist and are findable in academic databases?
  - Which papers are most relevant to your question?
  - What does each source miss?

Document the comparison. When would you use each tool?

Exercise 6: Source Discovery with Semantic Scholar

Use Semantic Scholar to conduct a targeted literature search on a question relevant to your work. Find the three papers you believe are most relevant to your question. For each paper:

  - Read the abstract.
  - Note the paper's main finding and its limitations.
  - Identify two or three other papers it cites that might be worth reading.

This exercise builds the "snowball" literature review technique — discovering new sources through the citations of sources you have already found.
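The snowball step can also be done programmatically via Semantic Scholar's references endpoint. The endpoint and response shape follow the public Graph API documentation; ranking cited works by citation count is my own crude "worth reading next" heuristic, offered only as a starting point.

```python
import json
import urllib.request

API = "https://api.semanticscholar.org/graph/v1/paper"

def fetch_references(paper_id, limit=100):
    """Return the works cited by a paper (makes a live API call).

    Accepts identifiers such as DOIs or e.g. 'arXiv:1706.03762'.
    """
    url = f"{API}/{paper_id}/references?fields=title,citationCount&limit={limit}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return [row["citedPaper"] for row in json.load(resp).get("data", [])]

def top_candidates(refs, n=3):
    """Rank cited works by citation count as a rough proxy for importance."""
    cited = [r for r in refs if r.get("citationCount") is not None]
    return sorted(cited, key=lambda r: r["citationCount"], reverse=True)[:n]
```

`top_candidates(fetch_references("arXiv:1706.03762"))` would surface heavily cited references of that paper; citation count is a blunt proxy, so treat the output as candidates to skim, not a reading list.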


Part C: Synthesis Techniques

Exercise 7: Your-Notes-First Synthesis

Read two articles or papers on a topic of your choice. After reading each, write your own notes, at least 200 words per source. Then submit your notes (not the original sources) to AI with this prompt:

"Based on my research notes below, identify: (1) major themes common to both sources, (2) points of tension or contradiction, (3) conclusions supported by both sources, and (4) what questions remain unanswered. Work only from the material I have provided."

Compare the AI synthesis to your own reading. Did AI identify anything you missed? Did AI misrepresent anything in your notes? Which is more reliable — your synthesis or AI's?

Exercise 8: Multi-Source Synthesis Practice

Collect notes or short summaries from three to five sources on a topic (you may use material you have gathered for previous exercises). Submit them with the multi-source synthesis prompt from Section 8 of the chapter:

"I have conducted research across the following sources. Below are my notes from each. Please identify major themes, contradictions between sources, claims appearing in only one source, and produce a 3-5 paragraph synthesis. Work only from the provided material."

Read the synthesis critically. Check whether every claim in the synthesis is attributable to a specific source in your notes. Flag any synthesis claims that appear to come from outside your provided material.

Exercise 9: Synthesis Accuracy Audit

Take any AI-generated research synthesis, from a previous exercise or one you create fresh. For each specific claim in the synthesis:

  1. Identify which source it came from (or should have come from).
  2. Verify that the source actually supports that claim.

How many claims are accurately sourced? How many are misattributed, overgeneralized, or appear to come from outside the provided sources?
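A first pass over step 1 can be mechanized with a keyword-overlap heuristic: for each synthesis claim, check which source notes contain most of its content words. This is deliberately crude, a triage tool to flag claims with no plausible source; it does not replace reading the source in step 2. All names and notes below are hypothetical.

```python
STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are", "that", "this", "it"}

def content_words(text):
    """Lowercase words with stopwords and surrounding punctuation removed."""
    words = (w.strip(".,;:()").lower() for w in text.split())
    return [w for w in words if w and w not in STOPWORDS]

def supported_by(claim, note, threshold=0.5):
    """True if the note contains at least `threshold` of the claim's content words."""
    claim_words = content_words(claim)
    if not claim_words:
        return False
    note_words = set(content_words(note))
    return sum(w in note_words for w in claim_words) / len(claim_words) >= threshold

def attribute(claims, notes):
    """Map each synthesis claim to the source notes that plausibly support it."""
    return {claim: [name for name, text in notes.items() if supported_by(claim, text)]
            for claim in claims}
```

Claims that map to an empty list are the ones to scrutinize first: they may be misattributed, overgeneralized, or imported from outside your provided material.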


Part D: Stress-Testing and Critical Evaluation

Exercise 10: Skeptic and Steelman Practice

Choose a research conclusion you have reached or that you find in published material. Submit it to AI with three different prompts:

  1. Skeptic: "What are the three strongest objections to this conclusion?"
  2. Devil's advocate: "Argue against this conclusion as forcefully as possible."
  3. Steelman: "Present the strongest possible version of the opposing view."

Compare the three responses. What does each add? Which is most useful for stress-testing research? Are there objections AI raises that you had not considered?

Exercise 11: Assumption Surfacing

Take a conclusion from your own professional work or expertise — something you believe to be well-established. Submit it to AI:

"Identify the key assumptions embedded in the following conclusion: [state conclusion]. Rank them from most critical to least critical. For each assumption, identify what evidence would be needed to support it."

Evaluate the response: are there assumptions AI identified that you genuinely had not made explicit? Are there assumptions it missed? This exercise is as much about your own reasoning as about AI's capability.

Exercise 12: Research Gap Analysis

Take a research synthesis you have completed (from previous exercises or your own work). Submit it to AI:

"Based on this research synthesis, what are: (1) the most important questions that remain unanswered, (2) the evidence that would be most useful to find, (3) the alternative conclusions that could be drawn from this evidence, and (4) the assumptions in this synthesis that are least well-supported?"

Use the gap analysis to identify whether additional research is needed before the synthesis is ready to use in a professional context.


Part E: Research Tools

Exercise 13: Perplexity for Current Information

Choose a topic where current information matters: a recent regulatory change, a recent market development, a technology trend from the last 12 months. Use Perplexity to research it. Evaluate:

  - Are the sources Perplexity cites real and accessible?
  - Are the specific claims accurate when you check the cited sources?
  - How does the currency of the information compare to what a general AI model provides on the same topic?

Exercise 14: NotebookLM for Document Analysis

Upload a set of documents to NotebookLM (three to five documents in a domain you are researching or that are relevant to a current project). Ask it:

  1. "What are the main themes across these documents?"
  2. "What do these documents say about [specific question]?"
  3. "Identify any contradictions between documents."

Evaluate the accuracy of its responses by checking them against the actual documents. Does NotebookLM accurately represent the contents of the documents you provided?

Exercise 15: Research Stack Design

Design a research stack for a project you are currently working on or one that is realistic for your professional context. Specify:

  - Which tools you will use for source discovery (and why)
  - Which tools you will use for synthesis (and why)
  - What your verification workflow will be
  - Which stages are AI-assisted and which are human-only

Document this as a workflow you could repeat for similar projects.
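One way to make the stack repeatable is to record it as structured data rather than prose, so each stage's tool, ownership, and rationale are explicit. The stages and tools below are a hypothetical example for a literature-review project, not a recommended stack.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str       # e.g. "source discovery"
    tool: str       # e.g. "Semantic Scholar"
    owner: str      # "ai-assisted" or "human-only"
    rationale: str  # why this tool at this stage

# Hypothetical stack for a literature-review project.
stack = [
    Stage("source discovery", "Semantic Scholar", "ai-assisted", "citation-linked search"),
    Stage("synthesis", "general AI model", "ai-assisted", "your-notes-first prompting"),
    Stage("verification", "primary sources", "human-only", "every claim traced to a source"),
]

def human_only_stages(stages):
    """List the stage names that must never be delegated to AI."""
    return [s.name for s in stages if s.owner == "human-only"]
```

Writing the stack down this way makes the AI-assisted versus human-only boundary auditable: `human_only_stages(stack)` should always include your verification step.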


Part F: Advanced Research Practice

Exercise 16: Domain Sprint Simulation

Choose a domain you know nothing about that is adjacent to your professional field. Give yourself four hours to conduct a structured domain sprint following the six-phase workflow from Section 3. After four hours, write a 500-word orientation document summarizing what you have learned.

Then identify: How much of this knowledge would you trust without additional verification? What would you need to read before presenting this as professional knowledge?

Exercise 17: Expert Interview Preparation

Identify an expert in a domain you are researching (real or hypothetical). Use AI to prepare for a 45-minute expert interview:

"I am preparing to interview an expert with this background: [describe]. My research question is [state question]. Given my current knowledge of the domain, what are the 10 most valuable questions I should ask? Prioritize questions that: (a) can only be answered by someone with direct experience, and (b) would help me understand aspects of the domain that are not well-documented in public sources."

Evaluate the question list: which questions would genuinely add value beyond what you could learn from reading? Which questions could be answered by additional desk research?

Exercise 18: Research Integrity Audit

Take a professional research document you have previously produced (a report, a brief, a literature review) and conduct a full verification audit. For every specific factual claim in the document:

  1. Can you trace it to a verified primary source?
  2. If you cannot, is it something that could have come from an AI-assisted step in your workflow?

Reflect: does this audit change how you feel about the research quality of the document? Does it change how you will approach future research projects?


Instructors: Exercise 4 (Citation Verification Challenge) is recommended as an early-course exercise, ideally before students have used AI heavily for research. The direct experience of AI citation hallucination is more memorable than being told about it. Exercise 16 (Domain Sprint) is suitable as a graded assignment with clear deliverable requirements.