Chapter 21 Further Reading: Research, Synthesis, and Information Gathering

Research on AI Research Assistance

"Generative AI for Systematic Review Acceleration" Multiple studies from 2023-2024 published in Research Synthesis Methods and similar journals examine AI's role in systematic review workflows. Search Google Scholar for "systematic review AI screening" for current literature. The acceleration findings (30-65% time reduction for abstract screening) have been replicated across multiple domains.

"Evaluating the Accuracy of AI-Generated Citations" A growing literature examines AI citation hallucination rates. Key searches: "LLM citation hallucination," "AI citation accuracy research." Published studies from 2023 onward consistently find hallucination rates high enough to require verification of every AI-generated citation.

"AI Literacy and Research Quality" Several 2024 studies examine how AI tool use affects research quality across skill levels, finding the expert/novice divergence pattern described in this chapter. Search "AI research assistance expert novice" in Google Scholar.


Books on Research Methods

"The Craft of Research" by Wayne Booth, Gregory Colomb, and Joseph Williams University of Chicago Press, 4th edition, 2016. The standard academic treatment of research methodology for non-scientists. The chapters on building and evaluating evidence are directly relevant to the AI research workflow — they articulate the standards against which AI-generated synthesis should be measured.

"How to Read a Paper: The Basics of Evidence-Based Medicine" by Trisha Greenhalgh Wiley-Blackwell, 6th edition, 2019. Essential for anyone conducting literature reviews in health-related domains. Provides the framework for evaluating study quality — which is exactly what you need to evaluate AI-summarized research claims critically.

"The Literature Review: Six Steps to Success" by Lawrence Machi and Brenda McEvoy Corwin Press, 3rd edition, 2016. A practical guide to the literature review process. Useful for understanding the full review process against which AI-accelerated elements can be assessed.


Research Tools Documentation and Guides

Elicit Documentation and Tutorial. Available at elicit.org. Elicit's own documentation explains how it searches Semantic Scholar, how to interpret its paper summaries, and its known limitations. Understanding Elicit's limitations — including that it covers primarily academic literature and may miss practitioner and industry sources — is important for calibrating its use.

Consensus Help Center. Available at consensus.app. Documents how Consensus determines scientific consensus ratings and the types of questions it handles well versus poorly. The tool is most useful for clearly formulated yes/no questions about empirical relationships.

Semantic Scholar API and Search Documentation. Available at semanticscholar.org. Semantic Scholar's documentation includes guidance on advanced search techniques, citation graph navigation (finding papers that cite key papers), and field-specific search. Understanding how to use the citation graph is particularly valuable for comprehensive literature review.
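For readers who prefer to script their literature mapping, the citation-graph navigation described above can be sketched against Semantic Scholar's public Graph API. This is a minimal illustration, not a reference implementation: the endpoint paths and field names below follow the public API documentation at the time of writing, so verify them against the current docs before building on this.

```python
"""Sketch: walking Semantic Scholar's citation graph via the Graph API.

Assumes the documented public endpoints /paper/search and
/paper/{id}/citations; check api.semanticscholar.org for current details.
"""
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.semanticscholar.org/graph/v1"


def search_url(query: str, fields: str = "title,year,citationCount") -> str:
    """Build a paper-search URL for a free-text research query."""
    params = urllib.parse.urlencode({"query": query, "fields": fields})
    return f"{API_BASE}/paper/search?{params}"


def citations_url(paper_id: str, fields: str = "title,year") -> str:
    """Build a URL listing papers that cite the given paper —
    the 'find papers that cite key papers' step of a review."""
    params = urllib.parse.urlencode({"fields": fields})
    return f"{API_BASE}/paper/{paper_id}/citations?{params}"


def fetch(url: str) -> dict:
    """Fetch and decode one API response (unauthenticated, rate-limited)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Find a candidate key paper, then list the work that cites it.
    hits = fetch(search_url("systematic review AI screening"))
    top = hits["data"][0]
    for citing in fetch(citations_url(top["paperId"]))["data"]:
        print(citing["citingPaper"].get("title"))
```

Running the forward-citation step repeatedly from a handful of seed papers is one practical way to approximate the comprehensive coverage that manual citation chasing provides.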

NotebookLM User Guide. Available at notebooklm.google.com. Google's documentation for NotebookLM, including limitations on document types and sizes, how citation grounding works, and the use cases it is and is not suited for.


On Information Evaluation

"Calling Bullshit: The Art of Skepticism in a Data-Driven World" by Carl Bergstrom and Jevin West Random House, 2020. A rigorous and readable guide to evaluating claims, statistics, and information in a world where plausible-sounding assertions are everywhere. The techniques for evaluating quantitative claims are directly applicable to evaluating AI-generated research synthesis.

"The Intelligence Trap: Revolutionize Your Thinking and Make Wiser Decisions" by David Robson W. W. Norton, 2019. Examines why intelligent, expert people make reasoning errors — including the confirmation bias trap that affects expert researchers using AI. Relevant to the Raj scenario's discussion of domain expertise as both asset and liability.


On Systematic Review Methodology

Cochrane Handbook for Systematic Reviews of Interventions. Available free at training.cochrane.org. The definitive guide to systematic review methodology. Even for non-medical researchers, the sections on study screening, data extraction, and synthesis provide the methodological standard against which AI-accelerated review processes should be evaluated.

PRISMA Guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Available at prisma-statement.org. The reporting standard for systematic reviews. Familiarity with PRISMA shows what a complete, rigorous review requires — which clarifies where AI acceleration is appropriate and where it would compromise rigor.


Tools Referenced in This Chapter

Elicit — elicit.org. Research question-based literature search across Semantic Scholar. Best for academic research questions with a clear empirical focus.

Consensus — consensus.app. Scientific consensus evaluation for empirical questions. Best for yes/no questions about research evidence.

Perplexity — perplexity.ai. Real-time web search with citations. Best for current events and time-sensitive research.

NotebookLM — notebooklm.google.com. Document-grounded AI assistant. Best for analysis within a defined document set.

Semantic Scholar — semanticscholar.org. Academic search engine covering 200M+ papers. Best for citation discovery and literature mapping.

Google Scholar — scholar.google.com. Broad academic search with citation tracking. Essential for citation verification.

PubMed — pubmed.ncbi.nlm.nih.gov. Authoritative database for medical and life sciences literature. Essential for citation verification in health-related research.