Further Reading: Chapter 19 — Specialized and Domain-Specific AI Tools

Research Tools

Elicit https://elicit.com The primary specialized research AI tool covered in this chapter. A free tier is available with some limitations. The Elicit team publishes detailed documentation of its approach to research synthesis, including its methodology for paper extraction and its handling of uncertainty — useful reading for understanding how the tool works, not just how to use it.

Consensus https://consensus.app Free tier available. Its most distinctive capability is the "Consensus Meter," which shows the direction of research evidence on specific questions. For practitioners who regularly need evidence-based answers to empirical questions, Consensus is one of the first specialized tools worth evaluating.

Semantic Scholar https://www.semanticscholar.org Free AI-powered academic search from the Allen Institute for AI. Particularly strong for computer science and biomedical research. The AI features include paper summaries, citation context, and related work mapping. A useful complement to Elicit for research coverage.
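
Semantic Scholar also offers a free public Graph API for programmatic paper search. The sketch below shows one way to query it from the Python standard library; the endpoint and field names reflect the API's documentation at the time of writing, and the query string used is purely illustrative.

```python
import json
import urllib.parse
import urllib.request

# Semantic Scholar Graph API paper-search endpoint (no API key required
# for low-volume use; rate limits apply).
API_BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query, fields=("title", "abstract", "year"), limit=5):
    """Construct a paper-search URL for the given query and result fields."""
    params = urllib.parse.urlencode({
        "query": query,
        "fields": ",".join(fields),  # API expects a comma-separated field list
        "limit": limit,
    })
    return f"{API_BASE}?{params}"

def search_papers(query, limit=5):
    """Fetch matching papers; returns a list of dicts with the requested fields."""
    with urllib.request.urlopen(build_search_url(query, limit=limit)) as resp:
        return json.load(resp).get("data", [])
```

Calling `search_papers("ambient clinical documentation")` returns a small list of paper records (title, abstract, year) that you can feed into your own screening workflow — a useful complement to the interactive search interface.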

Research Rabbit https://researchrabbitapp.com A citation network navigation and visualization tool. Different from Elicit — it maps the citation relationships between papers rather than synthesizing findings. Useful for understanding the structure of a field and for finding papers you might miss with keyword search alone. Free to use.

Legal AI

Harvey AI https://harvey.ai Harvey's website includes case studies and technical documentation that are informative for practitioners evaluating legal AI tools. Its published research on its approach to legal fine-tuning is particularly useful.

Thomson Reuters Westlaw AI Features https://legal.thomsonreuters.com/en/products/westlaw Documentation for Westlaw's AI-assisted legal research features, including the integration of Casetext capabilities post-acquisition. For legal professionals, Westlaw's AI features represent the most comprehensive legal research database with AI assistance.

"ChatGPT Hallucinations in Legal Briefs: A Growing Problem" — ABA Journal Search the American Bar Association Journal for coverage of AI-generated legal brief hallucinations. There are multiple documented cases of attorneys being sanctioned by courts for citing nonexistent, AI-generated cases. Essential reading for any legal professional considering AI tools in their practice.

Medical AI

Nuance DAX Technical Documentation https://www.nuance.com/healthcare/ambient-clinical-intelligence Documentation for Nuance's ambient clinical intelligence platform. Includes published clinical validation studies showing documentation time reduction. A model for what evidence-based claims about specialized AI tools should look like.

"Artificial Intelligence in Healthcare: Anticipating Challenges to Ethics, Privacy, and Bias" — Journal of the American Medical Informatics Association Available via PubMed. A peer-reviewed overview of ethical and technical considerations for AI in clinical settings. Useful background for any practitioner working in or adjacent to healthcare AI deployment.

Marketing AI

Persado Research and Case Studies https://www.persado.com/resources Persado publishes performance data from their language optimization work. Their research on which linguistic elements drive conversion in different email and ad contexts is informative even if you are not using their tool — it provides useful signal about what makes marketing language effective.

"AI Marketing Tools: What Works and What Is Hype" — Harvard Business Review Search HBR for recent coverage on AI marketing tools. HBR has published several empirically grounded pieces on AI marketing adoption that distinguish genuine capability from vendor claims. The 2024-2025 coverage is most current.

Design AI

Adobe Firefly https://firefly.adobe.com The primary documentation and portal for Adobe's generative AI tools. For creative professionals whose commercial work requires clarity on training data and rights provenance, understanding Firefly's approach is practically important.

Figma AI Documentation https://help.figma.com/hc/en-us/sections/AI Figma's official documentation for their AI features, including First Draft (AI-generated wireframes from descriptions), layout suggestions, and other AI-assisted design capabilities.

Evaluation and Critical Frameworks

"The AI Hype Index" — MIT Technology Review MIT Technology Review's ongoing tracking of AI capability claims versus demonstrated performance. Useful for calibrating vendor claims across multiple domains. The critical perspective is a valuable counterbalance to marketing materials.

NIST AI Risk Management Framework https://www.nist.gov/artificial-intelligence The U.S. National Institute of Standards and Technology's framework for AI risk management. Although aimed primarily at organizations deploying AI systems, it is also useful for practitioners evaluating AI tools — the risk categories (accuracy, reliability, safety, fairness, accountability) map directly to evaluation questions practitioners should ask.
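
That mapping from risk categories to evaluation questions can be made concrete as a simple checklist. The sketch below uses the five categories named above; the example questions are illustrative placeholders for this sketch, not the six evaluation questions from this chapter or language from the NIST framework itself.

```python
# Illustrative mapping from NIST AI RMF risk categories to the kind of
# evaluation question a practitioner might ask of a specialized tool.
# The questions are examples only, not the chapter's own framework.
RISK_CATEGORIES = {
    "accuracy": "How often is the tool's output factually correct in my domain?",
    "reliability": "Does it behave consistently on similar inputs over time?",
    "safety": "What is the worst-case cost of a wrong answer in my workflow?",
    "fairness": "Does performance vary across the populations my work serves?",
    "accountability": "Who is responsible when the tool's output is wrong?",
}

def evaluation_checklist(tool_name):
    """Render the category-to-question map as a printable checklist."""
    lines = [f"Evaluation checklist for {tool_name}:"]
    for category, question in RISK_CATEGORIES.items():
        lines.append(f"  [{category}] {question}")
    return "\n".join(lines)
```

The point of the structure is reusability: the categories stay fixed while the tool under evaluation changes, which mirrors how the framework is meant to outlive any particular product.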

"Evaluating AI Systems: A Practical Guide for Non-Technical Users" — Partnership on AI https://partnershiponai.org Partnership on AI publishes accessible guides on AI evaluation for practitioners without deep technical backgrounds. Their work on responsible AI deployment, particularly in high-stakes domains, provides useful supplementary framing for the evaluation questions in this chapter.

Note on Currency

The specialized AI tools landscape changes more rapidly than almost any other category of enterprise software. Tools are frequently acquired, shut down, significantly upgraded, or displaced by new entrants. Harvey AI, Nuance DAX, Elicit, and the other specific tools mentioned in this chapter were active and well-regarded as of early 2026. Their specific features, pricing, and competitive positioning will have evolved by the time you read this. Use these resources as starting points for your own current-state research, not as definitive descriptions of the current state.

The evaluation framework, by contrast, is designed to remain useful as specific tools change. When a tool you depended on changes significantly or is replaced, the six evaluation questions in this chapter provide a reliable structure for assessing whatever comes next.