Chapter 40 Further Reading

Newsletters (Signal-Dense, Practitioner-Focused)

"The Batch" (deeplearning.ai) Andrew Ng's weekly newsletter. Consistently strong on what's actually advancing versus what's hype. Ng is a serious researcher who writes accessibly for practitioners. Available at deeplearning.ai/the-batch.

"Import AI" by Jack Clark (importai.substack.com) Clark is a former OpenAI policy researcher who covers AI research and its implications with unusual rigor. Less breathless than most AI coverage; more substantive. Particularly good on open-source AI developments.

"The Pragmatic Engineer" (newsletter.pragmaticengineer.com) For developers and technical practitioners, Gergely Orosz's newsletter regularly covers AI coding tools with the kind of practical, tested perspective that general AI coverage rarely provides.

"The Marketing AI Institute" (marketingaiinstitute.com) Specifically focused on AI in marketing — the most relevant domain-specific coverage for practitioners in Alex's position. Less theoretical than general AI coverage; more "here's what's working in campaigns."

"Lenny's Newsletter" (lennysnewsletter.com) Product management and product strategy focus, with regular deep-dives on AI tools for product builders. More practical than academic.


Researchers and Practitioners Worth Following

Yann LeCun (Meta AI) Chief AI Scientist at Meta, frequently contrarian about AI hype, strong on the technical limitations of current AI systems. A valuable perspective for calibrating against optimism.

Andrej Karpathy Former Tesla AI Director and OpenAI researcher, now independent. Produces educational content about how AI systems actually work that is unusually accessible and rigorous.

Ethan Mollick (One Useful Thing) Wharton professor who researches AI's effects on knowledge work. His newsletter and writing (oneusefulthing.org) are among the most consistently practical and research-grounded available.

Gary Marcus Cognitive scientist and AI researcher who maintains a consistently skeptical perspective on AI capability claims. Essential reading for calibrating against the hype cycle; his critiques are substantive rather than reflexive.


First-Party Sources (Labs and Research Organizations)

Anthropic Research (anthropic.com/research) Anthropic publishes research papers and model cards. Going directly to the primary sources on the capabilities and limitations of the tools you use is better than filtering them through journalism.

OpenAI Research (openai.com/research) Similar: primary source for understanding what OpenAI's models actually do and don't do. The technical reports for major model releases are accessible and informative.

Google DeepMind (deepmind.google/research) DeepMind's research publications cover fundamental AI research that often has practical implications years ahead of its commercial deployment.

The NBER Working Paper Series (nber.org) The National Bureau of Economic Research publishes working papers on AI's economic and labor market effects — the most rigorous empirical research on productivity and work impact questions.


Books for Deeper Orientation

"The Age of AI: And Our Human Future" by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher Three perspectives — diplomatic, technological, and philosophical — on AI's broad implications. Useful for historical and strategic context, less for tactical guidance.

"Power and Prediction: The Disruptive Economics of Artificial Intelligence" by Agrawal, Gans, and Goldfarb A follow-up to "Prediction Machines" focused on AI's disruptive dynamics. More relevant to practitioners thinking about organizational strategy than individual practice.

"The Alignment Problem" by Brian Christian An accessible, deeply researched account of the challenge of making AI systems do what we actually want them to do. Important background for understanding the design choices behind current AI tools and their limitations.

"Atlas of AI" by Kate Crawford Critical examination of AI's material, labor, and political economy dimensions. Provides important context for understanding the broader implications of AI adoption decisions.


Podcasts for Selective Listening

"Lex Fridman Podcast" Long-form conversations with AI researchers and practitioners. Not efficient for staying current (episodes are 2-4 hours), but excellent for deep dives on specific topics when you have the time.

"Hard Fork" (New York Times) Weekly technology and AI podcast, more accessible than research-focused shows; good for understanding the cultural and business dimensions of AI development. Best for practitioners who need to communicate about AI with non-technical audiences.

"AI Explained" (YouTube/podcast) Accessible breakdowns of AI research papers and developments. Good for practitioners who want to understand the technical basis of new developments without reading full research papers.


Communities for Peer Learning

LinkedIn Groups and Reddit Communities (r/LocalLLaMA, r/ChatGPT, r/artificial) Mixed quality but can surface practical insights from practitioners. r/LocalLLaMA is particularly useful for practitioners interested in self-hosted AI.

Domain-specific AI communities Whatever professional field you're in — healthcare, legal, finance, education, software development — there are now active communities of practitioners discussing AI adoption. Finding and joining the one most relevant to your domain is often more valuable than following general AI communities.

Your organization's internal AI channels As Alex's case study illustrates, the most actionable AI content is often what colleagues discover in your specific professional context. Investing in your internal AI community is as important as following external sources.