Chapter 38: Further Reading

Foundational Philosophy of Mind

1. Chalmers, David J. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies 2, no. 3 (1995): 200–219. The paper that named and formalized the hard problem of consciousness. Essential reading for understanding the explanatory gap and why it matters for AI consciousness claims. Chalmers's argument is clear and his framing has shaped the field's debate for thirty years.

2. Chalmers, David J. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press, 1996. The book-length treatment of Chalmers's argument. Develops the hard problem, the zombie argument, and the case for property dualism. More demanding than the article but essential for understanding the philosophical stakes.

3. Dennett, Daniel C. Consciousness Explained. Boston: Little, Brown, 1991. Dennett's major reductionist account of consciousness. Argues against the Cartesian Theater model and for a multiple-drafts account. Provides the strongest single-volume case against the hard problem being genuinely hard. Useful as a counterpoint to Chalmers.

4. Nagel, Thomas. "What Is It Like to Be a Bat?" Philosophical Review 83, no. 4 (1974): 435–450. The classic paper on the subjective character of experience. Nagel argues that there is something it is like to be a bat — a subjective perspective — that cannot be captured by objective physical description. A foundational text for the hard problem.

5. Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3, no. 3 (1980): 417–424. The original Chinese Room paper. Searle argues against strong AI and against the sufficiency of computational processes for understanding, anticipating and answering the systems reply and other objections. The BBS publication also includes open peer commentary and Searle's response.


Theories of Consciousness

6. Baars, Bernard J. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press, 1988. The foundational text for Global Workspace Theory. Baars develops the global workspace model and its implications for understanding consciousness in biological systems.

7. Dehaene, Stanislas. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Viking, 2014. Dehaene's accessible account of Global Workspace Theory and the neuroscience of consciousness. Discusses the "ignition" model of conscious access and its experimental basis. Includes discussion of implications for AI.

8. Tononi, Giulio. "Consciousness as Integrated Information: A Provisional Manifesto." Biological Bulletin 215, no. 3 (2008): 216–242. A clear and accessible statement of Integrated Information Theory and the phi measure. Discusses IIT's implications for AI and the provocative prediction that some simple systems may be conscious while complex digital systems may not be.

9. Graziano, Michael S. A. Rethinking Consciousness: A Scientific Theory of Subjective Experience. New York: W. W. Norton, 2019. Graziano's accessible account of Attention Schema Theory. Argues that consciousness is the brain's model of its own attention, and discusses implications for AI design and for understanding the social attribution of consciousness.


AI Consciousness and Moral Status

10. Schwitzgebel, Eric, and Mara Garza. "A Defense of the Rights of Artificial Intelligences." Midwest Studies in Philosophy 39, no. 1 (2015): 98–119. A careful philosophical argument for taking AI rights seriously. Schwitzgebel is among the most intellectually honest philosophers on AI consciousness, refusing to dismiss the question while acknowledging deep uncertainty.

11. Floridi, Luciano, Josh Cowls, et al. "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations." Minds and Machines 28, no. 4 (2018): 689–707. A comprehensive ethical framework for AI that addresses moral status among other questions. Useful for situating the consciousness debate within the broader landscape of AI ethics.

12. Butlin, Patrick, et al. "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness." arXiv preprint arXiv:2308.08708 (2023). A rigorous recent survey of consciousness science applied to AI, authored by a group of philosophers and neuroscientists including some of the most respected figures in consciousness research. Offers careful, hedged assessments of whether current AI systems satisfy criteria from major consciousness theories. Essential recent reading.

13. Metzinger, Thomas. "Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology." Journal of Artificial Intelligence and Consciousness 8, no. 1 (2021): 43–66. Metzinger argues for a moratorium on designing AI systems with the capacity for suffering, on the grounds that creating such capacity without adequate understanding would be ethically impermissible. A provocative and serious contribution.


Anthropomorphism and Emotional AI

14. Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman, 1976. Weizenbaum's classic warning about the inappropriate attribution of human qualities to computers, written after his disturbing experience with the ELIZA chatbot. A prescient and philosophically sophisticated account of the ELIZA effect.

15. Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2011. Turkle's sociological account of human relationships with digital technology, including robots and AI companions. Discusses the psychological dynamics of human-AI attachment and raises questions about what these attachments reveal about human needs.


Labor and the Human Dimension

16. Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York: Houghton Mifflin Harcourt, 2019. Documents the "ghost work" — human labor performed by data annotators, content moderators, and others — that underlies AI systems. Essential for understanding the human cost of AI training.


Legal Personhood and Moral Patienthood

17. Solum, Lawrence B. "Legal Personhood for Artificial Intelligences." North Carolina Law Review 70, no. 4 (1992): 1231–1287. An early and still-relevant legal analysis of AI personhood, examining what it would mean to extend legal personhood to AI systems and what precedents exist for non-human legal persons.

18. Danaher, John. "Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism." Science and Engineering Ethics 26 (2020): 2023–2049. Danaher argues for "ethical behaviourism" — treating AI systems as moral patients based on their behavior rather than waiting for resolution of the consciousness question. Articulates a form of the precautionary argument.


Case-Specific Resources

19. Lemoine, Blake. "Is LaMDA Sentient? — An Interview." Medium, 2022. The original published transcript of Lemoine's conversations with LaMDA, along with his analysis. Read alongside the scientific and philosophical critiques for a full picture.

20. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. The "stochastic parrots" paper that contributed to Timnit Gebru's departure from Google. Argues that large language models produce plausible-sounding text without genuine understanding, and discusses the risks of mistaking statistical coherence for intelligence. Essential context for the LaMDA controversy.