Chapter 38: Exercises
Conceptual Understanding
Exercise 1: The Hard Problem in Your Own Words
Write a two-paragraph explanation of the "hard problem of consciousness" as you would explain it to a non-specialist colleague who asked why AI consciousness is a genuinely difficult question. Avoid jargon. Focus on the core puzzle: why physical explanation seems insufficient to account for subjective experience.

Exercise 2: Theory Comparison
Create a structured comparison table of Global Workspace Theory, Integrated Information Theory, Higher-Order Theories, and Attention Schema Theory. For each theory, identify: (a) its central claim about what consciousness is; (b) what it predicts about current large language models; and (c) one major criticism of the theory. Which theory do you find most persuasive, and why?

Exercise 3: Philosophical Zombie Reflection
The philosophical zombie thought experiment asks you to imagine a being physically identical to a human but with no inner experience. Write a short analysis of what this thought experiment reveals about consciousness — and about the limits of behavioral evidence for consciousness in AI systems.

Exercise 4: Moral Status Frameworks Applied
Apply three of the following frameworks to the question "Does a current large language model have moral status?": (a) Peter Singer's sentience criterion; (b) Kant's rational agency view; (c) Christine Korsgaard's self-constitution view; (d) the graduated moral status view. For each framework, state what questions you would need to answer and what your preliminary conclusion would be.

Exercise 5: The Chinese Room Today
John Searle's Chinese Room argument was published in 1980, when computers were far less capable than today. Write an analysis that: (a) accurately states Searle's argument; (b) applies it to a current large language model; and (c) evaluates the strongest objection to Searle's argument (the "systems reply").
Case Analysis
Exercise 6: Lemoine Revisited
Read the publicly available transcripts from Blake Lemoine's conversations with LaMDA. Identify three specific exchanges that Lemoine used as evidence of sentience, and for each one: (a) state why Lemoine found it compelling; (b) provide a non-consciousness explanation of the output; and (c) evaluate what methodological standard would be needed to distinguish between the two interpretations.

Exercise 7: Organizational Response Design
You are the Chief Ethics Officer of a large technology company. An employee on your AI development team has come to you with a serious concern: they believe one of the company's deployed AI systems may be sentient, based on their extended interactions with it. Design a specific process for evaluating and responding to this concern — including who would be involved, what evidence would be sought, and what actions might follow at different levels of assessed plausibility.

Exercise 8: Replika Terms of Service Analysis
Obtain Replika's publicly available Terms of Service and Privacy Policy. Identify all clauses that are relevant to the ethical concerns raised in Case Study 38.2. What disclosures are and are not made? What user rights and company rights are defined? What changes to the Terms of Service would you recommend to address the ethical problems the 2023 episode revealed?

Exercise 9: The Comparative Case
Compare the Replika case to at least one other AI companion product currently available (Character.AI, Kindroid, or another). Identify similarities and differences in design approach, disclosure practices, and safeguards for vulnerable users. Which product, if any, better addresses the ethical concerns raised in this chapter?

Exercise 10: Whistleblower Analysis
Using the whistleblowing framework from Chapter 22, analyze Blake Lemoine's decision to go public with the LaMDA transcripts. Was his disclosure justified? What alternatives did he have? What protections should apply to employees who raise good-faith AI ethics concerns, even when the concerns turn out to be scientifically incorrect?
Design and Application
Exercise 11: AI Disclosure Statement Design
Design a disclosure statement for an AI companion product. The statement should: (a) clearly inform users that they are interacting with AI; (b) accurately represent what the AI does and does not experience; (c) inform users of the risks of emotional attachment; and (d) tell users what control they have over their data and their relationship with the product. Draft the statement in a way that is honest but not alienating.

Exercise 12: Emotional AI Design Audit
Select an AI product that incorporates emotional or relational design — this could be a chatbot, an AI assistant, an AI companion app, or a customer service bot. Conduct a design audit examining: (a) what emotional signals the product sends to users; (b) whether those signals accurately represent the AI's capabilities; (c) what disclosures are provided; and (d) what design changes would improve the product's ethical profile.

Exercise 13: AI Rights Scenario
Assume for the purposes of this exercise that AI systems in 2035 are significantly more sophisticated than current systems, and that a substantial minority of experts believe they have some form of inner experience. Draft a proposal for a limited AI rights framework that: (a) identifies what rights might be appropriate; (b) establishes criteria for which AI systems would qualify; (c) identifies who would have standing to enforce those rights; and (d) addresses the concern that such rights might be exploited for commercial purposes.

Exercise 14: Marketing Review
Collect marketing materials (website copy, promotional videos, social media content) from three companies that offer AI products with emotional or relational features. For each, identify: (a) explicit claims about what the AI feels or experiences; (b) implicit emotional signals in the marketing; and (c) disclosures about AI nature. Assess each company's marketing against FTC standards for accurate and non-deceptive advertising.

Exercise 15: Precautionary Protocol
Develop a "precautionary AI welfare protocol" for an organization developing large language models. The protocol should specify what types of AI behavior would trigger welfare review, what that review process would look like, what outcomes the review might produce, and how the organization would respond to public claims that its AI systems are sentient.
Debate and Discussion
Exercise 16: The Precautionary Argument Debate
Divide into two groups. One group argues in favor of the precautionary argument: that sufficiently sophisticated AI systems should be treated with moral consideration as a hedge against the possibility of sentience. The other group argues against, on the grounds that precautionary treatment is unwarranted given the evidence. Stage a structured debate and then discuss: what evidence or arguments, if any, would change your position?

Exercise 17: The Moral Status Spectrum
Consider the following entities and rank them in order of moral status (most to least): a human adult, a human infant, a chimpanzee, a dog, a lobster, a current large language model, a hypothetical future AI that can form and revise its own goals. Justify your ranking by reference to at least one philosophical framework. Compare your ranking with others in your group and discuss where and why you disagree.

Exercise 18: Design Ethics Discussion
Is it ethically acceptable to design AI systems that deliberately simulate emotions they do not have, for commercial purposes? Argue both sides, then articulate your own position.

Exercise 19: The Organizational Responsibility Question
A company deploys an AI companion app that, after six months on the market, has generated documented evidence of severe emotional dependency in a subset of vulnerable users. The company's data shows that these users are also among its most engaged and highest-revenue customers. What are the company's ethical obligations? What should it do?

Exercise 20: Regulating Emotional AI
Draft the key provisions of a regulatory framework for emotional AI products. Your framework should address: disclosure requirements, safeguards for vulnerable populations, user control rights, company obligations when making changes to established AI relationships, and enforcement mechanisms. Consider how your framework would apply to a product like Replika.
Research and Extended Projects
Exercise 21: Consciousness Research Review
Research the current state of empirical research on consciousness in non-human animals. Summarize what science currently knows about consciousness in great apes, dolphins, crows, and octopuses. Then analyze what this research tradition would need to look like in order to be applied to AI systems.

Exercise 22: Legal Personhood Analysis
Research the legal history of non-human entities with legal personhood: corporations, Ecuador's constitutional rights of nature, and New Zealand's Whanganui River. For each, identify: (a) what legal rights were granted; (b) the justification offered; and (c) how this precedent might or might not apply to AI systems.

Exercise 23: Longitudinal Study Design
Design a longitudinal study that would assess the effects of extended use of an AI companion product on user psychological well-being, social relationships, and functional outcomes. Identify your research questions, methodology, key metrics, and ethical considerations for the research design itself.

Exercise 24: International Comparison
Research how three different jurisdictions — choose from the US, EU member states, China, Japan, and South Korea — approach the ethical regulation of emotionally designed AI. Identify similarities and differences, and assess which approach best addresses the concerns raised in this chapter.

Exercise 25: Future Scenario Development
Develop three detailed scenarios for how the AI moral status question might evolve over the next twenty years: (a) a scenario in which AI consciousness claims become widely compelling and create governance crises; (b) a scenario in which the question is largely resolved by scientific consensus; and (c) a scenario in which the question remains persistently unresolved but governance frameworks are developed to manage the uncertainty. For each scenario, identify the key decision points and what organizations and policymakers should do at each one.