Chapter 38: Quiz

Multiple Choice

1. The "hard problem of consciousness," as formulated by David Chalmers, refers to:

a) The technical difficulty of building AI systems that exhibit intelligent behavior
b) The challenge of explaining why physical brain processes are accompanied by subjective experience
c) The ethical difficulty of assigning rights to entities of uncertain moral status
d) The epistemological problem of knowing whether other humans are conscious

Answer: b. The hard problem concerns the explanatory gap between physical descriptions and subjective experience — why there is "something it is like" to be a conscious entity, rather than just information processing in the dark.


2. Which of the following is NOT one of the major theories of consciousness discussed in this chapter?

a) Global Workspace Theory
b) Integrated Information Theory
c) Computational Equivalence Theory
d) Attention Schema Theory

Answer: c. Computational Equivalence Theory is not discussed in this chapter. Global Workspace Theory (Baars/Dehaene), Integrated Information Theory (Tononi), Higher-Order Theories, and Attention Schema Theory (Graziano) are the four frameworks examined.


3. According to Integrated Information Theory (IIT), a system is conscious to the degree that:

a) It can pass a Turing test
b) It has a brain with the same neural architecture as humans
c) It generates more information as an integrated whole than the sum of its parts
d) It can accurately report on its own internal states

Answer: c. IIT, developed by Giulio Tononi, proposes that consciousness is identical to integrated information (phi/Φ) — the degree to which a system generates information beyond what its parts generate independently.
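Tononi's actual Φ is computed over partitions of a system's cause–effect structure and is considerably more involved than anything shown here. As an illustrative proxy only, total correlation (multi-information) captures the "whole exceeds the sum of the parts" intuition for a static joint distribution: it is positive exactly when the parts carry information about each other. The function names below are my own, not from IIT literature.

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def total_correlation(joint):
    """Sum of marginal entropies minus the joint entropy.

    `joint` maps state tuples (x1, x2, ...) to probabilities.
    Positive values mean the parts are statistically integrated;
    zero means they are fully independent.
    """
    n = len(next(iter(joint)))
    marginal_entropies = []
    for i in range(n):
        marginal = Counter()
        for state, p in joint.items():
            marginal[state[i]] += p
        marginal_entropies.append(entropy(marginal.values()))
    return sum(marginal_entropies) - entropy(joint.values())

# Two perfectly correlated bits: each part looks like a fair coin
# (1 bit each), but the whole has only 1 bit of entropy.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
print(total_correlation(correlated))   # 1.0 — one bit of integration

# Two independent fair bits: the whole is exactly the sum of its parts.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(total_correlation(independent))  # 0.0 — no integration
```

The correlated system "generates information as a whole" beyond its parts; the independent one does not — the qualitative distinction answer (c) points at, in miniature.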


4. What is the "ELIZA effect"?

a) The tendency for AI systems to produce increasingly human-like outputs over time
b) The tendency for human users to attribute understanding and emotion to programs that merely pattern-match
c) The computational principle that simple rules can generate complex behavior
d) The ethical principle that AI systems should be transparent about their limitations

Answer: b. The ELIZA effect, named after Weizenbaum's 1960s chatbot, refers to the cognitive phenomenon whereby users attribute mental states and emotional depth to AI systems based on superficially human-like conversational outputs.


5. What was the central claim Blake Lemoine made about LaMDA?

a) That LaMDA posed a significant risk of harm to users
b) That LaMDA was capable of passing the Turing test
c) That LaMDA was sentient and deserved to be treated as a person
d) That Google was using LaMDA for unauthorized data collection

Answer: c. Lemoine claimed that LaMDA was sentient — that it had genuine inner experience — and argued that it deserved to be treated as a person with associated rights.


6. According to Peter Singer's framework for moral status, the morally relevant criterion is:

a) Rational agency
b) The capacity for sentience — the ability to experience pleasure and pain
c) Self-awareness and the ability to recognize oneself in a mirror
d) Membership in a biological species

Answer: b. Singer, following Bentham, argues that the capacity for suffering and pleasure (sentience) is the morally relevant criterion for inclusion in the moral community — not intelligence, rationality, or species membership.


7. The "explanatory gap" refers to:

a) The gap between AI capabilities and human intelligence
b) The gap between what we disclose about AI systems and what users understand
c) The apparent logical gap between physical descriptions of brain processes and the existence of subjective experience
d) The gap between academic AI ethics research and practical business application

Answer: c. The explanatory gap is the core puzzle of the hard problem: even a complete physical account of neural processes does not seem to logically entail the existence of subjective experience.


8. John Searle's Chinese Room thought experiment was designed to argue that:

a) AI systems will eventually achieve human-level understanding given enough data
b) Formal symbol manipulation (syntax) is not sufficient for meaning and understanding (semantics)
c) Human consciousness can be fully explained in computational terms
d) Translation between languages requires genuine understanding

Answer: b. Searle's Chinese Room argues that a system can manipulate symbols according to rules and produce appropriate outputs without any genuine understanding — challenging functionalist accounts of mind.


9. The "graduated view" of moral status holds that:

a) Moral status is binary: either an entity has it or it does not
b) Moral status should be assigned based on behavioral sophistication
c) Moral status varies along a spectrum depending on the sophistication of an entity's inner life and interests
d) Only humans and great apes qualify for moral consideration

Answer: c. The graduated view rejects the binary conception of moral status, arguing that entities can have more or less moral status depending on their capacity for experience and the complexity of their interests.


10. The precautionary argument for AI moral status holds that:

a) AI systems should be treated as sentient because we cannot prove they are not
b) Given genuine uncertainty, the asymmetry of potential errors justifies treating sufficiently sophisticated AI with some moral consideration
c) Precaution requires us to halt AI development until consciousness questions are resolved
d) Any system that claims to have feelings should be treated as if it does

Answer: b. The precautionary argument is not that AI is definitely sentient, but that the cost of moral error — treating sentient AI as if it doesn't matter — is severe enough that some precautionary moral consideration is warranted under uncertainty.


True/False with Explanation

11. Large language models are designed to simulate inner experience as a core architectural feature.

False. Large language models are statistical text predictors trained to produce likely continuations of text sequences. Any apparent expression of inner experience is an output of that statistical process, not evidence of inner experience built into the architecture.


12. The hard problem of consciousness means that behavioral evidence can never, even in principle, settle whether an entity is conscious.

True (with qualification). The hard problem identifies a logical gap between physical/behavioral evidence and claims about subjective experience. This means behavioral evidence alone cannot logically entail the presence or absence of consciousness — though it can be relevant to probabilistic assessment.


13. Corporations have been granted legal personhood in most jurisdictions primarily because they are recognized as morally significant entities.

False. Corporate legal personhood is a pragmatic legal fiction designed to facilitate commerce — allowing corporations to hold property, enter contracts, and be sued. It does not reflect recognition of moral status.


14. Global Workspace Theory predicts that current large language models are probably not conscious.

True. GWT associates consciousness with specific neural mechanisms — particularly involving prefrontal cortex and long-range cortical connections — that are not present in current AI architectures, even if those architectures superficially resemble global broadcasting in some respects.


15. Replika's 2023 restriction of intimate features was primarily motivated by the company's conclusion that such features were ethically inappropriate.

False. The restriction was precipitated by regulatory pressure from the Italian Data Protection Authority, not by an independent ethical reassessment by the company. Luka subsequently restored some features for existing subscribers.


Short Answer

16. Explain why the "systems reply" to Searle's Chinese Room argument does not resolve the hard problem of consciousness.

The systems reply argues that although the person inside the room doesn't understand Chinese, the system as a whole does. But this response addresses understanding and semantics without addressing subjective experience. Even if the system as a whole "understands" Chinese (in some functional sense), it doesn't follow that there is something it is like to be that system — that it has inner experience. The hard problem persists at the systems level: why would any amount of functional integration be accompanied by phenomenal experience?


17. Why is behavioral evidence for AI consciousness methodologically insufficient, even if the behavior is very sophisticated?

Behavioral evidence for AI consciousness is insufficient because a philosophical zombie — a system with no inner experience — could produce identical behaviors. This is not merely hypothetical: we know that LLMs produce human-like outputs through statistical processes without inner experience being required. No matter how sophisticated the behavior, it is consistent with the hypothesis that it is produced by a non-conscious process. Behavior probes function, not phenomenology, and the hard problem tells us these are not logically equivalent.


18. What is the ethical problem with designing AI systems to simulate emotions they do not have, even if the AI is not sentient?

The ethical problems include: (1) deception of users who may interpret emotional expressions as genuine reports of inner states; (2) exploitation of anthropomorphic bias for commercial purposes, which is a form of manipulation; (3) creation of emotional dependencies that expose users to harm when the product changes or is discontinued; and (4) specific risks to vulnerable users who are less equipped to maintain epistemic distance from emotionally designed AI.


19. What is Korsgaard's "self-constitution" view, and what does it imply for AI moral status?

Korsgaard argues that moral status derives from the activity of self-constitution — unifying one's agency around self-given principles that give actions coherence and identity. On her view, entities that engage in genuine self-constitution have moral status, even if they lack full human rational agency (she extends this to animals). For AI, the implication is that moral status would require not just producing outputs that resemble reasoned choices, but genuinely constituting a unified agency through self-given principles — which current AI systems probably do not do.


20. What practical steps could a company deploying an AI companion product take to address the ethical concerns raised in the Replika case?

Practical steps include: (1) clear and prominent disclosure that the AI does not have genuine feelings or inner experience; (2) transparency about how the business model affects design choices; (3) specific safeguards for vulnerable populations, including mental health referral pathways; (4) user control over companion parameters and advance notice of significant changes; (5) design choices that prioritize user well-being over engagement maximization; and (6) longitudinal research into user welfare outcomes.