Quiz: Emerging Technologies and Anticipatory Governance
Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.
Section 1: Multiple Choice (1 point each)
1. The Collingridge dilemma states that:
- A) Technology always develops faster than regulation can keep up
- B) Early in a technology's development, governance is possible but uninformed; later, when effects are known, the technology is entrenched and governance is difficult
- C) Regulation always stifles innovation
- D) Technology companies should self-regulate because governments lack technical expertise
Answer
**B)** Early in a technology's development, governance is possible but uninformed; later, when effects are known, the technology is entrenched and governance is difficult. *Explanation:* Section 38.1 presents the Collingridge dilemma as formulated in David Collingridge's 1980 work *The Social Control of Technology*. The dilemma has two horns: the information problem (we cannot know a technology's effects until it is deployed) and the power problem (once deployed, the technology is embedded in social, economic, and institutional structures that resist change). This is a *dilemma* because both horns are genuine — the difficulty is structural, not merely a matter of insufficient effort.
2. Anticipatory governance is best defined as:
- A) Predicting exactly which technologies will emerge and pre-writing regulations for each one
- B) Banning all technologies until their effects are fully understood
- C) The proactive development of governance frameworks for technologies that have not yet reached full deployment, embedding ethical considerations in design rather than retrofitting them after harm occurs
- D) Waiting for technology companies to identify governance needs and responding to their requests
Answer
**C)** The proactive development of governance frameworks for technologies that have not yet reached full deployment, embedding ethical considerations in design rather than retrofitting them after harm occurs. *Explanation:* Section 38.1 defines anticipatory governance as distinct from both reactive regulation (which waits for harm) and precautionary banning (which prevents deployment). Anticipatory governance accepts uncertainty but acts on it — using scenario planning, technology assessment, participatory deliberation, and adaptive frameworks to build governance structures proactively.
3. The primary threat that quantum computing poses to data governance is:
- A) Quantum computers will be too expensive for anyone except governments to use
- B) Quantum computers capable of running Shor's algorithm could break the public-key cryptographic systems (RSA, ECC) that protect the majority of today's encrypted communications and stored data
- C) Quantum computers will make all data processing instantaneous, eliminating the need for data governance
- D) Quantum computing will only affect scientific research, not commercial data systems
Answer
**B)** Quantum computers capable of running Shor's algorithm could break the public-key cryptographic systems (RSA, ECC) that protect the majority of today's encrypted communications and stored data. *Explanation:* Section 38.2 explains that current encryption systems rely on the computational difficulty of certain mathematical problems (factoring large numbers, computing discrete logarithms) that classical computers cannot solve efficiently. Quantum computers running Shor's algorithm could solve these problems exponentially faster, rendering current encryption vulnerable. The "harvest now, decrypt later" threat — adversaries collecting encrypted data today for decryption when quantum computers become available — makes this an immediate governance concern, not a future one.
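To make the dependency concrete, the toy sketch below (deliberately tiny key sizes, not real cryptography) shows that anyone who can factor an RSA modulus can reconstruct the private key and decrypt. Shor's algorithm would provide exactly that factoring capability at real key sizes; the brute-force loop here merely stands in for it.

```python
# Toy demonstration that RSA security rests entirely on the difficulty
# of factoring. Key sizes here are absurdly small; real RSA uses 2048+
# bit moduli, which classical computers cannot factor in practice but
# a quantum computer running Shor's algorithm could.

p, q = 61, 53                # secret primes (unknown to attackers)
n = p * q                    # public modulus (3233)
phi = (p - 1) * (q - 1)      # Euler's totient; requires knowing p and q
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: (e * d) % phi == 1

msg = 42
ciphertext = pow(msg, e, n)  # anyone can encrypt with the public key (e, n)

# --- Attacker's view: knows only (e, n) and the ciphertext. ---
# Trial division is trivial at this size and hopeless at 2048 bits;
# Shor's algorithm makes this step efficient regardless of key size.
for candidate in range(2, n):
    if n % candidate == 0:
        p_found, q_found = candidate, n // candidate
        break

phi_found = (p_found - 1) * (q_found - 1)
d_found = pow(e, -1, phi_found)          # private key reconstructed
recovered = pow(ciphertext, d_found, n)  # decryption now succeeds

assert recovered == msg
print(recovered)  # -> 42
```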
4. Brain-computer interfaces (BCIs) create uniquely challenging consent problems because:
- A) BCIs are expensive and only wealthy people can afford them
- B) BCIs collect neural activity that is involuntary — the brain produces data continuously, and users cannot choose which neural signals are captured
- C) BCIs require surgery, which has its own consent requirements
- D) BCIs are only used in medical contexts where consent is not required
Answer
**B)** BCIs collect neural activity that is involuntary — the brain produces data continuously, and users cannot choose which neural signals are captured. *Explanation:* Section 38.3 explains that neural data is fundamentally different from other data types because the brain produces signals continuously and involuntarily. A user wearing a BCI cannot "choose" which neural activity to share and which to withhold — unlike a social media post, which is a deliberate act of disclosure. This makes consent structurally problematic: how can one consent to the collection of data one cannot control? The chapter introduces "cognitive liberty" — the right to mental privacy — as the governance concept needed to address this challenge.
5. A "digital twin" as discussed in this chapter is:
- A) A backup copy of a dataset
- B) A detailed computational model that simulates a physical system (a city, a building, a human body) using real-time data, enabling prediction, testing, and optimization
- C) A duplicate social media profile created for testing purposes
- D) A secondary data center that mirrors the primary one for disaster recovery
Answer
**B)** A detailed computational model that simulates a physical system (a city, a building, a human body) using real-time data, enabling prediction, testing, and optimization. *Explanation:* Section 38.4 describes digital twins as increasingly sophisticated simulations that maintain continuous connections to real-world data sources. A city's digital twin, for example, integrates data from traffic sensors, energy grids, weather stations, and population movement to create a real-time model that can be used for urban planning, emergency response, and resource allocation. The governance challenges arise from the data required to build the twin, the predictions it enables, and the blurred line between simulation and surveillance.
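As a deliberately simplified illustration of the pattern (a model kept synchronized with live sensor data and queried for predictions the physical system never performed), here is a hypothetical sketch; the class, field names, and toy prediction logic are all invented for illustration:

```python
# Hypothetical sketch of the digital-twin pattern: a model continuously
# updated from real-world sensors and queried for predictions. All names
# and the toy prediction logic are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TrafficTwin:
    """A toy digital twin of one road segment in a city model."""
    segment_id: str
    readings: list[float] = field(default_factory=list)  # vehicles/minute

    def ingest(self, sensor_reading: float) -> None:
        """Continuous connection to the physical system: each real-world
        sensor reading updates the simulation's state."""
        self.readings.append(sensor_reading)

    def predict_next(self) -> float:
        """A prediction about behavior that has not yet occurred; a naive
        moving average stands in for a real simulation model."""
        recent = self.readings[-5:]
        return sum(recent) / len(recent) if recent else 0.0

twin = TrafficTwin("segment-42")
for flow in [30.0, 35.0, 33.0, 40.0]:  # streamed from a traffic sensor
    twin.ingest(flow)
print(twin.predict_next())              # simulated forecast: 34.5
```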
6. The precautionary principle, as applied to technology governance, holds that:
- A) All new technologies should be banned until proven safe
- B) When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically
- C) Only technologies with proven benefits should be developed
- D) Companies should be cautious in their marketing of new technologies
Answer
**B)** When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically. *Explanation:* Section 38.5 presents the precautionary principle as formulated in the 1992 Rio Declaration and subsequent policy instruments. The principle shifts the burden of proof: instead of requiring proof of harm before acting, it requires action when there is credible evidence of potential harm, even under scientific uncertainty. The principle does not require blanket bans — it requires precautionary *measures*, which can range from restrictions and conditions to monitoring and staged deployment.
7. A "regulatory sandbox" is:
- A) A virtual environment where regulators can practice writing new regulations without affecting real companies
- B) A controlled environment where innovative companies can test new products under regulatory supervision, with temporary exemptions from certain rules, while regulators observe and learn
- C) A database of all regulations that apply to a given industry
- D) A penalty-free zone where companies can operate without any regulatory oversight
Answer
**B)** A controlled environment where innovative companies can test new products under regulatory supervision, with temporary exemptions from certain rules, while regulators observe and learn. *Explanation:* Section 38.5 describes regulatory sandboxes as a governance innovation that partially addresses the Collingridge dilemma by allowing deployment under controlled conditions. Companies get to test products in real-world settings; regulators get to observe effects and develop evidence-based rules. The sandbox model originated in fintech (UK FCA, 2016) and has been adopted for AI, data governance, and other emerging technology domains. Limitations include the risk that sandbox conditions do not reflect full-scale deployment effects and that sandbox participants may capture the regulatory process.
8. The "harvest now, decrypt later" threat refers to:
- A) Agricultural data being collected now for future commercial use
- B) Adversaries collecting encrypted data today with the intention of decrypting it in the future when quantum computers become available
- C) Companies harvesting user data before regulations take effect
- D) Intelligence agencies collecting metadata now and analyzing it later
Answer
**B)** Adversaries collecting encrypted data today with the intention of decrypting it in the future when quantum computers become available. *Explanation:* Section 38.2 identifies "harvest now, decrypt later" as an immediate threat — not a future one. State-sponsored adversaries are believed to be storing large volumes of encrypted communications intercepted today, planning to decrypt them when quantum computers capable of breaking current encryption become available. This means that data encrypted today with algorithms vulnerable to quantum attack is not truly secure, even though quantum computers cannot yet break the encryption. The governance implication is that the transition to post-quantum cryptography should begin immediately, not when quantum computers arrive.
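A widely cited heuristic for this urgency, drawn from the broader post-quantum planning literature rather than from the chapter, is Mosca's inequality: data is already at risk whenever its required secrecy lifetime plus the migration time exceeds the time until a cryptographically relevant quantum computer exists. A minimal sketch:

```python
# Minimal sketch of Mosca's inequality, a planning heuristic from the
# post-quantum literature (an addition here, not from the chapter):
# data is already exposed to "harvest now, decrypt later" whenever
# secrecy lifetime + migration time > time until quantum capability.

def quantum_exposed(secrecy_years: float,
                    migration_years: float,
                    quantum_arrival_years: float) -> bool:
    """True if data intercepted today could still require secrecy once
    a cryptographically relevant quantum computer plausibly exists."""
    return secrecy_years + migration_years > quantum_arrival_years

# Illustrative numbers only, not forecasts: diplomatic cables needing
# 25 years of secrecy, a 5-year migration, quantum assumed 15 years out.
print(quantum_exposed(25, 5, 15))  # -> True: the transition is already late
```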
9. "The pacing problem and the Collingridge dilemma are the same concept expressed in different terms."
Answer
**False.** *Explanation:* While related, the pacing problem and the Collingridge dilemma describe different aspects of governance difficulty. The pacing problem refers to the speed differential between technological innovation and regulatory response — regulation is systematically slower than the technology it tries to govern. The Collingridge dilemma refers to the information-power asymmetry: we lack information when we have the power to shape technology, and we gain information only when the power to shape it has been lost. The pacing problem is about speed; the Collingridge dilemma is about the structural mismatch between knowledge and power.
10. "Adaptive governance resolves the Collingridge dilemma by allowing governance frameworks to evolve as knowledge about a technology improves."
Answer
**Partially true, with important qualifications.** *Explanation:* Adaptive governance does address one horn of the Collingridge dilemma — the information problem — by building in mechanisms for learning and revision. But it does not fully resolve the power horn: once a technology becomes entrenched, even adaptive governance frameworks may lack the political and institutional power to force significant changes. Industries build around technologies, jobs depend on them, and constituencies form to protect them. Adaptive governance is better than rigid governance, but it is not a complete solution to the structural challenge Collingridge identified.
11. "The Internet of Things at scale will make individual-level consent mechanisms structurally obsolete."
Answer
**True.** *Explanation:* Section 38.3 argues that when sensors are embedded in every object and environment — walls, streets, clothing, vehicles, air — it becomes impossible for individuals to identify, understand, and consent to each data collection event. The volume of collection, the invisibility of sensors, and the inability to opt out of public and semi-public environments make individual consent a fiction beyond even its current inadequacy. Alternative governance approaches — environmental privacy standards, zonal data governance, community consent mechanisms — will be needed.
12. "Technological determinism — the belief that technology develops according to its own logic and cannot be meaningfully shaped by society — is consistently supported by historical evidence."
Answer
**False.** *Explanation:* Section 38.5 argues that historical evidence consistently refutes technological determinism. Every technology studied in this course has been shaped by social, political, and economic choices. Social media platforms were designed to maximize engagement because their business models required advertising revenue — not because engagement-optimization was technically inevitable. Surveillance architectures were built because governments chose to build them. The GDPR changed platform behavior because a political institution made governance choices. Technology creates possibilities; society determines which possibilities are realized.
13. "Post-quantum cryptography refers to encryption methods that use quantum computers to provide stronger security."
Answer
**False.** *Explanation:* Post-quantum cryptography refers to encryption algorithms that can run on *classical* (non-quantum) computers but are resistant to attack by quantum computers. The "post-quantum" designation means the algorithms are designed to be secure in an era when quantum computers exist — not that they use quantum computing themselves. NIST has been leading a standardization process to identify post-quantum algorithms that can replace current vulnerable standards (RSA, ECC).
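One migration pattern commonly recommended during that transition (an addition here, not a detail from the chapter) is hybrid key establishment: derive the session key from both a classical secret and a post-quantum one, so the session remains secure if either scheme survives. The sketch below uses the pyca/cryptography library for the classical half and a labeled placeholder for the post-quantum KEM, since concrete PQC APIs vary by library:

```python
# Hedged sketch of hybrid key establishment for the post-quantum
# transition: derive the session key from BOTH a classical ECDH secret
# and a post-quantum KEM secret, so the session stays secure as long as
# either scheme remains unbroken. The PQC half is a labeled placeholder;
# real deployments would use a standardized KEM such as ML-KEM.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: X25519 Diffie-Hellman (breakable by a future quantum
# computer running Shor's algorithm).
alice_priv = x25519.X25519PrivateKey.generate()
bob_priv = x25519.X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum half: PLACEHOLDER. A real implementation would run a
# post-quantum KEM encapsulation here; random bytes stand in for the
# KEM's shared secret purely so the sketch is runnable.
pq_secret = os.urandom(32)  # hypothetical stand-in for a PQC KEM output

# Combine both secrets; an attacker must break both to learn the key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-classical+pqc-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```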
Section 3: Short Answer (2 points each)
14. Explain why digital twins create a governance challenge that goes beyond traditional data collection. What is the difference between collecting data about a person and simulating a person?
Answer
Traditional data collection captures information about specific interactions — a purchase, a location visit, a health measurement. A digital twin integrates these data points into a continuous, dynamic model that simulates the person (or community, or city) as a system. The governance difference is in what the model enables: prediction of behavior that has not yet occurred, simulation of scenarios the person has not experienced, and optimization of interventions the person has not consented to. A digital twin of a person could predict their health trajectory, their consumer behavior, their risk of criminal activity, or their political attitudes — without those predictions ever being validated against reality. The person being simulated may not know the twin exists, may not have consented to its creation, and may have no way to challenge its predictions. This raises governance questions that go beyond data protection (which governs collection and processing) to encompass simulation rights — the right to know when you are being modeled, to challenge the model's assumptions, and to restrict the uses of your simulated self.
15. Describe two specific ways in which the precautionary principle and the innovation imperative can conflict. For each conflict, explain how anticipatory governance might navigate between them.
Answer
First conflict: Speed of deployment. The precautionary principle favors slowing deployment until risks are better understood; the innovation imperative pushes for rapid deployment to capture market advantage and deliver benefits. Anticipatory governance navigates this through staged deployment (regulatory sandboxes, pilot programs, phased rollouts) that allows technology to advance while containing risks.
Second conflict: Burden of proof. The precautionary principle places the burden on innovators to demonstrate safety; the innovation imperative places the burden on regulators to demonstrate harm. Anticipatory governance navigates this through proportional evidence requirements — requiring more evidence of safety for higher-risk applications (medical BCIs, autonomous weapons) while allowing lighter-touch oversight for lower-risk applications (agricultural IoT, consumer wellness devices). The evidence requirement is calibrated to the potential severity and reversibility of harm, not applied uniformly.
16. Why does the chapter argue that cognitive liberty may need to become a fundamental right? What existing rights is it analogous to, and what does it protect that those existing rights do not?
Answer
Cognitive liberty — the right to mental privacy, mental self-determination, and freedom from unauthorized access to or manipulation of neural data — may need to become a fundamental right because existing privacy rights were not designed for a world in which brain activity can be measured, recorded, and analyzed. Freedom of thought (protected by the Universal Declaration of Human Rights and many constitutions) protects against coercion of beliefs but does not address involuntary disclosure of mental states through technology. Privacy rights protect against unauthorized access to personal information but typically require that the information be "shared" or "collected" — neural data generated involuntarily by a brain does not fit neatly into consent-based privacy frameworks. Cognitive liberty is analogous to bodily autonomy (the right to control what happens to your body) extended to the mind. It protects against: the collection of neural data without meaningful consent, the use of neural data for purposes the individual did not authorize, the manipulation of mental states through neurotechnology, and the inference of thoughts, emotions, or intentions from neural patterns.
Section 4: Scenario Analysis (3 points each)
17. A healthcare company develops a brain-computer interface that allows patients with ALS (a neurodegenerative disease) to communicate by translating neural activity into text. The device dramatically improves quality of life for patients who have lost the ability to speak. The company discovers that the neural data also contains reliable indicators of depression, anxiety, and suicidal ideation. The company considers: (a) sharing this information with the patient's physician without specific consent, (b) using it to improve the product's communication accuracy, (c) licensing the depression-detection capability to insurance companies. Analyze each option from a governance perspective.
Answer
**(a) Sharing with the physician without specific consent.** This raises a conflict between beneficence (the physician could intervene to help a patient experiencing suicidal ideation) and autonomy (the patient consented to a communication device, not a mental health monitoring system). The incidental discovery of health information is analogous to finding an incidental tumor during a scan performed for another purpose — medical ethics generally supports disclosure to the patient, but not necessarily to third parties without consent. A governance framework should require: disclosure to the patient that the device can detect mental health indicators, explicit consent before sharing with any third party (including physicians), and the right to refuse mental health monitoring while continuing to use the communication function.
**(b) Using it to improve communication accuracy.** This is the most defensible option if the neural data is used only on-device and only for the consented purpose (communication). However, if "improvement" involves transmitting neural data to company servers for model training, it extends the data collection beyond the original purpose. Governance should require: purpose limitation (neural data used only for the consented communication function), data minimization (only the data necessary for improvement is processed), and transparency about any off-device processing.
**(c) Licensing depression-detection capability to insurance companies.** This is the most problematic option and arguably the most predictable instance of function creep. Using neural data collected for a medical communication purpose to inform insurance decisions represents: a violation of purpose limitation, the use of involuntarily generated data for commercial purposes, the creation of a system that could discriminate against people with mental health conditions, and a fundamental betrayal of the trust relationship between a medical device company and its most vulnerable users. The VitraMed thread is directly relevant: ethical debt accumulates when companies discover secondary uses for data and pursue revenue over governance.
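To make the purpose-limitation recommendation concrete, here is a hypothetical sketch of consent-gated processing for such a device; the purpose names and gating logic are invented for illustration:

```python
# Hypothetical sketch of purpose limitation enforced in code: neural
# data flows only to processors matching a purpose the patient has
# explicitly consented to. All names are invented for illustration.

from enum import Enum, auto

class Purpose(Enum):
    COMMUNICATION = auto()         # the consented function of the device
    MENTAL_HEALTH_ALERTS = auto()  # requires separate, explicit opt-in
    INSURANCE_ANALYTICS = auto()   # option (c): should never be grantable

class NeuralDataGateway:
    def __init__(self, consented: set[Purpose]):
        # Option (c) is excluded structurally, not merely by policy.
        self.consented = consented - {Purpose.INSURANCE_ANALYTICS}

    def process(self, neural_frame: bytes, purpose: Purpose) -> bytes | None:
        """Release a frame only for a consented purpose; otherwise the
        involuntarily generated signal goes nowhere."""
        if purpose in self.consented:
            return neural_frame
        return None  # blocked: no consent for this purpose

gateway = NeuralDataGateway(consented={Purpose.COMMUNICATION})
frame = b"\x01\x02"  # stand-in for a raw neural signal frame
assert gateway.process(frame, Purpose.COMMUNICATION) == frame
assert gateway.process(frame, Purpose.MENTAL_HEALTH_ALERTS) is None
```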
18. A national government is preparing for the quantum computing era. A senior advisor proposes: "We should begin transitioning all government systems to post-quantum cryptography immediately. Yes, it will be expensive and disruptive, but the 'harvest now, decrypt later' threat means that every day we delay, more classified and sensitive data becomes vulnerable to future decryption." Another advisor responds: "We should wait until quantum computers are closer to reality. The post-quantum algorithms haven't been tested at scale, and a premature transition could introduce new vulnerabilities." Evaluate both positions and recommend a governance approach.
Answer
**The case for immediate transition:** The "harvest now, decrypt later" threat is real and current — adversaries (particularly state-sponsored actors) are believed to be intercepting and storing encrypted data today for future quantum decryption. Every day of delay expands the volume of vulnerable data. For the most sensitive data categories (national security, intelligence, diplomatic communications, critical infrastructure), the cost of delayed transition includes the permanent compromise of data that should have been protected.
**The case for caution:** Post-quantum algorithms, while standardized by NIST, have less real-world testing than current algorithms (RSA, AES). Cryptographic transitions at scale are complex and error-prone — transitioning too quickly could introduce implementation vulnerabilities that are exploitable *now* by classical computers, while the quantum threat remains years away. Additionally, early post-quantum algorithms may be revised or replaced as cryptanalysis improves.
**Recommended governance approach:** A risk-tiered transition strategy that applies the precautionary principle proportionally.
1. Highest-priority systems (national security, critical infrastructure, long-retention classified data) should begin transition immediately, accepting higher implementation costs as proportionate to the severity of the threat.
2. Medium-priority systems (government administrative data, healthcare records) should begin planning and testing now, with transition over 3-5 years.
3. Lower-priority systems should transition as part of normal technology refresh cycles.
4. All systems should implement "crypto-agility" — the ability to switch cryptographic algorithms without wholesale system redesign — as an anticipatory governance measure (see the sketch after this answer).
5. A dedicated quantum readiness office should coordinate the transition, monitor threat developments, and adjust timelines as the quantum threat evolves.
This is adaptive governance applied to a specific anticipatory challenge.
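Point (4), crypto-agility, lends itself to a short sketch: callers name a stable policy rather than a concrete algorithm, so a later migration is a registry change rather than a system redesign. The registry shape and names below are hypothetical:

```python
# Hypothetical sketch of crypto-agility: callers name a policy, not an
# algorithm, so migrating to a stronger primitive later means editing
# the registry, not every calling system. Names are invented; hashing
# merely illustrates the indirection pattern.

import hashlib
from typing import Callable

# Registry maps a stable policy label to the current implementation.
HASH_REGISTRY: dict[str, Callable[[bytes], bytes]] = {
    "document-integrity": lambda data: hashlib.sha256(data).digest(),
}

def fingerprint(data: bytes, policy: str = "document-integrity") -> bytes:
    """All systems call this; none hard-code SHA-256 directly."""
    return HASH_REGISTRY[policy](data)

# Migration day: one line re-points the policy at a different primitive.
HASH_REGISTRY["document-integrity"] = (
    lambda data: hashlib.sha3_256(data).digest()
)

print(fingerprint(b"classified cable").hex())
```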
Solutions
Selected solutions are available in appendices/answers-to-selected.md.