> "Consent is not a rubber stamp. It is a process of ongoing negotiation between those who hold power and those who are affected by it."
Learning Objectives
- Trace the historical development of informed consent from medical ethics to data protection law
- Analyze the legal requirements for valid consent under GDPR and CCPA
- Evaluate why notice-and-consent frameworks fail in practice, using empirical evidence
- Identify and classify dark patterns that manufacture consent through manipulative design
- Distinguish between meaningful consent and theatrical consent
- Compare alternative governance frameworks: legitimate interest, contextual integrity, and information fiduciary
- Analyze the special challenges of consent for children and vulnerable populations
In This Chapter
- Chapter Overview
- 9.1 The Origins of Informed Consent
- 9.2 Notice and Consent in Data Protection Law
- 9.3 Consent Fatigue and the Privacy Policy Problem
- 9.4 Dark Patterns and Manufactured Consent
- 9.5 The Consent Fiction: Meaningful vs. Theatrical Consent
- 9.6 The VitraMed Consent Crisis
- 9.7 Beyond Consent: Alternative Frameworks
- 9.8 Children and Consent: Special Protections
- 9.9 Toward a Post-Consent Governance Framework
- 9.10 Chapter Summary
- What's Next
- Chapter 9 Exercises -> exercises.md
- Chapter 9 Quiz -> quiz.md
- Case Study: Cookie Consent Banners: A Study in Theatrical Consent -> case-study-01.md
- Case Study: VitraMed's Patient Consent Redesign -> case-study-02.md
Chapter 9: Data Collection and Consent
"Consent is not a rubber stamp. It is a process of ongoing negotiation between those who hold power and those who are affected by it." --- Onora O'Neill, Autonomy and Trust in Bioethics (2002)
Chapter Overview
At the end of Chapter 8, we surveyed the vast surveillance infrastructure that now saturates daily life --- state programs that hoover up communications metadata, corporate platforms that track every click, biometric systems that map faces and bodies, and the convergent apparatus that blurs the line between government watching and corporate watching. A natural response to that survey is: How is any of this legal?
The answer, in most cases, is a single word: consent.
You consented. Or rather, something called "consent" occurred at some point --- a click, a checkbox, a continued use of a service --- that was interpreted by the data collector as authorization for everything that followed. This chapter examines that claim. We will discover that what passes for consent in the digital economy bears almost no resemblance to the concept as understood in the ethical traditions from which it derives. The gap between consent as a moral concept and consent as a legal mechanism is not a minor implementation problem. It is a structural failure that undermines the legitimacy of the entire data governance regime.
This is the chapter where the consent fiction --- one of this book's four recurring themes --- moves from the background to center stage. We have encountered it in passing throughout Part 1. Here we dissect it.
In this chapter, you will learn to:
- Understand how informed consent evolved from a medical ethics principle to the foundation of data protection law
- Evaluate the empirical evidence on why notice-and-consent frameworks fail
- Recognize dark patterns that manufacture consent through design manipulation
- Distinguish between consent that is meaningful and consent that is theatrical
- Assess alternative frameworks that do not depend on individual consent
- Analyze the special challenges of consent for children
9.1 The Origins of Informed Consent
9.1.1 From Nuremberg to the Belmont Report
The modern concept of informed consent has its roots not in technology law but in medical ethics --- and specifically in the aftermath of atrocity.
During World War II, Nazi physicians conducted horrific medical experiments on concentration camp prisoners --- immersing subjects in freezing water, infecting them with diseases, performing surgeries without anesthesia. The subjects did not consent. Many died. At the Nuremberg Doctors' Trial (1946-1947), the tribunal convicted 16 of the 23 defendants, and its judgment established the Nuremberg Code, whose first principle reads:
"The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, over-reaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision."
This is a demanding standard. It requires voluntary choice, free from coercion, with sufficient knowledge to make an enlightened decision. Every word matters. "Legal capacity" excludes children and individuals who cannot make autonomous decisions. "Free power of choice" excludes situations where refusal carries significant penalties. "Without the intervention of any element of... deceit" excludes dark patterns and misleading framing. "Sufficient knowledge and comprehension" excludes incomprehensible terms of service.
The Nuremberg Code was refined through the Declaration of Helsinki (1964) and codified for U.S. research through the Belmont Report (1979), which established three principles for ethical research involving human subjects:
- Respect for persons: Individuals should be treated as autonomous agents, and persons with diminished autonomy should be protected
- Beneficence: Researchers should minimize harm and maximize benefit
- Justice: The burdens and benefits of research should be distributed fairly
The Belmont Report was itself prompted by another atrocity: the Tuskegee syphilis study (1932-1972), in which the U.S. Public Health Service enrolled 600 Black men in a study of untreated syphilis without informed consent, withholding treatment even after penicillin became available. The study ran for 40 years. Participants were told they were receiving "free health care"; they were actually being observed as their disease progressed. Twenty-eight men died of syphilis, 100 died of related complications, 40 wives were infected, and 19 children were born with congenital syphilis.
The Tuskegee case demonstrates what happens when "consent" is extracted from populations with limited power, limited information, and limited alternatives. The participants technically "agreed" to participate. But the agreement was based on deception ("free health care"), the participants lacked the knowledge to evaluate what they were consenting to, and their socioeconomic circumstances made refusal practically difficult. Sound familiar?
The Belmont Report's standard for informed consent includes three components:
- Information: Subjects must be told the purpose, procedures, risks, benefits, and alternatives
- Comprehension: The information must be presented in a way the subject can understand
- Voluntariness: The decision must be free from coercion or undue influence
9.1.2 What Medical Consent Looks Like
To appreciate how far data consent has drifted from its ethical origins, consider what informed consent looks like in a well-functioning medical context:
A patient considering surgery meets with the surgeon. The surgeon explains, in plain language:
- What the procedure involves
- Why it is recommended
- What the expected benefits are
- What the risks are, including their probability
- What the alternatives are, including doing nothing
- What will happen to any biological samples collected during the procedure
The patient can ask questions. The patient can take time to decide. The patient can consult with family or seek a second opinion. The patient can refuse without penalty. If the patient consents, the consent covers the specific procedure discussed --- not every possible future use of the patient's body or data. And critically, the surgeon has an independent ethical obligation not to perform unnecessary procedures, regardless of whether the patient consents --- consent is a necessary condition but not a sufficient one.
"Now compare that to a cookie consent banner," Dr. Adeyemi said. The class laughed, but the laughter had a bitter edge. The comparison was devastating.
Intuition: Medical informed consent is not perfect --- research has documented its limitations extensively, including inadequate comprehension rates, time pressures, and power imbalances between doctors and patients. But it represents a genuine attempt to implement the moral principles of autonomy and respect for persons. What we see in the digital context is not an imperfect attempt to implement these principles; it is a system designed to appear to implement them while systematically undermining them. The difference is between a flawed implementation and a deliberate subversion.
9.1.3 The Migration to Data Protection
How did a concept developed in the context of medical experiments come to govern the collection of browsing data?
The bridge was the emergence of data protection law in the 1970s and 1980s. As governments and corporations began computerizing personal records, policymakers needed a framework for governing data collection. Consent --- already established as the moral foundation for medical research --- seemed like a natural fit. If people should consent before their bodies are used in research, shouldn't they consent before their data is used in commerce?
The analogy was intuitive but misleading. Medical research involves a defined researcher, a defined subject, a defined procedure, a defined risk, and a defined duration. Data collection in the digital economy involves multiple collectors (most unknown to the data subject), continuous and indefinite collection, undefined future uses, risks that are probabilistic and compound over time, and no clear endpoint.
The Fair Information Practice Principles (FIPPs), first articulated by the U.S. Department of Health, Education, and Welfare in 1973 and subsequently adopted by the OECD in 1980, established notice and choice as core principles of data governance. These principles became the foundation of data protection law worldwide --- from the EU's Data Protection Directive (1995) to the GDPR (2018) to California's CCPA (2020).
The migration was complete: a moral concept designed for the specific context of medical research had become the general-purpose governance mechanism for the entire data economy. And in the process, it was stripped of nearly everything that made it meaningful.
9.2 Notice and Consent in Data Protection Law
9.2.1 The Framework
The dominant legal framework for data governance in most democratic nations is notice and consent: organizations must notify individuals about their data practices and obtain consent before collecting or processing personal data. This framework takes different forms in different jurisdictions, but the core logic is consistent.
Under the GDPR (EU, 2018): Consent must be:
- Freely given: Not conditioned on access to a service unless the data processing is necessary for the service
- Specific: Given for a defined purpose, not a blanket authorization
- Informed: The data subject must know who is collecting, what is collected, and why
- Unambiguous: A clear affirmative act (not pre-checked boxes or silence)
- Withdrawable: The data subject must be able to withdraw consent as easily as they gave it
Recital 43 of the GDPR specifies that consent should not provide a valid legal basis for processing when there is "a clear imbalance between the data subject and the controller" --- a provision that acknowledges, at least in principle, the power asymmetry that distorts consent. Recital 42 further specifies that consent is not freely given if the data subject "has no genuine or free choice or is unable to refuse or withdraw consent without detriment."
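These five conditions can be read as a checklist. The sketch below encodes them as a validity check over a hypothetical consent record: a minimal illustration in Python, with every field name invented for this example rather than drawn from any statute or real library.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    # Hypothetical fields for illustration; not from any real schema.
    tied_to_unrelated_service: bool   # access conditioned on unnecessary processing
    purposes_bundled: bool            # multiple purposes under one yes/no
    controller_and_purpose_disclosed: bool
    affirmative_act: bool             # a clear action, not silence or a pre-ticked box
    withdraw_clicks: int              # effort to withdraw...
    grant_clicks: int                 # ...versus effort to grant

def validity_issues(c: ConsentRecord) -> list[str]:
    """Map each failed condition to the GDPR requirement it violates."""
    issues = []
    if c.tied_to_unrelated_service:
        issues.append("not freely given")
    if c.purposes_bundled:
        issues.append("not specific")
    if not c.controller_and_purpose_disclosed:
        issues.append("not informed")
    if not c.affirmative_act:
        issues.append("not unambiguous")
    if c.withdraw_clicks > c.grant_clicks:
        issues.append("not as easy to withdraw as to give")
    return issues

# A typical cookie banner: one click to accept, buried withdrawal, bundled purposes.
banner = ConsentRecord(True, True, False, True, withdraw_clicks=9, grant_clicks=1)
print(validity_issues(banner))  # -> four of the five conditions fail
```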
Under the CCPA/CPRA (California, 2020/2023): The California framework takes a different approach. Rather than requiring opt-in consent for most processing, it provides consumers with:
- The right to know what data is collected and how it is used
- The right to delete personal data
- The right to opt out of the sale or sharing of personal data
- The right to non-discrimination for exercising privacy rights
- The right to correct inaccurate personal data (added by CPRA)
The CCPA's opt-out model places the initial burden on consumers rather than companies --- a design choice with significant consequences for actual privacy protection. Behavioral economics research consistently shows that default settings are sticky: most people do not change them. An opt-out model defaults to data collection; an opt-in model defaults to privacy. The choice of default is itself a governance decision with profound consequences.
9.2.2 The Gap Between Law and Practice
Both the GDPR and the CCPA represent genuine legislative efforts to protect privacy. But the gap between statutory language and lived experience is enormous.
Consider the GDPR's requirement that consent be "freely given." In a 2019 survey by the Norwegian Consumer Council, 90% of popular apps and services used design techniques that steered users toward granting maximum data access. "Freely given" consent, in practice, means consent extracted through interfaces designed to make refusal difficult, confusing, or costly.
Consider the GDPR's requirement that consent be "informed." A 2024 study by researchers at Carnegie Mellon found that the average privacy policy was 4,977 words long (approximately 15-20 minutes to read) and written at a reading level requiring 14 years of education. The average American's reading level is closer to 8th grade. Even well-educated users struggle with the technical and legal language. The disconnect is not accidental: privacy policies are written by lawyers to limit legal liability, not by communicators to inform users.
Consider the GDPR's requirement that consent be "specific." In practice, many consent mechanisms bundle multiple purposes together --- "by clicking Accept, you agree to our use of your data for service improvement, personalization, advertising, analytics, and sharing with our partners." This is the privacy equivalent of a restaurant offering a single menu item: "eat everything we serve or eat nothing." Specificity requires granularity; most consent interfaces provide none.
Consider the CCPA's "right to opt out." To opt out of data sharing with every company that holds your data, you would need to identify hundreds of companies (many of which you've never heard of), locate each company's opt-out mechanism, and submit individual requests to each one. A 2021 study by Consumer Reports found that completing an opt-out request for a single data broker typically took 10 to 15 minutes --- and most consumers didn't know which brokers held their data. The total effort required to exercise your CCPA rights across all relevant entities is, for practical purposes, prohibitive.
Common Pitfall: It is tempting to blame individuals for not reading privacy policies or exercising opt-out rights. This is a fundamental misattribution of responsibility. The system is designed to produce consent without comprehension. Blaming users for not reading privacy policies is like blaming factory workers for unsafe working conditions --- it locates the problem in individual behavior rather than in the structural conditions that make individual action futile. The appropriate response to a governance failure is not to demand better governed individuals but to demand better governance.
9.3 Consent Fatigue and the Privacy Policy Problem
9.3.1 The Mathematics of Impossibility
The most damning evidence against the notice-and-consent framework is arithmetic.
Researchers Aleecia McDonald and Lorrie Faith Cranor at Carnegie Mellon University calculated in 2008 that if the average American read every privacy policy they encountered in a year, it would take approximately 244 hours of reading --- a burden widely popularized as 76 working days once skimming and comparing policies across sites are factored in --- at an estimated economic cost of $781 billion in lost productivity nationally. This figure has only grown as the number of digital services has increased.
To put this in perspective: 76 working days is more than one-third of a standard working year. It exceeds the total annual vacation time in any country on Earth. An individual who dedicated themselves to reading privacy policies as a full-time job from January to mid-April would still not have finished.
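The arithmetic behind the headline number is easy to reproduce. Here is a back-of-envelope sketch using the study's approximate inputs (about 1,462 sites visited per year, policies averaging roughly 2,500 words, a 250-words-per-minute reading pace; all three are estimates, not measurements):

```python
# Back-of-envelope reproduction of the McDonald-Cranor reading-cost estimate.
# All inputs are the study's approximate figures, not fresh measurements.
sites_per_year = 1462     # unique websites the average user visits annually
words_per_policy = 2500   # approximate average privacy policy length
reading_speed = 250       # words read per minute

minutes_per_policy = words_per_policy / reading_speed      # ~10 minutes
hours_per_year = sites_per_year * minutes_per_policy / 60  # ~244 hours

print(f"{minutes_per_policy:.0f} minutes per policy")
print(f"{hours_per_year:.0f} hours of reading per year")
# Pure reading alone is ~30 eight-hour days; the widely quoted "76 working
# days" folds in further assumptions such as skimming and comparison time.
print(f"{hours_per_year / 8:.0f} eight-hour days of pure reading")
```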
This is not a problem that can be solved by making privacy policies shorter or simpler (though both would help). The fundamental issue is structural: the notice-and-consent model requires individuals to make informed decisions about data practices across hundreds of services, each with complex and frequently changing terms. No human being can do this. The model demands the impossible, then treats the inevitable failure as consent.
9.3.2 Consent Fatigue in Practice
The psychological consequence of impossible demands is consent fatigue --- the tendency to stop engaging with consent mechanisms altogether, clicking "accept all" or scrolling past notices without reading them. Research documents this clearly:
- A 2020 study in the Journal of Cybersecurity found that 93% of Europeans who encountered a cookie consent banner accepted all cookies, even when a "reject all" option was available somewhere in the interface
- Eye-tracking studies show that users spend an average of 1.5 seconds on consent notices before clicking
- Longitudinal studies show that consent fatigue increases over time --- users who initially read some policies eventually stop reading any
- Consent fatigue correlates with socioeconomic status --- people with lower digital literacy and less time are more likely to accept defaults, compounding existing inequalities
"I used to read the terms of service," Mira admitted. "Freshman year, I actually tried. I read five of them. Each one was different, each one was thousands of words, and by the end I couldn't remember what any of them said. So I stopped. I just click 'agree.' And I'm an information science major. What chance does a normal person have?"
The phenomenon extends beyond individual fatigue. Organizational consent fatigue affects institutions as well. When GDPR-mandated cookie consent banners appeared on every European website, they created a new form of friction that users quickly learned to dismiss automatically. The consent mechanism, intended to empower users, instead trained them to click through consent interactions without engagement --- producing the opposite of its intended effect.
9.3.3 The Rational Response to Irrationality
Here is the uncomfortable truth that notice-and-consent defenders must confront: clicking "accept all" without reading is the rational response to an irrational system.
If the cost of reading every privacy policy exceeds the benefit of the protection that reading would provide --- and if refusing consent means losing access to services that have become essential for work, education, and social life --- then the utility-maximizing strategy is to consent blindly. Users are not being irrational when they skip privacy policies. They are being perfectly rational within a system that has made informed choice impossible.
This insight, drawn from Chapter 6's utilitarian framework, exposes the consent fiction at its core. A governance regime that relies on a mechanism that rational actors cannot meaningfully use is not a governance regime --- it is a legitimation mechanism for the absence of governance.
Reflection: Think about the last five digital services you signed up for. How many privacy policies did you read? How many consent screens did you engage with substantively? If your answer is "none" or "very few," you are in the overwhelming majority --- and your behavior is rational. What does this tell you about the notice-and-consent model?
9.4 Dark Patterns and Manufactured Consent
9.4.1 What Are Dark Patterns?
We introduced dark patterns in Chapter 4's discussion of the attention economy. Here we examine them specifically as mechanisms for manufacturing consent --- design techniques that manipulate users into granting permissions they would not grant if the choice were presented fairly.
The term was coined by UX designer Harry Brignull in 2010. In the consent context, dark patterns include:
Confirmshaming. Presenting the option to decline consent in language designed to make the user feel bad. "No, I don't want to protect my account" or "I'll stay uninformed." The accept option reads "Yes, keep me safe." The framing casts refusal as irrational or irresponsible. The manipulation is subtle but effective: it attaches social and psychological costs to the privacy-protective choice.
Privacy Zuckering (named after Mark Zuckerberg). Confusing users into sharing more data than they intend through complex settings, misleading defaults, and periodic resets that undo previously established privacy preferences. Facebook's privacy settings have been redesigned at least seven times since 2004, with each redesign resetting some preferences and adding new sharing options that default to maximum disclosure.
Obstruction. Making the path to deny consent significantly more burdensome than the path to grant it. The "Accept All" button is large, colored, and requires one click. The "Manage Preferences" path requires multiple clicks, scrolling through dozens of categories, understanding technical distinctions between "essential" and "functional" cookies, and navigating a multi-step confirmation process. Research by Utz et al. (2019) found that the number of clicks required to reject all non-essential cookies ranged from 2 to 17 --- compared to exactly 1 click to accept all.
Forced action. Requiring consent to non-essential data collection as a condition for using a service. "To continue using this app, you must agree to share your contacts and location." The GDPR prohibits this (consent must be "freely given"), but enforcement is inconsistent and the prohibition is routinely circumvented through creative framing of what constitutes "essential" processing.
Default manipulation. Pre-selecting the maximum data-sharing options and requiring users to actively deselect each one. Even when pre-checked boxes are prohibited (as under GDPR), similar effects are achieved through default settings that favor data collection and through toggle switches that default to "on" for every category of data sharing.
Nagging. Repeatedly re-prompting users who have declined consent until they relent. "You still haven't connected your contacts. Connect now for the best experience." The asymmetry is structural: the system has infinite patience; the user does not. Nagging exploits a much-studied (though contested) psychological phenomenon: ego depletion --- the tendency for willpower to diminish with repeated exercise.
Aesthetic manipulation. Using visual design to make the privacy-protective option appear less desirable. The "Accept" button is bold, colorful, and inviting. The "Decline" option is gray, small, or rendered as a text link rather than a button. The visual hierarchy tells the user which option the system wants them to choose.
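Of these patterns, obstruction is the most directly measurable: you can simply count the interaction cost of each path. A toy audit sketch follows; the flow definitions and click counts are invented for illustration, loosely patterned on the ranges Utz et al. report.

```python
# Toy audit: compare the interaction cost of granting vs. refusing consent.
# The flows below are invented for illustration, not measured from a real site.
flows = {
    "accept_all": ["click 'Accept All'"],
    "reject_all": ["click 'Manage Preferences'",
                   "scroll the category list",
                   "toggle off 'Functional'",
                   "toggle off 'Analytics'",
                   "toggle off 'Advertising'",
                   "click 'Confirm Choices'"],
}

accept_cost = len(flows["accept_all"])
reject_cost = len(flows["reject_all"])
print(f"accept: {accept_cost} step(s); reject: {reject_cost} steps")
if reject_cost > accept_cost:
    print(f"obstruction: refusal costs {reject_cost / accept_cost:.0f}x more interaction")
```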
9.4.2 Empirical Evidence on Dark Patterns
The effects of dark patterns on consent are not speculative. Rigorous research has quantified them:
The Norwegian Consumer Council's "Deceived by Design" report (2018) analyzed the privacy settings of Facebook, Google, and Windows 10. It found that all three used dark patterns to steer users toward maximum data sharing: default settings favored data collection, privacy-protective options required more clicks and more technical knowledge, and the framing of options consistently presented data sharing as beneficial and data restriction as costly. Google's location tracking settings, for example, required users to navigate through multiple menus to find and disable "Web & App Activity," "Location History," and "YouTube Search History" --- each controlled by a separate toggle in a separate location.
The cookie consent study by Nouwens et al. (2020) analyzed the design of 680 cookie consent banners on UK websites. Key findings:
- Only 11.8% of sites met the minimum requirements of GDPR consent law
- Sites that offered an explicit "reject all" option saw only 0.1% of users accept all cookies, compared to 93% acceptance on sites without a "reject all" button
- The single most influential design factor was whether the "reject" option was as prominent as the "accept" option
This last finding deserves emphasis. When consent mechanisms are designed fairly --- giving the options equal prominence --- most users decline. The fact that most users accept in practice is not evidence of genuine preferences but of design manipulation. The consent we observe is manufactured.
"The Nouwens study is the most devastating critique of notice-and-consent I've ever read," Dr. Adeyemi told the class. "It shows that the consent we measure is almost entirely an artifact of interface design. Change the button, change the consent rate from 93% to 0.1%. What we call 'consent' is not a choice. It is a design output."
A 2022 study by Soe et al. examined Consent Management Platforms (CMPs) --- the third-party services that websites use to implement cookie consent banners. They found that the most popular CMPs offered website operators configurable settings for how prominently to display the "reject" option, what color to make each button, and how many clicks to require for opting out. The CMP vendors marketed these features explicitly as tools for maximizing consent rates. The consent infrastructure was designed, from the ground up, to produce consent --- not to enable choice.
9.4.3 Regulatory Response to Dark Patterns
Regulators have begun addressing dark patterns, though enforcement remains uneven:
- The European Data Protection Board (EDPB) issued guidelines in 2022 specifically addressing dark patterns in social media interfaces, identifying categories of manipulative design and declaring them incompatible with valid consent under the GDPR
- The FTC has taken enforcement actions against companies using dark patterns, including a $245 million fine against Fortnite maker Epic Games in 2022 for using dark patterns to trick users into making purchases
- The EU's Digital Services Act (DSA) prohibits interface designs that "materially distort or impair" users' ability to make free and informed decisions --- practices its recitals name explicitly as dark patterns
- California's CPRA regulations require that opt-out mechanisms be as easy to use as opt-in mechanisms
- France's CNIL fined Google 150 million euros and Facebook 60 million euros in 2022 specifically for making it harder to refuse cookies than to accept them
But enforcement faces a fundamental challenge: dark patterns are not binary. There is no bright line between a design that is "persuasive" (legal) and one that is "manipulative" (illegal). The spectrum is continuous, and sophisticated designers can achieve manipulative outcomes through subtle design choices that are difficult to classify as violations. For every dark pattern that regulators identify and prohibit, new variants emerge that achieve similar effects through different mechanisms.
Connection: Recall from Chapter 4 the discussion of persuasive design and behavioral surplus. Dark patterns in consent interfaces are a specific application of the broader design philosophy that treats user behavior as something to be engineered rather than respected. The attention economy's incentive to maximize engagement produces the consent economy's incentive to maximize permissions --- the same structural dynamic, different mechanisms. The same UX teams that design for engagement design for consent extraction.
9.5 The Consent Fiction: Meaningful vs. Theatrical Consent
9.5.1 Defining the Consent Fiction
The consent fiction --- one of this book's four recurring themes --- refers to the gap between consent as a moral concept (an autonomous, informed, voluntary decision) and consent as a legal mechanism (a click, a checkbox, a continued use that authorizes data processing).
To call something a "fiction" is not to say it is a lie. A fiction, in the legal sense, is a construct that the system treats as true even when everyone involved knows it is not literally true. The legal fiction of corporate personhood (treating corporations as "persons" for certain legal purposes) is not a lie --- everyone knows corporations are not people --- but it serves useful legal functions. Similarly, the consent fiction treats "I clicked accept without reading" as informed consent --- not because anyone believes the user actually read and understood the terms, but because the fiction serves useful systemic functions: it provides a legal basis for data processing, limits corporate liability, and maintains the appearance of individual autonomy.
The problem is that the functions the consent fiction serves are useful primarily for data collectors, not data subjects. The fiction legitimizes data practices that most individuals would reject if they understood them. It converts the appearance of autonomy into the reality of acquiescence.
9.5.2 Five Markers of Theatrical Consent
How do you distinguish consent that is meaningful from consent that is theatrical? The following markers indicate theatrical consent:
1. Comprehension is impossible. If the terms are so long, complex, or technical that a reasonable person cannot understand them, consent to those terms cannot be "informed" in any meaningful sense.
2. Refusal is penalized. If declining consent means losing access to a service that is functionally necessary (email, social media for professional networking, educational software required by a course), consent is not "voluntary" --- it is coerced by the absence of alternatives.
3. The granularity is wrong. If consent is bundled --- "agree to all 47 data practices or agree to none" --- the individual cannot make targeted decisions about specific practices. Bundled consent treats a complex decision as a binary one.
4. The asymmetry persists. If the data subject remains ignorant of what happens to their data after consenting, the consent did not actually address the information asymmetry that justified requiring consent in the first place. You consent in the dark and remain in the dark.
5. The power dynamics are unchanged. If, after consenting, the data subject has no meaningful ability to monitor compliance, challenge violations, or withdraw consent without significant cost, the consent did not alter the underlying power relationship. Power before consent and power after consent are identical --- which means the consent changed nothing.
Mira applied these markers to VitraMed's patient consent forms. "Our consent form is six pages long. It's written in legal language that I --- an information science major --- struggle to parse. If patients refuse to sign, they can't use the portal that their doctor requires for appointment scheduling. The form bundles consent for clinical data use, research use, and third-party sharing into a single signature. Patients have no way to verify what we actually do with their data. And withdrawing consent requires a written request to an address that isn't on the form itself." She paused. "By every one of these markers, our consent is theatrical."
"And yet," Dr. Adeyemi said, "it is legally sufficient."
"That's the problem," Mira replied.
9.5.3 Why the Fiction Persists
If the consent fiction is so obviously inadequate, why does it persist? Several structural factors explain its resilience:
Industry preference. The consent model places the burden of data governance on individuals rather than organizations. From an industry perspective, this is far preferable to substantive regulation that would constrain data practices regardless of consent. The consent model externalizes governance costs: instead of companies bearing the cost of evaluating whether their data practices are ethical, individuals bear the cost of evaluating --- and inevitably failing to evaluate --- thousands of consent requests.
Regulatory path dependence. Consent has been the foundation of data protection law for decades. The entire legal and technical infrastructure --- privacy policies, consent management platforms, data processing agreements --- is built around it. Replacing it would require rethinking the entire legal architecture --- a politically and technically daunting task that no single legislative session can accomplish.
The autonomy narrative. Consent aligns with a powerful liberal narrative about individual choice and self-determination. Challenging consent feels like challenging autonomy itself. Advocates who critique the consent model are accused of paternalism: "Who are you to decide what people can consent to?" This rhetorical move is powerful because it turns the language of empowerment against the people trying to empower --- a dynamic we'll see again in discussions of platform regulation.
Complexity avoidance. Alternatives to consent (which we examine in Section 9.7) are more complex to implement and require substantive judgments about which data practices are acceptable. Consent offers a procedural solution that avoids these judgments --- you don't have to decide whether a practice is acceptable if the user "consented." Procedure replaces substance; process replaces principle.
Common Pitfall: The critique of consent presented here does not mean consent is worthless. Consent matters. The moral principle of autonomy is real and important. The critique is that what currently passes for consent in the digital economy has been stripped of the elements that make consent morally meaningful --- comprehension, voluntariness, specificity, and genuine choice. The solution is not to abandon consent but to supplement it with frameworks that do not depend entirely on individual decision-making.
9.6 The VitraMed Consent Crisis
9.6.1 A Predictive Analytics Dilemma
VitraMed's expansion into predictive analytics brought the consent question to a crisis point. The company's machine learning models could now predict --- with varying degrees of accuracy --- which patients were likely to develop certain conditions based on patterns in their electronic health records. This capability raised consent questions that the existing framework could not answer.
When a patient signed VitraMed's consent form, they authorized the company to use their data for "clinical care and related services." But did "related services" include:
- Using their data to train a predictive model that would then be used for other patients?
- Generating predictions about conditions the patient had never discussed with their doctor?
- Sharing predictive scores with the patient's insurance company as part of a "care coordination" program?
- Selling de-identified (but potentially re-identifiable) data to pharmaceutical companies for drug development?
- Combining their health data with commercially available consumer data (purchase history, social media activity) to improve predictive accuracy?
"The consent form says 'related services,'" Vikram argued during a family dinner that Mira recounted to Eli. "That's broad enough to cover all of this."
"Broad enough to cover it legally," Mira replied. "But not broad enough to cover it ethically. When Mrs. Okafor signed that form, she was thinking about her blood pressure medication. She wasn't thinking about an algorithm predicting whether she'd develop dementia."
"And she definitely wasn't thinking about her data being merged with her grocery store loyalty card data," Eli added.
The VitraMed case illustrates a problem that the notice-and-consent model cannot resolve: the scope of consent is defined at the moment of collection, but the uses of data evolve continuously after collection. A patient who consented to EHR management in 2020 could not have anticipated predictive analytics capabilities that didn't exist until 2024. Consent becomes a time-locked authorization applied to a temporally unbounded practice.
9.6.2 Community Consent and the Detroit Parallel
Eli's Detroit thread raises a different but related consent problem: community consent. When surveillance cameras are installed in a neighborhood, who consents? The city council? The residents of the surveilled neighborhood? Every individual who walks past a camera?
The notice-and-consent model, built around individual decision-making, has no mechanism for community-level consent. This is a critical gap, because many surveillance technologies affect communities, not just individuals:
- Predictive policing algorithms target neighborhoods, not individual suspects
- Facial recognition cameras surveil everyone who passes, not just persons of interest
- Environmental sensors collect data about community patterns, not individual behaviors
- ShotSpotter acoustic surveillance monitors entire districts, capturing all sounds --- not just gunshots
"Nobody asked my grandmother if she wanted a camera on her corner," Eli said. "Nobody asked the congregation at Greater Emmanuel Baptist. The city council voted --- but my neighborhood doesn't have a representative on the city council who lives in the neighborhood. So whose consent are we talking about?"
The concept of community consent --- the idea that communities should have collective decision-making authority over data practices that affect them as communities --- is not well developed in existing law. Indigenous data sovereignty frameworks (discussed in Chapter 3) offer one model, in which communities assert collective governance rights over data about their members. The Detroit community's campaign for a surveillance technology ordinance (Chapter 8, Section 8.7.4) represents another --- creating a democratic process through which communities can collectively approve or reject surveillance technologies.
Connection: The community consent problem connects to Chapter 5's discussion of structural power and Chapter 6's justice theory. Behind Rawls's veil of ignorance, you wouldn't know whether you lived in the surveilled neighborhood or the unsurveilled one. A just consent framework would give equal voice to the most affected communities --- precisely the communities that current consent mechanisms most effectively exclude.
9.7 Beyond Consent: Alternative Frameworks
9.7.1 Legitimate Interest
The GDPR recognizes six legal bases for processing personal data. Consent is only one of them. Legitimate interest (Article 6(1)(f)) permits data processing when it is necessary for a legitimate interest of the controller or a third party, unless overridden by the interests or fundamental rights of the data subject.
Legitimate interest shifts the analytical burden from the individual (did they consent?) to the organization (is their interest in processing legitimate, and does it outweigh the data subject's interests?). This requires organizations to conduct a balancing test:
- Purpose test: What is the legitimate interest being pursued? Is it genuine and lawful?
- Necessity test: Is the data processing necessary to achieve that interest, or could it be achieved with less data or no data?
- Balancing test: Do the individual's interests, rights, or freedoms override the legitimate interest? This requires considering the nature of the data, the impact on individuals, and whether individuals would reasonably expect the processing.
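Regulators such as the UK's Information Commissioner's Office describe this as a Legitimate Interests Assessment (LIA). Below is a minimal sketch of its three-part structure as code; the questions are the GDPR's, but the record format, field names, and examples are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """The Art. 6(1)(f) balancing test as a three-part record (illustrative)."""
    purpose_is_genuine_and_lawful: bool  # purpose test
    processing_is_necessary: bool        # necessity test: no less-intrusive route?
    subject_rights_override: bool        # balancing test: do the subject's rights win?

    def lawful_basis_available(self) -> bool:
        return (self.purpose_is_genuine_and_lawful
                and self.processing_is_necessary
                and not self.subject_rights_override)

# Hypothetical assessments: fraud detection vs. ad profiling of sensitive data.
fraud_check = LegitimateInterestAssessment(True, True, False)
ad_profiling = LegitimateInterestAssessment(True, False, True)
print(fraud_check.lawful_basis_available())   # True: all three tests pass
print(ad_profiling.lawful_basis_available())  # False: necessity and balancing fail
```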
The legitimate interest framework has strengths: it forces organizations to articulate and justify their data practices, and it provides a mechanism for weighing competing interests without depending on individual consent. But it also has weaknesses: the balancing test is subjective, and organizations tend to find that their own interests are legitimate and that individuals' interests do not override them. Without strong regulatory enforcement, legitimate interest can become a mechanism for bypassing consent rather than a genuine alternative governance framework. As the UK's Information Commissioner has noted, legitimate interest is the GDPR's most flexible lawful basis --- which means it is also the most susceptible to abuse.
9.7.2 Contextual Integrity as Governance
Nissenbaum's contextual integrity framework (Chapter 7, Section 7.3) offers a more robust alternative. Rather than asking "did the individual consent?" or "is the organization's interest legitimate?", contextual integrity asks: does the data practice conform to the informational norms of the relevant social context?
Applied as a governance framework, contextual integrity would:
- Identify the social context of data collection (healthcare, education, commerce, law enforcement)
- Determine the established informational norms of that context (what data flows are expected by participants?)
- Evaluate whether a new data practice conforms to or violates those norms
- Permit conforming practices without individual consent (because they match existing expectations)
- Require justification and potentially prohibition for non-conforming practices (because they breach expectations)
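This evaluation can even be mechanized: Barth, Datta, Mitchell, and Nissenbaum showed that contextual integrity can be formalized by describing each data flow in terms of its sender, recipient, information type, and transmission principle, then checking the flow against the context's norms. A toy sketch of that idea follows; the norm entries are invented examples, not an authoritative norm base.

```python
from typing import NamedTuple

class Flow(NamedTuple):
    sender: str
    recipient: str
    info_type: str
    transmission_principle: str  # e.g. "with referral", "for treatment"

# Invented healthcare norms: the flows participants in this context expect.
HEALTHCARE_NORMS = {
    Flow("treating_physician", "specialist", "medical_record", "with referral"),
    Flow("patient", "treating_physician", "symptoms", "for treatment"),
}

def violates_contextual_integrity(flow: Flow, norms: set) -> bool:
    """A flow with no matching norm breaches the context's expectations."""
    return flow not in norms

referral = Flow("treating_physician", "specialist", "medical_record", "with referral")
scoring = Flow("portal_operator", "insurer", "risk_score", "for care coordination")
print(violates_contextual_integrity(referral, HEALTHCARE_NORMS))  # False: conforms
print(violates_contextual_integrity(scoring, HEALTHCARE_NORMS))   # True: novel flow
```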
This approach has several advantages:
- It does not depend on individual comprehension of complex technical practices
- It respects the social norms that actually govern people's privacy expectations
- It provides a substantive standard (contextual appropriateness) rather than a procedural one (did they click?)
- It can address community-level data practices, not just individual transactions
- It offers a principled basis for distinguishing between data uses that feel acceptable and those that feel wrong
"Contextual integrity explains why VitraMed's predictive analytics feel wrong even when patients have technically consented," Mira realized. "The healthcare context has norms: your doctor uses your data to treat you. When VitraMed uses your data to predict conditions you haven't discussed, for purposes you haven't imagined, shared with entities you didn't know about --- every one of those is a context violation. The consent form doesn't fix the violation. It just papers over it."
The framework has limitations as well. Who defines the "established norms" of a context? Norms evolve, and sometimes they should be disrupted --- medical research, for example, often requires data flows that breach traditional clinical norms. Contextual integrity provides tools for identifying norm violations but does not automatically determine whether violations are justified.
9.7.3 The Information Fiduciary Model
Legal scholar Jack Balkin has proposed treating certain data-collecting entities as information fiduciaries --- entities that owe duties of care, confidentiality, and loyalty to the individuals whose data they hold, analogous to the duties that doctors, lawyers, and financial advisors owe to their clients.
Under this model:
- An information fiduciary would be prohibited from using personal data in ways that are contrary to the interests of the data subject
- The duty would be imposed by law, not dependent on consent
- The standard would be loyalty --- acting in the data subject's interest, not exploiting them
- Violations would be actionable regardless of what the terms of service said
The fiduciary model addresses a core weakness of consent: it shifts the obligation from the less powerful party (the data subject) to the more powerful party (the data collector). You don't need your doctor to explain every aspect of medical ethics to you, and you don't need to read a 5,000-word contract; the doctor owes you duties of care regardless of whether you understand them. An information fiduciary model would impose similar duties on platforms.
Critics of the fiduciary model raise important objections:
- Scope: The doctor-patient relationship involves one doctor and one patient. Facebook has billions of users. Can fiduciary duties scale to relationships that are fundamentally different in character from the professional relationships where fiduciary obligations originated?
- Interest conflicts: A doctor's interest and a patient's interest are largely aligned (the patient gets better, the doctor succeeds). A platform's interest (maximize engagement and data extraction) and a user's interest (privacy, autonomy, wellbeing) are structurally misaligned. Can fiduciary duties resolve conflicts that are embedded in the business model, or would compliance require changing the business model entirely?
- Enforcement: Fiduciary duties in traditional contexts are enforced through malpractice suits and professional licensing. What enforcement mechanism would apply to technology platforms? The litigation costs alone could make the model inaccessible to most individuals.
9.7.4 Data Trusts and Collective Governance
A fourth alternative --- data trusts --- proposes that personal data be managed by independent trustees who owe duties to the data subjects, rather than by the data collectors themselves. Data trusts would negotiate terms of data use on behalf of individuals, monitor compliance, and take legal action when terms are violated.
This model addresses the power asymmetry directly: individuals do not negotiate with platforms alone but through an institutional intermediary with expertise, resources, and legal standing. Data trusts have been piloted in several settings, including projects convened by the UK's Open Data Institute and municipal experiments in Barcelona, Amsterdam, and Toronto --- where the Sidewalk Labs controversy, in which community resistance met a proposed civic data trust overseen by Google's sister company, highlighted the difficulty of ensuring trustee independence.
Intuition: Notice the pattern across these alternative frameworks: each one shifts some portion of the governance burden from the individual data subject to an institution --- the regulating state (legitimate interest), the social context (contextual integrity), the data collector (fiduciary duty), or an independent intermediary (data trust). The common recognition is that individual consent alone cannot carry the weight of data governance. The question is which institutional arrangement best protects data subjects while permitting beneficial data uses.
9.8 Children and Consent: Special Protections
9.8.1 Why Children Cannot Consent
The consent framework's inadequacies are most acute --- and most morally urgent --- when the data subject is a child. Children lack the cognitive capacity, life experience, and legal standing to provide meaningful consent to data practices that may affect them for decades.
The moral reasoning here draws on Chapter 6's ethical frameworks:
- Deontologically: Children cannot exercise the autonomous rational agency that makes consent morally valid. Treating a child's click on "I Agree" as consent violates the Kantian principle of respect for persons --- it uses the form of autonomy without the substance of autonomous choice.
- From care ethics: Children are in relationships of profound dependency. Those responsible for their care --- parents, schools, platforms --- have heightened obligations to protect them. Care ethics would evaluate platform data practices by asking: does this serve the child's interests, or exploit their vulnerability?
- From justice theory: Behind the veil of ignorance, you would insist on robust protections for children, because you might be one --- or you might be a parent, unable to monitor every digital interaction your child has.
The stakes are also distinctive. Data collected about a child today will persist indefinitely. A 10-year-old's browsing history, social media activity, and location data will be available to future employers, insurers, and governments --- shaping opportunities the child cannot yet imagine, for decisions the child cannot yet anticipate.
9.8.2 COPPA: The Children's Online Privacy Protection Act
The U.S. Children's Online Privacy Protection Act (COPPA), enacted in 1998, requires websites and apps to:
- Post a clear privacy policy
- Obtain verifiable parental consent before collecting data from children under 13
- Give parents the right to review and delete their children's data
- Not condition a child's participation on providing more data than necessary
COPPA was a pioneering law, but its limitations have become evident:
The age-13 threshold was based on developmental psychology research from the 1990s. Many experts now argue that the threshold should be higher --- that 13-year-olds are not developmentally equipped to consent to the data practices of modern social media platforms. The UK's Age Appropriate Design Code (2021) takes a broader view, establishing design standards for services "likely to be accessed by children" under 18.
Verification failures. COPPA requires "verifiable" parental consent, but verification mechanisms are easily circumvented. Most platforms rely on self-reported age --- children simply enter a false birthdate. A 2022 Thorn survey found that 45% of children under 13 used social media platforms, most of them by lying about their age. The verification problem is partially structural: robust age verification (requiring identification documents, biometric verification) raises its own privacy concerns, creating a tension between protecting children and surveilling them.
The scope problem. COPPA applies to services directed at children or that have actual knowledge of child users. This creates an incentive for platforms to avoid knowing the ages of their users --- willful blindness that circumvents the law's protections. YouTube, before the FTC settlement in 2019, claimed that it was not directed at children despite hosting millions of hours of children's content.
The parental consent problem. COPPA assumes that parental consent is a meaningful substitute for the child's own consent. But this assumption has its own limitations: parents face the same comprehension and time barriers as other data subjects, parents' interests may not perfectly align with their children's interests, and the parent-child relationship is itself a power relationship subject to the critiques we have examined.
9.8.3 GDPR Article 8 and the Children's Code
The GDPR establishes higher protections for children's data:
- Article 8 requires parental consent for data processing of children under 16 (member states may lower this to 13)
- Recital 38 states that children "merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned"
- The data controller must make "reasonable efforts" to verify parental consent, "taking into consideration available technology"
The UK's Age Appropriate Design Code (formally the Children's Code), which came into force in September 2021, goes further, requiring platforms to:
- Default to maximum privacy settings for child users
- Not use nudge techniques that encourage children to weaken privacy protections
- Switch off geolocation services by default
- Not use profiling unless there is a compelling reason to do so
- Provide age-appropriate explanations of data practices
- Produce Data Protection Impact Assessments for services likely to be accessed by children
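The Code's privacy-by-default duty is, at bottom, a requirement about initial configuration. Here is a sketch of what that duty might look like at account-provisioning time; the setting names and the exact cutoff policy are illustrative, not the Code's text.

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    # Illustrative setting names, not drawn from any real platform.
    profile_public: bool
    geolocation_enabled: bool
    behavioral_profiling: bool
    nudges_to_share_more: bool

def default_settings(age: int) -> AccountSettings:
    """Provision account defaults; child accounts start at maximum privacy."""
    if age < 18:  # the Code covers services likely to be accessed by under-18s
        return AccountSettings(profile_public=False, geolocation_enabled=False,
                               behavioral_profiling=False, nudges_to_share_more=False)
    return AccountSettings(profile_public=True, geolocation_enabled=True,
                           behavioral_profiling=True, nudges_to_share_more=True)

print(default_settings(14))  # everything off: the burden sits on the platform
```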
The Children's Code has had measurable effects. TikTok disabled direct messaging for users under 16, stopped sending push notifications after 9 p.m. to 13-to-15-year-olds (and after 10 p.m. to 16-to-17-year-olds), and defaulted accounts for users under 16 to private. Instagram restricted adults from sending messages to minors who don't follow them. YouTube disabled autoplay by default for users under 18.
"The Children's Code gets something right that the adult consent framework gets wrong," Dr. Adeyemi observed. "It puts the burden on the platform, not the user. It says: you must design your service to protect children, regardless of what they click. If we applied that principle to adults --- design your service to protect users, regardless of whether they read your privacy policy --- we would have a fundamentally different data governance regime."
Reflection: Dr. Adeyemi's observation raises a provocative question: if we recognize that children cannot meaningfully consent to data practices, and if the obstacles to adult consent (comprehension, power asymmetry, consent fatigue, dark patterns) are differences of degree rather than kind, does the justification for shifting the burden from individuals to institutions apply to adults as well? At what point does the gap between the consent we imagine and the consent we practice become large enough to demand a different approach entirely?
9.9 Toward a Post-Consent Governance Framework
9.9.1 What Consent Can and Cannot Do
This chapter has argued that the notice-and-consent model is fundamentally inadequate as the sole or primary governance mechanism for personal data. But this does not mean consent is useless. Consent can play a legitimate role in a broader governance framework when:
- The data practice is comprehensible --- the individual can actually understand what they are agreeing to
- The choice is genuine --- refusal is possible without losing access to essential services
- The scope is specific --- consent covers a defined purpose, not a blanket authorization
- The consequences are proportionate --- the stakes of the decision match the individual's capacity to evaluate them
- Institutional safeguards exist alongside individual consent --- the consent is one layer of protection, not the only one
Under these conditions, consent is not a fiction --- it is a meaningful expression of autonomy. The problem is not the concept of consent but the conditions under which it is currently extracted.
9.9.2 A Layered Model
Drawing on the frameworks examined in this chapter, a robust data governance regime would include multiple layers:
Layer 1: Substantive limits. Some data practices should be prohibited regardless of consent. No individual can consent to a practice that violates fundamental rights, just as no employee can consent to unsafe working conditions and no patient can consent to medical fraud. The EU AI Act's prohibitions on social scoring and (with narrow exceptions) real-time facial recognition in public spaces reflect this principle. So does the emerging consensus that children should not be subjected to behavioral advertising regardless of parental consent.
Layer 2: Contextual norms. Data practices that conform to the informational norms of their social context should be permitted without individual consent. Your doctor sharing your records with a specialist you've been referred to does not require a separate consent interaction --- it conforms to the norms of the healthcare context. Contextual integrity provides the analytical framework for identifying which practices conform and which violate.
Layer 3: Institutional oversight. Data practices that fall between clear permission and clear prohibition should be subject to institutional review --- by regulators, data protection authorities, ethical review boards, or data trusts. This layer provides expert evaluation that individuals cannot perform. It is the analog of building codes (which protect occupants regardless of whether they understand structural engineering) or food safety regulations (which protect consumers regardless of whether they can evaluate bacterial contamination).
Layer 4: Meaningful consent. For data practices where individual choice is genuinely meaningful --- where the stakes, the options, and the consequences can be understood --- consent can serve as an additional governance mechanism. But it is the fourth layer, not the first. Consent supplements institutional governance; it does not replace it.
Layer 5: Accountability and enforcement. Regardless of consent, data collectors should be held accountable for how data is used. Post-hoc accountability --- through audits, breach notification, litigation, and regulatory enforcement --- catches failures that consent cannot prevent.
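The layering amounts to a decision cascade: each layer either disposes of a data practice or passes it down to the next. A minimal sketch follows; the input fields and the example practice are invented for illustration.

```python
# The five-layer model as a decision cascade (all inputs hypothetical).
# Layer 5 (accountability) is omitted: it operates after the fact, via audits,
# breach notification, litigation, and regulatory enforcement.
def evaluate_practice(practice: dict) -> str:
    # Layer 1: substantive limits -- some practices are out regardless of consent.
    if practice.get("violates_fundamental_rights"):
        return "prohibited (layer 1: substantive limits)"
    # Layer 2: contextual norms -- expected flows need no consent interaction.
    if practice.get("conforms_to_context_norms"):
        return "permitted (layer 2: contextual norms)"
    # Layer 3: the gray zone in between goes to institutional review.
    if not practice.get("approved_by_oversight_body"):
        return "held for review (layer 3: institutional oversight)"
    # Layer 4: meaningful consent, only where genuine choice is possible.
    if practice.get("meaningful_consent_obtained"):
        return "permitted (layer 4: meaningful consent)"
    return "not permitted (no valid basis)"

predictive_scoring = {
    "violates_fundamental_rights": False,
    "conforms_to_context_norms": False,
    "approved_by_oversight_body": True,
    "meaningful_consent_obtained": False,
}
print(evaluate_practice(predictive_scoring))  # -> not permitted (no valid basis)
```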
"Five layers," Eli noted. "Consent is layer four. Right now, it's the only layer. That's the problem."
"And the Accountability Gap is the reason layers one through three don't exist yet," Mira added.
9.10 Chapter Summary
Key Concepts
- Informed consent (medical ethics): Requires voluntariness, comprehension, and adequate information --- a demanding standard that originated in the aftermath of research atrocities (Nuremberg, Tuskegee)
- Notice and consent (data protection law): The dominant legal framework, requiring notification of data practices and individual authorization --- but plagued by structural failures in comprehension, voluntariness, and specificity
- Consent fatigue: The psychological response to impossible demands for attention to privacy policies; 76 working days per year would be needed to read all applicable policies
- Dark patterns: Design techniques that manufacture consent through manipulation --- confirmshaming, obstruction, default manipulation, nagging, aesthetic manipulation
- Theatrical consent: Consent that satisfies legal requirements but fails every marker of meaningful autonomous choice --- identifiable through five markers (comprehension, voluntariness, granularity, asymmetry, power dynamics)
- Alternative frameworks: Legitimate interest (organizational justification), contextual integrity (conformity to social norms), information fiduciary (duty of loyalty), data trusts (collective governance)
- Children's consent: COPPA and GDPR Article 8 recognize that children cannot meaningfully consent, imposing protective duties on platforms --- a principle whose logic may extend to adults
Key Debates
- Is the notice-and-consent model reformable, or must it be replaced?
- Can dark patterns be regulated without over-restricting legitimate persuasion?
- Should the information fiduciary model be applied to all data-collecting platforms?
- Does the argument for shifting governance burdens from individuals to institutions apply to adults, or only to children?
- Is the consent fiction a necessary compromise or an inexcusable abdication of governance responsibility?
Applied Framework
To evaluate a consent mechanism:
1. Assess comprehension --- can a reasonable person understand what they are consenting to?
2. Assess voluntariness --- can the person refuse without significant penalty?
3. Assess specificity --- is consent granular or bundled?
4. Assess the information asymmetry --- does consent address the power gap between collector and subject?
5. Assess accountability --- what happens when the data collector violates the terms, and what recourse does the subject have?
What's Next
In Chapter 10: Privacy by Design and Data Minimization, we move from the question of whether individuals consent to data collection to the question of whether data collection should happen at all. We'll examine the principle of data minimization --- collecting only what is necessary --- and the broader framework of Privacy by Design, which embeds privacy protection into the architecture of systems rather than relying on individual choice. Chapter 10 also introduces our first Python code, demonstrating techniques for k-anonymity and differential privacy.
Before moving on, complete the exercises and quiz.
Chapter 9 Exercises -> exercises.md
Chapter 9 Quiz -> quiz.md
Case Study: Cookie Consent Banners: A Study in Theatrical Consent -> case-study-01.md
Case Study: VitraMed's Patient Consent Redesign -> case-study-02.md
Related Reading
Explore this topic in other books:
- What Is Privacy? (Data & Society)
- Privacy by Design and Data Minimization (Data & Society)
- Data Privacy in RegTech (RegTech)
- Data Privacy Fundamentals (AI Ethics)