> "Ethics is knowing the difference between what you have a right to do and what is right to do."
Learning Objectives
- Explain the core principles of five major ethical frameworks and their relevance to data governance
- Apply utilitarian analysis to a data governance dilemma, identifying stakeholders, consequences, and limitations
- Apply Kantian deontological analysis to questions of consent, dignity, and data rights
- Apply virtue ethics to evaluate the character traits that data practitioners should cultivate
- Apply care ethics to analyze the relational dimensions of data systems
- Apply justice theory (Rawls) to evaluate the distributional fairness of data policies
- Navigate disagreements between frameworks using a structured ethical reasoning process
In This Chapter
- Chapter Overview
- 6.1 Why Ethics Matters for Data Governance
- 6.2 Utilitarianism: The Greatest Good
- 6.3 Deontology: Duties, Rights, and Dignity
- 6.4 Virtue Ethics: Character and Practice
- 6.5 Care Ethics: Relationships and Responsibility
- 6.6 Justice Theory: Fairness Behind the Veil
- 6.7 When Frameworks Disagree
- 6.8 Chapter Summary
- What's Next
- Chapter 6 Exercises → exercises.md
- Chapter 6 Quiz → quiz.md
- Case Study: The VitraMed Data Sharing Dilemma — A Five-Framework Analysis → case-study-01.md
- Case Study: Should Clearview AI Exist? An Ethical Analysis → case-study-02.md
Chapter 6: Ethical Frameworks for the Data Age
"Ethics is knowing the difference between what you have a right to do and what is right to do." — Potter Stewart, U.S. Supreme Court Justice
Chapter Overview
Throughout Part 1, we've encountered questions that data alone cannot answer. Should VitraMed sell de-identified patient data to researchers? Should Detroit's Smart City sensors be removed from Eli's neighborhood? Should platforms be allowed to use engagement-maximizing algorithms that may harm mental health? Should individuals be able to sell their personal data?
These are ethical questions — questions about what we should do, not just what we can do. And they cannot be answered by technical expertise alone. They require philosophical tools: frameworks for reasoning about right and wrong, good and bad, fair and unfair.
This chapter provides those tools. It introduces five major ethical frameworks, applies each to concrete data dilemmas, and develops a process for navigating the inevitable disagreements that arise when different frameworks point in different directions. This is not an abstract exercise — it is the foundation for every governance judgment you'll encounter in the chapters ahead.
In this chapter, you will learn to:
- Apply five ethical frameworks to data governance dilemmas
- Identify when frameworks converge (making decisions easier) and diverge (requiring judgment)
- Construct ethical arguments that acknowledge multiple perspectives
- Use a structured ethical reasoning process for real-world decisions
6.1 Why Ethics Matters for Data Governance
6.1.1 The Gap Between Legal and Ethical
A common misconception is that compliance with the law is sufficient for ethical behavior. It is not.
- Legal but unethical: Facebook's data sharing with Cambridge Analytica was legal under Facebook's terms of service at the time. It was nevertheless a betrayal of user trust.
- Ethical but illegal: Whistleblowers like Edward Snowden and Frances Haugen violated confidentiality agreements. Whether their actions were ethical is debated; that they were illegal is clear.
- Legal vacuum: Many data practices — facial recognition in public spaces, algorithmic hiring, emotional manipulation through design — operate in legal gray areas where regulation hasn't caught up with practice.
"The law tells you the floor," Dr. Adeyemi said on the first day of the ethics unit. "Ethics tells you the ceiling. The space between the two is where most of the interesting — and most of the important — decisions happen."
6.1.2 The Role of Ethical Reasoning
Ethical reasoning does not provide definitive answers to every question. What it provides is:
- A vocabulary for articulating why something feels wrong (or right)
- A structure for analyzing complex situations with multiple stakeholders
- A discipline for considering perspectives beyond your own
- A basis for justifying decisions to others — and for evaluating the justifications others offer
6.2 Utilitarianism: The Greatest Good
6.2.1 Core Principles
Utilitarianism, developed by Jeremy Bentham (1748-1832) and John Stuart Mill (1806-1873), holds that the morally right action is the one that produces the greatest overall good (or "utility") for the greatest number of people.
Key features:
- Consequentialist: Actions are judged by their outcomes, not their intentions
- Aggregative: Costs and benefits are summed across all affected parties
- Impartial: Each person's wellbeing counts equally — no special treatment for the decision-maker
- Maximizing: Among available options, choose the one that produces the most net good
6.2.2 Applied to Data: Utilitarian Analysis of VitraMed's Data Sharing
Scenario: VitraMed has de-identified patient data from 200,000 individuals. A major research university wants access to study diabetes prevention. The research could benefit millions of future patients. But some privacy advocates argue that even de-identified data carries re-identification risks.
Utilitarian analysis:
| Stakeholder | Benefit | Cost |
|---|---|---|
| Future patients | Improved diabetes prevention (potentially millions benefit) | None directly |
| Current patients | Indirect benefit from advancing medical knowledge | Risk of re-identification; potential for insurance discrimination if re-identified |
| VitraMed | Revenue from data licensing; reputation as research partner | Risk of backlash if breach occurs |
| Researchers | Valuable dataset for publication and clinical insights | None |
| Society | Public health benefit | Erosion of trust in health data systems if breach occurs |
A utilitarian analysis would likely support sharing the data, because the aggregate benefits (improved health outcomes for millions) appear to outweigh the aggregate costs (probabilistic risk of re-identification for some). But several complications emerge:
Common Pitfall: Utilitarian calculations depend heavily on how you estimate costs and benefits. If you underestimate the probability of re-identification, or if you fail to account for the compounding effect of eroded trust (if patients stop sharing data because they don't trust the system, future research suffers), the calculation changes dramatically. Utilitarianism is only as good as the estimates it relies on.
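To make the pitfall concrete, here is a minimal sensitivity sketch. Every number in it is a hypothetical placeholder rather than a real estimate from the VitraMed scenario; the point is only that the verdict flips as the assumed re-identification probability changes.

```python
# Hypothetical utilitarian sensitivity analysis for the VitraMed scenario.
# Every number below is an illustrative assumption, not a real estimate.

RESEARCH_BENEFIT = 50_000_000   # assumed aggregate benefit of the research (arbitrary utility units)
PATIENTS = 200_000              # patients in the de-identified dataset
HARM_IF_REIDENTIFIED = 5_000    # assumed harm per re-identified patient (utility units)
TRUST_EROSION_FACTOR = 3.0      # assumed multiplier: lost future research if trust collapses

def net_utility(p_reident: float) -> float:
    """Expected net utility of sharing, as a function of re-identification probability."""
    direct_harm = p_reident * PATIENTS * HARM_IF_REIDENTIFIED
    # Trust erosion compounds the harm: patients who stop sharing data
    # reduce the benefit of *future* research.
    indirect_harm = direct_harm * TRUST_EROSION_FACTOR
    return RESEARCH_BENEFIT - direct_harm - indirect_harm

for p in (0.001, 0.01, 0.05, 0.10):
    verdict = "share" if net_utility(p) > 0 else "don't share"
    print(f"p(re-identification) = {p:>5.3f}  ->  net utility = {net_utility(p):>13,.0f}  ({verdict})")
```

The same cost-benefit structure supports opposite conclusions depending on a single contested parameter, which is exactly why utilitarian analyses need their estimates stated, and stress-tested, explicitly.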
6.2.3 Strengths and Limitations
Strengths:
- Provides a clear, systematic decision procedure
- Focuses on real-world consequences, not abstract principles
- Demands consideration of all affected parties
- Well-suited to policy analysis where trade-offs are explicit
Limitations:
- Can justify sacrificing minorities for the benefit of majorities ("tyranny of the majority")
- Requires quantifying values (privacy, dignity, autonomy) that resist quantification
- Ignores distributive fairness — an outcome where one person gains $100 and everyone else gains nothing has the same utilitarian value as one where 100 people gain $1 each
- Vulnerable to manipulation through framing effects (who defines "good"?)
6.3 Deontology: Duties, Rights, and Dignity
6.3.1 Core Principles
Deontological ethics, most associated with Immanuel Kant (1724-1804), holds that certain actions are inherently right or wrong, regardless of their consequences. The morality of an action depends on whether it conforms to moral duties and respects moral rights.
Kant's categorical imperative provides two key tests:
- Universalizability: Act only according to a maxim that you could will to become a universal law. If everyone acted this way, would the system be coherent?
- Humanity as an end: Treat humanity, whether in your own person or in that of another, always as an end and never merely as a means.
6.3.2 Applied to Data: Deontological Analysis of Consent
The second formulation of the categorical imperative — treating people as ends, never merely as means — has direct implications for data governance:
- Using people's data to improve a service they use treats them partly as means (their data improves the product) but also as ends (the improvement benefits them). This is permissible under Kant's framework.
- Harvesting behavioral surplus — using data beyond what's needed for service improvement to predict and modify behavior for advertiser benefit — treats users merely as means. Their data is extracted for someone else's profit, with no corresponding benefit to the data subject. This violates the categorical imperative.
- Consent under coercion — "agree to these terms or lose access to essential services" — undermines the autonomy that Kant considers central to human dignity. Consent that is not genuinely voluntary is not morally valid.
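The Kantian objection to coerced consent maps loosely onto the conditions that data protection law attaches to valid consent. The sketch below is a deliberately simplistic boolean model, assuming the four conditions named in GDPR Article 4(11): freely given, specific, informed, and unambiguous.

```python
# A hypothetical sketch of the Kantian/GDPR overlap on consent validity.
# The four conditions loosely track GDPR Article 4(11): consent must be
# freely given, specific, informed, and unambiguous. The model is
# deliberately simplistic and illustrative only.

def consent_is_valid(freely_given: bool, specific: bool,
                     informed: bool, unambiguous: bool) -> bool:
    """All four conditions must hold; failing any one invalidates consent."""
    return freely_given and specific and informed and unambiguous

# "Agree to these terms or lose access to essential services" fails
# the first condition, whatever the other three look like:
print(consent_is_valid(freely_given=False, specific=True,
                       informed=True, unambiguous=True))  # -> False
```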
The deontological perspective resonated with Eli. "It's not about whether surveillance makes the neighborhood statistically safer," he argued. "It's about whether treating an entire community as a population to be monitored — without their meaningful consent — respects their dignity. Kant would say no."
Mira pushed back: "But what if the data actually prevents crime? Doesn't the consequence matter?"
"To a Kantian, no," Dr. Adeyemi intervened. "Or rather — consequences matter in a practical sense, but they cannot justify treating people as mere instruments. You cannot surveil a community without consent and call it ethical just because the outcome was good. The means matter, not just the ends."
6.3.3 Strengths and Limitations
Strengths:
- Protects individual rights even when violating them would benefit the majority
- Provides clear constraints on permissible behavior (duties, rights, dignity)
- Aligns naturally with data protection law (GDPR's rights-based framework is essentially deontological)
- Resists the utilitarian tendency to sacrifice individuals for aggregate benefit
Limitations:
- Can be rigid — what happens when duties conflict? (Duty to protect privacy vs. duty to prevent harm)
- Focused on individual rights, less equipped to analyze structural/systemic issues
- The universalizability test can be difficult to apply to novel situations
- May prohibit actions that seem intuitively right (e.g., lying to protect someone from a murderer, in Kant's own example)
6.4 Virtue Ethics: Character and Practice
6.4.1 Core Principles
Virtue ethics, rooted in Aristotle (384-322 BCE) and revived by Alasdair MacIntyre, Philippa Foot, and others, shifts the focus from actions (what should I do?) to character (what kind of person should I be?).
Central concepts:
- Virtues are character traits that enable human flourishing: courage, justice, temperance, wisdom, honesty, compassion
- Phronesis (practical wisdom) is the ability to discern the right action in specific circumstances — not by applying a formula but by exercising judgment cultivated through experience
- The mean — virtues lie between extremes. Courage is between cowardice and recklessness. Transparency is between secrecy and oversharing.
6.4.2 Applied to Data: The Virtuous Data Practitioner
Virtue ethics asks: what character traits should a data scientist, a privacy officer, a platform designer, or a policymaker cultivate?
| Virtue | In Data Practice | Vice of Excess | Vice of Deficiency |
|---|---|---|---|
| Justice | Fair treatment of data subjects; equitable distribution of data benefits and burdens | Rigid rule-following that ignores context | Favoritism, bias, indifference to inequity |
| Honesty | Transparent about data practices, limitations, uncertainties | Radical transparency that violates confidence | Concealment, misdirection, obfuscation |
| Courage | Speaking up when data practices cause harm, even at personal cost | Reckless whistleblowing without evidence | Silence in the face of wrongdoing |
| Temperance | Data minimization; collecting only what is needed | Refusing to collect data even when it serves genuine good | Data hoarding; collecting everything because you can |
| Compassion | Considering the human impact of data decisions | Paralysis — unable to make any decision that might harm someone | Indifference to the humans behind the data points |
Ray Zhao, the CDO at NovaCorp, brought a virtue ethics perspective to his guest lecture. "I can't apply a formula to every data decision. The GDPR gives me rules, and those matter. But the hard cases — the ones where the rules run out or conflict — require judgment. And judgment comes from character. I've hired brilliant data engineers who had no ethical instincts. They could optimize anything, but they couldn't feel when something was wrong. That feeling — that's practical wisdom."
6.4.3 Strengths and Limitations
Strengths:
- Focuses on the person making the decision, not just the decision itself
- Recognizes the role of judgment, context, and experience
- Can handle novel situations (practical wisdom adapts where rules break down)
- Emphasizes moral development — ethics as a practice, not just a checklist
Limitations:
- Provides less concrete guidance than utilitarianism or deontology
- What counts as a "virtue" may be culturally specific
- Risk of moral elitism ("I know what's right because of my character")
- Harder to institutionalize — you can write rules and calculate consequences, but you can't legislate character
6.5 Care Ethics: Relationships and Responsibility
6.5.1 Core Principles
Care ethics, developed by Carol Gilligan, Nel Noddings, Joan Tronto, and others, centers moral reasoning on relationships and responsibilities to particular others rather than abstract principles or aggregate calculations.
Key commitments:
- Relational: Moral reasoning should start from the web of relationships in which people are embedded, not from an imagined position of independence
- Attentive: Ethics requires attention to the particular needs, vulnerabilities, and contexts of those affected
- Responsive: The appropriate ethical response depends on the specific relationship and situation, not on universal rules
- Responsibility: Ethics is about taking responsibility for those who depend on you, not just respecting their rights
6.5.2 Applied to Data: Care Ethics and Health Data
Care ethics offers a distinctive lens on VitraMed's dilemmas:
- The patients who entrust their health data to VitraMed are in a relationship of vulnerability — they are sick, they depend on the system for care, and they have limited power over how their data is used.
- A care ethics perspective would ask not "does the aggregate benefit justify the risk?" (utilitarian) or "did the patient consent?" (deontological) but "are we taking responsibility for the vulnerability of the people who depend on us?"
- This shifts the question from abstract rights to concrete relationships: Does VitraMed know its patients? Does it understand their fears about data misuse? Has it listened to their concerns? Is it responsive when things go wrong?
"I like this framework," Mira said, "because it acknowledges something the others don't — that data isn't just about individuals and their rights. It's about relationships. When a patient shares their health data, they're trusting us. And trust is a relationship, not a contract."
6.5.3 Strengths and Limitations
Strengths:
- Centers the experiences of vulnerable populations
- Recognizes the relational nature of data systems
- Corrects the abstraction of other frameworks (actual people, not hypothetical agents)
- Well-suited to health data, children's data, and other contexts involving care relationships
Limitations:
- Can be difficult to scale — care for particular others doesn't easily translate to policy for millions
- Risk of paternalism — "caring for" can shade into "deciding for"
- May undervalue the claims of strangers or distant others
- Less equipped for structural analysis
6.6 Justice Theory: Fairness Behind the Veil
6.6.1 Core Principles
John Rawls's A Theory of Justice (1971) proposes a thought experiment: imagine designing the rules of society from behind a veil of ignorance — not knowing your own position in society. You don't know whether you'll be rich or poor, black or white, data-literate or not, healthy or sick.
Rawls argues that rational people behind the veil would choose two principles:
- Equal basic liberties for all (including privacy and freedom of thought)
- The difference principle: Social and economic inequalities are permissible only if they benefit the least advantaged members of society
6.6.2 Applied to Data: The Rawlsian Test
The Rawlsian framework provides a powerful test for data policies: Would you accept this data system if you didn't know whether you'd be the data collector or the data subject?
Applied to specific cases:
- Predictive policing: Behind the veil, you don't know whether you'll live in the surveilled neighborhood or the affluent one that isn't surveilled. You don't know whether you'll be a police officer or a resident. Rawls's framework would likely reject predictive policing as currently practiced, because it concentrates burdens on the least advantaged.
- Credit scoring: Behind the veil, you don't know your financial history, your race, or your zip code. A Rawlsian credit system would need to demonstrate that its inequalities (different people get different scores) benefit the least advantaged — not just that it benefits lenders.
- Health data sharing: Behind the veil, you don't know whether you'll benefit from the research or be harmed by a re-identification breach. The difference principle would permit data sharing only if the benefits are structured to reach the most vulnerable populations, not just those who can afford the resulting treatments.
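The contrast with utilitarian aggregation can be made concrete. In the sketch below, all welfare scores are hypothetical; it shows how summing welfare and maximizing the position of the worst-off group can rank the same two policies in opposite order.

```python
# A minimal sketch of the Rawlsian test, using hypothetical outcome numbers.
# Each policy maps social groups to an (assumed) welfare score.

policies = {
    "blanket surveillance": {"affluent district": 90, "surveilled district": 20, "police": 80},
    "consent-based program": {"affluent district": 70, "surveilled district": 55, "police": 60},
}

for name, outcomes in policies.items():
    total = sum(outcomes.values())  # what a utilitarian sum sees
    worst_group, worst = min(outcomes.items(), key=lambda kv: kv[1])  # what the difference principle sees
    print(f"{name}: total welfare = {total}, least advantaged = {worst_group} ({worst})")

# The utilitarian sum prefers blanket surveillance (190 vs 185);
# the Rawlsian rule prefers the policy with the better worst case (55 vs 20).
```

Rawls's difference principle is often glossed as a maximin rule: it evaluates a policy by how it treats the worst-off position, not by its total.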
Reflection: Consider the data system you interact with most frequently (social media, search engine, university LMS). Would you accept its terms if you were behind the veil of ignorance — not knowing whether you'd be a user, a moderator, an advertiser, or a person profiled by the system? What changes would you demand?
6.7 When Frameworks Disagree
6.7.1 A Structured Ethical Reasoning Process
The five frameworks will often disagree. A utilitarian analysis might support a policy that a deontological analysis rejects. Virtue ethics might endorse an action that justice theory questions. This is normal, not a failure. The frameworks illuminate different dimensions of a problem.
When facing a data ethics dilemma, use this process:
Step 1: Describe the situation. What is happening? Who is involved? What are the stakes?
Step 2: Identify stakeholders. Who is affected? Who has power? Who is vulnerable?
Step 3: Apply each framework:
- Utilitarian: What are the consequences? Who benefits, who is harmed, and by how much?
- Deontological: What duties and rights are involved? Is anyone being treated merely as a means?
- Virtue ethics: What would a person of practical wisdom do? What character traits does this situation demand?
- Care ethics: What relationships are at stake? Who is vulnerable? What does responsible care look like?
- Justice theory: Behind the veil of ignorance, would you accept this outcome? Does it benefit the least advantaged?
Step 4: Identify convergences. Where do multiple frameworks agree? These convergences provide strong ethical ground.
Step 5: Identify divergences. Where do frameworks disagree? What values are in tension?
Step 6: Make a judgment. Having considered multiple perspectives, what is your reasoned conclusion? Articulate your reasoning transparently — others should be able to understand why you decided as you did, even if they disagree.
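Teams that run this process repeatedly sometimes capture each analysis as a structured record so the reasoning is auditable. The sketch below is one hypothetical way to encode the six steps; the `EthicsReview` class and its field names are illustrative, not part of any standard.

```python
# One possible way to operationalize the six-step process as a review record.
# Field names mirror the steps above; this is an illustrative template only.

from dataclasses import dataclass, field

@dataclass
class EthicsReview:
    situation: str                         # Step 1: what is happening, who is involved, what is at stake
    stakeholders: list[str]                # Step 2: affected parties, noting power and vulnerability
    framework_findings: dict[str, str]     # Step 3: one entry per framework
    convergences: list[str] = field(default_factory=list)  # Step 4: where frameworks agree
    divergences: list[str] = field(default_factory=list)   # Step 5: values in tension
    judgment: str = ""                     # Step 6: the reasoned conclusion and its justification

    def is_complete(self) -> bool:
        """A judgment only counts once all five frameworks have been applied."""
        required = {"utilitarian", "deontological", "virtue", "care", "justice"}
        return required <= set(self.framework_findings) and bool(self.judgment)
```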
6.7.2 The Case for Moral Pluralism
No single framework captures everything that matters morally. Consequences matter (utilitarianism). Rights and dignity matter (deontology). Character and judgment matter (virtue ethics). Relationships and care matter (care ethics). Fairness to the least advantaged matters (justice theory).
The mature ethical reasoner uses all of these lenses, recognizes their tensions, and exercises judgment about which considerations are most important in a given context. This is moral pluralism — the view that multiple ethical frameworks each capture genuine moral truths, and that practical ethics requires navigating among them.
6.8 Chapter Summary
Key Frameworks
| Framework | Central Question | Data Governance Application |
|---|---|---|
| Utilitarianism | What produces the greatest good? | Cost-benefit analysis of data policies |
| Deontology | What respects duties and rights? | Consent, privacy rights, dignity |
| Virtue Ethics | What would a good person do? | Practitioner character, professional ethics |
| Care Ethics | What serves our relationships? | Vulnerable populations, trust, responsibility |
| Justice Theory | What is fair to the least advantaged? | Distributional analysis, equity |
Key Debates
- Can utilitarian calculations justify privacy violations if the aggregate benefit is large enough?
- Does meaningful consent exist in the data economy, or is all digital consent coerced?
- Is it possible to be a "virtuous" data practitioner within a system designed for extraction?
- How should we weigh the interests of current data subjects against the interests of future beneficiaries?
Applied Framework
The six-step ethical reasoning process: Describe → Identify stakeholders → Apply frameworks → Find convergences → Identify divergences → Make a reasoned judgment.
What's Next
Part 1 is now complete. You have the foundations: you understand what data is, where it came from, who claims it, how attention is monetized, how power operates through data, and how to reason ethically about data dilemmas.
In Part 2: Privacy in the Digital Age, we begin applying these foundations to the specific challenge of privacy — starting with Chapter 7: What Is Privacy? Definitions and Debates, where we'll discover that a concept everyone thinks they understand turns out to be surprisingly difficult to define.
Before moving on, complete the exercises and quiz to practice applying ethical frameworks to data governance scenarios.
Chapter 6 Exercises → exercises.md
Chapter 6 Quiz → quiz.md
Case Study: The VitraMed Data Sharing Dilemma — A Five-Framework Analysis → case-study-01.md
Case Study: Should Clearview AI Exist? An Ethical Analysis → case-study-02.md