Quiz: Ethical Frameworks for the Data Age
Test your understanding before moving to the next chapter. Target: 70% or higher (20 of 28 points) to proceed.
Section 1: Multiple Choice (1 point each)
1. According to the chapter, which of the following best describes the relationship between legal compliance and ethical behavior in data governance?
- A) Legal compliance is sufficient for ethical behavior, because laws are designed to encode society's ethical standards.
- B) Legal compliance is necessary but not sufficient for ethical behavior, because many ethically significant data practices fall outside existing regulation.
- C) Legal compliance and ethical behavior are entirely unrelated, because laws have nothing to do with moral values.
- D) Ethical behavior always requires going beyond what the law demands, because all data laws are too permissive.
Answer
**B)** Legal compliance is necessary but not sufficient for ethical behavior, because many ethically significant data practices fall outside existing regulation. *Explanation:* Section 6.1.1 presents three categories that demonstrate the gap between legal and ethical: "legal but unethical" (Facebook/Cambridge Analytica), "ethical but illegal" (whistleblowers), and "legal vacuum" (facial recognition, algorithmic hiring). Dr. Adeyemi's distinction — "The law tells you the floor. Ethics tells you the ceiling." — captures this precisely. Option A is the misconception the section is designed to correct. Option C overstates the separation. Option D is too absolute — some laws do adequately capture ethical requirements.

2. A utilitarian analysis of a data governance decision is best characterized as:
- A) An analysis that asks whether the decision respects the rights and dignity of all affected individuals.
- B) An analysis that asks what a person of good character would do in the situation.
- C) An analysis that sums the expected benefits and harms across all stakeholders and selects the option producing the greatest net good.
- D) An analysis that asks whether the decision would be acceptable to those who are least advantaged by it.
Answer
**C)** An analysis that sums the expected benefits and harms across all stakeholders and selects the option producing the greatest net good. *Explanation:* Section 6.2.1 identifies utilitarianism as consequentialist, aggregative, impartial, and maximizing. The core procedure is to identify all affected parties, estimate the benefits and harms to each, sum the total, and choose the option with the greatest net good. Option A describes deontology. Option B describes virtue ethics. Option D describes justice theory (Rawls).

3. The chapter identifies a "Common Pitfall" in utilitarian analysis of data governance. Which of the following best captures this pitfall?
- A) Utilitarian calculations are always biased toward the interests of corporations because corporations have more data about outcomes.
- B) Utilitarian calculations depend on estimates of costs and benefits that may be systematically inaccurate — particularly regarding low-probability, high-impact harms and compounding effects like erosion of trust.
- C) Utilitarian calculations always favor sharing data because the benefits of data sharing are always greater than the costs.
- D) Utilitarian calculations are invalid because happiness cannot be measured.
Answer
**B)** Utilitarian calculations depend on estimates of costs and benefits that may be systematically inaccurate — particularly regarding low-probability, high-impact harms and compounding effects like erosion of trust. *Explanation:* Section 6.2.2 warns explicitly that utilitarian analysis is "only as good as the estimates it relies on." The chapter highlights two specific risks: underestimating the probability of re-identification and failing to account for the compounding effect of eroded trust on future data practices. Options A and C are overstatements not supported by the text. Option D is a philosophical objection that the chapter does not make — it presents utilitarianism as a legitimate, if limited, framework.

4. Kant's categorical imperative, in its second formulation, requires that we treat humanity "always as an end and never merely as a means." Applied to data governance, which of the following practices would most clearly violate this principle?
- A) A hospital using patient data to improve treatment outcomes for those same patients.
- B) A platform collecting behavioral data beyond what is needed for service delivery in order to sell predictions about user behavior to third-party advertisers.
- C) A government collecting census data to allocate resources to underserved communities.
- D) A university analyzing anonymized student performance data to identify courses that need pedagogical improvement.
Answer
**B)** A platform collecting behavioral data beyond what is needed for service delivery in order to sell predictions about user behavior to third-party advertisers. *Explanation:* Section 6.3.2 draws this distinction precisely: using data to improve a service people use treats them partly as means but also as ends (they benefit from the improvement). Harvesting behavioral surplus — extracting data beyond what service delivery requires to predict and modify behavior for advertiser profit — treats users "merely as means." Their data serves someone else's profit with no corresponding benefit to the data subject. Options A, C, and D all involve using data in ways that at least partly serve the interests of the data subjects themselves.

5. Which of the following best describes the concept of phronesis (practical wisdom) as applied to data governance?
- A) The ability to write comprehensive privacy policies that cover all possible scenarios.
- B) The ability to perform cost-benefit analyses more accurately than others.
- C) The capacity for good ethical judgment in specific, context-dependent situations — cultivated through experience rather than algorithmic application of rules.
- D) The philosophical knowledge required to distinguish between utilitarianism, deontology, and virtue ethics.
Answer
**C)** The capacity for good ethical judgment in specific, context-dependent situations — cultivated through experience rather than algorithmic application of rules. *Explanation:* Section 6.4.1 defines phronesis as "the ability to discern the right action in specific circumstances — not by applying a formula but by exercising judgment cultivated through experience." Ray Zhao's guest lecture reinforces this: "The hard cases — the ones where the rules run out or conflict — require judgment. And judgment comes from character." Option A describes procedural compliance. Option B describes utilitarian competence. Option D describes academic knowledge, which is not what Aristotle meant by practical wisdom.

6. Care ethics differs from the other four frameworks in the chapter primarily because it:
- A) Is the only framework that considers consequences.
- B) Centers moral reasoning on relationships and responsibilities to particular others rather than abstract principles, aggregate calculations, or individual rights.
- C) Is the only framework that can be applied to health data.
- D) Rejects the idea that ethical reasoning should consider the interests of vulnerable populations.
Answer
**B)** Centers moral reasoning on relationships and responsibilities to particular others rather than abstract principles, aggregate calculations, or individual rights. *Explanation:* Section 6.5.1 identifies four key commitments of care ethics: relational, attentive, responsive, and responsibility-based. Unlike utilitarianism (aggregate calculations), deontology (universal principles), virtue ethics (individual character), and justice theory (structural fairness), care ethics starts from the web of relationships in which people are embedded and asks what responsible care looks like within those relationships. Option A is incorrect — utilitarianism also considers consequences. Option C is too narrow — care ethics applies broadly. Option D is the opposite of care ethics' commitments.

7. According to the chapter, Rawls's difference principle would evaluate a data governance policy by asking:
- A) Whether the policy maximizes aggregate utility for all citizens.
- B) Whether every individual consented to the policy.
- C) Whether the policy's inequalities benefit the least advantaged members of society.
- D) Whether the policy reflects the virtues of justice and temperance.
Answer
**C)** Whether the policy's inequalities benefit the least advantaged members of society. *Explanation:* Section 6.6.1 states Rawls's difference principle: "Social and economic inequalities are permissible only if they benefit the least advantaged members of society." The chapter's application to health data sharing makes this concrete: the difference principle "would permit data sharing only if the benefits are structured to reach the most vulnerable populations, not just those who can afford the resulting treatments." Option A describes utilitarianism. Option B describes deontological consent. Option D describes virtue ethics.

8. The six-step ethical reasoning process presented in Section 6.7.1 recommends applying all five frameworks before making a judgment. The primary purpose of applying multiple frameworks is to:
- A) Ensure that the decision-maker arrives at a single, objectively correct answer.
- B) Illuminate different dimensions of the problem, identify areas of convergence and divergence among frameworks, and support a transparent, well-reasoned judgment.
- C) Prove that ethical analysis is impossible because frameworks always disagree.
- D) Demonstrate that utilitarianism is superior to the other four frameworks.
Answer
**B)** Illuminate different dimensions of the problem, identify areas of convergence and divergence among frameworks, and support a transparent, well-reasoned judgment. *Explanation:* Section 6.7.1 describes the six steps: Describe, Identify stakeholders, Apply frameworks, Identify convergences, Identify divergences, Make a judgment. The purpose is not to find the single correct answer (A) but to ensure multiple perspectives are considered. The chapter explicitly states: "The frameworks illuminate different dimensions of a problem" and "This is normal, not a failure." Options C and D misrepresent the chapter's argument.

9. Dr. Adeyemi's statement that "The law tells you the floor. Ethics tells you the ceiling" implies that:
- A) Ethical behavior is always stricter than legal compliance.
- B) There is a significant space between what is legally required and what is ethically ideal, and this space is where most important data governance decisions are made.
- C) Laws should be eliminated because they are ethically insufficient.
- D) Ethics is aspirational and therefore irrelevant to practical decision-making.
Answer
**B)** There is a significant space between what is legally required and what is ethically ideal, and this space is where most important data governance decisions are made. *Explanation:* Section 6.1.1 presents Dr. Adeyemi's statement in the context of arguing that compliance with the law is necessary but insufficient. The "space between" the floor and the ceiling is where ethical frameworks become indispensable — and where "most of the interesting — and most of the important — decisions happen." Option A is close but too absolute (sometimes the ethical and legal requirements coincide). Options C and D are not supported by the text.

10. Which of the following is identified in the chapter as a limitation shared by both utilitarianism and care ethics?
- A) Both frameworks ignore individual rights entirely.
- B) Both frameworks have difficulty scaling — utilitarianism because it requires accurate estimates across many stakeholders, and care ethics because care for particular others does not easily translate to policy for millions.
- C) Both frameworks reject the idea of moral duties.
- D) Both frameworks can only be applied to health data scenarios.
Answer
**B)** Both frameworks have difficulty scaling — utilitarianism because it requires accurate estimates across many stakeholders, and care ethics because care for particular others does not easily translate to policy for millions. *Explanation:* Section 6.2.3 identifies utilitarianism's dependence on quantifying values that resist quantification and estimating consequences accurately across all parties — a challenge that grows with scale. Section 6.5.3 identifies a parallel limitation of care ethics: "Can be difficult to scale — care for particular others doesn't easily translate to policy for millions." Option A is incorrect — utilitarianism considers individual interests (just aggregatively) and care ethics centers on particular persons. Options C and D are not supported by the text.

Section 2: True/False with Justification (1 point each)
For each statement, determine whether it is true or false and provide a brief justification.
11. "According to the chapter, utilitarianism is the most appropriate framework for data governance because it provides the most systematic and rigorous decision procedure."
Answer
**False.** *Explanation:* The chapter presents utilitarianism as one of five valuable frameworks, not as the most appropriate one. Section 6.7.2 explicitly argues for moral pluralism: "No single framework captures everything that matters morally." While utilitarianism's systematic procedure is identified as a strength (Section 6.2.3), the chapter also identifies significant limitations — including the tyranny of the majority, the difficulty of quantifying values like dignity and privacy, and its neglect of distributive fairness. The chapter's position is that each framework captures genuine moral truths and that practical ethics requires navigating among them.

12. "Kant's categorical imperative would permit a data practice as long as the data subjects provided informed consent."
Answer
**False.** *Explanation:* Section 6.3.2 discusses "consent under coercion" — agreeing to terms of service to avoid losing access to essential services. Kant's framework requires that consent be genuinely voluntary; "consent that is not genuinely voluntary is not morally valid." Furthermore, the categorical imperative's first formulation (universalizability) imposes constraints beyond consent: a practice must be coherent if universalized, regardless of whether individuals agree to it. Consent is relevant to deontological analysis, but it is not the sole criterion — the practice must also respect human dignity and be universalizable.

13. "Care ethics and justice theory are fundamentally incompatible because care ethics focuses on particular relationships while justice theory focuses on structural fairness."
Answer
**False.** *Explanation:* While the chapter identifies different emphases — care ethics starts from relationships and vulnerability, justice theory starts from structural fairness and the position of the least advantaged — it presents them as complementary perspectives within a pluralist framework, not as incompatible theories. Both frameworks share concern for vulnerable populations (care ethics through attention to particular needs; justice theory through the difference principle). The six-step ethical reasoning process explicitly asks practitioners to apply both frameworks and look for convergences and divergences, implying they work together to illuminate different dimensions of the same problem.

14. "The chapter argues that when ethical frameworks disagree, the decision-maker should average the recommendations to find a compromise."
Answer
**False.** *Explanation:* Section 6.7.1 does not recommend averaging or mechanical compromise. Instead, it advocates a six-step process that identifies convergences (where frameworks agree) and divergences (where they disagree), and then asks the decision-maker to "make a judgment" by exercising reasoned judgment about "which considerations are most important in a given context." The process requires articulating reasoning transparently so "others should be able to understand *why* you decided as you did, even if they disagree." This is moral judgment, not mathematical averaging.

15. "Virtue ethics is uniquely suited to addressing structural problems in data governance because it focuses on institutional character rather than individual character."
Answer
**False.** *Explanation:* As presented in the chapter, virtue ethics focuses on *individual* character — the traits a data practitioner should cultivate (Section 6.4.2). The virtuous data practitioner table describes individual virtues like justice, honesty, courage, temperance, and compassion. Section 6.4.3 explicitly identifies this as a limitation: "Harder to institutionalize — you can write rules and calculate consequences, but you can't legislate character." The chapter does not claim that virtue ethics addresses structural problems; if anything, it acknowledges that structural analysis is better served by frameworks like justice theory.

Section 3: Short Answer (2 points each)
16. Explain the difference between Eli's and Mira's positions in the surveillance debate in Section 6.3.2. What ethical framework does each character draw on, and what is the core disagreement between them? Why does Dr. Adeyemi side with Eli's framework while acknowledging that Mira's concern is legitimate?
Sample Answer
Eli draws on deontological ethics (Kant): he argues that surveilling a community without meaningful consent is wrong regardless of whether it produces good outcomes, because it violates the community's dignity. His test is not the consequences but whether the practice treats people as mere instruments. Mira draws on consequentialist reasoning: she asks whether the data "actually prevents crime" — implying that good outcomes might justify the practice. The core disagreement is whether outcomes can legitimize a practice that violates rights (Mira's position) or whether rights and dignity constrain what can be done regardless of outcomes (Eli's position). Dr. Adeyemi sides with the Kantian framework not because consequences are irrelevant in all senses, but because "they cannot justify treating people as mere instruments." She acknowledges that consequences matter "in a practical sense" — meaning that a Kantian is not indifferent to outcomes — but insists that good consequences cannot retroactively justify treating an entire community as a population to be monitored without their meaningful consent. This reflects the chapter's broader argument that different frameworks illuminate different dimensions: deontology protects individuals from being sacrificed for aggregate benefit, even when utilitarianism might support the sacrifice.

*Key points for full credit:*

- Correctly identifies Eli's position as deontological and Mira's as consequentialist/utilitarian
- Articulates the core disagreement (outcomes vs. rights/dignity)
- Explains Dr. Adeyemi's nuanced position — consequences matter but cannot override dignity

17. Section 6.6.2 applies the Rawlsian veil of ignorance to three cases: predictive policing, credit scoring, and health data sharing. Choose one of these cases and explain (a) what it means to evaluate it "behind the veil," (b) what the difference principle would require, and (c) why this analysis might yield different conclusions than a utilitarian analysis of the same case.
Sample Answer
*Using the predictive policing example:*

(a) Evaluating predictive policing behind the veil of ignorance means imagining that you do not know whether you will be a resident of the surveilled neighborhood, a resident of an unsurveilled affluent neighborhood, a police officer using the system, or a person falsely flagged by the algorithm. You must design the data policy without knowing which position you will occupy.

(b) The difference principle would require that any inequalities in how the system distributes benefits and burdens must benefit the least advantaged. Since predictive policing as currently practiced concentrates surveillance burdens on already-disadvantaged communities (often low-income and disproportionately communities of color) without directing proportional benefits to those same communities, Rawls's framework would likely reject it. The difference principle would require that if surveillance is permitted at all, its benefits (safety improvements) must reach the most surveilled populations, and the burdens (loss of privacy, risk of false positives, chilling effects) must not fall disproportionately on them.

(c) A utilitarian analysis might reach a different conclusion because it aggregates benefits and costs across all stakeholders. If the aggregate reduction in crime outweighs the aggregate loss of privacy and dignity, a utilitarian analysis could support predictive policing even if the burdens are concentrated on disadvantaged communities. The Rawlsian analysis specifically rejects this reasoning: the distribution of benefits and burdens matters, not just their aggregate sum.

*Key points for full credit:*

- Correctly explains what the veil of ignorance hides
- Applies the difference principle to the specific case
- Identifies the distributional vs. aggregate distinction between Rawls and utilitarianism

18. The chapter presents five frameworks for ethical reasoning about data governance. In three to four sentences, explain why the chapter argues that moral pluralism is preferable to relying on a single framework. Use at least two specific examples from the chapter to support your answer.
Sample Answer
The chapter argues for moral pluralism because each framework captures genuine moral truths that the others miss: consequences matter (utilitarianism), rights and dignity matter (deontology), character and judgment matter (virtue ethics), relationships and care matter (care ethics), and fairness to the least advantaged matters (justice theory). A single-framework approach would be blind to the dimensions the other frameworks illuminate. For example, a purely utilitarian analysis of VitraMed's data sharing might support sharing because aggregate benefits are large, but it would miss the deontological concern that patients' consent was not genuinely voluntary and the care ethics concern that sharing might betray the trust of vulnerable patients who depend on the system. Similarly, a purely deontological analysis of community surveillance might provide a clear prohibition, but it would not address the care ethics question of what responsible care for the community actually looks like or the Rawlsian question of who bears the greatest burden. Moral pluralism ensures that all of these dimensions are considered, even when they point in different directions.

*Key points for full credit:*

- Explains that each framework captures different moral truths
- Provides at least two specific examples showing why a single framework is insufficient
- Connects the argument to the chapter's position that tensions between frameworks are "normal, not a failure"

Section 4: Applied Scenario (7 points)
19. Read the following scenario and answer all parts.
Scenario: The HealthPlus Data Partnership
HealthPlus, a mid-sized health insurance company, is approached by CityWell, a municipal public health agency, with a proposal: CityWell wants access to HealthPlus's claims data — including diagnosis codes, prescription records, treatment histories, and demographic information — to build a predictive model for identifying neighborhoods at high risk of a diabetes epidemic. CityWell promises the data will be de-identified and used only for public health planning, not for individual-level decisions.
HealthPlus's policyholders signed privacy notices stating their data could be shared for "health improvement purposes." The company's legal team confirms the sharing is permissible under HIPAA's public health exception. The CDO, however, is uneasy: the data includes sensitive information about low-income communities with historically justified distrust of health institutions, the de-identification may not withstand re-identification attacks given the small population sizes in some neighborhoods, and CityWell has no formal data governance framework.
The CDO asks: "Just because we can, should we?"
(a) Apply utilitarian analysis to this scenario. Identify at least three stakeholder groups, the potential benefits and harms to each, and the conclusion a utilitarian analysis would likely reach. Identify one key assumption that, if wrong, would change the utilitarian conclusion. (1 point)
(b) Apply deontological analysis. Does the data sharing treat any stakeholder group merely as a means? Is the consent provided via privacy notices genuinely informed and voluntary? What would Kant's framework conclude? (1 point)
(c) Apply care ethics. Who is in a position of vulnerability? What does the care ethics framework require that the other frameworks do not explicitly address? (1 point)
(d) Apply the Rawlsian veil of ignorance. If you did not know whether you would be a policyholder in a small, low-income neighborhood, a HealthPlus executive, a CityWell epidemiologist, or a diabetes patient whose neighborhood is flagged, what governance conditions would you demand before permitting the data sharing? (1 point)
(e) Using all four analyses above plus virtue ethics (what would a CDO of practical wisdom do?), state your reasoned judgment: Should HealthPlus share the data? Under what conditions? Identify where the frameworks converge and where they diverge. (1 point)
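Before turning to the sample answer, it may help to see the aggregation step in part (a) made concrete. A minimal sketch in Python: the stakeholder groups follow the scenario, but every magnitude is an illustrative assumption, not a figure from the chapter.

```python
# Toy utilitarian tally for the HealthPlus scenario.
# All benefit/harm magnitudes are illustrative assumptions.
stakeholders = {
    # name: (expected_benefit, expected_harm)
    "high-risk residents": (8.0, 3.0),    # prevention programs vs. re-identification risk
    "policyholders broadly": (2.0, 1.0),  # lower premiums vs. eroded trust
    "CityWell / public health": (9.0, 0.5),
    "HealthPlus": (3.0, 2.0),             # reputation gain vs. breach exposure
}

# Sum net good across all parties, as the utilitarian procedure prescribes.
net_good = sum(b - h for b, h in stakeholders.values())
print(f"Net expected good: {net_good:+.1f}")  # prints "Net expected good: +15.5"

# The chapter's "Common Pitfall": the tally is only as good as its estimates.
# Raising the harm estimate for re-identification (a low-probability,
# high-impact event) can flip the sign of the conclusion.
stakeholders["high-risk residents"] = (8.0, 25.0)
net_good_revised = sum(b - h for b, h in stakeholders.values())
print(f"Revised net good:  {net_good_revised:+.1f}")  # prints "Revised net good:  -6.5"
```

The sketch illustrates why the key assumption identified in part (a) matters: a single revised estimate reverses the recommendation.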
Sample Answer
**(a)** **Utilitarian analysis.** Key stakeholder groups:

- **Residents in high-risk neighborhoods:** Benefit from targeted public health interventions (diabetes prevention programs, improved access to care). Risk: if data is re-identified, potential stigmatization of neighborhoods and individuals; insurance discrimination if data flows back to insurers.
- **HealthPlus policyholders broadly:** Benefit if public health improvements reduce overall healthcare costs and thus premiums. Minimal direct cost unless breach of trust reduces willingness to share health information with insurers.
- **CityWell and public health:** Significant benefit — predictive models could enable early intervention, reduce diabetes prevalence, and save public health dollars.
- **HealthPlus as an organization:** Reputational benefit as a public health partner; risk of reputational harm if data is misused or breached.

A utilitarian analysis would likely support sharing, given the potentially large public health benefits relative to the probabilistic harms. **Key assumption:** that the de-identification is robust enough to prevent re-identification. If this assumption is wrong — and the chapter notes that small population sizes make re-identification more likely — the cost side of the calculation increases dramatically, potentially reversing the conclusion.

**(b)** **Deontological analysis.** The consent given via privacy notices — "health improvement purposes" — is vague. Policyholders may not have understood that their data could be shared with a government agency for population-level modeling. If the consent was not genuinely informed (most people do not read privacy notices) and not genuinely voluntary (health insurance is not optional for most people), then the data sharing may treat policyholders merely as means — their data serves the public health goal, but they were not given a meaningful opportunity to understand and agree to this specific use. Kant's framework would likely require explicit, informed consent for this specific use before proceeding. The universalizability test raises additional concerns: if we universalize the principle "insurers may share health data with government agencies whenever legal and potentially beneficial," we create a system in which the health data relationship is fundamentally transformed from confidential to instrumentally useful — a transformation many would not endorse.

**(c)** **Care ethics.** The most vulnerable parties are the low-income community members whose health data would be analyzed. They are in a relationship of dependency with HealthPlus (they need health insurance), they have historically justified distrust of health institutions, and they have the least power to contest how their data is used. Care ethics would require that HealthPlus not merely confirm legal permissibility but actively engage with the affected communities: listening to their concerns, understanding their fears about data misuse, and ensuring that the benefits of the research are directed back to them. Care ethics surfaces something the other frameworks do not explicitly address: the *quality of the relationship* between HealthPlus and its policyholders, and whether the sharing preserves or betrays the trust that makes that relationship functional.

**(d)** **Rawlsian analysis.** Behind the veil, I would demand:

1. Independent oversight: an independent body (not CityWell or HealthPlus alone) should review the de-identification methodology before any data is shared.
2. Purpose limitation: a binding agreement that the data can only be used for diabetes prevention research, with penalties for mission creep.
3. Community engagement: the affected neighborhoods should have representatives on the governance body overseeing the data use.
4. Benefit-sharing: the difference principle requires that the benefits (diabetes prevention programs, health resources) be directed specifically to the most disadvantaged neighborhoods, not just used for general public health modeling.
5. Re-identification protection: if re-identification is detected, immediate data destruction and notification.

**(e)** **Reasoned judgment.** Frameworks converge on the value of the public health goal — all five support diabetes prevention research in principle. They diverge on the conditions:

- Utilitarianism supports sharing with minimal conditions (if estimates are favorable).
- Deontology demands genuinely informed consent for this specific use.
- Care ethics requires engagement with vulnerable communities and attention to the relational dynamics of trust.
- Justice theory requires that benefits reach the least advantaged and that governance structures protect those with the least power.
- A CDO of practical wisdom would recognize the tension, resist the temptation to defer to legal permissibility alone, and insist on conditions before sharing.

My judgment: HealthPlus should share the data, but only after (1) obtaining a rigorous independent review of the de-identification methodology, (2) establishing a formal data governance agreement with CityWell that includes purpose limitation and data destruction timelines, (3) engaging community representatives from affected neighborhoods in the governance process, and (4) ensuring that the resulting public health interventions are directed to the neighborhoods whose data made the research possible. Without these conditions, the sharing should not proceed — even though it is legal.

*Key points for full credit:*

- Applies each framework distinctly (not just relabeling the same analysis)
- Identifies convergence (all support the health goal) and divergence (conditions differ)
- Provides a reasoned judgment with specific conditions, not just "yes" or "no"

20. In two to three paragraphs, explain how the six-step ethical reasoning process from Section 6.7.1 differs from simply asking "what does the law require?" Describe a data governance scenario where the six-step process would lead to a significantly different outcome than a purely legal analysis. Explain what the ethical analysis reveals that the legal analysis does not.
Sample Answer
The six-step ethical reasoning process differs from legal analysis in three fundamental ways. First, legal analysis asks whether a practice is permitted or prohibited by existing law — a binary question with deterministic answers (at least in principle). The six-step process asks what is ethically right given a complex web of stakeholders, values, and consequences — a judgment that requires weighing competing considerations. Second, legal analysis operates within the scope of existing regulation, which the chapter notes often lags behind data practice. Many of the most consequential data governance decisions occur in "legal vacuums" where no regulation clearly applies. The six-step process fills this void by providing structured reasoning tools. Third, legal analysis focuses on compliance — meeting minimum requirements. The six-step process aspires to ethical excellence — identifying the best available action, not just the permissible one.

Consider a scenario where a social media company discovers that its recommendation algorithm disproportionately exposes teenagers to content associated with eating disorders. The algorithm is not illegal — no current law prohibits engagement-optimizing algorithms for teen users (in most jurisdictions). A legal analysis would conclude that no compliance obligation is triggered. The six-step process, however, would: identify affected stakeholders (teens, parents, the company, advertisers, therapists); apply utilitarian analysis (the harms to teens' mental health likely outweigh the engagement benefits); apply deontological analysis (optimizing engagement at the expense of teen wellbeing treats teens merely as means for advertiser revenue); apply care ethics (the company is in a position of power over vulnerable minors and has a responsibility to their wellbeing); apply justice theory (behind the veil, no one would choose a system that disproportionately harms the most vulnerable users); and apply virtue ethics (a company of practical wisdom would not defend a known harm by pointing to the absence of prohibition).

The ethical analysis reveals what the legal analysis cannot: that the absence of a legal prohibition does not create a permission, and that the company's obligation to its most vulnerable users extends beyond what regulation currently demands. This is precisely the space between Dr. Adeyemi's "floor" and "ceiling" — and it is where ethical frameworks become indispensable.

*Key points for full credit:*

- Distinguishes legal analysis (compliance, binary, scope-limited) from ethical analysis (judgment, multidimensional, aspires beyond compliance)
- Provides a concrete scenario where the outcomes diverge
- Connects the analysis to the chapter's floor/ceiling metaphor

Scoring & Review Recommendations
| Score Range | Assessment | Next Steps |
|---|---|---|
| Below 50% (< 14 pts) | Needs review | Re-read Sections 6.1-6.4 carefully, redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas, focus on the six-step process in Section 6.7 |
| 70-85% (20-24 pts) | Solid understanding | Ready to proceed to Chapter 7; review any missed frameworks briefly |
| Above 85% (> 24 pts) | Strong mastery | Proceed to Chapter 7: What Is Privacy? Definitions and Debates |
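For self-scoring, the rubric above can be expressed as a small lookup. A minimal sketch using the point thresholds from the table (the percent labels in the table are approximate; the point cutoffs are authoritative here):

```python
def assess(points: int) -> str:
    """Map a quiz score (out of 28) to the rubric's assessment band."""
    if points < 14:       # below 50%
        return "Needs review"
    if points <= 19:      # 50-69%
        return "Partial understanding"
    if points <= 24:      # 70-85%
        return "Solid understanding"
    return "Strong mastery"  # above 85%

print(assess(21))  # prints "Solid understanding"
```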
Point Breakdown

| Section | Points Available |
|---|---|
| Section 1: Multiple Choice | 10 points (10 questions x 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions x 1 pt) |
| Section 3: Short Answer | 6 points (3 questions x 2 pts) |
| Section 4: Applied Scenario | 7 points (Q19: 5 parts x 1 pt + Q20: 2 pts) |
| Total | 28 points |