Chapter 6 Quiz: Who Builds These Systems?

Instructions: This quiz tests comprehension of Chapter 6 concepts, cases, and arguments. Answers follow each section.


Part A: Multiple Choice

Select the best answer for each question.

Question 1 Frances Haugen worked at Facebook in which capacity before becoming a whistleblower?

A) Software engineer on the News Feed algorithm team
B) Product manager on the civic integrity team
C) Data scientist on the growth team
D) VP of Trust and Safety

Question 2 Which of the following best describes what "OKRs" are and why they matter for platform design?

A) A type of A/B test used to evaluate notification effectiveness
B) A regulatory compliance framework required by the EU's Digital Services Act
C) A goal-setting framework that translates organizational priorities into measurable quarterly targets, shaping what teams optimize for
D) An internal Facebook metric system for tracking content quality

Question 3 The chapter describes the Google Project Maven case as notable because:

A) It was the first time an engineer at a major tech company raised ethical concerns about a product
B) Internal employee advocacy resulted in actual policy change — the company declined to renew a Department of Defense AI contract
C) Google was fined by regulators for ethical violations related to the project
D) The engineers who protested the project were all subsequently fired

Question 4 What did Timnit Gebru's 2020 paper "On the Dangers of Stochastic Parrots" argue, and why was it significant in the context of her employment at Google?

A) It argued that social media recommendation algorithms were causing psychological harm; it challenged Facebook's core product
B) It argued that large language models posed risks including environmental costs, bias amplification, and harmful content generation; it challenged Google's core research direction and commercial interests
C) It argued that facial recognition technology was racially biased; it resulted in Google canceling its Rekognition product
D) It argued that platform advertising was manipulative; it led to a Federal Trade Commission investigation of Google

Question 5 Which of the following is NOT one of the moral disengagement mechanisms identified by Albert Bandura that the chapter applies to platform culture?

A) Diffusion of responsibility
B) Euphemistic labeling
C) Competitive displacement
D) Advantageous comparison

Question 6 The chapter argues that A/B testing functions as a "moral distance-creator" primarily because:

A) A/B tests are run on users without their informed consent, which constitutes an ethical violation
B) The choice of what metrics to measure in an A/B test encodes human value judgments that then appear to have been replaced by objective data
C) A/B tests are run too quickly to allow proper ethical review of results
D) Data scientists who run A/B tests are not trained to interpret the results ethically

Question 7 Sophie Zhang's internal memo at Facebook focused on which category of platform problem?

A) Algorithmic amplification of politically extreme content in the United States
B) Instagram's effects on teenage body image
C) Coordinated inauthentic behavior — fake engagement networks operated by political actors in multiple countries
D) Facebook's use of personal data for targeted advertising without user consent

Question 8 In Dr. Aisha Johnson's first week at Velocity Media, what was the organizational relationship between her role and the product teams whose work she was hired to assess?

A) She reported directly to the CEO and had veto authority over product launches
B) She was embedded in each product team with equal standing to the product managers
C) She reported to the VP of Trust and Safety and had no formal authority over product decisions
D) She co-led the product review committee with the Chief Product Officer


Part A Answers:

1. B — Haugen was a product manager on the civic integrity team, which she had specifically requested because of her personal concern about radicalization.
2. C — OKRs translate organizational goals into quarterly measurable targets that determine individual performance reviews and team budgets.
3. B — The Project Maven case is notable because employee advocacy — the letter signed by 3,000 employees and subsequent resignations — resulted in Google declining to renew the DoD contract, a genuine policy change.
4. B — The paper argued that large language models posed risks including environmental costs, bias amplification, and potential for harmful content generation. It challenged research directions central to Google's commercial interests.
5. C — "Competitive displacement" is not one of Bandura's mechanisms as discussed in the chapter. The four mechanisms discussed are diffusion of responsibility, euphemistic labeling, advantageous comparison, and (implicitly) moral justification.
6. B — The moral distancing function comes from the experimental design choice (what to measure) being a human value judgment that disappears behind the appearance of objective data.
7. C — Zhang's memo documented coordinated inauthentic behavior: fake accounts and pages used to amplify political content in Bolivia, Brazil, Ecuador, Honduras, India, Azerbaijan, and other countries.
8. C — Johnson reported to the VP of Trust and Safety and had no formal authority over product decisions. She had to request an invitation to the Wednesday product review.


Part B: True or False

Write True or False and briefly explain your reasoning (1-2 sentences).

Question 9 According to the chapter, most engineers who build harmful platform features do so because of individual malice or conscious disregard for user welfare.

Question 10 Facebook's internal research showing Instagram's negative effects on teenage girls' body image was promptly published in academic journals after it was completed.

Question 11 The "move fast and break things" ethos was arguably adaptive for Facebook in its early competitive phase but became progressively less appropriate as the platform scaled to billions of users.

Question 12 The chapter argues that hiring ethicists is always harmful to genuine ethics work because it creates the illusion of accountability without the substance.

Question 13 Timnit Gebru's firing was followed by the firing of Margaret Mitchell, another co-author of the "Stochastic Parrots" paper, approximately two months later.

Question 14 The chapter argues that structural incentive change is necessary but that individual engineer choices are irrelevant to outcomes.


Part B Answers:

9. FALSE — The chapter argues explicitly that most engineers who build harmful platform features are "bright, motivated, often genuinely idealistic human beings" operating inside organizational cultures and incentive structures that shape outcomes. Harm emerges primarily from structural forces, not individual malice.
10. FALSE — Facebook's internal Instagram research was not published. It was kept internal, and this was one of the key revelations of the Facebook Papers leak in 2021.
11. TRUE — The chapter explicitly makes this argument, noting that speed was adaptive in the competitive startup phase and became harmful when the "things" that broke included public health infrastructure and democratic institutions rather than minor UX inconveniences.
12. FALSE — The chapter does not argue this. It distinguishes between ethics hires that are genuinely empowered (given authority, budget, independent reporting lines) and those that function as reputation management. The Gebru case is presented as an example of the latter, not as proof that all ethics hires are counterproductive.
13. TRUE — Mitchell was fired in February 2021, approximately two months after Gebru's December 2020 dismissal, after Google searched her emails for policy violations.
14. FALSE — This is explicitly the opposite of the chapter's conclusion. The chapter argues for a "both/and" position: individual choices matter AND structural change is necessary. It criticizes the framing that collapses into "engineers are helpless cogs" just as much as the framing that makes engineers solely responsible.


Part C: Short Answer

Answer each question in 3-6 sentences.

Question 15 What is the "talent pipeline" problem described in the chapter, and how does it connect educational background to design outcomes?

Question 16 Explain what the chapter means by the phrase "ethics washing." What organizational features distinguish genuine ethics commitment from ethics washing?

Question 17 What did the Velocity Media product review meeting reveal about the gap between what is measured and what is not measured? Use the animated notification example to illustrate your answer.

Question 18 Why does the chapter argue that the "heroic engineer" narrative is problematic, even though it acknowledges that individual engineers can and do make ethical choices that sometimes change outcomes?

Question 19 The chapter describes the Facebook Papers as producing "enormous public understanding" but limited "structural impact." Explain why, according to the chapter's analysis, documented knowledge of harm does not automatically translate into structural change.

Question 20 What does the phrase "diffusion of responsibility" mean, and how does the organizational structure of a large tech company create conditions for it?


Part C Answers:

15. The "talent pipeline" problem refers to the fact that engineers at major tech platforms are drawn from a narrow set of elite universities, predominantly trained in CS, mathematics, and engineering, with little humanities or ethics training. This matters because educational environment shapes conceptual frameworks — the implicit models of what problems are worth solving, what metrics constitute success, what questions feel relevant. An environment that prizes optimization and measurability produces professionals who are excellent at optimizing measurable things and may systematically undervalue things that resist measurement, like psychological wellbeing or long-term social trust.

16. "Ethics washing" refers to the practice of hiring ethicists or creating ethics teams in ways that signal ethical commitment without enabling it structurally. The key distinguishing features of genuine versus performed commitment are: reporting structure (does the ethics function report to an executive with authority over product decisions?), authority (can ethics professionals block or delay product launches?), budget (do they have independent resources?), and what happens when ethics findings conflict with commercial interests. Ethics washing creates a defensible public narrative about ethical commitment while leaving the incentive structures that produce harmful design unchanged.

17. The Wednesday product review tracked six metrics: DAU, session length, 24-hour return visit rate, content interaction rate, notification open rate, and ad impression counts. No wellbeing metrics were tracked. When the animated notification was presented with positive A/B test results, it was approved for rollout in minutes. Aisha Johnson's question about whether the notification's gains might be concentrated during sleep hours — a harm not being measured — was acknowledged but did not stop the rollout. The animated dot had crossed all the thresholds that the system required it to cross; her concern was a threshold the system did not have.

18. The heroic engineer narrative is problematic for two reasons. First, it is empirically weak as a general model: most cases of individual internal dissent do not change organizational decisions, and for every Project Maven success there are many cases where engineers raised concerns and features shipped anyway. Second, and more importantly, the narrative misallocates responsibility: it places the burden of structural change on individuals least positioned to achieve it (individual contributors without authority) while relieving organizations, investors, and regulators of accountability. If individual engineer conscience is the solution, then incentive structure reform is unnecessary — a conclusion that serves the interests of those who benefit from unreformed incentive structures.

19. The gap between documented knowledge and structural change reflects the fact that knowledge of harm, by itself, does not change the incentive structures that produce the harm. Facebook's leadership knew about the internal research; knowing did not automatically translate into action because acting would have reduced engagement and revenue. The business model — advertising revenue dependent on engagement maximization — creates a financial incentive to not act on knowledge of harm when acting reduces engagement. Changing this requires either organizational will at the executive level, external regulatory pressure that changes the cost-benefit calculation, or changes to the business model itself. Knowledge is a necessary but not sufficient condition for structural change.

20. Diffusion of responsibility is the psychological mechanism by which, as more people share involvement in a decision or action, each individual experiences themselves as bearing a smaller fraction of the moral weight. In a large tech company, a product launch might involve dozens of engineers, data scientists, product managers, legal reviewers, and executives — each of whom worked on one piece of the whole. The engineer who wrote the notification code can reasonably say she did not make the launch decision; the product manager who approved the A/B test can say he did not build the algorithm; the executive who approved the roadmap can say she did not design the specific feature. The result is that the whole lacks a clear moral owner.


Part D: Scenario Response

Question 21 Read the following scenario and answer the question below.

A senior engineer at a social media platform discovers, while reviewing data from a routine A/B test, that a new "Stories" autoplay feature — which is about to be approved for full rollout — increases average session length by 8 minutes per day. She also notices, in the same dataset, that users who are shown the autoplay feature report 12% lower satisfaction scores on the end-of-session survey the platform occasionally shows. The A/B test was designed to measure session length and engagement metrics; satisfaction was incidentally captured in the dataset and was not part of the original test design. The feature is scheduled for approval at tomorrow's product review.

Using the chapter's concepts, identify: (a) which moral disengagement mechanism is most clearly illustrated by the fact that satisfaction was not part of the original test design; (b) what specific actions are available to this engineer before and during tomorrow's product review; and (c) what structural factors will determine whether her actions change the outcome.

Answer to Question 21: (a) The most relevant mechanism is moral disengagement through procedural abstraction — specifically, the A/B test design encoded a value judgment (session length matters; satisfaction does not) that then appeared to be an objective measurement exercise. The A/B test produced data; the decision about what the test would measure was a human choice that disappeared behind the data. This is the mechanism the chapter describes when it notes that "the decision about what to measure is a human decision — a choice about what matters — but it is made early, quietly, and without drama."

(b) Available actions: She could send the complete dataset (including satisfaction data) to the product manager and request that the satisfaction finding be included in tomorrow's presentation. She could raise the finding directly with the product manager and ask that it be flagged in the review. She could attend the product review and raise the finding herself if it is not presented. She could send a concern memo to her manager or to any ethics/trust and safety function. She could use the #product-ethics Slack channel or equivalent to alert colleagues with shared concerns. She could, if the feature moves to approval despite her concern, file a formal dissent.

(c) Structural factors determining the outcome: whether satisfaction is tracked in anyone's OKRs (if not, the 12% decline carries no formal weight); whether there is a formal process requiring wellbeing metrics to be part of the product review; whether anyone with authority considers satisfaction a relevant input to the approval decision; whether the product manager has organizational incentive to present the satisfaction data or to treat it as outside the scope of the test; and whether the company has established norms around surfacing unexpected negative data versus treating the original test design as the scope of evaluation.
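The point in part (a), that an experiment's metric list is itself a design-time value judgment, can be made concrete with a small sketch. This is a hypothetical illustration, not any platform's actual experimentation system; every name and number below (the config, the `evaluate` helper, the metric values) is invented for the example.

```python
# Hypothetical experiment config. The "primary_metrics" list is where the
# human value judgment lives: only metrics named here will ever be consulted.
EXPERIMENT_CONFIG = {
    "name": "stories_autoplay_rollout",
    "variants": ["control", "autoplay"],
    "primary_metrics": [
        "session_length_minutes",
        "dau",
        "notification_open_rate",
    ],
    # Anything omitted here, e.g. "satisfaction_score", is invisible to review.
}

def evaluate(results: dict, config: dict) -> bool:
    """Approve the variant if it wins on every metric the config chose to track."""
    return all(
        results[metric]["autoplay"] >= results[metric]["control"]
        for metric in config["primary_metrics"]
    )

# Invented results. Satisfaction was captured in the dataset but is never
# consulted, because the evaluation loop only reads primary_metrics.
results = {
    "session_length_minutes": {"control": 31.0, "autoplay": 39.0},
    "dau": {"control": 1.00, "autoplay": 1.01},
    "notification_open_rate": {"control": 0.42, "autoplay": 0.44},
    "satisfaction_score": {"control": 0.68, "autoplay": 0.60},
}

print(evaluate(results, EXPERIMENT_CONFIG))  # prints True
```

The feature "passes" even though satisfaction dropped, because the drop occurs in a key the evaluation never reads. Nothing in the code is dishonest; the value judgment was made earlier, quietly, in the choice of `primary_metrics`.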


Question 22 In three to four paragraphs, explain the central tension Dr. Aisha Johnson faces in her role at Velocity Media, drawing on the chapter's analysis of ethics hires, OKR culture, and structural limits on individual action. What would need to change organizationally for her role to have genuine rather than decorative influence?

Answer to Question 22: Dr. Aisha Johnson's central tension is between her mandate (to assess and improve user wellbeing outcomes) and her organizational position (no formal authority, no OKR presence, no blocking power in product review). She has been given a title that implies influence and a job description that implies authority she does not actually hold. This is the ethics hire problem in its clearest form: the company has signaled ethical commitment through hiring without making the structural commitments — authority, budget, procedural standing — that would allow that commitment to be operational.

The OKR culture compounds this tension. The things that Aisha cares about — sleep disruption, emotional wellbeing, long-term psychological effects — are not in anyone's OKRs. In the incentive system that governs team behavior and individual advancement, her concerns have a specific organizational weight: approximately zero. When she raises a question in a product review, the question is heard courteously. But the decision is made on the metrics that are being tracked, which are the metrics in the OKRs, which are engagement and growth metrics. Her question about sleep disruption is treated as an interesting edge case rather than a blocking concern because the system has no mechanism for making it a blocking concern.

Her situation also illustrates the structural limits of individual action. She is one person, recently arrived, with no direct reports, a small budget, and a reporting line that terminates below the product organization in the hierarchy. Her predecessor was in a similar position and produced three research reports that changed nothing. The research was not bad research; it was research that lacked organizational consequence. Aisha can produce good research too. Whether it changes anything depends on factors she does not control.

For her role to have genuine rather than decorative influence, several structural things would need to change: (1) Wellbeing metrics would need to be incorporated into the OKR system, creating organizational incentives for product teams to track and care about them. (2) The product review process would need a formal wellbeing review stage with blocking authority — the capacity to require design modifications or delay rollout when wellbeing concerns are documented. (3) Aisha or her successor would need reporting lines independent of the product organization, similar to a general counsel who can escalate compliance concerns directly to the board. (4) The company would need to establish norms — backed by consequences — in which wellbeing research findings produce product review actions, not merely polite acknowledgment. Without these changes, her role is a reputation management asset rather than an ethics function.