Chapter 6 Key Takeaways: Who Builds These Systems?


1. Platform harms emerge primarily from structural forces, not individual malice.

Most engineers who build engagement-maximizing systems are well-intentioned, often idealistic people doing what they were hired, trained, and rewarded to do. Understanding harms as structural — produced by incentive systems, organizational culture, and business models — is not an exculpation of individuals; it is a more accurate analysis that produces more effective responses. Blaming individuals without changing structures produces moral theater.


2. The tech workforce is drawn from a narrow pipeline that shapes its design assumptions.

Engineers at major platforms are overwhelmingly recruited from a small set of elite universities and trained in computer science and mathematics; most have little or no formal humanities or ethics education. This produces professionals who are excellent at optimizing what can be measured and prone to systematically undervaluing what resists measurement, such as psychological wellbeing, social trust, and long-term harm.


3. Demographic homogeneity is a design problem, not just a fairness problem.

When an engineering team shares demographics, life experiences, and blind spots, the systems they build will reflect those shared assumptions. Teams designing platforms for billions of diverse users while being demographically concentrated will systematically fail to anticipate the experiences of people unlike themselves. The gap between designer and designed-for is structural, not incidental.


4. OKR culture shapes design decisions more powerfully than any explicit policy.

Objectives and Key Results determine what teams optimize for, what individuals are rewarded for, and what receives engineering attention. When OKRs measure engagement, session length, and return visit rate — but not wellbeing, satisfaction, or long-term health outcomes — the incentive structure will systematically produce design decisions that favor engagement over wellbeing, regardless of what any individual says they value.
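The mechanism can be made concrete with a small sketch. This is a hypothetical scorecard, not any company's real OKR system; all metric names and weights are invented for illustration. The point is structural: a metric with no weight in the score cannot affect it, so a change that improves engagement while degrading wellbeing still reads as a win.

```python
# Hypothetical OKR scorecard. Metric names and weights are illustrative
# assumptions, not a real system.

def okr_score(metrics: dict) -> float:
    """Quarterly team score: a weighted sum of only the metrics the OKRs track."""
    weights = {
        "daily_active_users": 0.4,
        "avg_session_minutes": 0.3,
        "d7_return_rate": 0.3,
        # Note what is absent: no weight for wellbeing or satisfaction,
        # so changes to those metrics can never move the score.
    }
    return sum(weights[k] * metrics.get(k, 0.0) for k in weights)

# A quarter in which engagement rose while self-reported wellbeing fell
# (numbers invented for illustration).
before = {"daily_active_users": 0.50, "avg_session_minutes": 0.40,
          "d7_return_rate": 0.30, "reported_wellbeing": 0.70}
after  = {"daily_active_users": 0.55, "avg_session_minutes": 0.45,
          "d7_return_rate": 0.33, "reported_wellbeing": 0.55}

print(okr_score(after) > okr_score(before))  # engagement up, so score up
```

No individual in this sketch decided to trade wellbeing for engagement; the trade is an arithmetic consequence of which keys appear in the weights dictionary.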


5. A/B testing is a powerful tool that also creates moral distance.

The discipline of A/B testing replaced opinion-based decisions with evidence-based ones — genuinely valuable. But the choice of what to measure in an A/B test is a human value judgment that disappears behind the appearance of objective data. When an A/B test measures return visits and not sleep disruption, it produces a result that appears data-driven but encodes the design team's prior decision about what matters. The data appears to make the decision; the human judgment that shaped the data disappears.
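A toy sketch makes the measurement point vivid. The data below is invented, and the field names are illustrative assumptions; the same raw experiment yields a "win" or a cost depending entirely on which column the team chose to measure.

```python
# Invented per-user A/B data: variant B raises next-day returns but also
# raises late-night sessions. Which effect "the test shows" depends on
# which metric was chosen in advance.

users = [
    # (variant, returned_next_day, sessions_after_midnight)
    ("A", False, 0), ("A", True, 0), ("A", True, 1), ("A", False, 0),
    ("B", True, 2), ("B", True, 1), ("B", True, 3), ("B", False, 2),
]

def mean(variant: str, field: int) -> float:
    rows = [u for u in users if u[0] == variant]
    return sum(u[field] for u in rows) / len(rows)

return_lift = mean("B", 1) - mean("A", 1)    # the metric the test measured
midnight_lift = mean("B", 2) - mean("A", 2)  # the metric it did not

print(f"return-rate lift: {return_lift:+.2f}")        # reads as a clear win
print(f"late-night session lift: {midnight_lift:+.2f}")  # the unmeasured cost
```

If only `return_lift` is computed, the experiment "objectively" supports shipping variant B; the sleep-disruption signal exists in the same data but was never queried. The value judgment happened when the metric was chosen, before any data arrived.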


6. The "move fast and break things" ethos was adaptive early and harmful at scale.

The Silicon Valley culture of prioritizing speed over caution made competitive sense for startups challenging incumbents. It became progressively harmful as platforms scaled to billions of users and the "things" that broke were no longer minor UX inconveniences but public health infrastructure, democratic institutions, and adolescent mental health. The culture calcified into organizational DNA and did not update its risk model as stakes increased.


7. Frances Haugen's disclosure revealed documented knowledge of harm without organizational action.

The Facebook Papers showed that Facebook had internal research documenting specific harms — recommendation rabbit holes, Instagram's effects on teenage mental health, vaccine misinformation amplification, ethnic violence facilitation — and had, in multiple cases, chosen not to act when acting would reduce engagement metrics. The problem was not ignorance; it was structural misalignment between knowledge of harm and organizational incentives to act on it.


8. Sophie Zhang's case illustrated systematic underinvestment in non-revenue markets.

Zhang's documentation of fake engagement networks in Bolivia, Ethiopia, India, and other countries showed that Facebook's failure to act on known harms was not random but correlated with economic priority. Countries less central to the advertising business received less content moderation investment, not as an explicit policy but as the predictable output of resource allocation driven by business logic.


9. Individual engineers can and sometimes do change specific decisions.

The Google Project Maven case shows that internal advocacy, when sufficiently organized and credible, can produce policy change. Individual engineers who raise concerns in design reviews, decline to implement specific features, or organize collective action can shift specific outcomes. These cases are real and matter, morally and practically.


10. Individual integrity does not change structural incentive systems.

An engineer who raises a concern in a product review changes the record and may change a specific decision. She does not change the OKR system, the advertising business model, the competitive dynamics between platforms, or the quarterly earnings framework that shapes organizational priorities. Structural problems require structural responses. Individual conscience operating within unreformed incentives is necessary but insufficient.


11. Ethics hires are often positioned without the authority to do their stated job.

Companies hire ethicists as a signal of ethical commitment, but the organizational position of ethics functions — reporting structure, authority, budget, independence — determines whether that commitment is operational. An ethics hire who has no veto authority over product decisions, no presence in OKR review, and no independent reporting lines is a reputation management asset, not an ethics function.


12. The Timnit Gebru case demonstrates what happens when ethics research challenges commercial interests.

Gebru was hired to conduct rigorous ethics research at Google. When that research produced findings — the Stochastic Parrots paper — that challenged the commercial direction of Google's LLM program, she was terminated. The dispute was framed procedurally; the substance of the paper's arguments was never publicly rebutted. The case illustrates the structural conflict of interest inherent in ethics research conducted by employees of the entity being evaluated.


13. The chilling effect of high-profile terminations shapes what research gets done.

When prominent ethics researchers are fired for work that challenges commercial interests, remaining researchers receive a clear signal about the limits of permissible inquiry. This signal does not need to be explicit to be effective. The implicit curriculum of Gebru's firing — certain research, pursued to certain conclusions, in certain forums, produces termination — shapes what questions get asked and what gets published.


14. Diffusion of responsibility allows ethical people to participate in harmful systems.

In large organizations, each individual's contribution to a harmful outcome is partial, mediated, and distant from the harm itself. The engineer who writes notification code does not see the teenager awakened at 2 AM. The product manager who approves the A/B test does not experience the anxiety generated by the feature. The psychological experience of bearing only a fraction of responsibility for a whole reduces moral activation, even when the harm is real.


15. Platform culture has developed a rich vocabulary of euphemism that shapes cognition.

"Engagement optimization," "re-engagement campaigns," "relevance ranking," and "highly engaged users" are not neutral descriptions. They are conceptual frames that shape what questions engineers ask and what alternatives seem available. An engineer thinking about "engagement optimization" asks different questions than an engineer thinking about "designing systems that will affect billions of people's daily mental states." The language encodes values; the values shape design.


16. Both/and, not either/or: individual responsibility AND structural change are necessary.

The failure to hold both claims simultaneously — collapsing into either "engineers are villains" or "engineers are helpless cogs" — is itself an obstacle to meaningful change. Individual choices matter and have consequences; structural incentives shape what individual choices are available and rewarded. Changing what platform systems do requires both engineers who exercise their available degrees of freedom and organizations, regulators, and markets that change the incentive structures within which those choices are made.


17. Effective ethics work inside platform companies requires structural authority, not just a mandate.

For ethics professionals like Dr. Aisha Johnson to have genuine rather than decorative influence, they need: formal authority in product review processes (with blocking power), presence in the OKR system (so wellbeing metrics have organizational weight), independent reporting lines (to escalate concerns outside the product organization), and institutional norms that connect wellbeing findings to product decisions. Without these structural features, ethics work produces reports that are read, acknowledged, and filed — but does not change what gets built.