Learning Objectives

  • Explain the core principles of the Center for Humane Technology and how they emerged from insider critique of the tech industry
  • Distinguish between engagement-maximizing design and Time Well Spent design principles, and apply that distinction to specific platform features
  • Identify and evaluate concrete design patterns that preserve user autonomy, including friction-as-feature, consent architectures, and autonomy-preserving defaults
  • Analyze the business model constraints that make humane design structurally difficult within advertising-dependent platforms, and evaluate alternative monetization models
  • Apply a vocabulary of humane design to assess whether a given platform feature serves or exploits the user

Chapter 39: Design Ethics and Humane Technology: Building Differently

Introduction: The Question That Won't Go Away

By this point in the book, we have examined how social media platforms exploit psychological vulnerabilities with considerable precision. We have traced how variable reward schedules hijack the dopamine system, how infinite scroll eliminates the cognitive stop signs that trigger reflection, how notification design weaponizes anxiety to manufacture compulsive checking, and how algorithmic feeds prioritize outrage and insecurity because those emotions drive engagement metrics. We have looked at regulatory proposals, at legislative hearings, at the FTC and the EU's Digital Services Act. We have sat with Maya as she gradually recognizes the architecture being built around her attention.

Now we arrive at a question that is both simpler and harder than anything we have examined so far: can platforms be built differently?

Not reformed through regulation alone. Not merely made less bad. Actually built differently — designed from the outset to serve the user's genuine interests rather than extract value from their psychological vulnerabilities? And if so, what does that look like in practice? What are the design principles? What are the business models that make those principles sustainable? Who is doing it, and what have they learned?

This chapter does not offer techno-utopian reassurance. The evidence we have accumulated in the preceding chapters is too substantial for easy comfort. But it does offer something more useful than despair: a specific, detailed account of what ethical platform design looks like, who is building it, what it costs, and what it proves. Because the existence of platforms that function without manipulation — Wikipedia, Signal, Mastodon, early Substack — is itself a form of argument. It demonstrates that the claim "there is no other way to run a large platform" is not a law of nature but a choice, dressed up as inevitability.

Let us be precise about what we mean by "building differently." We are not talking about adding a wellness feature to an extractive platform — the equivalent of putting a salad on the menu at a fast food restaurant without changing the kitchen. We are talking about platforms whose core architecture, whose default states, whose monetization logic, and whose success metrics are oriented toward user wellbeing. That is a structural difference, not a cosmetic one. And it requires asking hard questions about design, business models, and the limits of what individual engineers and designers can accomplish inside institutions whose incentives run in the opposite direction.


The Center for Humane Technology: From the Inside Out

To understand where the humane technology movement came from, we need to understand one specific person's trajectory, because it is unusually well-documented and unusually instructive.

Tristan Harris joined Google in 2011 when the company acquired his startup Apture. He became a design ethicist — a role that existed in name at Google, though its structural power within the organization was, as we will see, limited. Harris had been trained at Stanford's Persuasive Technology Lab under B.J. Fogg, the researcher who essentially founded the academic study of persuasive technology. Fogg's work, developed through the 1990s and 2000s, mapped with considerable rigor how technology could be designed to change user behavior. Harris had absorbed that framework and arrived at Google with something that many of his colleagues lacked: a clear conceptual vocabulary for what the company was actually doing to its users.

In 2013, Harris wrote a 141-slide internal presentation titled "A Call to Minimize Distraction & Respect Users' Attention." (We will examine this document in detail in Case Study 01.) The presentation made a case that many Google engineers would have found, at minimum, thought-provoking: that technology companies had accumulated enormous power over human attention without taking any corresponding ethical responsibility for how they used it. Harris argued that the platforms and apps that hundreds of millions of people used every day were not neutral tools. They were environments designed — however intentionally or unintentionally — to maximize engagement, which meant they were in structural tension with their users' genuine interests.

The presentation went viral inside Google. It was read by thousands of employees and attracted praise from senior leadership. And then, largely, nothing happened.

That outcome — the internal acknowledgment without structural change — tells us something important about the limits of individual ethical advocacy within institutions whose incentive structures run in a different direction. Harris eventually left Google in 2015. In 2018, he co-founded the Center for Humane Technology with Aza Raskin, Randima Fernando, and others, with backing from former tech insiders and philanthropic organizations. The CHT's mission was explicit: to reverse "the digital attention crisis and its effects on society" by shifting incentives, company cultures, and products.

What does the CHT actually propose? Its framework has several interlocking components. First, there is the diagnostic claim: that social media platforms are engaged in an "extractive attention economy" in which human attention is harvested and sold to advertisers, with each company competing against all others (and against sleep, family, reading, and face-to-face conversation) for a share of a finite resource. This is not a particularly controversial claim — it is, as we have seen throughout this book, essentially what the business model requires. But the CHT frames it with unusual moral directness: the extraction of attention without consent and against users' interests is not merely a side effect of capitalism but a specific ethical failure that can be named and addressed.

Second, the CHT offers a set of design principles intended to replace engagement maximization. These cluster around three ideas: respecting users' time and attention, aligning platform incentives with user wellbeing, and giving users genuine agency over their own experience. We will examine each of these in detail as we work through the chapter.

Third — and this is where the CHT's work becomes most politically significant — the organization has functioned as a bridge between the tech industry's internal dissenters and the policy and legislative world. Harris's 2017 TED talk "How a handful of tech companies control billions of minds" has been viewed over 45 million times. His testimony before Congress in 2019 and 2021 helped translate technical concepts about engagement optimization into language that legislators could act on. The Netflix documentary "The Social Dilemma" (2020), which prominently featured Harris and other CHT-affiliated insiders, brought these ideas to a mass audience. Whatever one thinks of the CHT's specific policy prescriptions, it has substantially shifted public discourse about what platforms are doing and whether it is acceptable.


Time Well Spent: Reframing the Metric

The most consequential single idea that Harris and the CHT contributed to design discourse is what they called "Time Well Spent." It is deceptively simple: instead of measuring platform success by time-on-platform, measure it by whether that time served the user's actual goals.

This reframing matters because success metrics are not neutral descriptions. They are the targets toward which engineers, product managers, and designers orient their work. When the metric is daily active users and average session length, every design decision is evaluated against those numbers. When the metric is "did the user accomplish what they came to do, and do they feel good about how they spent their time," the entire design orientation shifts.

Consider what this means in practice. Under engagement maximization, the ideal notification is one that brings the user back to the app when they are doing something else. Under Time Well Spent, the ideal notification is one that delivers genuinely relevant information at a moment the user has specified they want to receive it. These are not just different design choices. They reflect fundamentally different answers to the question: whose interests does this platform serve?

The Time Well Spent framework generates several specific design principles that have been influential in both the design community and in policy discussions.

Aligning with users' intentions. Before showing a user something, ask whether it serves what they came to do. A user who opens Instagram to see photos from friends they care about is not necessarily served by an algorithmic feed that surfaces outrage-inducing content from accounts they followed three years ago because the engagement numbers on that content are high. Aligning with intention means asking — and this is technically achievable, though rarely implemented — "what did this user say they wanted, and are we delivering it?"

Stopping cues and natural endpoints. Infinite scroll was designed explicitly to eliminate the moments at which users might naturally pause and decide whether to continue. "You're all caught up" messages — briefly present on Instagram in 2018 before being deprioritized — served as natural stopping cues. Under Time Well Spent, platforms would build in explicit stopping cues: "You've seen everything new from the people you follow. Want to continue browsing recommended content?" That is a radically different experience from the infinite scroll, and it is a choice, not a technical necessity.
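To make the structural difference concrete, here is a minimal sketch in Python (illustrative only; the function and field names are invented, not any platform's actual code) of a feed assembler that surfaces the natural endpoint instead of hiding it, and appends recommendations only after an explicit user choice:

```python
def build_feed(followed_unseen, recommended, continue_past_cue=False):
    """Assemble a feed view with an explicit stopping cue at the natural endpoint.

    Illustrative sketch: under engagement maximization, `recommended` would be
    spliced in silently and the cue would never appear.
    """
    feed = [("post", p) for p in followed_unseen]
    # Surface the natural endpoint instead of scrolling past it.
    feed.append(("stopping_cue", "You're all caught up."))
    if continue_past_cue:
        # Recommendations appear only after an explicit choice, clearly labeled.
        feed.extend(("recommended", p) for p in recommended)
    return feed
```

The engagement-maximizing version of this function is the same code with the cue deleted and the labels removed; the difference is a design choice, not an engineering constraint.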

Defaulting to privacy and minimal notification. Under current design paradigms, the default is maximum data collection and maximum notification. Opting out requires deliberate navigation through settings menus designed to discourage it. Time Well Spent inverts this: the default is minimal data collection and no notifications except those the user has explicitly requested. This is not technically difficult. It is structurally difficult because it conflicts with the engagement metrics that drive advertising revenue.

Honest persuasion. There is a distinction, ethically significant if often overlooked in practice, between persuasion that helps users do what they actually want and manipulation that extracts behavior users would not endorse if they understood the mechanism. Honest persuasion — reminders that align with users' stated goals, suggestions that reflect users' expressed preferences — differs from the variable reward manipulation that drives compulsive checking. The CHT framework draws this line clearly; implementing it requires platforms to actually care about which side of the line they are on.

Social reciprocity without exploitation. The social pressures built into platform design — read receipts, seen-at timestamps, public like counts — are not features that emerge organically from social needs. They are design choices that exploit social anxiety to drive engagement. Time Well Spent asks whether each social feature genuinely enhances social connection or merely creates anxiety that drives platform use.


Design Patterns That Serve Users

The principles above are not merely theoretical. A number of specific design patterns have been developed, tested, and in some cases implemented that embody them. Some of these exist within mainstream platforms, usually as optional features reached only through obscure settings menus. Others exist in alternative platforms designed from the ground up with different priorities. Let us walk through the most significant.

Distraction-Free Reading Modes

Apple's Safari has offered Reader Mode since 2010 — a view that strips away navigation, advertisements, related-article links, and other engagement-maximizing elements, presenting only the article text. Firefox has offered a similar feature. Instapaper and Pocket built entire businesses around the proposition that people wanted to read things without the surrounding attention-capture machinery. The fact that these tools exist and have users demonstrates demand for an alternative experience.

What is significant about these tools is not just that they are useful but what their existence implies about the default. If Reader Mode genuinely serves users, why is it not the default? The answer is that it is not the default because it removes the elements — advertising, related content, social sharing prompts — that serve the platform's engagement and revenue goals. The default is set for the platform's benefit, not the user's. Naming this choice — recognizing it as a choice rather than a technical necessity — is the first step in changing it.

Friction as a Feature

We have discussed friction throughout this book as something platforms work to eliminate: every additional tap, every loading screen, every confirmation dialog is an opportunity for the user to stop and decide not to continue. Engagement-maximizing design minimizes all such friction.

Humane design inverts this in specific, targeted ways. The goal is not to make platforms annoying to use — that is merely bad design. The goal is to introduce friction at decision points where users benefit from pausing: before posting something in anger, before spending another hour on a platform when they intended to spend twenty minutes, before sharing content they have not read. Twitter briefly tested a feature in 2020 that prompted users to read an article before sharing it — a piece of friction inserted precisely at the point where reflexive sharing (driven by outrage or tribal reinforcement) was most likely to outrun genuine consideration. The feature showed measurable effects on sharing behavior. It was not made permanent.

The principle here is what designers call "appropriate resistance" — resistance that serves the user's reflective preferences rather than their impulsive ones. A platform that adds friction before you can delete your account is using friction against you. A platform that adds friction before you post at 2 a.m. is using friction for you.
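"Appropriate resistance" can be sketched in a few lines of Python (hypothetical action names and thresholds, invented for illustration): friction is interposed before impulsive actions and deliberately withheld where it would work against the user, such as account deletion.

```python
LATE_HOURS = {23, 0, 1, 2, 3, 4}  # illustrative threshold for a late-night pause

def reflective_prompt(action, context):
    """Return a pause-and-confirm prompt when the user benefits from one, else None."""
    if action == "share_link" and not context.get("link_opened", False):
        return "You haven't opened this article yet. Read it before sharing?"
    if action == "post" and context.get("local_hour") in LATE_HOURS:
        return "It's late. Post now, or save this as a draft for the morning?"
    if action == "delete_account":
        # Friction here would serve the platform, not the user: leaving stays easy.
        return None
    return None
```

The asymmetry is the point: the same mechanism — a pause before acting — is applied only where the user's reflective self would want it.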

Default-Off Notifications

Apple's iOS 12, released in 2018, introduced Screen Time, which included granular notification controls. Android's Digital Wellbeing tools, developed over the same period, offered similar functionality. Both represented operating system-level attempts to give users more control over the attention-extraction machinery built into their apps.

The underlying insight is that notifications are not neutral communications. They are interruptions designed to pull users back to a platform at moments chosen by the platform, not the user. Under a Time Well Spent framework, the default would be no notifications except those the user explicitly requests — and the request process would be simple, clear, and presented in plain language that explains what the user is agreeing to.

The current reality is almost precisely the opposite. Apps request notification permissions on first launch, often before users have any basis for judging whether they want them. The permission UI is designed to encourage acceptance. Turning off notifications for specific apps requires multiple steps through settings menus. The asymmetry is not accidental.
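The inverted default is simple to express in code. Here is a minimal sketch (Python, invented names; real notification systems are far more elaborate) of a default-deny policy in which silence is the starting state and opting out is exactly as easy as opting in:

```python
class NotificationPolicy:
    """Default-deny notification filter: deliver nothing the user did not request."""

    def __init__(self):
        self.allowed = set()  # empty by default: the humane default is silence

    def opt_in(self, category):
        self.allowed.add(category)

    def opt_out(self, category):
        self.allowed.discard(category)  # symmetric with opt_in, no extra steps

    def deliver(self, pending):
        """Pass through only notifications in explicitly requested categories."""
        return [n for n in pending if n["category"] in self.allowed]
```

Under the current paradigm, the constructor would instead start with every category enabled and opting out would be buried behind several screens; the asymmetry lives in the defaults, not in the data structure.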

Usage Dashboards

Apple Screen Time and Android Digital Wellbeing both offer users visibility into how they spend time on their devices. These are genuine attempts at what we might call attention transparency — giving users the information they need to make conscious choices about their own behavior.

Maya, our seventeen-year-old in Austin, experienced a version of this recognition when she first looked at her Screen Time report and saw that she had spent four hours and twenty minutes on TikTok on a Tuesday. The number was not what surprised her — she would have guessed it was high. What surprised her was the gap between the number and her experience: she had not felt like she was spending four hours. The time had passed without her noticing. That gap — between actual time spent and subjectively experienced time — is one of the most important things usage dashboards reveal.

The limitation of usage dashboards as currently implemented is that they are passive. They display information but do not help users act on it. Genuinely humane usage dashboards would go further: they would connect the data to the user's stated goals, offer concrete tools for adjusting behavior, and be integrated into the platform experience rather than buried in device settings.
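What "connecting the data to the user's stated goals" might mean is easy to sketch (Python, with invented names and wording; a real implementation would involve actual telemetry and UI):

```python
def usage_report(minutes_by_day, goal_minutes):
    """Compare actual use against the user's own stated goal, not an engagement target."""
    average = sum(minutes_by_day) / len(minutes_by_day)
    overrun = average - goal_minutes
    if overrun <= 0:
        return f"Averaging {average:.0f} min/day, within your {goal_minutes}-minute goal."
    return (f"Averaging {average:.0f} min/day, {overrun:.0f} min over your "
            f"{goal_minutes}-minute goal. Adjust the goal, or add a daily reminder?")
```

The passive dashboard reports the first number; the humane one reports the gap and offers an action.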

User-Controlled Feed Chronology

One of the most widely requested and most consistently denied features in major social media platforms is a simple chronological feed. Instagram switched from chronological to algorithmic ordering in 2016, citing the claim that algorithmic ordering helped users see more content they cared about. The fact that algorithmic ordering also dramatically increased time-on-platform and therefore advertising revenue was, in the company's public communications, coincidental.

Twitter/X has oscillated on this question, offering a "Latest Tweets" option while defaulting to the "For You" algorithmic feed. Instagram added a "Following" feed option in 2022 under significant user pressure. But in both cases, the option is not the default — users must actively choose it, and the design consistently nudges them back toward the algorithmic feed.

Giving users genuine, persistent control over feed ordering is technically trivial. It is politically difficult because platforms have demonstrated, in their own internal research, that algorithmic feeds increase engagement metrics. The user who knows that "engagement" often means anxiety, outrage, and compulsive checking might evaluate that tradeoff differently than the product team whose bonuses depend on it.
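"Technically trivial" is meant literally. In sketch form (Python, invented field names), the entire difference between the two feed philosophies is which key the sort uses:

```python
def order_feed(posts, mode="chronological"):
    """Order a feed; the policy difference is a one-line choice of sort key."""
    if mode == "chronological":
        # Serves the user's stated intent: newest first from accounts they follow.
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    # Serves the platform's metric: whatever a model predicts will hold attention.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
```

Everything difficult about feed ranking lives in producing the `predicted_engagement` score; honoring the user's preference for chronology requires none of it.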


The Business Model Problem

We have arrived at the structural obstacle that makes much of the above discussion feel aspirational rather than descriptive. Virtually every large social media platform is funded by advertising. Advertising revenue is tied to attention. Attention is maximized by engagement. Engagement is driven by features that exploit psychological vulnerabilities. This is not a series of unfortunate coincidences. It is the logic of the business model.

Any serious discussion of humane design has to grapple with this directly, because surface-level design changes — adding a usage dashboard, making notifications opt-in — cannot fix a business model whose fundamental logic runs counter to user wellbeing. You can put a salad on the menu without changing the kitchen, but you cannot serve users well if your revenue depends on keeping them at the table as long as possible regardless of whether that serves them.

Subscription Models

The most structurally clean alternative to advertising revenue is direct subscription. When users pay for a platform, the platform's incentive is to deliver value that users judge worthy of continued payment. This is not a guarantee of ethical behavior — subscription models can produce their own problematic dynamics — but it removes the most direct structural conflict.

Several platforms operate on subscription models and demonstrate that the model is viable, if not necessarily scalable to every use case.

Substack (launched 2017) allows writers to build paid subscription newsletters. The platform's revenue comes from a percentage of subscription fees, not from advertising. This means Substack has no financial interest in maximizing time-on-platform or outrage engagement. Writers succeed by delivering content their readers value enough to pay for. This is not a perfect model — Substack has faced criticism for hosting extremist content and for the power dynamics between the platform and writers who build audiences there — but the incentive structure is genuinely different.

Beehiiv (launched 2021) operates on a similar model, with creators retaining more control of their subscriber relationships and Beehiiv charging for platform access rather than taking a revenue cut. The deliberate design choice to give creators ownership of their audience lists — rather than holding those relationships hostage to the platform — represents a meaningful structural difference from advertiser-supported social media.

Signal (developed since 2013 by Open Whisper Systems as the successor to the TextSecure app, and run since 2018 by the nonprofit Signal Foundation) operates on a combination of donations and initial funding of roughly $50 million from WhatsApp co-founder Brian Acton, who left Facebook in 2017 and co-founded the foundation. Signal is explicitly anti-surveillance by design: end-to-end encrypted, minimal metadata collection, no advertising. Its user base is smaller than WhatsApp's but has grown substantially, particularly following privacy concerns about WhatsApp's 2021 terms-of-service changes.

The Cooperative Ownership Model

A more radical structural alternative is cooperative ownership, in which the platform is owned by its users rather than by shareholders or venture capital investors. The theoretical appeal is clear: if users own the platform, the platform's incentives align with users' interests by definition.

Wikipedia is the most successful example at scale, and we will examine it in detail in Case Study 02. Wikipedia is owned by the Wikimedia Foundation, a nonprofit, and governed by a community of volunteer editors. It has no advertising, no algorithmic feed, no engagement optimization. It is one of the ten most-visited websites in the world. Its existence is a standing refutation of the claim that large-scale information platforms require advertising and engagement manipulation to function.

Mastodon and the broader ActivityPub federation represent a decentralized alternative in which no single entity owns the network. Individual instances are operated by their own communities, often with explicit community norms and governance structures. The federated architecture means no single company can impose engagement-maximizing design across the network. It also means no single company can invest the resources to develop and maintain the platform at the scale of a Facebook or Twitter. The tradeoffs are real.

Advertising Without Surveillance

A third alternative is advertising that does not require behavioral surveillance. The current digital advertising model — in which ads are targeted based on detailed profiles of individual behavior, mood, and psychology — requires both the data collection that users object to and the engagement optimization that drives the arms race for attention. But advertising existed before behavioral targeting, and some forms of it remain viable.

Contextual advertising — ads matched to the content being viewed rather than to the individual viewer's profile — does not require behavioral data and can be effective for many advertisers. DuckDuckGo, the privacy-focused search engine, operates entirely on contextual advertising and has demonstrated sustained profitability. The Guardian has moved toward a reader-supported model supplemented by contextual advertising, explicitly rejecting the behavioral surveillance model.

The honest accounting of contextual advertising is that it generates less revenue per impression than behaviorally targeted advertising. This is why no major social media platform has adopted it: it would reduce advertising revenue. But it represents a genuine alternative that is viable for platforms with different business model assumptions.
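The mechanical difference between contextual and behavioral targeting is visible in what the matching function needs as input. In this sketch (Python, invented names), the function takes no information about the reader at all; that absence is the entire privacy argument:

```python
def match_contextual_ads(page_keywords, ad_inventory):
    """Match ads to the content being viewed, with no profile of the viewer."""
    topics = set(page_keywords)
    # Note what is *not* a parameter: user ID, browsing history, inferred mood.
    return [ad for ad in ad_inventory if topics & set(ad["keywords"])]
```

A behavioral system would replace `page_keywords` with a per-user profile assembled from surveillance; the revenue gap between the two models is the price of that profile.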


Case Studies in Humane Design

Beyond the principles and the business model arguments, the most compelling evidence that ethical design is possible comes from specific examples. Let us examine several in detail.

Wikipedia: The Anti-Platform

Wikipedia is an anomaly in the attention economy. It is the fifth most-visited website in the world (as of 2024, according to Similarweb rankings). Approximately 1.8 billion unique devices access it every month. It is available in 332 languages, with over 62 million articles across all language editions. And it does none of the things that attention economy platforms are supposed to require: no advertising, no algorithmic feed, no notifications, no engagement optimization, no variable reward mechanics, no behavioral data collection.

How does it function? The Wikimedia Foundation is funded almost entirely by small donations from readers — typically through the donation banners that appear several times a year, which many users find annoying but which have proven remarkably effective. In 2022-2023, the Foundation raised approximately $180 million from over 8 million individual donors. This is a substantial sum, though modest compared to the revenues of major social media companies, and it is sufficient to maintain one of the internet's most important resources.

The governance model is equally instructive. Wikipedia's content is created and moderated by a community of volunteer editors, operating under policies developed and refined over more than two decades. This is not a perfect system — Wikipedia has well-documented problems with editor demographics (heavily male, heavily from wealthy countries), with the difficulty of maintaining articles on niche topics, and with occasional episodes of coordinated manipulation. But it is a system in which the incentives are oriented toward accuracy and information quality rather than engagement.

We will examine Wikipedia in much greater depth in Case Study 02. For now, the key point is this: Wikipedia proves that a large-scale information platform can operate without the engagement machinery that social media companies claim is necessary. It is not growing at the rate of TikTok. It is not generating billions in advertising revenue. But it is serving its users' genuine informational needs without exploiting their psychological vulnerabilities, and it has done so for over twenty years.

Signal: Privacy as Design Principle

Signal is the most widely used privacy-focused messaging application in the world, with an estimated 40 million monthly active users as of 2024. Its architecture embodies a design principle that is the direct inverse of the surveillance capitalism model: collect as little data as possible.

Signal's technical architecture ensures that the company cannot read users' messages even if compelled to do so by legal process. It stores minimal metadata. It does not use behavioral data for advertising (it has no advertising). It does not attempt to maximize engagement — there is no algorithmic feed, no notification manipulation, no variable reward mechanics. It is, by the metrics of the attention economy, a boring platform.

It is also a platform that people trust. Signal's growth has been driven not by viral mechanics or engagement optimization but by word-of-mouth and by periodic trust crises at competing platforms. When WhatsApp updated its privacy policy in January 2021 to allow more data sharing with Facebook, Signal gained approximately 7.5 million new users in a single week. Users knew what they were getting with Signal and chose it. That is not a mechanism of manipulation. It is an alignment of platform design with user values.

Mastodon and the Fediverse: Federation as Ethics

Mastodon, launched in 2016 by Eugen Rochko, is a decentralized microblogging platform that operates on the ActivityPub protocol. Instead of a single company owning the network, Mastodon consists of thousands of independently operated servers ("instances") that can communicate with each other. Users belong to specific instances, each with its own community norms and governance, while being able to follow and interact with users on other instances.

The federated architecture has direct ethical implications. Because no single company owns the network, no single company can impose engagement-maximizing design across it. Individual instance operators make their own choices about moderation, content policies, and design. Some instances have explicitly adopted humane design principles — no algorithmic recommendation, chronological feeds by default, strong moderation norms. Others have been more permissive. The result is a network in which users can choose their governance environment rather than accepting whatever a single company imposes.

Mastodon grew dramatically in late 2022 following Elon Musk's acquisition of Twitter, its monthly active users climbing from roughly 300,000 in October to about 2.5 million by late November 2022. The growth tested the federation model's limitations — smaller instances struggled with the load, and new users found the onboarding experience significantly more complex than centralized platforms. But the growth also demonstrated genuine demand for an alternative architecture.

The limitation of the federated model is not technical but economic. Instances are typically operated by volunteers or small organizations with limited resources. Moderation is labor-intensive and often depends on the dedication of unpaid community members. At the scale of a Twitter or Facebook, the volunteer governance model faces obvious sustainability questions. These are real constraints, not arguments against federation but constraints that any honest account must acknowledge.


The Designer's Dilemma: Individual Ethics Inside Extractive Institutions

Harris's story — the internal presentation, the praise, the lack of structural change, the eventual departure — illustrates a tension that every engineer and designer at an engagement-maximizing platform eventually faces. What can you do, ethically, inside an institution whose incentive structure runs counter to user wellbeing?

This is not an abstract question. It is the daily reality for tens of thousands of engineers, designers, product managers, and researchers at major social media companies. Many of them are not indifferent to the harms their platforms produce. Some are actively distressed by them. The question of what they can do about it — and what they cannot do — matters both practically and morally.

The honest answer has several parts.

What individual designers can do: Advocate internally. Write the uncomfortable presentations. Build optional features that embody humane principles — the stopping cue, the chronological feed option, the notification opt-in — even when they know those features will not be defaulted. Document the ethical concerns clearly, for the record. Push back in design reviews. Refuse specific assignments that are straightforwardly harmful. Mentor junior colleagues toward ethical awareness. Connect with others inside the company who share the same concerns.

None of this is nothing. The accumulation of internal pressure matters. The companies that have made even small concessions to humane design — Screen Time, "You're All Caught Up," reduced notification defaults — did so in part because internal advocates pushed for them. The researcher who documents that a feature is producing harm, even if the documentation is ignored in the short term, creates a record that may matter later — in a regulatory proceeding, in a news story, in the company's own eventual reckoning.

What individual designers cannot do: Change the structural incentives. If the company's revenue model requires engagement maximization, individual ethical advocacy will be limited to modifications at the margins, not transformations of the core. This is not cynicism; it is a clear-eyed description of how institutional incentives work. Harris understood this eventually, which is why he left Google and built an external advocacy organization rather than continuing to push internally.

The designers who have had the most significant ethical impact are, disproportionately, those who left: Harris himself, Frances Haugen (the Facebook whistleblower whose 2021 document disclosures remain the most comprehensive public account of platform harms), Justin Rosenstein (who co-created the Facebook Like button and later regretted it publicly), and others. External pressure — from former insiders who can speak plainly — has proven more structurally significant than internal advocacy, because external advocates are not constrained by institutional loyalty or by fear of being sidelined.

This does not mean internal advocacy is without value. It means individual ethics without structural change has predictable limits, and anyone doing this work should be clear-eyed about what those limits are.


Velocity Media's Ethical Inflection Point

Three years into Velocity Media's existence, Dr. Aisha Johnson has completed the ethics audit that CEO Sarah Chen commissioned in the aftermath of the Hartley incident — the case, described in earlier chapters, in which Velocity's recommendation algorithm demonstrably worsened a teenage user's mental health crisis by continuing to surface depression-related content in a feedback loop the company had not anticipated and did not initially act on.

The boardroom is small and quiet when Dr. Johnson presents her findings to Chen and Marcus Webb, Velocity's Head of Product. Marcus has reviewed the slide deck in advance. He looks, Sarah thinks, like someone preparing to argue but not entirely sure he wants to.

Dr. Johnson's audit has identified three categories of features that her analysis characterizes as exploitative: the notification timing algorithm, which, she demonstrates, uses machine learning to identify the specific times at which each individual user is most psychologically vulnerable to re-engagement; the streak mechanic in the Creator tools, which she argues functions as a variable reward system driving compulsive content-checking; and the content moderation lag that allowed the Hartley feedback loop to continue for eleven days after the first internal flag.

"I want to be precise about what I'm arguing," Dr. Johnson says. "These features were not designed with malicious intent. But intent is not the relevant standard. Effect is. And the effect of these three features, taken together, is to produce a platform that extracts attention from users in ways that do not serve their genuine interests."

She presents three paths forward.

The first path is cosmetic adjustment: add a usage dashboard, make the streak mechanic less prominent, improve the response time on content moderation flags. These changes would be genuinely positive, Dr. Johnson says, but they would not address the underlying architecture. Velocity would remain, at its core, an engagement-maximizing platform. The Hartley incident, or something like it, would happen again.

The second path is structural redesign of the three identified features: replace the vulnerability-targeting notification algorithm with a user-controlled notification system, replace the streak mechanic with opt-in engagement metrics that users can customize, and staff the content moderation function at the level the platform's scale requires. This would involve meaningful cost — the notification algorithm change alone would, in Marcus's estimate, reduce daily active user numbers by eight to twelve percent in the first quarter — but it would genuinely alter the platform's relationship with its users.

The third path is what Dr. Johnson calls the "alignment model": a fundamental rethinking of what Velocity measures as success. Instead of daily active users and average session length, primary success metrics would include user-reported satisfaction, whether users accomplished what they came to do, and whether the platform's youngest users showed healthy rather than compulsive patterns of use. This path requires not just feature changes but a business model conversation, because some of these metrics do not translate directly into the advertising revenue numbers that Velocity's investors track.

Marcus says, carefully: "Aisha, I respect everything in this deck. But if we take path three, we're having a conversation with our Series B investors that I don't know how to win."

Sarah Chen looks at the three paths on the screen for a long moment. "What does path two cost us," she asks, "versus what does path one cost us when the next Hartley happens?"

Dr. Johnson has prepared for this question. She opens to her final slide.

The answer is not simple. But it is not, she argues, the answer Marcus thinks it is. We will return to Velocity's decision in Chapter 40.


Toward a Vocabulary of Humane Design

One of the contributions that the humane technology movement has made — one that outlasts any specific platform feature or policy proposal — is a more precise vocabulary for describing what is happening in the design of digital environments. Having better words for things helps us see them more clearly, and it helps us demand better.

Let us consolidate the key terms.

Consent architecture refers to the structural design of how platforms obtain, present, and respect user choices. A consent architecture is honest when it presents choices in plain language, when the options are genuinely symmetrical (opting in and opting out are equally easy), and when the defaults reflect what most users would choose if they understood the choice. A consent architecture is manipulative when it uses dark patterns to discourage opt-out, when it buries options in obscure settings menus, when it resets preferences without notification, or when it uses deceptive framing ("Help us improve your experience" for data collection that primarily benefits the platform).

The EU's General Data Protection Regulation, and particularly its cookie consent requirements, has driven significant changes in consent architecture — though the implementation has been uneven, and the cookie banners that now populate European web browsing are themselves often examples of dark patterns masquerading as consent.
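The symmetry criterion described above is concrete enough to be tested mechanically. The following sketch, with entirely hypothetical names and fields (nothing here comes from any real platform's API), shows one way a design team might encode the check that opting out is exactly as easy as opting in:

```python
# Illustrative sketch of the symmetry test for consent architecture.
# All class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class ConsentChoice:
    label: str                    # e.g. "Accept all" or "Reject all"
    clicks_required: int          # user actions needed to complete this choice
    visible_on_first_screen: bool # is the option shown without digging into menus?

def is_symmetrical(accept: ConsentChoice, decline: ConsentChoice) -> bool:
    """An honest consent architecture makes opting out exactly as easy
    as opting in: same number of clicks, same visibility."""
    return (accept.clicks_required == decline.clicks_required
            and accept.visible_on_first_screen == decline.visible_on_first_screen)

# A typical dark-pattern cookie banner fails the test: one click to accept,
# a buried multi-step path to decline.
accept = ConsentChoice("Accept all", clicks_required=1, visible_on_first_screen=True)
decline = ConsentChoice("Reject all", clicks_required=3, visible_on_first_screen=False)
print(is_symmetrical(accept, decline))  # False
```

The point of the sketch is not the code itself but the discipline it represents: if symmetry can be expressed as an assertion, it can be made a release requirement rather than an aspiration.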

Attention budget is a conceptual frame that treats user attention as a finite resource that the user has a legitimate interest in allocating according to their own values and goals. A humane platform helps users manage their attention budget — it provides transparency about how much time they're spending, it builds in stopping cues, it respects stated time limits. An extractive platform treats the user's attention as a resource to be maximized and depleted, without regard for the user's own priorities.

The attention budget frame has practical implications for design: it suggests that platforms should be able to answer the question, "Is this feature helping users spend their attention according to their own values, or is it overriding their preferences in favor of the platform's engagement metrics?" This is a testable question. Most current platforms have not tested it.
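To make the attention-budget frame concrete, here is a minimal sketch (the class and method names are invented for illustration) of a platform tracking session time against a user-chosen limit and surfacing a stopping cue when that limit is reached, rather than suppressing it:

```python
# Minimal sketch of the attention-budget idea: the user sets the limit,
# the platform tracks usage and surfaces the stopping cue prominently.
# Names are illustrative, not drawn from any real platform API.
from typing import Optional

class AttentionBudget:
    def __init__(self, daily_limit_minutes: int):
        self.daily_limit = daily_limit_minutes  # chosen by the user, not the platform
        self.minutes_used = 0

    def record_usage(self, minutes: int) -> None:
        self.minutes_used += minutes

    def stopping_cue(self) -> Optional[str]:
        """Return a cue once the user's own limit is reached; a humane
        platform displays this prominently instead of burying it."""
        if self.minutes_used >= self.daily_limit:
            return (f"You've reached your {self.daily_limit}-minute goal "
                    "for today. You're all caught up.")
        return None

budget = AttentionBudget(daily_limit_minutes=30)
budget.record_usage(25)
assert budget.stopping_cue() is None  # under budget: no interruption
budget.record_usage(10)
print(budget.stopping_cue())          # limit reached: the cue fires
```

Note what the design does not do: it never extends the limit on its own, and it never nags the user back after they leave. The budget belongs to the user.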

Autonomy-preserving defaults is a principle drawn from behavioral economics and ethics. It holds that when setting defaults — which notification settings are on, which data is collected, what appears in the feed, whether algorithmic or chronological ordering is used — the ethical choice is the one that gives the user maximum control over their own experience. The name is literal: such defaults preserve the user's ability to make genuine choices rather than defaulting them into behaviors that serve the platform.

This principle has a counterpart in the medical ethics literature: informed consent requires not just that patients are told the truth but that the structure of the decision-making process supports genuine autonomous choice. The parallels to platform design are not accidental.
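Expressed as a settings schema, the principle is simple: every engagement-serving behavior starts off, every user-protective behavior starts on, and only the user moves a setting away from that baseline. The field names below are hypothetical, a sketch of the shape such a baseline might take:

```python
# Sketch of autonomy-preserving defaults as a settings baseline.
# Field names are hypothetical illustrations, not a real platform's schema.
from typing import Optional

HUMANE_DEFAULTS = {
    "push_notifications": False,          # opt-in, never opt-out
    "behavioral_data_collection": False,  # off unless the user chooses it
    "algorithmic_feed": False,            # chronological by default
    "usage_dashboard_visible": True,      # transparency on by default
    "session_time_limits_enabled": True,  # the stopping-cue machinery is live
}

def new_user_settings(overrides: Optional[dict] = None) -> dict:
    """Start every account from the autonomy-preserving baseline;
    the user, not the platform, moves settings away from it."""
    settings = dict(HUMANE_DEFAULTS)
    if overrides:
        settings.update(overrides)
    return settings
```

The design choice worth noticing is where the burden of action falls: under this baseline, the platform must persuade the user to enable extraction, rather than the user having to discover how to disable it.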

Meaningful friction names the specific use of designed resistance to support reflective decision-making. As noted above, the goal is not friction for its own sake but friction at points where users benefit from slowing down. The design challenge is identifying those points with precision: where is impulsive behavior most likely to conflict with the user's reflective preferences? Where does a pause genuinely help? This requires platforms to actually care about the answer — which, under engagement maximization, they do not.
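A sketch can make the precision requirement visible. The trigger conditions below are hypothetical examples of points where impulsive action plausibly conflicts with reflective preference, such as resharing an article the user never opened; a real platform would have to identify its own such points empirically:

```python
# Sketch of meaningful friction: resistance only at the specific points
# where impulsive behavior is likely to conflict with the user's own
# reflective preferences. Trigger conditions are hypothetical examples.

def needs_friction(action: str, context: dict) -> bool:
    """Decide whether to interpose a reflective pause before an action."""
    if action == "reshare" and not context.get("article_opened", False):
        return True  # sharing a link the user never actually opened
    if action == "post" and context.get("toxicity_score", 0.0) > 0.8:
        return True  # a likely-heated reply, worth a second look
    return False     # everywhere else: no friction, no nagging

def confirm_prompt(action: str) -> str:
    """The pause is an invitation to reflect, not a barrier."""
    return f"Before you {action}: want to take a moment to review this first?"
```

The contrast with indiscriminate friction matters: a platform that slows everything down is merely annoying, while one that slows down only these moments is doing design work on the user's behalf.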

Minimum viable humane platform names the question we have been building toward throughout this chapter: what is the smallest, least costly set of design and business model choices that would produce a platform that serves users rather than exploits them? The question is practically useful because it resists the all-or-nothing framing that contrasts perfect humane design with the status quo and implies there is no middle ground. There is a middle ground. Wikipedia has found it. Signal has found it. The question for the Velocity Medias of the world is whether they are willing to find it too.


Conclusion: Proof of Concept

The industry will not reform itself without pressure. We have established this across the preceding chapters. The advertising-based business model creates structural incentives that are too strong, and too well-aligned with the interests of shareholders and growth-obsessed investors, to be overcome by internal advocacy alone. Regulation, as we examined in Chapter 38, is a necessary component of any serious structural change. Individual behavioral choices, as we discussed in Chapter 36, matter but cannot substitute for structural reform.

But none of this means that the only thing to do is wait for regulation and organize for structural change — though both of those are important. It means something more specific: that the ethical design alternatives that exist right now function as proof of concept. They demonstrate what is possible. They demonstrate that the claim "there is no other way to run a large platform" is false. They create the vocabulary and the design patterns that future platforms — and reformed versions of current platforms, under regulatory pressure — can draw on.

In its first quarter century, Wikipedia has served billions of users without manipulating them. Signal has given hundreds of millions of people private communication without surveilling them. Mastodon has built a substantial social network without any single company controlling its incentive structure. These are not small demonstrations. They are evidence, at scale, that the choice between "large platform" and "non-manipulative platform" is a false one.

What is the minimum viable humane platform? It is a platform that:

  • Charges users directly for value, or accepts donations, or relies on contextual advertising, rather than behavioral surveillance advertising
  • Defaults to user privacy and user control, requiring opt-in rather than opt-out for data collection and notification
  • Provides genuine transparency about time spent and allows users to set meaningful limits that the platform helps enforce
  • Uses chronological feeds by default, or gives users genuine persistent control over feed ordering
  • Builds in stopping cues rather than infinite scroll
  • Introduces meaningful friction at points where impulsive behavior conflicts with user wellbeing
  • Measures success by user-reported satisfaction and goal accomplishment, not exclusively by time-on-platform
  • Staffs moderation at the level the platform's scale requires
  • Treats its youngest users with particular care, recognizing their heightened vulnerability

This is not a utopian list. Every item on it is implemented, in whole or in part, by at least one existing platform. The question is not whether it is possible. It is whether the people with the power to make these choices will make them — and what combination of market pressure, regulatory requirement, and public expectation will be needed to tip the balance.

Maya is seventeen. She has been using TikTok and Instagram since she was thirteen, and she has spent the last several months developing what she now calls, with some precision, "pattern recognition" — the ability to see the design choices embedded in the platforms she uses, to name them, and to evaluate whether they serve her. She has not left social media. She has not resolved to check her phone less and felt the resolution evaporate by Tuesday afternoon. She has done something harder and more durable: she has changed the way she sees.

In Chapter 40, we will follow Maya, and Velocity Media, and you, toward what comes next. Not rescue — she does not need rescuing. Not perfection — that is not available. Something more like informed agency in an imperfect world. Which is, in the end, the best that ethical design can offer too.


This chapter has surveyed the landscape of humane technology design: its intellectual origins in the Center for Humane Technology, its practical manifestations in specific design patterns, its structural requirements at the level of business model, and its proof of concept in platforms like Wikipedia and Signal. The vocabulary developed here — consent architecture, attention budgets, autonomy-preserving defaults, meaningful friction — will recur in Chapter 40 as we draw the book's arguments toward resolution. The exercises and case studies that follow offer opportunities to apply these concepts to specific platforms and specific design decisions.