Chapter 39 Exercises: Design Ethics and Humane Technology

These exercises are designed to move you from conceptual understanding to applied analysis and original design thinking. They range from short-answer reflection questions to extended design projects and structured debates. Some exercises are best done individually; others are designed for group or class discussion.


Part A: Short Answer — Core Concepts

Exercise 1 In your own words, explain the difference between "engagement-maximizing design" and "Time Well Spent design." Use one specific platform feature as an example of each. Your answer should make clear not just what the features do but whose interests they primarily serve.

Exercise 2 After writing his 2013 internal presentation, Tristan Harris remained at Google for roughly three more years before leaving. Drawing on the chapter's discussion of "the designer's dilemma," explain why his internal advocacy was limited in its effects. What structural factors constrained what he could accomplish inside Google, regardless of how persuasive his arguments were?

Exercise 3 Define "consent architecture" in your own words. Then identify one specific example of a manipulative consent architecture and one specific example of an honest consent architecture from your own recent experience with digital platforms. What made each one manipulative or honest?

Exercise 4 The chapter distinguishes between "friction that works against users" and "friction that works for users." Provide two concrete examples of each type. What principle allows you to tell them apart?

Exercise 5 Explain what the chapter means by "autonomy-preserving defaults." Why does the choice of default — rather than just the availability of an option — matter so much ethically? Use the example of notification settings to anchor your explanation.

Exercise 6 What is the "attention economy," and how does the advertising-based business model create structural incentives that are difficult to overcome through individual ethical design choices alone? Be specific about the mechanism — what exactly does advertising revenue incentivize, and how does that conflict with user wellbeing?

Exercise 7 The chapter states that Wikipedia is "a standing refutation of the claim that large-scale information platforms require advertising and engagement manipulation to function." Do you find this argument convincing? What would a skeptic say in response, and how would you answer that skeptic?

Exercise 8 What is "meaningful friction"? Why is the word "meaningful" important — what distinguishes it from friction that is merely annoying or obstructive? Give an example of a design that uses friction in a meaningful way and explain why it qualifies.


Part B: Feature Analysis — Humane Design Criteria

For each of the following platform features, analyze whether the feature embodies humane design principles or engagement-maximizing design. Use the vocabulary from the chapter (consent architecture, attention budget, autonomy-preserving defaults, meaningful friction, Time Well Spent). Explain your reasoning.

Exercise 9: Instagram's "You're All Caught Up" message
In 2018, Instagram introduced a "You're All Caught Up" message in the feed, indicating that the user had seen all new posts from the accounts they follow. The message was later de-emphasized in the interface.

a) What humane design principle does this feature embody?
b) Why might a company prioritizing engagement metrics choose to de-emphasize it?
c) How would you redesign this feature to make it more effective as a stopping cue while remaining practical for a large platform?

Exercise 10: TikTok's "Take a Break" reminder
TikTok allows users to set a "Take a Break" reminder that displays a message after a user-specified amount of time (20, 30, or 40 minutes). When the reminder appears, the user can dismiss it with a single tap.

a) Does this feature constitute meaningful friction? Why or why not?
b) What design changes would make it more effective at its stated purpose?
c) Is this feature an example of genuine user autonomy support or of "ethics theater"? Defend your position.

Exercise 11: Twitter/X's cookie consent banner (EU users)
Under GDPR requirements, Twitter/X presents EU users with a cookie consent interface. The "Accept All" button is prominent and easy to tap; the "Manage Preferences" option requires multiple additional steps and is presented in smaller, lower-contrast text.

a) What does this design tell you about the platform's approach to consent architecture?
b) How does this compare to the GDPR's stated intent?
c) Redesign the consent interface to make it genuinely autonomy-preserving. What tradeoffs does your redesign involve?

Exercise 12: LinkedIn's "notification dot"
LinkedIn displays a persistent red notification dot in its app icon on iOS and Android, even when there are no actual new notifications — the dot appears when there are only "suggested" connections or algorithmic content prompts.

a) What psychological mechanism does this feature exploit?
b) Is this feature consistent with honest consent architecture? Why or why not?
c) What would an autonomy-preserving alternative look like?

Exercise 13: Spotify's "Wrapped" annual feature
Each December, Spotify presents users with a personalized "Wrapped" summary of their listening habits — most-played artists, total hours listened, and so on. The feature is widely shared on social media and is associated with significant spikes in Spotify app opens.

a) Is "Wrapped" an example of attention transparency or engagement manipulation?
b) How does it differ from Apple's Screen Time or Android's Digital Wellbeing dashboards?
c) Could a version of "Wrapped" be designed that genuinely helps users reflect on their listening habits in a beneficial way? What would it look like?


Part C: Design Exercises — Proposing Humane Alternatives

Exercise 14: Redesign the Notification System
Current practice: Most social media apps request notification permission on first launch with a brief, generic prompt ("Allow [App] to send you notifications?"), designed so that accepting is the path of least resistance. Once accepted, the app sends a wide range of notifications — likes, comments, new followers, algorithmic recommendations, re-engagement prompts — with no easy way to selectively disable categories.

Your task: Design a notification system for a social media platform with 50 million users that embodies Time Well Spent principles. Your design should address:

a) When and how the user is asked about notifications
b) What the user is asked (what language, what categories are presented)
c) What the defaults are and why
d) How users can adjust their preferences over time
e) How the platform distinguishes between notifications that serve the user and notifications that serve the platform's engagement goals

Present your design as a brief description (200-300 words) plus a simple sketch or diagram of the key decision points.
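As a thought-starter for part (e), the user/platform distinction can be made concrete in data. The sketch below is a hypothetical illustration (the category names and the serves_user flag are invented for this exercise, not taken from any real platform): categories that primarily serve the user default to on, and categories that primarily serve the platform's engagement goals default to off until the user opts in.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NotificationCategory:
    name: str          # stable identifier
    description: str   # plain-language text shown to the user
    serves_user: bool  # does this primarily benefit the recipient?

# Hypothetical category list for this exercise.
CATEGORIES = [
    NotificationCategory("direct_message", "Someone sent you a private message", True),
    NotificationCategory("reply", "Someone replied to your post", True),
    NotificationCategory("algorithmic_suggestion", "Posts we think you might like", False),
    NotificationCategory("re_engagement", "You haven't opened the app in a while", False),
]

def default_preferences(categories):
    """Autonomy-preserving defaults: user-serving categories start on;
    platform-serving categories start off and require explicit opt-in."""
    return {c.name: c.serves_user for c in categories}

prefs = default_preferences(CATEGORIES)
print(prefs)
# {'direct_message': True, 'reply': True, 'algorithmic_suggestion': False, 're_engagement': False}
```

A design built this way forces every new notification type to declare, at creation time, whose interest it serves, which is exactly the judgment part (e) asks you to make explicit.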

Exercise 15: Redesign the Feed for User Autonomy
Current practice: Instagram, TikTok, and Facebook all use algorithmic feeds optimized for engagement by default. Chronological or user-specified ordering, where it exists at all, is an opt-in feature buried in settings and periodically reset.

Your task: Design a feed ordering system for a social platform that genuinely preserves user autonomy. Your design should address:

a) What the default feed ordering is and why
b) What options users are offered and how prominently they are presented
c) How the system handles users who have not explicitly expressed a preference
d) What information is provided to help users make an informed choice about their feed
e) Whether and how algorithmic recommendations can be included in a way that is transparent and user-controlled

Exercise 16: Design a Humane Onboarding Flow
The first-time user experience ("onboarding") is where platforms make their most consequential design choices about consent and defaults. Most current onboarding flows request maximum permissions (notifications, contacts, location) and set defaults that maximize engagement and data collection.

Design a first-time user onboarding flow for a social platform that embodies autonomy-preserving principles. Your design should:

a) Be completable in under five minutes
b) Honestly explain what data will be collected and why
c) Set defaults that reflect what a user who understood the full implications would choose
d) Make opting out of data collection as easy as opting in
e) Explain what the user gains and loses from different choices

Write out the text of the key screens and decisions in your onboarding flow.

Exercise 17: Design an Attention Budget Tool
You are a product designer at a social media company that has decided to implement genuine attention budget features — tools that help users allocate their time according to their own values, not the platform's engagement goals.

Design this tool. Your design should:

a) Display time-spent data in a way that is meaningful and actionable, not just statistical
b) Allow users to set time preferences that the platform actively helps enforce (not just records)
c) Connect usage data to the user's stated goals (you may design the goal-setting feature as part of this exercise)
d) Handle the tension between user-stated preferences and in-the-moment impulsive behavior in a way that respects autonomy
e) Avoid being paternalistic — the tool should help users do what they say they want, not what the platform thinks is good for them
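One way to make requirement (b) concrete: enforcement can be expressed as a mapping from the user's own usage-to-budget ratio to an escalating but always dismissible response, so the tool enforces the user's stated goal rather than the platform's preference. The sketch below is a hypothetical illustration; the thresholds and action names are invented for this exercise.

```python
def budget_status(minutes_used: float, daily_budget: float) -> str:
    """Map usage against the user's self-chosen daily budget to an action.

    Each step adds friction but remains dismissible, preserving autonomy.
    """
    if daily_budget <= 0:
        raise ValueError("budget must be positive")
    ratio = minutes_used / daily_budget
    if ratio < 0.8:
        return "none"               # stay out of the way
    if ratio < 1.0:
        return "gentle_notice"      # e.g. "10 minutes left of your 45-minute goal"
    if ratio < 1.25:
        return "full_screen_pause"  # meaningful friction: pause plus one-tap continue
    return "session_end_prompt"     # strongest nudge, still dismissible

print(budget_status(30, 45))  # "none"
print(budget_status(50, 45))  # "full_screen_pause"
```

Note how the design choice in part (d) lives in the thresholds: the tool escalates gradually rather than hard-blocking, which is one defensible answer to the autonomy-versus-impulse tension, not the only one.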


Part D: Research Exercises

Exercise 18: Wikipedia's Governance Model
Research Wikipedia's governance structure in depth. Your investigation should address:

a) How does the Wikimedia Foundation's nonprofit structure affect Wikipedia's design choices? What would be different if Wikipedia were a for-profit company?
b) How do Wikipedia's volunteer editors make decisions about content? What are the formal and informal governance structures?
c) How does Wikipedia fund itself? What is the annual revenue, where does it come from, and what does it pay for?
d) What are Wikipedia's stated policies on neutrality, verifiability, and conflict of interest? How are these policies enforced?
e) What are Wikipedia's documented failures and limitations? How do these compare to the failures of advertising-supported information platforms?

Write a 600-800 word analysis based on your research. Your analysis should address: does Wikipedia's governance model prove that large-scale platforms can operate without engagement manipulation? What are the conditions that make this possible, and are those conditions replicable?

Exercise 19: Signal's Privacy Architecture
Research Signal's technical architecture and business model.

a) What specific technical design choices ensure that Signal cannot read users' messages, even under legal compulsion?
b) How is Signal funded? What was Brian Acton's $50 million donation, and what strings (if any) came with it?
c) What is Signal's approach to metadata minimization? What metadata does Signal retain, and why?
d) How did Signal's user base respond to WhatsApp's January 2021 privacy policy changes? What does this response suggest about the demand for privacy-preserving alternatives?
e) What are Signal's limitations as a model for humane design? What would need to be different for Signal's model to scale to a billion users?

Write a 400-600 word analysis of Signal as a case study in privacy-as-design-principle.

Exercise 20: Mapping the CHT's Policy Proposals
Investigate the Center for Humane Technology's current policy recommendations (available at humanetech.com).

a) What are the CHT's current primary policy recommendations?
b) Which of these recommendations target design choices versus business model incentives versus regulation?
c) Which of the CHT's recommendations, if any, have been adopted by platforms? By legislators?
d) What criticisms have been made of the CHT's approach? Who makes these criticisms, and are they persuasive?

Write a 400-600 word analysis of the CHT's policy agenda, evaluating its strengths and limitations.


Part E: Debate and Discussion

Exercise 21: The Subscription Model Debate
Debate question: "It is possible, and desirable, to run a large-scale social platform (100 million+ users) on a subscription model without advertising."

Divide into two groups: one arguing for this proposition, one against.

The affirmative team should address:
- Evidence from existing platforms that operate without advertising (Substack, Signal, etc.)
- What users would pay and why
- How a subscription model changes design incentives
- How to ensure accessibility for users who cannot afford subscriptions

The negative team should address:
- Why advertising-supported platforms have outcompeted subscription platforms historically
- What happens to equity and access under a subscription model
- Why network effects create winner-take-all dynamics that make subscription models structurally disadvantaged
- Whether there is genuine consumer demand for subscription social media at scale

After the debate, discuss: is this a binary choice? What hybrid models might address the limitations of both approaches?

Exercise 22: The Individual Ethics Discussion
Discussion question: "Individual engineers and designers at engagement-maximizing platforms bear meaningful moral responsibility for the harms those platforms produce."

In your discussion, address:
a) What is the strongest version of this claim? What would it mean for individual engineers to bear "meaningful moral responsibility"?
b) What are the structural constraints that limit individual ethical agency inside large institutions?
c) Is there a meaningful difference between engineers who build obviously harmful features (the vulnerability-targeting notification algorithm) and engineers who build neutral infrastructure that enables those features?
d) What obligations, if any, do engineers have to blow the whistle, resign, or take other forms of public action?
e) How does your answer change if you imagine an engineer who is the primary income earner for their family, or an engineer early in their career, or an engineer who believes their internal advocacy is making a marginal difference?

Exercise 23: The "Ethics Theater" Debate
Some critics argue that features like Apple's Screen Time, Instagram's "You're All Caught Up" message, and Twitter's "Read Before You Share" prompts are examples of "ethics theater" — superficial gestures toward user wellbeing that do not address the underlying extractive architecture and may actually serve as cover for continued exploitation.

Write a 400-600 word response to this criticism. Your response should:
a) Define what "ethics theater" means and what would distinguish genuine ethical design from it
b) Evaluate the specific features mentioned — are they ethics theater, genuine improvements, or something more complicated?
c) Address the question: even if these features are incomplete, do partial improvements have value? What is the risk that praising them lets platforms off the hook for more fundamental reform?


Part F: Applied Analysis — The Velocity Media Scenario

Exercise 24: Evaluate Dr. Johnson's Three Paths
Review the three paths Dr. Aisha Johnson presents to Sarah Chen and Marcus Webb in the chapter:

  • Path 1: Cosmetic adjustments (usage dashboard, reduced streak prominence, faster moderation response times)
  • Path 2: Structural redesign of the three identified exploitative features (notification algorithm, streak mechanic, moderation staffing)
  • Path 3: Alignment model (fundamental change in success metrics from DAU/session length to user satisfaction and healthy usage patterns)

Write a 500-700 word analysis that:
a) Evaluates each path against the humane design principles from the chapter
b) Identifies what each path would require in terms of costs, organizational change, and business model implications
c) Argues for which path you believe Velocity Media should take, with specific reasoning
d) Addresses Marcus Webb's concern that Path 3 is "a conversation with our Series B investors I don't know how to win"

Exercise 25: The Hartley Incident Design Post-Mortem
The chapter references the "Hartley incident" — a case in which Velocity Media's recommendation algorithm worsened a teenage user's mental health crisis by continuing to surface depression-related content in a feedback loop the company did not initially address.

Conduct a design post-mortem on this incident. Your post-mortem should:
a) Identify the specific design choices that enabled this outcome (you may extrapolate from what the chapter tells you and from your knowledge of how recommendation algorithms work)
b) Identify the points at which the system could have intervened differently
c) Propose specific design changes that would prevent this type of outcome
d) Address the tension between the changes you're proposing and Velocity's engagement metrics
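For part (b), one concrete intervention point is a concentration check on recently recommended topics: if a sensitive topic comes to dominate a user's recent feed, the recommender diversifies instead of reinforcing the loop. The sketch below is a hypothetical illustration for the exercise; the topic labels, threshold, and function names are invented, not drawn from any real recommender system.

```python
from collections import Counter

# Illustrative list of topics a platform might treat as sensitive.
SENSITIVE_TOPICS = {"depression", "self_harm"}

def should_diversify(recent_topics: list[str], threshold: float = 0.4) -> bool:
    """Return True if any sensitive topic exceeds `threshold` as a share
    of the items recently shown to this user.

    Counter returns 0 for absent topics, so non-sensitive feeds pass through.
    """
    if not recent_topics:
        return False
    counts = Counter(recent_topics)
    return any(
        counts[topic] / len(recent_topics) > threshold
        for topic in SENSITIVE_TOPICS
    )

feed = ["music", "depression", "depression", "sports", "depression"]
print(should_diversify(feed))  # True: depression is 3/5 = 60% of recent items
```

A check like this trades some engagement for safety by design, which is exactly the tension part (d) asks you to confront: the feedback loop it interrupts is often the same loop that maximizes session length.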

Exercise 26: Design Velocity's Minimum Viable Humane Platform
Using the chapter's concept of the "minimum viable humane platform," design a version of Velocity Media that:
a) Retains enough of what makes the platform valuable to users that they continue to use it
b) Embeds humane design principles in its core architecture, not just as optional features
c) Has a viable business model (you may choose from subscription, cooperative, contextual advertising, or a hybrid)
d) Measures success with metrics that include but go beyond engagement

Present your design as a brief product specification (400-600 words) that could be presented to Velocity's board.


Part G: Synthesis

Exercise 27: The Vocabulary Test
Using all five terms from the chapter's vocabulary section (consent architecture, attention budget, autonomy-preserving defaults, meaningful friction, minimum viable humane platform), write a single coherent analysis (500-700 words) of one real platform of your choice. Your analysis should demonstrate that you understand not just what each term means in isolation but how they relate to each other as components of a design philosophy.

Exercise 28: The Designer's Letter
You are a mid-level product designer at a major social media platform. You have read this chapter. Write a 400-600 word internal memo to your product leadership team arguing for one specific design change that would embody humane technology principles. Your memo should:
- Name the specific feature you are proposing or changing
- Make the ethical case for the change using the chapter's vocabulary
- Acknowledge the business model tension your proposal creates
- Propose how to measure whether the change serves its intended purpose
- Anticipate and address the most likely objections

Exercise 29: The Investor Pitch
You are founding a new social platform based on humane design principles. Write a 300-400 word investor pitch that:
a) Describes what the platform does
b) Explains how it embodies humane technology principles
c) Makes the business case: why will users pay, or what will advertisers pay for, and how does this generate a sustainable revenue model?
d) Addresses the network effects problem: how do you compete with established platforms that have billions of users?
e) Explains what metrics you will use to measure success

Exercise 30: The Chapter Synthesis Essay
Write a 600-800 word essay responding to the following question: "The chapter argues that ethical platform design is possible, pointing to Wikipedia, Signal, and Mastodon as proof of concept. But none of these platforms has achieved the scale or social influence of Facebook, TikTok, or Instagram. Does the existence of ethical alternatives actually change anything, or do they remain marginal demonstrations while the extractive platforms continue to dominate?"

Your essay should engage with:
- The specific evidence the chapter presents
- The structural arguments about business models and network effects
- Your own assessment of whether "proof of concept" at small or medium scale is sufficient or whether scale is itself the point
- What combination of design change, regulatory pressure, and market dynamics would be required to shift the dominant model