Capstone Project 3: Designing a Humane Feature
From Concept to Ethical Analysis
Project Overview
The critique is not the hardest part. Cataloging what is wrong with platform design — identifying the dark patterns, tracing the mechanisms, noting the harms — is demanding work, but it operates in the register of analysis. The harder work is the follow-up question: What would better look like?
That question is harder because it requires you to hold multiple constraints simultaneously. A humane design cannot simply ignore the fact that platforms need to be financially viable. It cannot pretend that users do not have genuine social needs that platforms serve, however imperfectly. It cannot assume that the simple removal of manipulation automatically produces something good. And it cannot afford the luxury of abstraction — "design for wellbeing" is a slogan, not a specification.
This project asks you to design something specific: a single feature, for an existing or fictional platform, that genuinely serves user wellbeing rather than engagement maximization. You will define the user need it addresses, research how real users experience that need (or design a plan for how you would research it), develop the feature concept in enough detail to evaluate it, analyze it against ethical frameworks, stress-test it against business model realities, and present it for critique.
The goal is not a perfect design — there is no such thing. The goal is a rigorous process for getting closer to one.
What "Humane Design" Actually Means
The phrase gets used loosely. For the purposes of this project, a feature qualifies as humane if:
- It serves a user need the user can recognize and articulate.
- Its primary mechanism of value to the user does not depend on exploitation of cognitive biases or reward prediction error.
- It is honest about what it is doing — there is no hidden mechanism producing an effect the user did not consent to.
- Its long-term effect on the user's relationship to the platform, and to technology generally, is neutral or positive.
- It does not transfer harm — removing one manipulation tactic while increasing others, or improving one user group's experience at the expense of another's.
A feature does not have to be unengaging to be humane. Engagement is not the problem; the problem is engagement produced through manipulation. A feature that keeps users engaged because it genuinely delivers value is exactly the right outcome.
Learning Goals
By the end of this project, you will be able to:
- Define "humane design" with enough specificity to apply it evaluatively.
- Conduct or design a user research plan grounded in genuine user needs.
- Develop a feature concept that addresses a documented user need without manipulative mechanisms.
- Apply the ethical frameworks from Chapter 39 to systematically evaluate a design proposal.
- Analyze the financial viability of a humane feature concept without engagement maximization.
- Present and defend a design concept against substantive critique.
Phase 1: Problem Definition (Days 1–4)
Choosing Your Problem
The most common failure in design projects is starting with a solution and working backward to a problem. Resist this. Start with a genuine friction, limitation, or unmet need in how people use social platforms. Your feature will be far stronger if it addresses something real.
Sources for Problem Identification
Draw on any combination of the following:
Your own experience. What frustrates you about the platforms you use? Not in a generic "social media is bad" sense, but specifically — what specific thing do you find yourself wanting that the platform does not give you, or wanting to avoid that the platform makes difficult?
The book's analysis. The research and case studies throughout this book document specific user harms and unmet needs. Chapter 30 documents mood impacts; Chapter 31 documents adolescent identity struggles; Chapter 32 documents exposure to polarizing content users report not wanting; Chapter 34 documents the specific pressures on creators. Any of these can be reframed as a design problem.
Publicly available user research. Many platforms publish user research or researchers publish it independently. Qualitative studies of platform experience routinely surface specific frictions and unmet needs.
News and regulatory filings. Internal platform documents that have become public through whistleblowers, litigation, or regulatory proceedings often contain user research that did not result in design changes — frequently because the change conflicted with engagement metrics. These documents sometimes describe user needs directly.
Problem Definition Statement
Write a 300-word problem definition statement that answers four questions:
- Who experiences this problem? Be specific about the user population. "All users" is not a useful answer. "Users aged 13–17 who use Instagram as their primary social communication channel" is useful.
- What is the specific experience of the problem? Describe the user experience concretely — what happens, how it feels, what the user wants to do but cannot, or what they are pushed toward but do not want.
- What current design features produce or worsen this problem? Connect to specific platform design choices (and where relevant, to the dark pattern taxonomy from Part III).
- What would it mean to solve this problem? Not the solution — just the success condition. If you solved this problem, what would be different about the user experience?
Precedents and Prior Attempts
Before designing, research whether this problem has been addressed before — either by the platform itself (A/B tests that were not deployed, features that were launched and retracted) or by alternative platforms, nonprofit design initiatives, or academic prototypes. What happened? Why did it succeed, stall, or fail? This context will sharpen your thinking.
Phase 2: User Research Plan (Days 4–7)
The Role of User Research in Humane Design
One of the most consistent criticisms of engagement-maximizing design is that it uses user data in aggregate — optimizing for population-level metrics — without understanding individual user experience. Humane design inverts this priority. It starts from how users actually experience the platform and what they actually want, not from what the engagement data says they respond to.
This phase asks you to design a user research plan. Depending on your course context, you may execute part of it (conducting interviews or surveys with fellow students, for instance) or develop it as a specification for hypothetical future research. Either is acceptable; your plan should be clearly labeled as executed research or proposed research.
Research Method 1: Interviews
Design a semi-structured interview protocol for 5–8 participants who fit your target user population. A semi-structured interview has a defined set of key questions but leaves room for follow-up based on what participants say.
Your interview guide should:
- Open with questions about general platform use (not the specific problem yet).
- Move to the specific friction or unmet need you identified.
- Ask participants to describe specific incidents rather than general patterns ("tell me about a time when..." produces richer data than "do you ever feel...").
- Include at least one question that challenges your assumptions (if you believe users find a feature frustrating, ask one question that gives them space to explain why they might value it).
- Close with a forward-looking question about what they would want to be different.
If you conduct these interviews, document: the interview protocol, a brief summary of each participant (demographic details, platform use context), and the key themes across the interviews.
If this is proposed research, document the protocol and explain what you would expect to find and why.
Research Method 2: Survey or Observational Study
Design a brief survey (8–12 questions) or observational study protocol that would complement the qualitative interviews with broader data. The survey or observation should:
- Address the same core problem but from a different angle.
- Be administrable to a larger sample than the interviews.
- Include at least one quantitative measure (frequency, duration, ratings) alongside qualitative items.
If you conduct the survey (even informally with classmates), report the results. If this is proposed research, document the instrument and the expected analytic approach.
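If you do collect real responses, even a small informal sample is easier to report when the quantitative item is summarized consistently. The sketch below is a minimal, hypothetical example of summarizing one 1–5 rating item; the item wording, data, and variable names are invented for illustration, not drawn from any instrument in this book:

```python
# Hypothetical sketch: summarize responses to one quantitative survey item
# (e.g., a 1-5 rating of how often users feel pulled back to the app after
# intending to stop). All data below is invented example input.
from statistics import mean, median
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]  # one rating per participant

summary = {
    "n": len(responses),                                   # sample size
    "mean": round(mean(responses), 2),                     # central tendency
    "median": median(responses),                           # robust to outliers
    "distribution": dict(sorted(Counter(responses).items())),  # full spread
}
print(summary)
```

Reporting the distribution alongside the mean matters for small samples: a mean of 3.9 could hide a polarized split that the interview data would need to explain.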
Synthesizing Research Findings
Write a 400-word synthesis of what your research (executed or proposed) reveals about the user need. This synthesis should:
- State the need clearly and specifically, in language derived from research rather than designer assumptions.
- Note any ways the research complicated or revised your initial problem definition.
- Identify any tensions between different users' needs (rarely is a problem uniform across all users).
- State the design implication: given this need as you now understand it, what kind of feature would address it?
Phase 3: Design Development (Days 7–14)
The Design Brief
Before developing the feature in detail, complete the Design Brief Template (see templates section). The brief is a one-page specification that forces clarity before you invest in detailed design. Completing it early and returning to it often will keep your design grounded in the problem it is meant to solve.
Feature Concept Development
Develop your feature concept through three iterations, each more detailed than the last.
Iteration 1: Concept Sketches (Days 7–8) Generate three to five different approaches to addressing the user need. These do not need to be detailed — they are divergent ideas meant to explore the solution space before converging. For a textual treatment, describe each in two to three sentences. The goal is to avoid fixating on the first good idea.
For each concept sketch, note: - The core mechanism: how does this feature serve the user need? - The key assumption: what would have to be true for this to work? - The key risk: what could go wrong?
Iteration 2: Developed Concept (Days 9–11) Choose one concept to develop in more detail. At this stage, describe:
The feature at a high level: What is it? How does a user encounter it? What does it do?
The user flow: Walk through the user experience step by step. What does the user see first? What actions do they take? What does the system do in response? How does the interaction end or continue?
Interface description: Since this is a text-based project rather than a visual design project, describe the interface elements in enough detail that a designer could sketch them. Where does this feature appear in the platform hierarchy? What controls does the user have? What information does the system display?
Key design decisions and rationale: Every significant design choice involves a tradeoff. For each major decision in your design, explain what alternatives you considered and why you chose what you chose.
Iteration 3: Refined Design with Humane Design Criteria (Days 12–14) Return to the humane design criteria from the Project Overview and evaluate your developed concept against each one:
- Does it serve a user need the user can recognize and articulate?
- Is its mechanism of value free from exploitation of cognitive biases or reward prediction error?
- Is it honest about what it is doing?
- Is its long-term effect on the user's relationship to technology neutral or positive?
- Does it avoid transferring harm to other users or other contexts?
For any criterion where your design does not fully meet the standard, either revise the design or document clearly why the tradeoff is acceptable.
Phase 4: Ethical Analysis (Days 14–18)
The Chapter 39 Framework
Chapter 39 (Design Ethics and Humane Technology) presents a multi-lens ethical framework for evaluating design decisions. Apply each lens to your feature.
Consequentialist Analysis Who benefits from this feature? To what degree? Who might be harmed, and how significantly? Are there second-order effects — ways the feature could produce unintended consequences at scale? Consider: what would happen if this feature were deployed to ten million users? To a hundred million? Does it scale well or do second-order effects become problematic at scale?
Deontological Analysis Does this feature treat users as ends in themselves rather than means to the platform's financial goals? Does it respect user autonomy — their capacity to make informed choices about their own behavior? Does it require deceiving users, even benevolently, about how it works?
Virtue Ethics Analysis If a platform consistently designed features with this feature's properties, what kind of institution would it become? What virtues does this design embody or cultivate in users?
Justice Analysis Who benefits most from this feature and who benefits least? Are there users who cannot access it, or for whom it is less effective? Does it widen or narrow existing inequalities in how platform benefits and harms are distributed?
Identifying Potential Harms
Even well-intentioned designs can produce harm. Using the harm categories from Chapter 39, identify and evaluate potential harms:
Direct user harms: Could this feature, in any usage scenario, make a user worse off in a way the feature was not designed to produce?
Population-level harms: Could widespread adoption produce effects that individual use would not?
Displacement harms: Does this feature improve one experience by degrading another (for the same user or for other users)?
Misuse potential: Could a bad actor — the platform, a harassing user, a government — misuse this feature in ways you did not intend?
For each potential harm you identify, assess its likelihood, severity, and whether a design modification could mitigate it.
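When several harms are on the table, a simple scoring matrix can make the likelihood-and-severity assessment comparable across them. The sketch below is one illustrative way to do this; the 1–3 scales, the example harms, and the mitigation threshold are all assumptions for demonstration, not a framework from Chapter 39:

```python
# Hypothetical sketch: rate each identified harm on 1-3 scales for likelihood
# and severity, multiply into a priority score, and flag harms at or above a
# chosen threshold for a design response. Harms, scores, and the threshold
# are invented examples.
harms = [
    # (harm description, likelihood 1-3, severity 1-3)
    ("Feature nags users who opted out", 2, 1),
    ("Harassers weaponize the visibility control", 1, 3),
    ("Heavy users lose access to a valued workflow", 3, 2),
]

MITIGATION_THRESHOLD = 4  # scores at or above this need a design modification

def prioritize(harms):
    """Return (description, likelihood * severity) pairs, highest score first."""
    scored = [(desc, l * s) for desc, l, s in harms]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for desc, score in prioritize(harms):
    flag = "MITIGATE" if score >= MITIGATION_THRESHOLD else "monitor"
    print(f"{score}  {flag:8s}  {desc}")
```

The numbers are not the point — the point is forcing an explicit judgment for each harm rather than a vague sense that some are "worse" than others, and documenting that judgment so critics can challenge it.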
Phase 5: Business Model Analysis (Days 18–22)
The Viability Question
The most common objection to humane design proposals is the business model objection: this would reduce engagement, and engagement is how we make money. This objection deserves engagement rather than dismissal. Some humane design proposals genuinely cannot be reconciled with current platform revenue models. Acknowledging this is more useful than pretending otherwise.
But the objection is also often deployed too broadly — used to foreclose any consideration of change rather than to evaluate specific proposals. Many humane design changes are not in fundamental tension with financial viability. Some actually improve it.
Analyzing Your Feature's Business Model Implications
Answer each of the following questions for your feature:
Effect on engagement metrics: Would this feature increase, decrease, or not meaningfully affect the platform's core engagement metrics (daily active users, time on platform, interaction rate)? Be specific about which metrics and why.
Effect on user trust: Chapter 4 established that user trust is itself a business asset — eroded trust produces user departure, regulatory scrutiny, and advertiser pressure. Would this feature affect user trust in this platform? How significantly?
Effect on advertiser relationships: If the platform is advertising-dependent, would this feature change anything about advertiser value? Could it make the platform's audience more or less valuable to advertisers? (Note: a more trusting, less burned-out user base is often more valuable per-hour to advertisers, even if total hours decline.)
Direct revenue potential: Could this feature itself generate revenue? Subscription upsell? Enhanced user data quality? Creator tools that generate platform revenue sharing?
Alternative revenue models: Are there alternative revenue models that would make this feature more — rather than less — rational to deploy? Chapter 38 and Chapter 39 both discuss structural alternatives to the attention economy business model.
Precedent: Has any platform made a similar humane design choice? What were the business outcomes? This is the most powerful argument available — evidence that a specific type of humane design is compatible with financial viability.
The Honest Assessment
End this phase with a 200-word honest assessment: given everything above, how commercially viable is your feature? Who would need to be convinced, by what arguments, for it to be deployed? If it is not commercially viable under current platform economics, under what conditions would it become viable?
Intellectual honesty here is worth more than optimism.
Phase 6: Presentation and Critique (Days 22–28)
Preparing the Presentation
Your final deliverable is a written design document — the consolidated output of all five previous phases. But for many course implementations, this phase will also involve an oral presentation and critique session. The guidance here addresses both.
Design Presentation Structure (oral, 15–20 minutes)
Opening — The Problem (3 min): State the problem and why it matters. Ground it in research, not assertion.
The Feature (5 min): Walk through the feature — what it is, how users encounter it, what it does, why key design decisions were made the way they were.
The Analysis (5 min): Present the highlights of the ethical analysis rather than every lens in full. What potential harms did you identify and how did you address them?
The Business Case (3 min): The viable path to deployment. Honest about obstacles.
The Invitation (2 min): What critique are you most uncertain about? What aspects of the design do you most want the audience to challenge?
Responding to Critique
The final phase of this project is a critique session. This is not a defense — you are not trying to protect the design from criticism. You are trying to understand what criticism reveals about the design's limitations and whether those limitations require fundamental revision or can be addressed through modification.
When you receive a critique, your first response should be to understand it fully before evaluating it. Ask clarifying questions. Then:
- If the critique reveals a genuine flaw you had not seen, acknowledge it directly and think out loud about what revision would address it.
- If the critique is based on a factual misunderstanding, correct the misunderstanding clearly but without dismissiveness.
- If the critique raises a tradeoff you had considered and decided to accept, explain the reasoning for that decision.
- If the critique raises a tradeoff you had not considered, say so.
Defensiveness in a critique session is a sign that the design has become more important than the problem. The problem is always more important than the design.
Design Brief Template
DESIGN BRIEF
Feature Name: _________________________
Platform (existing or fictional): _________________________
Designer: _________________________ Date: _________________________
THE USER:
(Who is the specific user population this feature serves?)
___________________________________________________________________________
THE PROBLEM:
(What specific friction, harm, or unmet need does this address?)
___________________________________________________________________________
___________________________________________________________________________
THE DARK PATTERN BEING REPLACED:
(What manipulative design element does this feature substitute for or displace?)
Pattern name: _________________________
Chapter source: ____
THE FEATURE IN ONE SENTENCE:
___________________________________________________________________________
HOW IT SERVES THE USER:
(What does the user get from this that they don't have now?)
___________________________________________________________________________
___________________________________________________________________________
HOW IT AVOIDS MANIPULATION:
(What makes this different from the current design in its mechanism of value?)
___________________________________________________________________________
___________________________________________________________________________
SUCCESS CRITERIA:
(How would we know if this feature is working? What would we measure?)
1.
2.
3.
KEY ASSUMPTIONS:
(What would have to be true for this to work?)
1.
2.
3.
BIGGEST RISK:
(What is the most likely way this feature fails or causes unintended harm?)
___________________________________________________________________________
Ethical Analysis Framework Worksheet
ETHICAL ANALYSIS WORKSHEET
Feature Name: _________________________
CONSEQUENTIALIST ANALYSIS:
Direct benefits (who, what, how much):
___________________________________________________________________________
Potential harms (who, what, likelihood, severity):
___________________________________________________________________________
___________________________________________________________________________
Second-order effects at scale:
___________________________________________________________________________
Net assessment: Positive / Negative / Mixed / Uncertain
Reasoning: _________________________
DEONTOLOGICAL ANALYSIS:
Does this feature treat users as ends rather than means? Y / N / Partially
Explanation: _________________________
Does it respect user autonomy and informed consent? Y / N / Partially
Explanation: _________________________
Does it require any deception? Y / N / Partially
Explanation: _________________________
Deontological assessment: Passes / Fails / Requires modification
Modification needed (if any): _________________________
VIRTUE ETHICS ANALYSIS:
What virtues does this design embody?
___________________________________________________________________________
What kind of institution would consistently make this kind of design choice?
___________________________________________________________________________
Does this design cultivate or diminish virtues in users?
___________________________________________________________________________
Virtue ethics assessment: _________________________
JUSTICE ANALYSIS:
Who benefits most? _________________________
Who benefits least? _________________________
Does this widen or narrow existing inequalities? _________________________
Specific affected populations: _________________________
Justice assessment: _________________________
HARM IDENTIFICATION:
| Harm Type | Specific Harm | Likelihood | Severity | Mitigation |
|-------------------|---------------|------------|----------|------------|
| Direct user harm | | | | |
| Population-level | | | | |
| Displacement | | | | |
| Misuse potential | | | | |
OVERALL ETHICAL ASSESSMENT:
Does this feature meet the humane design criteria? Y / N / Partially
Primary ethical strength: _________________________
Primary ethical concern: _________________________
Design revision recommended: Y / N
If yes, describe: _________________________
___________________________________________________________________________
Evaluation Rubric
Criterion 1: Problem Definition and Research (15 points)
| Score | Description |
|---|---|
| 13–15 | Problem definition is specific, user-centered, and grounded in evidence; identifies the user population precisely; connects to specific platform design features and Part III dark patterns; user research plan is methodologically sound and addresses design implications; synthesis is specific and revision-aware. |
| 10–12 | Problem definition is specific; user research plan is reasonable; synthesis connects research to design direction. |
| 7–9 | Problem definition present but vague; user research plan is thin or not connected to design; synthesis largely restates the problem. |
| 3–6 | Problem definition generic; user research minimal or absent. |
| 0–2 | Problem definition missing or entirely designer-centered rather than user-centered. |
Criterion 2: Feature Design Quality (25 points)
| Score | Description |
|---|---|
| 22–25 | Feature concept clearly addresses the defined user need; user flow is fully described and coherent; interface elements specified in sufficient detail; key design decisions explained with explicit tradeoff reasoning; three iterations show genuine development and refinement; humane design criteria evaluated specifically. |
| 17–21 | Feature addresses the user need; user flow mostly complete; interface described adequately; most design decisions explained; evidence of iteration. |
| 12–16 | Feature relevant to problem but may not fully address it; user flow incomplete; limited design decision rationale; limited evidence of iteration. |
| 6–11 | Feature concept underdeveloped; user flow sketchy or absent; design decisions unexplained. |
| 0–5 | Feature concept missing or not related to defined problem. |
Criterion 3: Ethical Analysis Depth (25 points)
| Score | Description |
|---|---|
| 22–25 | All four ethical frameworks applied specifically to the feature; potential harms identified across all four categories; likelihood and severity assessed; design modifications proposed for non-trivial harms; analysis demonstrates genuine engagement with tensions rather than confirmation-seeking. |
| 17–21 | Three or more frameworks applied with reasonable specificity; most harm categories addressed; design modifications proposed for identified harms. |
| 12–16 | Two frameworks applied; some harm identification; limited design modification. |
| 6–11 | Ethical analysis present but surface-level; frameworks named but not applied. |
| 0–5 | Ethical analysis absent or pro forma. |
Criterion 4: Business Model Analysis (20 points)
| Score | Description |
|---|---|
| 18–20 | Effect on each relevant business metric analyzed specifically; trust and advertiser relationship considered; alternative revenue potential evaluated; precedent cited if available; honest assessment of viability acknowledges genuine obstacles without either dismissing the concept or dismissing the obstacles. |
| 14–17 | Main business metrics addressed; viability assessed with some specificity; honest about at least major obstacle. |
| 10–13 | Business model analysis present but incomplete or superficial; viability either over-optimistic or dismissed without analysis. |
| 5–9 | Business model analysis thin; viability not seriously engaged. |
| 0–4 | Business model analysis absent. |
Criterion 5: Presentation and Critique Response (15 points)
| Score | Description |
|---|---|
| 13–15 | Presentation clear, structured, and appropriately timed; critique responses demonstrate genuine engagement — acknowledging flaws, distinguishing genuine critiques from misunderstandings, thinking aloud about revisions; final document reflects learning from critique process. |
| 10–12 | Presentation coherent; most critique responses substantive; some evidence of critique-informed revision. |
| 7–9 | Presentation adequate; critique responses mostly defensive or dismissive. |
| 3–6 | Presentation disorganized; minimal engagement with critique. |
| 0–2 | Presentation not completed or critique session not engaged. |
Total: 100 points
A footnote on the history of this work. The humane technology movement is not new. Researchers have been arguing for user-centered design since the 1970s. What is new is the scale of the manipulation that has emerged from engagement-maximizing platforms, and the urgency of the alternatives. The designers who will change this landscape are, right now, learning to think through exactly the process this project asks of you: starting from user needs, holding ethical constraints alongside business constraints, proposing specific workable improvements, and defending them honestly against genuine critique. That is the training. The problem it prepares you for is real.