
Estimated time: 3–4 weeks | Deliverables: Critical analysis report, design reform proposal, class presentation

Capstone 2: Deconstruct a Dating App

Critical Technology Analysis



Overview

In Chapter 20, Nadia, Sam, and Jordan sat down and compared their dating app experiences — and what they found was striking. They were nominally using the same platform, the same interface, the same algorithm. But their experiences were categorically different. Nadia navigated questions about her ethnicity and religious background that appeared within minutes of matching. Sam encountered racialized hierarchies that the platform's interface did nothing to discourage. Jordan found that the binary gender options available did not adequately represent their identity, and that the app's matching logic appeared to treat them as an edge case — an afterthought.

The app, in other words, was not neutral. It never is.

This capstone asks you to do something that is increasingly urgent in a world where a large percentage of new relationships begin online: take a dating app apart and look at what is actually inside it. Not the marketing copy, not the testimonials, not the algorithm as the company describes it — but the actual design logic, the incentive structures, the representation choices, the safety architecture, and the assumptions about human desire baked into every swipe.

You will apply five analytical frameworks developed across the course, produce a substantial written analysis, and close with a concrete design reform proposal. The goal is not to produce a consumer review ("I gave it three stars because the interface is clunky"). The goal is to produce a piece of critical scholarship — to use the tools of psychology, sociology, and ethics to understand what this technology does to the people who use it, and why.


Learning Objectives

By completing this capstone, you will be able to:

  1. Apply course frameworks systematically to a real-world technology artifact, moving from theoretical concepts to concrete analytical claims about a specific platform.
  2. Conduct a design logic analysis, identifying how interface choices shape user behavior and what assumptions about desire those choices encode.
  3. Analyze questions of identity and representation in a technology context, identifying who is served well and who is marginalized by a platform's design.
  4. Apply the commodification framework developed in Chapter 20 and elsewhere to a specific platform's business model, identifying how market logic shapes the experience of intimacy.
  5. Evaluate safety and consent architecture as design decisions, not afterthoughts.
  6. Produce a design reform proposal grounded in critical analysis — moving from diagnosis to prescription.
  7. Communicate scholarly analysis publicly in a class presentation.

Part I: Background — Why Technology Critique Matters for Attraction Science

We did not always meet our romantic partners through apps. As recently as the early 2000s, the majority of couples met through mutual friends, at work, at school, or through family networks. The shift to online and app-based meeting has been rapid, significant, and not well understood.

What we do know is that the shift matters. Research by Bruch and Newman (2018), using data from a major dating platform, found that desirability on dating apps is highly concentrated — a small number of profiles attract the vast majority of messages — and that this concentration maps onto racialized and gendered hierarchies in ways that are statistically robust and socially troubling. Research on racial preferences in dating, examined in Chapter 25 through the Swipe Right Dataset and the Okafor-Reyes supplemental study, shows that racial filtering and preference expression that would be considered discriminatory in other social contexts is not only permitted but facilitated by app design.

At the same time, apps have undeniably expanded access to romantic connection for many people. They have made it easier to find other LGBTQ+ individuals in contexts where they might be invisible in everyday life. They have connected people across geographic distances. They have, for some users, been genuinely transformative.

The question is not "are dating apps good or bad?" That is too simple a question for a course that has spent a semester learning to distrust simple answers. The question is: What specific design choices, made by whom, for what economic reasons, produce which effects, for which populations? That question is tractable. That question has answers.


Part II: How to Choose Your App

You may analyze any real, currently operating dating or relationship app. However, some apps are better suited to this project than others.

Apps that work particularly well for this analysis:

  • Tinder — The industry standard; enormous literature; highly visible design logic; well-documented business model; raises all five analytical frameworks in stark form.
  • Hinge — Interesting design philosophy (explicitly designed to be "deleted"); raises questions about whether anti-addictive design is compatible with a growth business model.
  • Bumble — Gender-role interventions in design (women initiate in heterosexual matches); complex feminist claims that repay critical scrutiny.
  • Grindr — LGBTQ+ specific; racial filtering controversy; safety concerns; very different design assumptions about the nature of the encounters it facilitates.
  • OkCupid — Extensive profile system; algorithmic transparency (relative to competitors); interesting history of experimentation and controversy.
  • Her — Queer women and nonbinary users; identity representation questions; smaller market; interesting contrast with mainstream apps.
  • Coffee Meets Bagel — Asian American founding team; explicit counter-positioning against Tinder's swipe-volume model; interesting design philosophy.
  • Feeld — Designed for ethically non-monogamous and kink-adjacent users; raises questions about normative assumptions in mainstream design.

Apps to avoid or use with caution:

  • Apps with very limited public information about their design, business model, or user experience (you need sources).
  • Niche apps with very small user bases that have not received scholarly or journalistic attention.
  • Apps that have been discontinued.

What you will need:

  • Firsthand experience with the app, or documented accounts from users. You do not need to be actively dating to use an app for analytical purposes — many people create accounts specifically to study the platform.
  • Published reporting, scholarship, or company documents about the app.
  • The course's frameworks (this capstone guide and the relevant chapters).

You do not need to interview real users for this project, though you may draw on published accounts, journalism, or academic studies that include user testimony.


Part III: Step-by-Step Analysis Framework

Your critical analysis report should work through each of the following five frameworks in order. The frameworks build on each other: design logic sets up representation, which sets up commodification, which sets up safety, which sets up the algorithmic question. Think of them as nested layers of analysis, not five separate topics.

Each framework section in your report should be approximately 500–700 words.


Framework 1: Design Logic

What behaviors does the interface incentivize? What assumptions about desire does the design encode?

This framework asks you to treat the app as a designed artifact — a set of intentional choices made by engineers and product designers — and to ask what those choices produce in the people who use them.

Core questions:

  • What is the primary user action the interface is built around? (Swipe? Like? Message? "Star"? "Boost"?) What does that action assume about how people evaluate potential partners?
  • What information does the app foreground, and what does it bury? (Is the first thing you see a photo? An income range? A compatibility score? Height?) What theory of attraction does that hierarchy reflect?
  • How does the interface handle uncertainty and ambiguity? Dating involves enormous ambiguity; does the design acknowledge this or does it try to resolve it with false precision?
  • What does the app do to keep you using it? What behavioral design patterns — notifications, reward schedules, artificial scarcity — does it employ? (See Chapter 20 for discussion of variable reward schedules in app design.)
  • Does the interface make it easy or hard to end interactions gracefully? What does the "ghosting problem" look like as a design failure?

💡 Key Insight: Drawing on what you learned in Chapter 20, consider the concept of "dark patterns" in UX design — interface choices that exploit psychological biases to keep users engaged in ways that may not serve their stated goals. The questions "does this app help me find a partner?" and "does this app maximize my engagement time on the platform?" have very different answers, and those answers may be in conflict.

What strong analysis looks like here: Rather than "the app uses photos," you write: "By foregrounding a single profile photo before any other information, Tinder encodes a theory of initial attraction that prioritizes visual assessment over all other compatibility signals. This design choice is consistent with evolutionary accounts of mate selection emphasizing physical appearance (Buunk et al., 2008), but obscures evidence — from both Okafor-Reyes preliminary data and attachment research in Chapter 11 — that long-term compatibility is predicted more strongly by values alignment and attachment style than by initial physical response."


Framework 2: Identity and Representation

Who does this app serve well? Who does it serve poorly? Whose experience is treated as the default?

This framework asks you to apply intersectionality — the framework introduced in Chapter 23 and developed throughout Part V — to the design of the platform. All technology has an implicit model of its user. Whose experience was assumed in the design of this app?

Core questions:

  • What gender options does the app provide? Are they binary or more expansive? What matching logic does it use, and does that logic serve non-binary and transgender users? Does the app require users to specify a gender, and what happens if they do not fit the available categories?
  • What racial and ethnic representation is present in the app's marketing materials, design examples, and published descriptions? Who is the implicit "default user"?
  • Does the app allow racial filtering? (Some allow users to filter potential matches by race or ethnicity; others do not.) What is the platform's stated rationale? What does the academic literature on racial preference in dating (Chapter 25) suggest about the consequences of allowing or prohibiting this feature?
  • How does the app handle disabilities, body size, and other forms of appearance-based identity?
  • Is the app designed primarily for hookups, casual dating, or long-term partnership? What does that implied relationship goal assume about its users?

🔵 Ethical Lens: Sam's experience in Chapter 20 — encountering racialized desirability hierarchies that the platform did nothing to discourage — is not unusual. Hutson, Taft, Barocas, and Levy (2018) have documented how dating app design can both reflect and reinforce racial hierarchies. Your analysis should engage with this literature.


Framework 3: Commodification Analysis

How does this app apply market logic to desire? What does it sell, and who is the product?

Chapter 20 introduced the concept of commodification of intimacy: the process by which market logic transforms romantic connection from a social relationship into an economic transaction. Dating apps are perhaps the clearest current example of this process. Your job here is to make that process specific and concrete for your chosen app.

Core questions:

  • What is the app's revenue model? Does it charge users directly (subscription), indirectly (advertising), or through a freemium model where basic access is free but premium features cost money? What are those premium features, and what does the existence of a pay tier imply about the "free" experience?
  • What data does the app collect about users, and how does it use that data? (Check the app's privacy policy — this is a legitimate research source.) Is the user a customer or a product?
  • Does the app create artificial scarcity? (Tinder's "Super Likes," Hinge's "Roses," and many other premium features work by making an apparently more emphatic signal of interest available only for purchase — what does this do to the experience of expressing genuine interest?)
  • How does the app's business model align with or conflict with its stated mission? Hinge's stated mission is to "be deleted" — to get users into relationships and off the platform. How is that compatible with a subscription business model that benefits from ongoing usage?
  • Drawing on Chapter 3's discussion of the PUA industry, how does your app's premium tier relate to the commercial exploitation of social anxiety? (Many apps sell features explicitly marketed at increasing your "desirability score" or "match rate" — what does that framing do?)

📊 Research Spotlight: In the Swipe Right Dataset (Appendix C), subscription_tier is significantly correlated with match_rate and message_response_rate, but the effect sizes are modest. This raises the question: do premium features actually produce better outcomes, or do they primarily monetize the anxiety of users who feel their organic results are inadequate?
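The kind of correlation check described in the Research Spotlight can be made concrete in a few lines. The sketch below uses the column names given above (subscription_tier, match_rate) but generates synthetic data with a deliberately modest effect; it is an illustration of the analysis pattern, not the actual Appendix C dataset.

```python
# Illustrative sketch: does premium status correlate with match outcomes?
# Synthetic data stands in for the Swipe Right Dataset; the column names
# follow the codebook described in the text, the values do not.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500
tier = rng.integers(0, 2, n)  # 0 = free, 1 = premium (assumed coding)

# Build in a small premium "bump" so the effect is real but modest
match_rate = np.clip(0.05 + 0.02 * tier + rng.normal(0, 0.03, n), 0, 1)

df = pd.DataFrame({"subscription_tier": tier, "match_rate": match_rate})

# Pearson r on a 0/1 variable is the point-biserial correlation
r = df["subscription_tier"].corr(df["match_rate"])
print(f"correlation between tier and match rate: {r:.3f}")
```

A statistically significant but modest r is exactly the pattern the Research Spotlight describes: real, measurable, and yet far too small to justify the marketing framing of premium features.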


Framework 4: Safety and Consent Architecture

How does this app handle safety and consent — as fundamental design priorities or as afterthoughts?

Dating apps carry specific safety risks that distinguish them from other social platforms. They facilitate meetings between strangers who may have romantic or sexual intentions. They aggregate location data. They create asymmetric information environments where one user may know much more about another than the other realizes. How does your app handle these risks?

Core questions:

  • What identity verification does the app require or offer? Can users authenticate their identity with a phone number, government ID, or social media link? What are the trade-offs between verification (greater safety, less privacy) and anonymity (greater privacy, less safety)?
  • How does the app handle reports of harassment, assault, or abusive behavior? What reporting tools are available? What does the app say it does with those reports? (Check published policy documents and journalism about moderation practices.)
  • Does the app have any proactive safety features — tools that help users make safer meeting decisions, detect suspicious patterns, or access safety resources? (Tinder introduced a safety check-in feature in 2021; several apps have partnered with organizations that run background checks.) Are these features prominent in the interface or buried?
  • What does the app do about consent? Are there any interface-level nudges toward asking for consent, communicating expectations about a meeting, or establishing shared understanding? Or does the app treat what happens outside the platform as simply not its concern?
  • Consider the concept of "context collapse" from Chapter 20: dating apps aggregate people from many different social contexts into a single space. What are the consent implications of this aggregation?

⚠️ Critical Caveat: Safety concerns are not equally distributed. Research consistently shows that women, LGBTQ+ users (particularly transgender women), and users of color face disproportionate harassment and safety risks on dating platforms. Your analysis should attend to whether your app's safety architecture is equally protective across its user base.


Framework 5: Algorithmic Logic

What do we know about how this app matches users? What assumptions about desire does the matching algorithm encode?

Every dating app uses some algorithmic logic to determine which profiles are shown to which users, in what order, and how prominently. These algorithms are among the most influential matchmakers in contemporary society — and they are almost entirely opaque.

Core questions:

  • What does the app say publicly about how its algorithm works? (Most apps provide some level of public description; the accuracy and completeness of these descriptions varies enormously.)
  • What does independent research or investigative journalism reveal about the algorithm? (Journalists and researchers have successfully reverse-engineered or documented algorithmic behavior for several major platforms.)
  • What signals does the algorithm appear to use? Profile completeness? Swipe patterns? Response rates? Activity frequency? What does each of these signals imply about who gets promoted and who gets buried?
  • Does the app use what it calls a "compatibility score" or "desirability score"? (Tinder's "Elo score" — discontinued in its original form but replaced by a similar system — is the most documented example.) What are the consequences of algorithmic desirability scoring?
  • How does the algorithm interact with the demographic patterns you identified in Frameworks 2 and 3? Does the algorithm amplify existing racial or gender hierarchies, or does it mitigate them? What would it mean to design an algorithm that actively disrupted those hierarchies?
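For the "desirability score" question, it helps to see how simple the underlying mechanism can be. The sketch below is the standard chess Elo formula, which is the mechanism reportedly behind Tinder's discontinued score; it is not the platform's actual code, and K=32 and the scale factor 400 are conventional defaults assumed for illustration.

```python
# Generic Elo rating update. Each swipe is treated like a "game":
# being right-swiped is a win, being left-swiped is a loss.

def expected(rating_a: float, rating_b: float) -> float:
    """Expected probability that A 'wins' (is right-swiped) against B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float,
           a_was_liked: bool, k: float = 32) -> float:
    """A's new rating after B swipes on A's profile."""
    outcome = 1.0 if a_was_liked else 0.0
    return rating_a + k * (outcome - expected(rating_a, rating_b))

# A like from a much higher-rated user moves A's score up far more than
# a like from a peer -- the rich-get-richer dynamic at issue here.
gain_from_peer = update(1000, 1000, True) - 1000
gain_from_star = update(1000, 1600, True) - 1000
print(gain_from_peer, gain_from_star)
```

Notice what the formula encodes: attention from already-high-scored users is weighted more heavily, so the scoring system structurally concentrates visibility on profiles that are already visible. That is the amplification dynamic the last core question asks you to examine.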

⚖️ Debate Point: There is a genuine philosophical tension here. Algorithms that simply reflect user preferences ("show people what users respond to") will tend to amplify existing biases, because existing preferences are shaped by existing social hierarchies. Algorithms that actively correct for bias (e.g., by boosting matches across racial lines that users have not actively filtered out) are paternalistic in a way that many users object to. There is no easy answer. Your job is to articulate the tension, not to resolve it.


Part IV: Design Reform Proposal

Deliverable: 500–800 words

The final section of your project asks you to move from analysis to prescription. Based on your critical analysis, what would you change?

Your design reform proposal should:

  1. Identify your highest-priority concern — the single most important problem your analysis revealed. (You may have found many problems; the reform proposal asks you to prioritize.)
  2. Propose a specific design change. Not "the app should be more ethical" or "the algorithm should be fairer." A specific, implementable change to the interface, the business model, the safety architecture, the identity options, or the algorithmic logic.
  3. Justify the change using the course's frameworks. Why would this change address the problem you identified?
  4. Acknowledge the trade-offs. Every design change has costs. A business model change may reduce revenue. A safety feature may reduce usability. An identity expansion may require significant engineering. What are you asking the company to give up, and is it worth it?
  5. Consider feasibility. Is this change actually possible given the app's business model and technical architecture? If it requires the company to act against its financial interest, how do you think that change could be compelled — through regulation, market pressure, user advocacy?

This is not a utopian exercise. The best reform proposals are concrete, defensible, and realistic about what they require.

💡 Key Insight: Imagine Nadia, applying the course's frameworks to Bumble. Her highest-priority concern might be that the "women initiate" design rule — while intended to reduce harassment — may actually recapitulate gender binaries and create asymmetric burdens for queer and non-binary users. Her reform proposal might be specific: replace the rule with a system in which either user can set their own preferred initiation style during profile setup, with matching logic that respects both users' preferences. She would then need to explain the trade-off (this is more complex UX; some users liked the clarity of the original rule) and the feasibility (this is technically achievable; the question is whether Bumble sees it as central to their brand identity).


Part V: Grading Rubric

Your project will be evaluated across five dimensions. Each dimension is worth 20 points, for a total of 100 points.


Dimension 1: Analytical Depth and Framework Application (20 points)

| Score | Description |
| --- | --- |
| 18–20 | All five frameworks are applied with genuine depth and specificity. Claims are grounded in concrete details about the app's actual design, not generic statements. The analysis reveals non-obvious insights that go beyond what a regular user would notice. |
| 14–17 | Most frameworks are applied with reasonable depth. Some sections are more developed than others. Claims are generally specific and grounded. |
| 10–13 | All frameworks are addressed but some feel superficial. Several claims are generic ("the app could do more to protect users") rather than specific to this platform. |
| 6–9 | Only some frameworks are addressed, or most sections are too thin to constitute genuine analysis. |
| 0–5 | Analysis is absent, incoherent, or largely reproduces the app's self-description rather than critically examining it. |

Dimension 2: Course Framework Integration (20 points)

| Score | Description |
| --- | --- |
| 18–20 | Explicitly and accurately draws on at least five distinct concepts or frameworks from the course, with correct attribution to specific chapters. The course frameworks genuinely illuminate the analysis — they are not just name-dropped. |
| 14–17 | Draws on at least three course concepts with mostly accurate application. |
| 10–13 | References course material but applications are sometimes superficial or imprecise. |
| 6–9 | Little engagement with course frameworks. Analysis could have been written without taking this course. |
| 0–5 | No meaningful engagement with course frameworks. |

Dimension 3: Evidence and Sources (20 points)

| Score | Description |
| --- | --- |
| 18–20 | Analysis is grounded in a combination of firsthand observation, published scholarship, and credible journalism. At least six distinct sources are cited. Evidence directly supports the claims being made. |
| 14–17 | At least four sources cited; most evidence is relevant and accurately used. |
| 10–13 | At least two sources cited, or sources are present but used superficially. |
| 6–9 | Minimal sourcing. Analysis is largely based on personal impression without external evidence. |
| 0–5 | No sources. |

Dimension 4: Design Reform Proposal (20 points)

| Score | Description |
| --- | --- |
| 18–20 | Reform proposal is specific, clearly tied to the highest-priority problem identified in the analysis, grounded in course frameworks, and honest about trade-offs and feasibility. Demonstrates genuine understanding of design and business constraints. |
| 14–17 | Reform proposal is mostly specific and grounded. Some aspects of trade-offs or feasibility are underdeveloped. |
| 10–13 | Reform proposal is present but vague ("they should do better on consent") or disconnected from the analysis. |
| 6–9 | Reform proposal is absent or is simply a wish list without grounding or trade-off analysis. |
| 0–5 | Reform proposal is missing. |

Dimension 5: Presentation (20 points)

| Score | Description |
| --- | --- |
| 18–20 | Presentation is 10–12 minutes. Presenter clearly articulates the central analytical claim, walks through the most important evidence, and presents the reform proposal compellingly. Demonstrates command of the material. Engages with audience questions. |
| 14–17 | Presentation meets time requirements. Central claim and reform proposal are communicated. Audience engagement is adequate. |
| 10–13 | Presentation is significantly under or over time, or the analytical argument is unclear. |
| 6–9 | Presentation is largely a summary of the report rather than a genuine analytical argument. |
| 0–5 | Presentation is missing or substantially unprepared. |

Part VI: Example Application

The following abbreviated example applies Frameworks 1 and 2 to "ConnectNow" — a fictional platform — to demonstrate the kind of analysis expected.


Example: Framework 1 (Design Logic) — ConnectNow

ConnectNow's primary user action is a binary swipe: right to express interest, left to decline. This interface choice has consequences that its designers may not have fully considered. By forcing a binary decision on incomplete information — typically a single photograph and a four-word tagline — the design encodes a theory of attraction as immediate, categorical, and primarily visual. This is consistent with what evolutionary psychologists call "thin slice" judgments (Ambady & Rosenthal, 1992), but it systematically disadvantages people whose attractiveness is more relational or contextual — those who "grow on you," whose wit or warmth is the main signal, or whose photographs do not capture their actual presence.

ConnectNow also employs a variable reward schedule (discussed in Chapter 20) in the form of delayed match notifications. Rather than notifying users immediately when a mutual match occurs, the app batches notifications and delivers them at intervals. This design choice — familiar from slot machine theory — exploits the psychology of intermittent reinforcement to maximize time spent checking the app. Users report checking for matches even in contexts where they know a match is unlikely. This is not a bug; it is a documented behavioral design pattern that the app's product team almost certainly implemented intentionally.

What makes this strong: The analysis connects specific interface details to specific theoretical frameworks (thin slice judgments, variable reward schedules). It makes non-obvious claims ("delayed notifications are intentional design") and supports them with course-relevant concepts.
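The batching mechanism described in the ConnectNow example can be sketched in a few lines. ConnectNow is fictional, so the BatchedNotifier class and its interval parameter below are invented purely to illustrate the design pattern, not to depict any real app's code.

```python
# Toy sketch of notification batching: matches are queued and released
# at intervals rather than delivered immediately, producing the
# intermittent-reinforcement check-in loop described in the analysis.
from collections import deque

class BatchedNotifier:
    """Holds mutual-match notifications and releases them in batches."""

    def __init__(self, batch_interval_s: int = 3600):
        self.queue = deque()
        self.batch_interval_s = batch_interval_s
        self.last_flush_at = 0

    def on_match(self, match_id: str) -> None:
        # A mutual match occurred; queue it instead of notifying instantly.
        self.queue.append(match_id)

    def poll(self, now: int) -> list:
        # Deliver queued matches at most once per interval; early checks
        # return nothing, producing the "near miss" experience.
        if now - self.last_flush_at < self.batch_interval_s:
            return []
        self.last_flush_at = now
        delivered, self.queue = list(self.queue), deque()
        return delivered

notifier = BatchedNotifier(batch_interval_s=3600)
notifier.on_match("m1")
print(notifier.poll(now=100))    # checked too soon: nothing delivered
print(notifier.poll(now=4000))   # interval elapsed: the batch arrives
```

The point of the sketch is that the entire behavioral effect lives in one design parameter: whether matches are delivered on occurrence or on a schedule is invisible to the user, yet it fully determines the compulsive checking pattern the analysis describes.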


Example: Framework 2 (Identity and Representation) — ConnectNow

ConnectNow offers five gender options at profile setup: Man, Woman, Non-binary, Transgender Man, and Transgender Woman. While this is more inclusive than the binary-only gender options still offered by several competitors, the design creates a problem for users who identify as genderfluid, agender, or with other non-Western gender categories. More significantly, ConnectNow's matching logic still requires users to specify which gender(s) they are interested in matching with — and the available options mirror the gender selection options, meaning that users interested in "people of any gender" must individually select all five options. This creates a friction cost for users who do not wish to sort their potential partners by gender, effectively treating gender-inclusive attraction as an edge case requiring extra effort.

ConnectNow's promotional materials — website, app store listing, social media — consistently depict couples who read as cisgender and heterosexual. The absence of visibly LGBTQ+ representation in promotional contexts is a signal, even if unintentional: it communicates who the "default user" is assumed to be.


Part VII: Research Resources

Core Scholarly Sources

  • Bruch, E.E., & Newman, M.E.J. (2018). "Aspirational Pursuit of Mates in Online Dating Markets." Science Advances, 4(8). — Quantitative analysis of desirability concentration on a major dating platform.

  • Hutson, J.A., Taft, J.G., Barocas, S., & Levy, K. (2018). "Debiasing Desire: Addressing Bias and Discrimination on Intimate Platforms." Proceedings of the ACM on Human-Computer Interaction, 2(CSCW). — Directly relevant to Frameworks 2 and 5.

  • Tyson, G., et al. (2016). "A First Look at User Activity on Tinder." IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. — Empirical study of behavioral patterns on Tinder.

  • Finkel, E.J., Eastwick, P.W., Karney, B.R., Reis, H.T., & Sprecher, S. (2012). "Online Dating: A Critical Analysis from the Perspective of Psychological Science." Psychological Science in the Public Interest, 13(1), 3–66. — Comprehensive scholarly review of online dating research.

Journalism and Investigation

  • Substantial long-form journalism about dating app algorithms has been published by The New York Times, The Atlantic, Wired, and BuzzFeed News. Search for your specific app — most major apps have been the subject of investigative reporting.

  • App company blog posts, earnings calls, and public statements are legitimate primary sources for business model analysis. These can often be found by searching "[app name] press" or "[app name] investor relations."

  • App privacy policies are publicly available and constitute primary sources for your commodification and data analysis.

  • The FTC (ftc.gov) has published several reports on dating app safety and privacy that are relevant to Framework 4.

A Final Word

The goal of this capstone is not to conclude that dating apps are bad. They are not uniformly bad, and that conclusion would not be interesting anyway. The goal is to understand what they are — specifically, not in the abstract. What choices did a team of engineers and product designers make, in the context of specific business incentives, that produced a specific set of effects for a specific range of users?

That kind of analysis — specific, grounded, attentive to context and consequence — is exactly what the science of attraction requires. Human desire does not happen in a vacuum. Increasingly, it happens inside interfaces designed by other people for their own reasons. Understanding those interfaces is part of understanding ourselves.


This capstone project draws most directly on Chapter 20 (dating apps and the Swipe Right Dataset), Chapter 25 (racial preference and algorithmic logic), Chapter 23 (gender scripts), and Chapter 30 (commodification of intimacy). Appendix C (Swipe Right Dataset codebook) is also relevant for Framework 5.