Chapter 33 Exercises: Technology and Harm — Catfishing, Revenge Porn, and Algorithmic Discrimination
Exercise 33.1 — Policy Analysis: NCII Legislation (Individual Research, 45–60 min)
Research the law on NCII (non-consensual intimate imagery) in your state or jurisdiction. Use the Cyber Civil Rights Initiative's legislative map (cybercivilrights.org) as your starting point.
Then answer the following questions in 400–500 words:
- Does your jurisdiction criminalize NCII? If so, what elements are required for prosecution (intent, harm, type of image, type of distribution)?
- Does your jurisdiction include deepfake or synthetic NCII in its definition?
- What is the penalty range? Is this a misdemeanor or felony?
- What civil remedies (if any) does the law provide?
- Based on what you learned in the chapter about the limitations of NCII law, what gaps do you identify in your jurisdiction's approach?
Reflection question: If you were a state legislator, what one change would you make to your jurisdiction's NCII law, and why?
Exercise 33.2 — Algorithm Audit: Dating App Policies (Small Group, 45 min)
Working in groups of three to four, examine the public documentation for two major dating apps (Tinder, Hinge, Bumble, OkCupid, or another of your choice). Look specifically for:
- Algorithmic transparency: Does the platform explain how its matching algorithm works? What can users learn about what determines who they see? (For one concrete model of what such an algorithm might look like, see the sketch after this list.)
- Demographic data: Does the platform collect information about race/ethnicity? How is this data used?
- Anti-discrimination policies: Does the platform explicitly prohibit discrimination in user behavior? How is this enforced?
- NCII policies: Does the platform have a specific NCII reporting process? How quickly does it claim to act?
- Harassment reporting: How does the platform handle reports of harassment?
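Before writing the report, it helps to have a concrete mental model of what a "matching algorithm" can be. The sketch below is a toy Elo-style desirability rating. Tinder has publicly acknowledged once using an Elo-like score (and says it has since moved on), but the K-factor and update rule here are the standard chess-Elo formulas used purely as a teaching model, not any platform's actual code.

```python
# Toy Elo-style "desirability" update, of the kind Tinder has said it
# once used (it says it no longer does). The K-factor and update rule
# are standard chess-Elo formulas; no platform's real parameters are
# public, so treat every number here as an illustrative assumption.

def expected(rating_a, rating_b):
    """Elo-expected probability that A 'wins' (receives the right-swipe)."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(swiped_rating, swiper_rating, liked, k=32):
    """Return the swiped user's new rating after one swipe.

    liked: True if the swiper swiped right on them. A like from a
    highly rated swiper moves the score more than a like from a
    low-rated one -- exactly the asymmetry users never see.
    """
    return swiped_rating + k * (int(liked) - expected(swiped_rating, swiper_rating))

# Example: the same "like" is worth very different amounts.
print(update(1200, 1800, liked=True))   # ~1231.0: like from a higher-rated user
print(update(1200, 1000, liked=True))   # ~1207.7: like from a lower-rated user
```

Notice what a user would need to be told for this to count as transparent: that such a score exists, that some likes count more than others, and what the score gates them out of seeing.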
Write a 300–400 word comparative report. Which platform demonstrates better design ethics, in your assessment, and why? What is missing from both?
Exercise 33.3 — Design Challenge: A Safer Dating App (Small Group, 60 min)
You are a design team at a dating app company. Your CEO has given you an unusual mandate: redesign the app with dignity and safety as primary values alongside engagement. You have a $2 million feature budget.
Step 1 (10 min): List the three most significant harms your app currently enables or facilitates, drawing on the chapter.
Step 2 (15 min): For each harm, generate two possible design interventions. Be specific — describe what the feature looks like, who can use it, and how it reduces the identified harm.
Step 3 (15 min): Consider tradeoffs. For each proposed feature: Does it reduce harm AND maintain or increase engagement? Does it reduce harm at some cost to engagement? Is there a way to frame it that aligns business interests with safety interests?
Step 4 (20 min): Present your three most important proposed features to the class, along with the tradeoffs. Be prepared to defend your choices.
Exercise 33.4 — Case Scenario: Romance Scam Red Flags (Individual, 30 min)
Read the following fictional scenario and answer the questions.
Scenario: Your aunt, recently widowed, mentions that she's been chatting online for three months with "David," a structural engineer working on a bridge construction project in Morocco. He says he's American and divorced, with one adult son. He has video-called her twice, but the video was "glitchy" both times. He has told her he has feelings for her and hopes to visit when his contract ends. This week, he texted that his bank account has been frozen because of a contract dispute with a supplier, and he needs $4,000 to cover payroll for his crew — he promises to repay her as soon as the dispute resolves.
Questions:
- List every specific "red flag" in this scenario that corresponds to documented romance scam patterns described in the chapter.
- How would you approach the conversation with your aunt? Which psychological dynamics of romance scams described in the chapter are relevant to her situation?
- Your aunt says, "I'm not stupid — I know this could be a scam. But what if it's real and I'm ruining something genuine?" How do you respond?
- What practical steps (verification tools, resources, reporting mechanisms) could you help her use?
Exercise 33.5 — Ethical Analysis: Algorithmic Correction (Discussion, 45 min)
The chapter describes three approaches to algorithmic racial bias in dating apps: (1) optimize for expressed preference; (2) implement algorithmic correction; (3) use radical transparency.
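To make approach (2) concrete before the pre-class writing, here is a minimal, hypothetical sketch of what "algorithmic correction" could mean mechanically. Every detail — the blending weight `alpha`, the use of group-average fit scores as a proxy for skewed exposure — is an illustrative assumption, not a description of any real platform's system.

```python
# Hypothetical sketch of "algorithmic correction" (approach 2).
# Assumption: the platform has a per-candidate "fit" score learned from
# the user's swipe history, and nudges rankings toward demographic
# groups that history has scored low. No real dating app is known to
# work exactly this way.

from collections import defaultdict

def corrected_ranking(candidates, alpha=0.3):
    """Rank candidates by fit score plus a group-level correction.

    candidates: list of dicts like {"id": "u1", "group": "A", "fit": 0.8}
    alpha: 0 = pure expressed preference (approach 1);
           larger values correct harder toward even group treatment.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for c in candidates:
        totals[c["group"]] += c["fit"]
        counts[c["group"]] += 1
    group_avg = {g: totals[g] / counts[g] for g in totals}
    overall_avg = sum(c["fit"] for c in candidates) / len(candidates)

    # Boost members of groups the user's revealed preferences score
    # below average; mildly penalize groups scored above average.
    scored = [(c["id"], c["fit"] + alpha * (overall_avg - group_avg[c["group"]]))
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Even this toy version makes the discussion questions below tangible: someone at the platform has to pick `alpha`, that choice reorders real people, and whether users are told the correction exists is a separate decision again.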
Before class: Write 150–200 words arguing for the approach you find most defensible, and 100 words identifying the strongest objection to your chosen approach.
In class: Hold a structured discussion. The instructor should ensure all three positions are represented. The discussion should address:
- Who has the right to "correct" users' preferences, and under what conditions?
- Is there a meaningful distinction between a platform choosing what to show (which all platforms do constantly) and a platform correcting for racial bias?
- If algorithmic correction were implemented and users weren't told, is that deception? What if they were told?
- What would you need to see (research evidence, policy change, corporate commitment) to change your position?