Chapter 15 Exercises
Instructor Note: Several exercises in this chapter ask students to engage analytically with harassment scenarios. Establish clear classroom norms about respectful discussion before beginning these exercises, and remind students of the content note at the chapter's opening. Students who have personal experience of online harassment should not be required to share those experiences.
Exercise 15.1 — Spectrum Classification (Individual, 30 minutes)
Using the five-level spectrum defined in section 15.1 (passionate engagement → aggressive advocacy → pile-on behavior → targeted harassment → threat and violence facilitation), classify each of the following scenarios and explain your classification:
Scenario A: A fan posts a Twitter thread arguing that a specific fan fiction writer consistently mischaracterizes a beloved character, with links to examples.
Scenario B: The same Twitter thread is retweeted 2,000 times, and the fan fiction writer receives 800 direct messages, most of them negative.
Scenario C: Multiple users post the fan fiction writer's real name, which they have pieced together from her profile information, in comments on fan community forums.
Scenario D: A user posts a YouTube video titled "Why [Fan Fiction Writer]'s Work is Dangerous" that presents her writing as evidence of problematic values and asks viewers to leave comments on her fic pages.
Scenario E: A group of users coordinates in a private Discord server to mass-report the fan fiction writer's AO3 account for terms of service violations, with the explicit goal of getting her account suspended.
For each scenario: (1) classify the behavior, (2) explain your classification, (3) identify any contextual factors that would change your assessment.
Exercise 15.2 — Platform Response Analysis (Pairs, 45 minutes)
Research the harassment reporting mechanisms on two of the following platforms: Twitter/X, Tumblr, Reddit, Discord, AO3, TikTok.
For each platform, document:
1. What types of harassment the platform's terms of service explicitly prohibit
2. How a user reports harassment (the reporting process)
3. What outcomes the reporting process can produce (account warning, suspension, content removal, etc.)
4. What limitations you can identify in the platform's approach, using the framework from section 15.5
Report: 600-word written comparison of the two platforms' approaches, with your assessment of their relative adequacy.
Exercise 15.3 — Safety Protocol Design (Small Groups, 60 minutes)
You are a moderation team for a 5,000-member Discord server centered on a popular media fandom. Your community has experienced two recent incidents: (1) a member was doxxed after a shipping dispute, and (2) a high-follower member's post on Twitter triggered a pile-on directed at a newer server member.
Design a Community Safety Protocol that addresses:
1. Preventive measures (what community norms and practices should exist before incidents occur)
2. Response measures (what moderators should do when an incident occurs)
3. Support measures (how the community supports targeted members)
4. Documentation practices (what to preserve and how)
5. Escalation criteria (when to involve platform reporting, when to consider legal options)
Your protocol should be practical — something a volunteer moderator team could realistically implement. Present as a 2-page document formatted as actual community guidelines.
Exercise 15.4 — Intersectionality and Targeting (Discussion, Full Class)
Section 15.3 documents that harassment disproportionately targets women, fans of color, LGBTQ+ creators, disabled fans, and fans who violate "real fan" norms.
Preparation (individual, before class): Find and review one documented case of harassment targeting a fan or creator from one of these groups (reputable journalism, academic case studies, or documented community accounts). Bring your notes to class.
Discussion questions:
1. What specific forms did the harassment take in the case you researched?
2. How did the target's identity shape both the nature of the harassment and the community's response?
3. What does the pattern of disproportionate targeting tell us about the implicit norms of the community in which it occurred?
4. What is the relationship between "real fan" gatekeeping and harassment?
Instructor note: Be prepared to facilitate discussion sensitively. Students from targeted groups may have personal connections to this material.
Exercise 15.5 — Personal Safety Audit (Individual, Take-Home)
This exercise asks you to conduct an audit of your own online presence from a safety perspective. Note: this exercise is not about inducing fear — most fans never experience serious harassment. It is about developing awareness of what information about you is publicly accessible and what practices would provide better protection if you were to face targeted harassment.
Audit tasks:
1. Google your primary online username. What information appears?
2. Search your username across platforms you use (Twitter, Reddit, Discord, AO3, etc.). Is it consistent? What identifying information is visible in your profiles?
3. Review what personally identifying information (location, school, workplace, family members) you have shared on platforms associated with fan activity.
4. Check whether your accounts use unique passwords and two-factor authentication.
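If you want to make the cross-platform search in task 2 more systematic, a short script can generate the list of profile URLs to check by hand in a browser. This is a minimal sketch, not a required part of the exercise: the URL patterns are assumptions about each platform's typical profile scheme and may not be exhaustive, and `candidate_profiles` is a name invented for this example.

```python
# Sketch for Exercise 15.5, task 2: generate candidate profile URLs
# for one username across several fan platforms.
# The patterns below are assumptions about typical profile-URL schemes,
# not a guarantee of how each platform structures its URLs today.

PROFILE_PATTERNS = {
    "Twitter/X": "https://x.com/{u}",
    "Tumblr": "https://{u}.tumblr.com",
    "Reddit": "https://www.reddit.com/user/{u}",
    "AO3": "https://archiveofourown.org/users/{u}",
}

def candidate_profiles(username: str) -> dict:
    """Return a platform -> profile-URL map for one username."""
    return {platform: pattern.format(u=username)
            for platform, pattern in PROFILE_PATTERNS.items()}

# Print the URLs to visit manually while logged out,
# so you see what an outsider sees.
for platform, url in candidate_profiles("example_fan").items():
    print(f"{platform}: {url}")
```

Visiting each URL while logged out shows you roughly what a stranger, rather than a mutual, can see of your profile.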
Reflection (200–300 words): What did the audit reveal? What, if anything, would you change about your current online presence based on what you found? What are the costs and benefits of the privacy practices described in section 15.7?
Note: This reflection is for your own use and will not be collected, to protect your privacy. If your instructor asks for a version of this reflection, share only general observations, not specific security information.