Case Study 15.2: Online Safety Tools and Resources for Fan Creators — A Practical Guide
Overview
This case study provides practical information about online safety tools and strategies specifically relevant to fan creators. Unlike the other case studies in this chapter, which apply sociological frameworks to documented events, this case study is designed to be practically useful — a reference resource that fan creators, fan community members, and community moderators can use to improve their safety practices.
The information here is current as of the publication date of this textbook but should be verified against current platform policies, as platform features and policies change frequently. The case study is organized around the major safety challenges described in Chapter 15.
Section 1: Account Security Fundamentals
Account security is the foundation of all other online safety measures. A compromised account can undermine all other protective measures and is itself a harassment vector — attackers who gain access to your account can impersonate you, damage your reputation, and access your direct messages.
Password Security
Use a unique, strong password for every online account. "Strong" means: at minimum 12 characters, combining letters (upper and lower case), numbers, and symbols; not based on dictionary words, names, or easily guessable personal information; not shared between accounts.
The practical challenge is remembering unique strong passwords for many accounts. The solution is a password manager — software that stores encrypted passwords and generates them automatically. Reputable options include 1Password, Bitwarden (open-source), Dashlane, and LastPass. Most password managers are available for free or at low cost. Using a password manager is the single most impactful individual security decision most internet users can make.
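As an illustration of what a password manager's generator does under the hood, the following sketch uses Python's standard-library secrets module to produce a password meeting the criteria above. The function name and the 16-character default are choices made for this example, not requirements of any particular manager.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing upper- and lower-case letters,
    digits, and symbols, per the criteria above."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Accept only candidates that contain every required character class.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate

print(generate_password())  # prints a different password on every run
```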
Two-Factor Authentication (2FA)
Two-factor authentication requires a second verification step beyond your password when logging in — typically a code sent to your phone, generated by an authenticator app, or provided by a hardware key. Enable 2FA on every platform that offers it, prioritizing accounts with large followings, accounts connected to payment information, and accounts with access to community moderation tools.
Authenticator apps (Google Authenticator, Authy, Microsoft Authenticator) are more secure than SMS-based 2FA because SMS can be intercepted through SIM swapping attacks. If a platform offers the choice, use an authenticator app rather than SMS.
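To see why authenticator-app codes cannot be intercepted in transit the way SMS codes can, it helps to know that the apps compute each code locally from a shared secret and the current time using the TOTP algorithm (RFC 6238). The sketch below is a minimal standard-library illustration of that computation, assuming the common 6-digit, 30-second, SHA-1 parameters; the secret shown is a made-up example, and real accounts should of course use an actual authenticator app.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, digits: int = 6, period: int = 30) -> str:
    """Compute a time-based one-time password (RFC 6238, SHA-1 variant)."""
    key = base64.b32decode(secret_base32.upper())
    counter = int(time.time()) // period            # 30-second time step
    message = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up shared secret for illustration only; an authenticator app stores
# this secret on your device when you scan the enrollment QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```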
Hardware security keys (YubiKey) are the most secure 2FA option for high-risk accounts but require physical hardware and may not be supported by all platforms.
Email Account Security
Your email account is often the recovery path for all other accounts. Compromising your email account gives an attacker potential access to every other account. Apply the strongest available security to your primary email account: unique password, authenticator-app 2FA, and where available, review of which apps have access to your email.
Section 2: Platform-Specific Safety Features
Twitter/X
- Block: Prevents a user from seeing your posts, messaging you, or following you. Blocked users cannot see your profile while logged in to their own account.
- Mute: Silences a user's posts in your feed without blocking them. The muted user doesn't know they've been muted.
- Quality filter: Reduces visibility of tweets from accounts with lower trust signals. Found in notifications settings.
- Privacy settings: Option to protect tweets (make your account private, requiring follow approval). This severely limits harassment exposure at the cost of limiting your reach.
- Reporting: Report individual tweets or accounts for harassment, threats, or doxxing. Twitter's reporting pathway for coordinated inauthentic behavior is more effective than individual-content reports for coordinated campaigns.
Limitations: Twitter's automated harassment detection is known to misfire. Documentation before reporting significantly improves outcomes. Mass-reporting by harassers can result in account restrictions even for victims.
Discord
- Block: Prevents the user from messaging you.
- Server-specific banning: Moderators can ban users from servers.
- Slowmode: Limits how frequently users can post in a channel, reducing pile-on speed.
- Phone verification requirement: Servers can require verified phone numbers to join, reducing the ease of creating sockpuppet accounts.
- Verification levels: Server administrators can set verification levels that require email verification, phone verification, or account age minimums before joining.
Discord's closed-server architecture provides meaningful harassment protection. Mireille Fontaine's ARMY server uses phone verification and invitation-only joining to reduce coordinated infiltration.
Archive of Our Own (AO3)
- Blocking: Blocks a user from leaving comments on your works or viewing your profile.
- Comment screening: Requires your approval before comments appear publicly on your works.
- Pseud system: Using different pseudonyms for different works on the same account provides content separation.
- Content/tag system: AO3's tagging system allows creators to flag content that may be triggering, including content that may attract hostile attention.
AO3 is operated by the Organization for Transformative Works (OTW), a nonprofit, and has different governance than commercial platforms. The OTW's Terms of Service and Policy committee can be contacted directly for harassment involving AO3 accounts.
Tumblr
- Block: Prevents the user from messaging you, reblogging your content, or sending asks.
- Safe mode: Can filter specific content types.
- Anonymous ask settings: Turning off anonymous asks eliminates anonymous ask harassment.
Tumblr's harassment response has historically been inconsistent. The platform's architecture (reblog chains) means that content circulates widely before moderation can respond. Documenting reblog chains before reporting is particularly important on Tumblr.
Reddit
- Block: Prevents the user from messaging you.
- Subreddit rules: Community moderators set rules that can include anti-harassment provisions.
- Reporting: Report individual posts and comments; subreddit moderators and Reddit's site-wide moderation handle reports.
Reddit's primary harassment protection is through subreddit-level governance. For harassment occurring outside a subreddit (direct messages, profile posts), Reddit's site-wide reporting system is less responsive.
Section 3: Documentation Practices
Document harassment thoroughly before taking any other action; thorough documentation is essential for platform reporting and for any potential legal action.
What to Document
- Screenshots of all harassing content, including the sender's username, the date and time, and the URL of the post or message where possible
- The full thread or conversation context, not just the most offensive individual messages
- Evidence of coordination: multiple accounts posting similar content, posts in other communities organizing the campaign, accounts explicitly discussing coordinating
- Your attempts to report and platform responses (or non-responses)
How to Document
- Take screenshots with the timestamp visible; on most devices, the system clock appears in the screenshot automatically.
- Use a browser extension or service to archive web pages (archive.org's "Save Page Now" function, or archive.ph). Archived pages cannot be deleted by the harasser after the fact. A scripted sketch combining this step with the naming convention below follows this list.
- Organize documentation chronologically in a dedicated folder. Use clear file naming: YYYY-MM-DD_platform_username_description.
- Keep backups in at least two locations (local storage and cloud storage, or two different cloud services).
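The archiving and file-naming steps above can be combined in a short script. The sketch below is illustrative only: it assumes the Wayback Machine's public "Save Page Now" endpoint (https://web.archive.org/save/ followed by the URL) accepts a plain GET request, that the third-party requests library is installed, and that the folder and function names, which are invented for this example, suit your own setup.

```python
from datetime import date
from pathlib import Path

import requests  # third-party; install with `pip install requests`

def archive_and_log(url: str, platform: str, username: str, description: str,
                    folder: str = "harassment-documentation") -> Path:
    """Ask the Wayback Machine to snapshot a URL, then write a small record
    using the YYYY-MM-DD_platform_username_description naming convention."""
    response = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    response.raise_for_status()
    archived_url = response.url  # Save Page Now typically redirects to the snapshot

    out_dir = Path(folder)
    out_dir.mkdir(parents=True, exist_ok=True)
    filename = f"{date.today():%Y-%m-%d}_{platform}_{username}_{description}.txt"
    record = out_dir / filename
    record.write_text(f"original: {url}\narchived: {archived_url}\n")
    return record

# Example (hypothetical URL and usernames):
# archive_and_log("https://example.com/post/123", "twitter", "harasser_handle", "threat")
```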
What Not to Do
- Do not delete harassing messages before documenting them. Platform reporting requires evidence.
- Do not engage with harassers in ways that could be misrepresented. Responses to harassment can be screenshot and circulated out of context.
- Do not rely on the platform's notification system to preserve evidence — notifications can expire and platforms can take down content.
Section 4: Platform Reporting Strategies
Platform reporting is most effective when it is specific, documented, and, where possible, coordinated.
For Individual Incidents
Report the specific content using the platform's most relevant reporting category. For harassment, this is typically a category specifically for harassment, threats, or abusive behavior (as distinguished from spam, copyright violations, or other categories). Provide context in the free-text field if available.
For Coordinated Campaigns
Most platforms have specific reporting pathways for coordinated inauthentic behavior or coordinated harassment. These are separate from individual content reports and are processed differently. When reporting a coordinated campaign:
- Report multiple accounts and pieces of content
- In the free-text field, explain that this is coordinated (multiple accounts, similar content, documented evidence of coordination)
- Provide links to the documentation of coordination if available
Follow-Up
Platform responses to harassment reports are often slow and outcomes are uncertain. Following up on reports (some platforms allow you to check the status of a report) and appealing non-responses is sometimes effective. Public documentation of platform non-response — posting about the harassment and the platform's failure to respond — creates reputational pressure that can accelerate response.
Section 5: Community Support Resources
Mutual Aid in Fan Communities
Community members who have experienced harassment can benefit from direct support: help documenting the campaign, coordinated reporting alongside them (many individual reports of the same harassing content are more likely to be acted on than a single report), and emotional support during a distressing event.
If you witness harassment of a community member:
- Notify community moderators immediately
- Offer to help document (if the target is comfortable)
- Coordinate with the target before taking action — they should lead their own response
- Follow the target's lead on whether to amplify or contain (public amplification may help or worsen the situation depending on context)
External Organizations
Crisis Text Line: Text HOME to 741741 (US, UK, Canada, Ireland). Provides free, 24/7 crisis support via text message.
National Alliance on Mental Illness (NAMI): 1-800-950-6264. Mental health support and resources.
PEN America Online Harassment Field Manual (pen.org/online-harassment-field-manual): Comprehensive, free guide to documentation, platform reporting, and legal options for targeted online harassment. Oriented primarily toward journalists and writers but broadly applicable.
Crash Override Network: Founded by Zoë Quinn, who was targeted in Gamergate; provides resources specifically developed from experience with coordinated online harassment campaigns.
Cyber Civil Rights Initiative (cybercivilrights.org): Resources for non-consensual image sharing and related harassment.
Section 6: When to Consider Legal Options
Legal options in harassment cases are limited but real. Consider consulting with an attorney if:
- You have received explicit threats of violence (these may be prosecutable under state criminal law)
- You have been doxxed (publishing personal information with intent to intimidate may constitute harassment under state law)
- Your account has been hacked (unauthorized access to computer systems is a federal crime)
- You have been subjected to false DMCA takedowns (bad-faith DMCA abuse may support legal action)
Many attorneys who specialize in online harassment offer initial consultations at low or no cost. Organizations like the Cyber Civil Rights Initiative can provide referrals.
Legal action is not a realistic option for most harassment situations — it is slow, expensive, and jurisdictionally complex. But documentation practices that support legal options should be followed regardless, because you cannot know at the start of an incident how serious it will become.
Summary Table
| Platform | Key Safety Feature | Key Limitation |
|---|---|---|
| Twitter/X | Quality filter, coordinated-behavior reporting | Algorithm rewards engagement; misfire risk |
| Discord | Server verification levels, invitation control | Limited to server; external platforms unaffected |
| AO3 | Comment screening, blocking | Harassment often originates off-platform |
| Tumblr | Anonymous ask disable | Reblog chain architecture spreads content widely |
| Reddit | Subreddit moderation | Site-wide moderation less responsive |
Discussion Questions
- What does it reveal about platform power dynamics that fan communities have developed such sophisticated collective knowledge about navigating platform safety systems — knowledge that shouldn't be necessary if those systems worked effectively?
- The documentation practices described here require significant time and emotional labor from harassment targets. Who bears the cost of platform inadequacy? Is this fair?
- What would a genuinely adequate platform response to fan community harassment look like? What would it cost platforms to implement?
- Mireille Fontaine's server developed its own safety protocol after a member's crisis. What does this grassroots safety infrastructure development tell us about the distribution of responsibility for online safety between platforms, communities, and individuals?