Learning Objectives
- Apply Elinor Ostrom's principles for governing commons to fan community governance structures, explaining which principles fan communities typically satisfy and which they struggle with.
- Distinguish between explicit written rules, implicit community norms, and platform policy, and explain how each layer of the multi-layer governance system operates and interacts with the others.
- Analyze the labor demands and structural vulnerabilities of volunteer fan governance, including moderator burnout, mod capture, and mod abdication, using the Kalosverse and ARMY Discord cases.
- Evaluate the Organization for Transformative Works' governance model as a case of fan democratic institution-building, including its achievements and the costs it imposes on participants.
- Assess the structural power imbalance between platform governance and community governance and explain why platform governance always takes precedence over community decisions.
In This Chapter
- Opening: 11 PM on a Tuesday
- 13.1 Why Communities Need Governance
- 13.2 Rule Types and Norm Layers
- 13.3 The Moderator as Institution
- 13.4 The AO3 Model — Fan Governance at Scale
- 13.5 Discord Server Governance
- 13.6 When Governance Fails
- 13.7 Platform Governance vs. Community Governance
- 13.8 Chapter Summary
Chapter 13: Community Governance — Rules, Norms, and Enforcement
Opening: 11 PM on a Tuesday
It is 11 PM on a Tuesday. KingdomKeeper_7 — whose real name is Marcus, though almost no one in the Kalosverse knows this — is sitting at his desk with the r/Kalosverse moderation queue open in one tab and a half-finished plate of reheated pasta at his elbow. He has been moderating this subreddit for six years. He has read, conservatively, tens of thousands of posts. He has written, conservatively, thousands of moderator comments, warnings, removal explanations, and rule clarifications. He has banned hundreds of users, appealed dozens of platform decisions, and sat through countless late-night mod team discussions about cases where the rules didn't quite fit the situation.
Tonight there are 47 posts in the queue. Most are routine — low-effort, off-topic, obvious spam. He can process these in twelve minutes. Then there are the three hard ones.
Case one: A post titled "Should Riri Williams replace Iron Man in the mainline MCU?" The post is respectful, cites specific comics and film evidence, and asks a genuine question. But KingdomKeeper_7 knows from six years of experience that this specific question — not whether Riri Williams is a compelling character (broad consensus: yes) but whether she should replace Iron Man — reliably generates a specific kind of heated argument in the Kalosverse, one that tends to involve racist undertones that are individually difficult to action but collectively create an unsafe environment. The post is technically within the rules. The discussion it will generate may not stay that way.
Case two: A post celebrating an instance of racist trolling targeting a creator of color who works adjacent to the MCU fandom. This one is easy: clear violation, immediate removal, one-week ban for the poster. KingdomKeeper_7 takes exactly forty-five seconds.
Case three: A post that is not, technically, against any written rule. It links to a fan project and the link is fine, the post is fine, the poster is a regular community member in good standing. But KingdomKeeper_7 recognizes the poster, the project, and the pattern: three times in the past year, posts by this user linking to projects by this particular circle of fans have been the catalysts for organized harassment of fan accounts not affiliated with their circle. Nothing in the individual post is actionable. The pattern is the problem.
KingdomKeeper_7 removes all three. He writes a comment explaining the removal of Case Three — a brief, measured explanation citing the community norm against posts that have historically preceded harassment, without accusing the poster of intent. He hits post.
He knows, from experience, that this comment will generate somewhere between 80 and 250 replies over the next 48 hours, the majority of them arguing that he is wrong, that he is abusing moderator power, that he has no right to remove posts based on pattern rather than content. He is not wrong. He is not abusing his power. He is also doing something for which there is no written rule, no clear procedure, and no institutional support — something that he does, unpaid, in his personal time, at 11 PM on a Tuesday, because someone has to and he is the person who chose to.
This is fan community governance. Not in the sense of formal policy documents and democratic procedures (though those exist, and this chapter examines them), but in the sense of the daily, unglamorous, contestable, necessary work of maintaining the conditions under which a fan community can function.
13.1 Why Communities Need Governance
The question of whether fan communities need governance might seem to have an obvious answer: of course they do. But the obvious answer obscures a set of genuine tensions that governance must navigate, and understanding those tensions is essential for understanding why governance looks the way it does.
The Tragedy of the Commons
Garrett Hardin's 1968 concept of the "tragedy of the commons" describes a fundamental problem in shared-resource management: when a resource is held in common and each individual actor can benefit from using it beyond its optimal level, rational self-interest drives overuse until the resource is exhausted. Classic examples involve grazing commons, fisheries, and the atmosphere.
Fan communities face analogous dynamics. The "resource" in question is the quality of community discourse — the sense that the community is a welcoming, interesting, intellectually stimulating, emotionally supportive space where genuine engagement with the source text and with other fans is possible. This resource is what makes the community worth participating in. It is also susceptible to tragedy-of-the-commons dynamics:
- Individual actors can benefit from posting low-effort, provocative, or inflammatory content (they get engagement, attention, emotional gratification) while the community pays the collective cost (increased noise, decreased quality, hostile atmosphere)
- Individual actors can benefit from harassment (they express hostility, intimidate opponents, protect subcultural capital from perceived threats) while the community pays the cost (reduced safety, departure of targeted members, reputational damage)
- Individual actors can benefit from free-riding on community infrastructure (using moderation, archiving, organization) without contributing to it, while contributors pay the cost in time, energy, and eventual burnout
Without governance, these dynamics tend to produce one of two failure modes: the 4chan model, in which the absence of moderation allows the most extreme, hostile voices to dominate; or the ghost-town model, in which community quality declines gradually until participation drops below the threshold needed to sustain community life.
🔵 Key Concept: The 4chan Model vs. The AO3 Model. The 4chan model of online community governance is essentially no governance: an explicit or de facto policy of minimal moderation, in which nearly anything is permissible, and community norms are enforced through social pressure (mockery, pile-ons) rather than moderator action. This model produces communities with high energy and low barriers but also high rates of harassment, extreme speech, and exclusion of vulnerable members. It systematically advantages the most aggressive and shameless participants.
The AO3 model is extensive, deliberate governance organized around specific principles: a "don't like, don't read" philosophy that permits wide content diversity while using tags to allow member choice, plus a volunteer-based governance structure with democratic participation in policy. This model produces communities with lower barriers to entry for diverse content but requires substantial ongoing labor investment and has its own failure modes (bureaucratic rigidity, volunteer burnout).
Most fan communities operate somewhere between these poles, choosing different trade-offs based on their size, composition, and values.
Ostrom's Principles for Governing Commons
The political economist Elinor Ostrom won the 2009 Nobel Prize in Economics for demonstrating that the tragedy of the commons is not inevitable — that communities can and do develop effective governance for shared resources without either privatization (market solution) or state regulation (government solution). She identified eight design principles that characterize successful commons governance:
- Clearly defined boundaries: Who is a member and what resource they have access to must be clear.
- Proportionality: Rules match local conditions; costs of governance are proportionate to benefits received.
- Collective choice arrangements: Members affected by rules have a meaningful role in modifying them.
- Monitoring: Resource use and rule compliance are actively monitored, by members themselves or by monitors accountable to them.
- Graduated sanctions: Violations receive graduated responses, from mild to severe.
- Conflict resolution mechanisms: Low-cost mechanisms for resolving disputes are available.
- Minimal recognition of rights to organize: The community's right to devise its own rules is recognized as legitimate by external authorities.
- Nested enterprises: Large commons are organized as nested layers of smaller entities.
Fan communities map onto this framework with both resonance and friction. Most successful fan communities satisfy Ostrom's first four principles reasonably well: they have membership criteria (often through platform account requirements), they have proportionality in the sense that governance costs are borne primarily by those who benefit most from the community, they have some form of collective input on rule changes (through community discussion, even if not formal democracy), and monitoring is extensive. Principle 5 (graduated sanctions) is also generally present — warnings before bans, bans before permanent bans.
Principles 6 and 7 create more friction. Fan communities do not consistently have good conflict resolution mechanisms: much conflict is resolved through informal social pressure, withdrawal, or community schism rather than through formal dispute resolution. And principle 7 — recognition by external authorities — is where platform governance intervenes most disruptively. Fan communities' governance structures are not recognized as legitimate by their host platforms in any formal sense. A subreddit's moderator team has authority within Reddit's framework only because Reddit allows it, and Reddit can revoke that allowance at any time.
📊 Research Spotlight: Question: Do successful online communities follow Ostrom's design principles? Method: Matias (2019) analyzed governance policies and moderation practices across 2,500 Reddit communities, measuring community health (retention, participation, harassment rates) against governance structure. Finding: Communities that explicitly articulated membership norms, had visible graduated sanctions, and had rule-change discussions with community input showed significantly higher retention of quality contributors and lower rates of harassment than communities with either no governance or rigid, top-down governance. Significance: Ostrom's framework, developed for physical commons, generalizes to online community governance — supporting the argument that fan community governance is a real form of commons management. Limitations: Reddit-specific data may not generalize to other platforms; community health is difficult to measure and Matias's metrics (retention, harassment complaints) capture some but not all relevant dimensions.
13.2 Rule Types and Norm Layers
Fan community governance is not a single system but a multi-layered one, with each layer having different origins, legitimacy, and enforcement mechanisms. Understanding the layered structure is essential for understanding why governance is complicated, why well-intentioned rules sometimes fail, and why moderator judgment is irreducible.
The Four-Layer Structure
Layer 1: Platform Rules. Every fan community exists on a platform, and the platform's terms of service, community standards, and content policies apply to everything hosted there. Reddit's content policies govern r/Kalosverse. Discord's Terms of Service govern Mireille's server. AO3's Terms of Service govern Vesper_of_Tuesday's published fiction.
Platform rules are the highest-authority layer: they override community rules in any conflict. They are also the least customizable: a fan community can add rules to platform policy, but it cannot subtract from it. If Reddit's content policy prohibits a type of content, r/Kalosverse cannot permit it. If Discord's Terms of Service prohibit something in Mireille's server, Mireille's own rules cannot protect it.
Layer 2: Community Written Rules. The explicit, codified rules that appear in subreddit sidebars, server #rules channels, and wiki pages. These rules are the public face of fan governance — the formal statement of what the community allows and prohibits. They typically address: content categories (what types of posts are permitted), conduct standards (how members must treat each other), spam and self-promotion, and specific community norms (lore accuracy requirements, spoiler policies, etc.).
Written rules have important functions: they provide legitimacy for moderator actions (moderation decisions can be justified by reference to stated rules), they communicate expectations to new members, and they are a form of collective agreement that members tacitly accept by joining the community. But written rules are necessarily incomplete: no set of written rules can anticipate every situation, and many of the most important community norms are not captured in writing.
Layer 3: Moderator Judgment. The discretionary decisions that moderators make in applying written rules to specific cases — and in acting on community norms that are not written. This is the layer of KingdomKeeper_7's three difficult cases: all three required judgment that went beyond rule application, involving pattern recognition (Case Three), community knowledge (Case One), and swift categorical decision-making (Case Two).
Moderator judgment is necessary precisely because rules cannot cover everything. But it is also the most contestable layer of governance: unlike written rules (which members can at least read) or platform policy (which has institutional authority), moderator judgment decisions are the individual choices of specific people acting in specific moments. They can be wrong, biased, inconsistent, or captured by factional interests. This contestability is why KingdomKeeper_7 expects 80–250 replies arguing against his Case Three decision — and why his willingness to explain his reasoning publicly matters.
Layer 4: Community Norms. The implicit, unwritten expectations that shape community behavior without being formally codified. Every fan community has norms: expectations about how to credit creative inspiration, how to welcome newcomers, when to use spoiler tags (in communities where they are not formally required), how to engage with BNFs, how long to wait after a major canon event before posting certain types of analysis. These norms are learned through participation, enforced through social pressure rather than moderator action, and often invisible to newcomers until they are violated.
💡 Intuition: The four layers are like concentric circles of authority, each one containing the one inside it. Platform rules contain and constrain community rules; community rules contain and constrain moderator judgment; moderator judgment operates within and expresses community norms. But the direction of daily influence is partly reversed: community norms shape moderator judgment, which shapes written rules, which — only sometimes — influence platform rules. The formal authority structure runs top-down; the cultural formation of what rules mean runs bottom-up.
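The top-down authority ordering the intuition describes can be sketched as a precedence check: a post is evaluated layer by layer, and the highest layer that prohibits it determines the outcome. This is a toy illustration only; the flag names and the idea of reducing each layer to a boolean test are invented for the sketch, not drawn from any real moderation system.

```python
# Illustrative precedence check for the four-layer model. A post is
# represented as a set of (hypothetical) flags; layers are consulted
# top-down, and the highest prohibiting layer wins.
LAYERS = [
    ("platform_rules", lambda flags: "platform_violation" in flags),
    ("community_rules", lambda flags: "rule_violation" in flags),
    ("moderator_judgment", lambda flags: "harm_pattern" in flags),
]

def evaluate(post_flags: set) -> str:
    """Return the outcome for a post under the layered authority order."""
    for layer, prohibits in LAYERS:
        if prohibits(post_flags):
            return f"removed_by_{layer}"
    # Layer 4 (community norms) does not appear here: norms are
    # enforced socially, not by removal.
    return "allowed"
```

A post flagged at both the platform and community layers is attributed to the platform layer, reflecting the chapter's point that platform rules override everything beneath them.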
Sanctions: From Soft to Hard
Governance without enforcement is aspiration, not governance. Fan communities use a range of sanctions proportionate to violation severity:
Soft sanctions (informal, social): Downvotes, community disapproval in reply threads, social ostracism, being ignored. These are decentralized — any community member can deploy them, not just moderators. They are the primary enforcement mechanism for community norms that are not written rules.
Medium sanctions (moderator-applied, reversible): Comment removal, post removal, temporary ban (24 hours, 3 days, 1 week, 1 month). These are the bread-and-butter of subreddit moderation and Discord server management. They are reversible, proportionate, and provide a graduated response.
Hard sanctions (permanent or platform-escalating): Permanent ban from the community, reporting to platform trust and safety, platform-level account suspension or ban. These are reserved for severe or repeat violations and have effects that extend beyond the specific community.
A well-functioning governance system uses the least severe sanction adequate to the situation, escalating only when necessary. KingdomKeeper_7's removal of Case Two (the racist trolling post) without extended deliberation is appropriate: a clear violation meriting an immediate medium sanction (removal plus a one-week ban), with hard sanctions in reserve if the behavior repeats. His removal of Case Three, accompanied by a comment explaining the decision, is a medium sanction deployed in service of a community norm (interrupting a pattern that has historically preceded harassment) while acknowledging the post's technical legitimacy.
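The graduated-sanctions ladder can be expressed as a simple escalation policy. The tiers, the one-rung-per-violation rule, and the severe-case shortcut below are all invented for illustration; real mod teams apply judgment, not a lookup table.

```python
from dataclasses import dataclass, field

# Hypothetical sanction ladder, ordered mild → severe.
LADDER = ["warning", "removal", "ban_24h", "ban_1w", "ban_permanent"]

@dataclass
class MemberRecord:
    """Tracks prior sanctions so responses escalate gradually."""
    history: list = field(default_factory=list)

def next_sanction(record: MemberRecord, severe: bool = False) -> str:
    """Return the least severe sanction consistent with the history.

    severe=True (e.g. harassment) jumps straight to at least a
    one-week ban, mirroring the handling of Case Two.
    """
    if severe:
        step = max(LADDER.index("ban_1w"), len(record.history))
    else:
        step = len(record.history)  # one rung per prior violation
    sanction = LADDER[min(step, len(LADDER) - 1)]
    record.history.append(sanction)
    return sanction
```

Under this sketch a first minor violation draws a warning, a second draws removal, and so on, while a severe first offense skips the early rungs — the "graduated but not mechanical" shape the section describes.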
13.3 The Moderator as Institution
Fan community moderators are among the most consequential and least studied workers in digital culture. They are, in a sociological sense, an institution — a structured role with defined functions, expectations, and constraints that persists across the individuals who occupy it. Understanding moderators as an institution helps explain both their power and their vulnerability.
Who Becomes a Moderator?
Research on the demographics of online community moderators consistently finds patterns that are at least partly counterintuitive. Contrary to the stereotype of the authoritarian power-seeking moderator, the demographic profile of moderators in fan communities skews toward young women who became moderators through gradual escalation of volunteer contribution rather than through deliberate pursuit of formal authority.
In most fan communities, moderators are not elected or appointed through formal processes. They are recruited from the ranks of highly active, trusted community members who have demonstrated both commitment and judgment. This recruitment process tends to select for people who are already doing significant community maintenance work — who are already welcoming newcomers, already flagging problematic posts, already mediating conflicts in comments — and who are invited to formalize that work through a moderator role.
The Kalosverse's moderator team of five includes KingdomKeeper_7 (male, six years of activity, recruited in his second year by the founding moderator who was burning out) and four others: two women, one non-binary person, one person whose gender is not publicly known. The team estimates collectively spending approximately 20–25 hours per week on moderation across the five of them — an average of 4–5 hours each. For KingdomKeeper_7, the actual hours are higher: as the most senior moderator, his queue load is larger and his judgment calls are more complex.
None of them are compensated. Reddit does not pay subreddit moderators. The moderators' compensation, to the extent it exists, is a mixture of subcultural capital (recognized status in the community), the intrinsic satisfaction of community service, and early access to community information and events.
⚖️ Ethical Dimensions: The reliance of platform companies on volunteer moderator labor is one of the most significant labor issues in digital culture. Reddit's user base and market valuation depend substantially on the quality of its communities, and the quality of its communities depends substantially on volunteer moderator work. The financial value created by this unpaid labor — which would cost tens of millions of dollars annually if compensated at any professional rate — flows entirely to the platform company, not to the moderators. Fan community moderators are performing a form of the fan labor analyzed in Chapter 3, with all of its characteristic features: it is unwaged, it is affective (driven by love of community rather than financial calculation), and it is extracted by commercial platforms that profit from it.
The Decision Tree for Difficult Cases
How do moderators make decisions in the hard cases — the cases where rules are silent, where judgment is required, where any decision will generate community conflict? Drawing on interviews with moderators in multiple fan communities, researchers have identified several heuristics that experienced moderators apply:
Intent vs. Impact: Does the moderator decision hinge on what the poster intended, or on the likely effect of the post? KingdomKeeper_7's Case Three decision is an impact-based decision: he does not know whether the poster intends to precipitate harassment (probably not), but he knows the likely impact. Impact-based decisions are harder to explain, but they track likely community harm more accurately.
Pattern recognition: Has this type of post, from this type of user, in this type of context, previously led to harm? This is a form of institutional memory that accumulates over years of moderation — it is why experienced moderators make better decisions than new ones. It is also a potential source of bias: pattern recognition can reflect prejudice as well as experience.
Community trust calibration: How much trust does this moderation decision require the community to extend to the moderator team? High-trust decisions (removing posts that seem fine to most observers because the moderator has inside knowledge of what will happen) require the community to extend significant trust; they are appropriate when the expected harm is clear and severe, but they are not sustainable as a daily practice. Low-trust decisions (removing clear rule violations with written explanations) require less community trust and are more easily accepted.
Reversibility: Is this a decision that can be reconsidered if the moderator is wrong? Removal of a post can be reversed; a permanent ban is much harder to undo. Experienced moderators err toward reversible decisions in ambiguous cases.
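The four heuristics can be arranged as a rough decision checklist. Everything in this sketch — the field names, the harm scale, the decision rule — is invented to make the heuristics' interaction concrete; as the section argues, moderator judgment is not actually reducible to a scoring function.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """Hypothetical summary of a moderation case."""
    clear_rule_violation: bool   # written rule broken? (low-trust decision)
    predicted_harm: int          # 0–3, from pattern recognition
    decision_reversible: bool    # can the action be undone if wrong?

def decide(case: Case) -> str:
    if case.clear_rule_violation:
        # Low-trust decision: easily justified by the written rules.
        return "remove"
    if case.predicted_harm >= 2:
        # High-trust, impact-based call: act only if the action is
        # reversible, and explain the reasoning publicly; otherwise
        # bring the case to the whole mod team.
        return ("remove_with_explanation" if case.decision_reversible
                else "escalate_to_team")
    return "approve"
```

In these terms, Case Three (no rule violation, high predicted harm, reversible removal) lands on "remove with explanation" — exactly the path KingdomKeeper_7 takes.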
Moderator Burnout
Moderator burnout is one of the best-documented phenomena in online community research. The pattern is consistent across platforms and community types: a highly invested, skilled community member takes on moderation responsibilities, performs extensive volunteer labor, gradually experiences emotional exhaustion and reduced capacity, and either steps back (allowing other moderators to carry more burden) or leaves entirely (causing either a governance gap or a rushed recruitment of a less-experienced replacement).
The factors driving burnout in fan community moderation are specific:
Emotional labor demands: The most difficult moderation cases — harassment, threatening content, community crises — require sustained emotional engagement with distressing material. Moderators must read, evaluate, and respond to content that is designed to cause harm, that depicts harm, or that causes harm through its presence in the community. There is no professional buffer (no HR department, no supervisor, no therapy benefit) between the moderator and this material.
Community conflict: Moderators are the visible face of governance decisions that inevitably make some community members unhappy. The 80–250 replies that KingdomKeeper_7 expects in response to his Case Three removal are not emotionally neutral: they include accusations of bias, conspiracy, power abuse, and personal failing. Sustaining this over years requires either emotional detachment that moderators often do not feel (they are, after all, members of a community they care about) or the emotional costs that eventually produce burnout.
Asymmetric labor: The most visible moderation work — the complex cases, the community crises, the difficult explanations — falls disproportionately on the most experienced moderators. KingdomKeeper_7 carries more than his mathematical share of the team's complex cases because his judgment is trusted and his experience is recognized. This is efficient for the community but personally costly for him.
Mireille Fontaine's Manila ARMY server has developed several practices specifically designed to reduce moderator burnout:
Rotation protocols: Certain types of high-stress moderation tasks (reviewing flagged content, responding to harassment reports) are rotated across the moderation team rather than assigned to the same person consistently.
Deliberation norms: The most difficult cases are not decided unilaterally; the whole mod team deliberates in a private channel before action is taken. This distributes emotional burden and produces better decisions.
Explicit time limits: Moderators are explicitly encouraged to set personal time limits for moderation activity. Mireille caps her own daily moderation time at 90 minutes except during server crises, and this cap is known to the team and respected as a boundary rather than treated as a failure of commitment.
13.4 The AO3 Model — Fan Governance at Scale
The Organization for Transformative Works (OTW) is one of the most ambitious fan governance projects in history: a fan-run, legally incorporated nonprofit organization that operates Archive of Our Own and other fan resources, governed through democratic elections, sustained by volunteer labor and fan donations, and committed to a specific philosophy about the legitimacy of fan creative work.
Understanding the OTW requires understanding what it was founded to do, what governance structures it built to do it, and what costs those structures impose.
Origins
The OTW was founded in 2007 by fans who had observed the recurrent problem of fan archives being taken down by commercial interests — fan fiction archives that were shut down because of hosting costs, legal threats, or platform company decisions, with no institutional continuity and no legal protection for the fan creative work they hosted.
The founding insight was that fan creativity needed institutional infrastructure with durability: something that could outlast any individual fan's involvement, that had legal standing to engage with copyright questions, and that could maintain consistent archival standards over time. The OTW was incorporated as a nonprofit in the United States, with the capacity to receive tax-deductible donations, employ paid staff, and enter into legal agreements.
AO3 launched in beta in 2009 and has since grown to millions of registered users and more than 10 million posted works — making it the world's largest fan fiction archive and one of the largest literary archives of any kind. Its governance structure is democratic: the OTW Board of Directors is elected by OTW members (individuals who have donated $10 or more in the previous calendar year), and major policy decisions are made through board deliberation with community input.
Governance Philosophy: "Don't Like, Don't Read"
The foundational governance philosophy of AO3 is encapsulated in the phrase "don't like, don't read" — the principle that readers are responsible for managing their own content consumption, and that the archive's role is to provide robust tagging and filtering tools rather than to curate content at the editorial level.
This philosophy has specific implications for governance:
Permissiveness: AO3 permits a very wide range of content, including adult content, dark themes, and content that explores morally difficult territory without resolution. This permissiveness is deliberate: the OTW's legal team and policy staff have concluded that the editorial alternative — deciding which content is valuable enough to host — would require the OTW to make aesthetic and ideological judgments that would inevitably reflect the biases of whoever made them.
Tags as governance: Because AO3 does not curate content editorially, its primary governance tool is the tag system: a comprehensive, searchable set of content warnings, genre markers, pairing tags, and character tags that allow readers to find content they want and avoid content they don't want. This is governance by metadata rather than governance by exclusion.
The distinction between allowed and endorsed: AO3 explicitly maintains the distinction between permitting content to exist and endorsing its values. A fan fiction story that depicts abusive dynamics, that explores racist historical settings, or that engages with violent themes is not endorsed by AO3's decision to host it; it is simply not excluded by AO3's decision to permit it. This distinction is important for understanding OTW governance but is also a site of ongoing community debate.
🔗 Connection: The "don't like, don't read" principle has significant implications for the "real fan" problem discussed in Chapter 12. If content is governed by tagging rather than editorial curation, then the gatekeeping function — deciding what counts as legitimate fan creative work — is partly distributed to readers through tag use rather than concentrated in an editorial board. This redistribution of the gatekeeping function is not neutral: it has its own biases (toward content that is easily tagged, toward readers with the literacy to use tags effectively), but it is structurally less centralized than editorial curation would be.
The Tag Wrangling Committee
One of the most distinctive elements of AO3's governance is the Tag Wrangling Committee: a group of approximately 500 volunteer tag wranglers whose job is to manage the archive's tag system — ensuring that synonymous tags are linked, that canonical tag forms are established and consistently applied, and that the tag infrastructure that makes AO3's "don't like, don't read" governance philosophy functional is maintained.
Tag wrangling is unglamorous, technically demanding, and enormously consequential. If "Iron Man" and "Tony Stark" are not wrangled as synonymous tags, a reader filtering for Tony Stark stories will miss all the stories tagged with "Iron Man" and vice versa. If "Dean Winchester/Castiel" and "Castiel/Dean Winchester" are not wrangled as canonical forms, the Destiel tag infrastructure fractures and becomes unreliable. The tag wrangling work is what makes AO3's tagging-as-governance viable at scale.
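The core data structure behind wrangling is a mapping from synonymous tags to a canonical form, so that filtering on any synonym finds every matching work. The sketch below is minimal and its mapping and example works are invented; AO3's actual system is far more elaborate (fandom-scoped canonicals, metatags, tag trees).

```python
# Hypothetical synonym table: each entry maps a tag to its canonical form.
SYNONYMS = {
    "Iron Man": "Tony Stark",
    "tony stark": "Tony Stark",
    "Castiel/Dean Winchester": "Dean Winchester/Castiel",
}

def canonical(tag: str) -> str:
    """Resolve a tag to its canonical form (unwrangled tags pass through)."""
    return SYNONYMS.get(tag, tag)

def filter_works(works: list, tag: str) -> list:
    """Return works matching the tag under any wrangled synonym."""
    want = canonical(tag)
    return [w for w in works
            if any(canonical(t) == want for t in w["tags"])]

works = [
    {"title": "A", "tags": ["Iron Man"]},
    {"title": "B", "tags": ["Tony Stark"]},
    {"title": "C", "tags": ["Steve Rogers"]},
]
```

With the table in place, filtering on either "Iron Man" or "Tony Stark" returns both A and B; without it, each query silently misses half the archive — which is exactly the failure wrangling exists to prevent.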
Vesper_of_Tuesday has been an AO3 tag wrangler for four years. She wrangles primarily in the Supernatural fandom, which means she is responsible for maintaining the tag infrastructure for the archive she primarily uses as an author. This is a specific form of contributory capital — labor that benefits the community and that requires technical knowledge, institutional understanding, and consistent time investment — that most community members are unaware of.
"People don't think about tag wrangling when they're reading," Vesper has said. "They just type a tag and the archive finds what they want. But that happens because someone sat down and connected three hundred synonymous versions of the same tag to a canonical form. That someone is usually me, or one of my colleagues. It's the infrastructure work that keeps the lights on."
The Tag Wrangling Committee's governance is a model of volunteer administrative governance: committee leads with defined responsibilities, regular training for new wranglers, documented policies, and mechanisms for escalating disagreements about canonical tag decisions. It is less democratic than the OTW Board (tag wrangling decisions are made by the committee, not by the broader community) but it is accountable to the OTW's overall governance structure through the committee's reporting relationship to the OTW Board.
What the OTW Costs
The OTW model works, and it works remarkably well for an organization of AO3's scale run almost entirely by volunteers. But it works at significant cost.
Financial cost: AO3 runs on fan donations. The archive receives approximately $1.5–2 million annually from fan donations, which funds server infrastructure, paid staff, and legal defense. This is a significant ongoing financial ask of the fan community, and it requires continuous fundraising.
Labor cost: The OTW operates through approximately 700 volunteers in addition to a small professional staff. These volunteers include tag wranglers, translators, policy committee members, elected Board of Directors members, communications staff, technical staff, and others. The labor demands on serious OTW volunteers are substantial — comparable to a part-time job — and volunteer burnout affects the OTW at the institutional as well as the individual level.
Governance cost: Democratic governance is slower and more cumbersome than top-down governance. When the OTW needs to make a significant policy decision — whether to implement a new content policy, how to respond to a legal challenge, whether to make technical changes to the archive — the consultation process (community input, board deliberation, vote) takes weeks or months. During periods of controversy, the consultation process can consume enormous volunteer bandwidth and community emotional energy.
The limits of voluntarism: The OTW's commitment to voluntarism as a governance value has occasionally created tension with the operational needs of an organization managing a multi-million-dollar archive. Decisions that would benefit from professional expertise have sometimes been delayed or complicated by the need to route them through volunteer governance structures that were not designed for that scale.
13.5 Discord Server Governance
Discord, the platform Mireille's Manila ARMY server runs on, provides a specific governance infrastructure that shapes how community governance can be organized. Understanding Discord's governance architecture helps explain both the possibilities and constraints of governance in Discord-based fan communities.
Discord's Built-In Governance Tools
Role hierarchy: Discord allows server administrators to create a hierarchy of roles, each with defined permissions. In Mireille's server, the hierarchy runs: Admin (Mireille and one other) → Moderator (five people) → Senior Member (regular participants who have been in the server for more than 6 months and participated consistently) → Member (anyone who has agreed to the server rules and completed the verification process) → New Member (provisional status, limited channel access, for first two weeks). Each level has specific channel access, posting permissions, and interaction permissions.
The role hierarchy is a governance tool: it encodes the community's trust hierarchy in a form that the platform enforces automatically. A Senior Member can access channels that Members cannot; a Moderator can take actions that Members cannot. This automatic enforcement reduces the labor of governance by making permission levels a structural rather than a negotiated feature.
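The idea that the platform enforces the trust hierarchy "structurally rather than negotiated" can be made concrete with a toy permission model. The role names below mirror Mireille's server as described above, but the specific permission sets and channel actions are invented for illustration — Discord's actual permission system uses per-channel overrides and bit flags:

```python
# Toy sketch of a Discord-style role hierarchy as platform-enforced
# permissions. Permission names are invented for the example.

ROLE_PERMS = {
    "New Member":    {"read_public"},
    "Member":        {"read_public", "post_public"},
    "Senior Member": {"read_public", "post_public", "read_senior_lounge"},
    "Moderator":     {"read_public", "post_public", "read_senior_lounge",
                      "delete_messages", "read_mod_channel"},
    "Admin":         {"read_public", "post_public", "read_senior_lounge",
                      "delete_messages", "read_mod_channel", "manage_roles"},
}

def can(role, action):
    """Permission is a structural property of the role, not a negotiation."""
    return action in ROLE_PERMS.get(role, set())

print(can("Member", "read_senior_lounge"))         # False
print(can("Senior Member", "read_senior_lounge"))  # True
print(can("Moderator", "delete_messages"))         # True
```

Because the check happens in code, no moderator ever has to argue with a Member about why they cannot see the senior lounge — which is exactly the labor saving the paragraph above describes.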
Channel architecture: Discord's organization of community interaction into distinct channels is itself a governance mechanism. When Mireille separates streaming coordination, general discussion, fan art sharing, and off-topic conversation into distinct channels, she is governing what happens in each space through architectural choices rather than rule enforcement. A conversation about streaming strategy does not contaminate the fan art channel; off-topic conversation does not clutter the announcement channels.
Channel architecture has evolved over three years of server operation. Mireille began with 8 channels; the server now has 34, including specialized channels for different member demographics, different BTS album periods, a channel for server alumni who are less active but still want connection, and a private channel for moderators. Each expansion of channel architecture was a governance decision: recognizing a type of community interaction that was happening in an undifferentiated space and giving it a dedicated, well-governed home.
Bot automation: Mireille's server uses several Discord bots for automated governance functions: an anti-spam bot that automatically removes duplicate messages and rate-limits new members' posting speed, a verification bot that requires new members to read and agree to the server rules before gaining Member status, a logging bot that records all moderation actions for auditing purposes, and an activity bot that tracks participation patterns to inform governance decisions.
Bot automation handles the high-volume, low-judgment governance tasks, freeing the human moderation team for the cases that require judgment. The anti-spam bot processes thousands of cases a week that would otherwise require manual review; the human moderators process perhaps fifty cases a week that require actual judgment. This division of labor is essential for governance at the 40,000-member scale.
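The two anti-spam behaviors mentioned above — duplicate removal and rate-limiting posting speed — are simple enough to sketch. The thresholds and class name below are invented; real Discord moderation bots are configurable and considerably more sophisticated:

```python
# Toy sketch of an anti-spam bot's two high-volume, low-judgment rules:
# drop exact duplicates, and rate-limit messages per sliding time window.
# Thresholds are invented for the example.
from collections import deque
import time

class AntiSpamBot:
    def __init__(self, max_msgs=3, window_secs=10.0):
        self.max_msgs = max_msgs
        self.window = window_secs
        self.recent = {}     # user -> deque of recent message timestamps
        self.last_text = {}  # user -> text of the user's previous message

    def allow(self, user, text, now=None):
        """Return True if the message passes; False if it should be removed."""
        now = time.monotonic() if now is None else now
        # Rule 1: remove exact duplicates of the user's previous message.
        if self.last_text.get(user) == text:
            return False
        self.last_text[user] = text
        # Rule 2: rate-limit to max_msgs per sliding window.
        q = self.recent.setdefault(user, deque())
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_msgs:
            return False
        q.append(now)
        return True

bot = AntiSpamBot(max_msgs=2, window_secs=10.0)
print(bot.allow("newbie", "hi", now=0.0))     # True
print(bot.allow("newbie", "hi", now=1.0))     # False (duplicate)
print(bot.allow("newbie", "hello", now=2.0))  # True
print(bot.allow("newbie", "hey", now=3.0))    # False (rate limit: 2 per 10s)
```

Everything a rule like this catches never reaches a human; everything it cannot decide — sarcasm, harassment, context-dependent judgment calls — still does, which is the division of labor the paragraph above describes.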
The Evolution of Mireille's Rules
The evolution of Mireille's server's written rules over three years is a documented history of governance learning:
Year 1 (2019): 5 rules. The original five rules were simple: be kind, no spam, no sharing personal information without consent, no explicit sexual content, and no discussions of illegal activities. These rules were adequate for a 2,000-member server in which Mireille knew most members personally.
Year 2 (2020): 12 rules. The server crossed 15,000 members during BTS's global peak. The original 5 rules were inadequate for the scale and for the diversity of members who no longer shared Mireille's cultural and social context. New rules addressed: how to report harassment (process was unclear), how to handle disagreements about BTS content interpretations (no norm existed), how to navigate language barriers in a multilingual server (no guidance), how to handle members who were experiencing personal crises in the server space (raised by a moderator who had encountered several such cases), and how to credit fan creative work shared in the server (no attribution norm existed).
Year 3 (2021-present): 23 rules plus an FAQ. The server's rules now address a substantially broader range of situations, including: specific anti-gatekeeping provisions (Section 12.7 in this text), a community-negotiated norm about discussing BTS members' personal lives (restricted to fan-relevant public information), a detailed harassment reporting process, a rule about respecting moderator decisions in public channels and taking appeals to DMs, and provisions specific to ARMY's global organizing context (how to handle differing national laws that affect community content, e.g., different censorship regimes in members' home countries).
⚠️ Common Pitfall: The expansion of written rules from 5 to 23 can look like governance improvement — more rules means more sophisticated governance. This is not necessarily true. Rules proliferation can produce a legalistic atmosphere in which members focus on technical rule compliance rather than on the community values the rules are designed to protect. Mireille is aware of this risk: "We want members to understand why the rules exist, not just what they say. If someone follows the letter of every rule but violates the spirit of being a good community member, the rules haven't done what I need them to do." This is why the FAQ, which explains the reasoning behind key rules, is as important to Mireille as the rules themselves.
13.6 When Governance Fails
Fan community governance fails in predictable and documented ways. Understanding failure modes is as important for governance design as understanding what makes governance work.
Mod Capture
Mod capture occurs when a moderation team begins enforcing rules selectively — consistently in ways that benefit their own faction, social circle, or ideological perspective rather than the community as a whole. It is a form of governance corruption.
Mod capture does not typically involve dramatic or obvious corruption. It manifests in subtle patterns: posts by fans in the moderators' social circle receive more lenient treatment in ambiguous cases; posts by fans who have criticized the moderators receive closer scrutiny; moderation decisions in factional conflicts consistently favor one side.
The difficulty of diagnosing mod capture is that the behaviors it produces (inconsistent enforcement, selective scrutiny) are also the behaviors produced by unconscious bias without corrupt intent. A moderation team that is genuinely trying to be fair can still produce systematically biased outcomes if all its members share the same demographic background, social circle, and ideological assumptions. The difference between corrupt mod capture and biased-but-good-faith moderation may matter morally but not functionally: the community experiences the same harm either way.
Community responses to perceived mod capture typically escalate in steps: complaints in public threads (which are often removed by the moderators being complained about), complaints to platform trust and safety (which may or may not respond), and community exit/migration. When enough high-value community members exit or when the story becomes publicly visible, the platform may intervene — but platform intervention in subreddit or Discord moderation is uncommon and unpredictable.
Mod Abdication
Mod abdication is the failure mode of insufficient enforcement: moderators who are present (their names appear in the mod list) but who are no longer consistently applying rules or maintaining community standards. It is often the endpoint of burnout — moderators who have not formally resigned but have reduced their engagement to near zero.
The consequences of mod abdication unfold gradually. Without consistent enforcement, the community norms that rules were designed to protect begin to erode. Community members who had been deterred by the expectation of sanctions begin testing boundaries. Early norm violations attract neither enforcement nor community disapproval, signaling to others that enforcement is unavailable. The quality of community discourse declines, and members who came for quality discourse reduce their participation or exit.
Mod abdication is particularly damaging in communities that depend on experienced moderation for complex judgment calls — exactly the communities where experienced moderators are most valuable. A community with strong norms that needs complex judgment calls for edge cases suffers more from mod abdication than a community with simple, clear-cut rules where automated tools can handle most enforcement.
Community Rebellion Against Governance
Communities do not always accept governance decisions passively. When significant community factions believe that governance is unjust — whether through mod capture, arbitrary rule-making, or disconnection from community values — organized resistance can form.
The forms community rebellion takes vary by platform and community scale:
- Thread-based contestation: Extended public argument in comment threads challenging specific decisions
- Rule-lawyering: Using the literal text of written rules to argue that specific actions are technically permitted regardless of moderator intent
- Organized exit: Coordinated migration of community members to a new space, explicitly in protest of governance decisions
- Platform appeals: Coordinated reporting of the moderation team to platform trust and safety
- Counter-modding: Creating an unofficial community-run information hub that documents governance decisions and builds alternative community infrastructure
The Kalosverse governance crisis referenced in the chapter opening will develop through Chapter 14's conflict material, but its basic shape is available here: KingdomKeeper_7's decision on a controversial moderator action splits the community along pre-existing fault lines, generating sustained contestation that the moderator team's existing tools cannot resolve.
🤔 Reflection: Think about a community governance conflict you have observed — online or offline. Which of these failure modes does it most closely resemble? What did it reveal about the limits of the governance structure? What would a better-designed governance system have done differently?
The Specific Vulnerabilities of Volunteer Governance
Volunteer fan governance has vulnerabilities that would not affect paid professional governance:
Succession fragility: When a founding moderator (like KingdomKeeper_7 or Mireille) leaves the community, the institutional knowledge and relational capital they held leaves with them. Paid professional governance builds institutional memory into documented processes; volunteer governance often holds it in the heads of specific people.
Legal exposure: Fan community moderators who make governance decisions in good faith may have personal legal exposure in specific cases — for example, if their community inadvertently becomes a platform for legally actionable content. Paid platform staff operate under institutional legal protection; volunteer moderators typically do not.
Resource constraints: Volunteer governance cannot easily scale up resources in response to crises. A sudden spike in moderation demand (a celebrity post drives a traffic surge to the Kalosverse; a BTS album release floods Mireille's server) cannot be addressed by hiring additional moderators — it must be managed with the existing volunteer team, however exhausted.
13.7 Platform Governance vs. Community Governance
The multi-layer governance structure described in Section 13.2 is asymmetric: platform governance is always the highest layer, and when platform policy conflicts with community governance, platform policy wins. This asymmetry has structural consequences that fan communities must navigate regardless of how well their community governance works.
Three Modes of Platform-Community Governance Conflict
Mode 1: Platform restricts what community allows. The Tumblr December 2018 NSFW ban is the paradigm case. Tumblr's community of fan communities had developed extensive, nuanced norms around adult content: clear community standards, age-gating mechanisms (informal in many cases), tagging systems, and opt-in structures. When Tumblr banned all NSFW content, these community governance decisions were overridden instantly by platform policy. The communities' self-governance of adult content — developed over years of careful norm-building — became irrelevant in 24 hours.
Mode 2: Platform permits what community prohibits. When Mireille's server prohibits a type of content that Discord's Terms of Service allow, she can enforce that prohibition within her server. Discord will not override her decision. But the asymmetry means that Discord could, at any time, change its Terms of Service to require her to permit the content. Her prohibition exists at the platform's sufferance.
Mode 3: Platform fails to enforce what community needs. Fan communities are often better positioned than platforms to understand the specific harms that require governance in their context. When r/Kalosverse's moderators identify a specific harassment pattern that does not trigger Reddit's automated content moderation, they can sometimes take community-level action — but they cannot force Reddit to update its systems to catch it systematically. Community governance can partially fill gaps in platform governance, but it cannot compel platforms to improve their governance on behalf of the community.
The Power Imbalance and Fan Responses
The power imbalance between platforms and communities is structural and significant. Fan communities have tried several strategies to address it:
Platform diversification: As discussed in Chapter 11, distributing community activity across multiple platforms reduces dependence on any single platform and therefore reduces vulnerability to any single platform's governance decisions.
Organizational formalization: The OTW's approach — formalizing fan governance as a legally incorporated nonprofit with institutional continuity independent of any platform — is the most ambitious response to platform dependency. AO3 runs on OTW-owned servers; platform governance decisions (except for internet infrastructure regulation) do not directly affect it.
Platform advocacy: Fan communities sometimes organize to influence platform governance decisions that affect them. The OTW participates in public consultations on copyright law and digital governance. Large fan communities have submitted organized feedback to platform companies during policy change consultation periods. This advocacy rarely succeeds in changing platform decisions, but it has occasionally influenced the implementation of changes.
Documentation and institutional memory: Maintaining records of platform governance changes and their effects on fan communities — as fan studies researchers, fan wikis, and community historians do — creates a body of evidence that can inform future governance discussions and serves as institutional memory for communities planning their governance structures.
🌍 Global Perspective: The platform-governance power imbalance has different implications for fan communities in different national contexts. ARMY communities in South Korea, where HYBE is headquartered, interact with platform governance in the context of Korean law, which affects what content HYBE's own platforms can host. ARMY communities in Brazil and the Philippines — Mireille's and TheresaK's contexts — interact with platform governance in the context of national laws that differ from US law in significant ways, including around copyright, defamation, and digital privacy. When platforms make governance decisions based primarily on US legal requirements (as most major platforms do), fan communities in non-US jurisdictions may find that those decisions fail to protect them from harms that would be governed differently under their national legal frameworks.
13.8 Chapter Summary
Community governance is not a peripheral feature of fan community life — it is constitutive of it. Without governance, the commons that fan communities share (quality discourse, safety, creative space, institutional memory) are subject to tragedy-of-the-commons dynamics that produce either chaotic hostility or gradual decline. With governance, communities can sustain the conditions under which genuine fan engagement — the kind that has produced the Kalosverse's years of quality discussion, Mireille's ARMY server's global organizing capacity, and Vesper_of_Tuesday's fifteen years of creative production — is possible.
Governance is necessary but labor-intensive. The labor that sustains fan community governance is primarily volunteer labor — unglamorous, underpaid, often invisible. KingdomKeeper_7's 11 PM moderation queue, Mireille's rule evolution over three years, Vesper_of_Tuesday's tag wrangling — all of this is governance labor, and its costs are borne by specific people who have chosen to do it for reasons that are not primarily financial. Understanding this labor as labor — as a form of work with real costs and real value — is essential for thinking about the sustainability of fan governance.
Volunteer governance has specific vulnerabilities. Mod capture, mod abdication, succession fragility, and resource constraints are not accidents or individual failures; they are structural properties of governance systems that depend on voluntary effort without professional infrastructure. Communities that design their governance to mitigate these vulnerabilities — through rotation protocols, deliberation norms, succession planning, and explicit burnout prevention — are more likely to sustain governance quality over time.
Multi-layer governance creates productive and destructive friction. The four-layer structure (platform rules → community written rules → moderator judgment → community norms) creates friction because each layer has different legitimacy, different origin, and different enforcement mechanisms. This friction can be productive: the need to explain moderator judgment against written rules, and written rules against community norms, creates accountability and transparency. It can also be destructive: when layers conflict, the resulting uncertainty can undermine governance legitimacy and generate community rebellion.
Platform governance always trumps community governance. The structural power imbalance between platforms and communities means that even well-designed, democratically legitimate fan governance structures are ultimately dependent on platform companies' decisions. The Tumblr 2018 ban demonstrated this in its starkest form: years of carefully developed community governance became irrelevant overnight. The responses available to fan communities — platform diversification, organizational formalization, advocacy — are partial mitigations rather than solutions. The power imbalance is structural and persistent.
Chapter 14 examines the failure mode toward which this chapter has been pointing: fan community conflict. If Chapter 13 describes how governance maintains community order, Chapter 14 examines what happens when governance is insufficient, overwhelmed, or itself becomes a source of conflict — when the battle over rules becomes inseparable from the battle over who belongs.
Chapter 15 extends the analysis to toxic fandom: the specific conditions under which fan community governance breaks down completely and fan collective behavior becomes organized harassment, coordinated abuse, and the weaponization of community structures against specific targets.
Chapter 32 returns to the AO3 governance model in detail, examining its democratic elections, its policy history, and the specific governance challenges it has faced as it has scaled to become one of the largest literary archives in existence.
End of Chapter 13