Case Study 34-2: Content Moderation Failures — The Rohingya Genocide and Facebook in Myanmar
Overview
In August 2017, Myanmar's military launched a systematic campaign of mass killings, sexual violence, and the burning of villages against the Rohingya Muslim minority in Rakhine State. Approximately 700,000 Rohingya fled to Bangladesh in the weeks following the attacks; thousands were killed; systematic rape and torture were documented by numerous independent investigators. The United Nations Fact-Finding Mission on Myanmar concluded in 2018 that Myanmar military officials should face investigation and prosecution for genocide, crimes against humanity, and war crimes.
The UN report also found that Facebook had played a "determining role" in the violence, describing how the platform had been used to spread hate speech and incitement that "contributed to the atrocity." This finding placed Facebook at the center of one of the most devastating real-world consequences of platform content moderation failure. The Rohingya case has since become the most extensively documented example of how inadequate content moderation, in a context of ethnic tension and state-sponsored incitement, can contribute to mass violence.
This case study examines how Facebook's systems failed in Myanmar, the specific moderation shortcomings that allowed incitement to circulate unchecked, the company's response, and what this case reveals about the systemic requirements for content moderation to function in high-risk, non-English-language markets.
Background: Facebook as the Internet in Myanmar
Myanmar's experience with Facebook requires context about its unique relationship with the platform. Until 2010, Myanmar was under military junta rule with minimal civilian internet access. The rapid expansion of mobile internet following democratic reforms occurred simultaneously with Facebook's global expansion. For tens of millions of Burmese users, Facebook and the internet arrived together — Facebook was not a supplement to existing information infrastructure but was itself the primary information environment.
By 2014, Facebook had become the dominant source of news, information, and public discourse in Myanmar. Facebook was available free (zero-rated) on many mobile data plans, meaning users without unlimited data could access it without cost, while visiting other websites and news sources consumed metered, paid data. This zero-rating arrangement gave Facebook a market position that was not merely dominant but effectively monopolistic for information access.
In this context, Facebook's content moderation policies effectively served as the information governance framework for a country of 55 million people going through an extraordinarily tense period of political transition, ethnic conflict, and information scarcity. The stakes for content moderation errors — in either direction — were existential.
The Information Environment: Rohingya Dehumanization Online
The Rohingya have long been subject to discrimination and statelessness in Myanmar, where they are denied citizenship and described by authorities as illegal immigrants from Bangladesh. Anti-Rohingya sentiment, including the characterization of Rohingya as racially and religiously threatening to Buddhist Burmese society, had deep roots predating Facebook.
What Facebook's arrival changed was the scale and speed at which hateful content could spread. Before social media, hate speech circulated through pamphlets, sermons, and community networks. On Facebook, content could reach millions of people in hours. The platform's engagement algorithm — which surfaced content that generated strong emotional reactions — had the predictable effect of amplifying the most provocative and emotionally charged content, including dehumanizing posts about the Rohingya.
Documented content on Facebook in the years preceding the 2017 attacks included:
- Posts describing Rohingya as "dogs," "snakes," and "kalar" (a derogatory term for Muslims in Myanmar)
- Calls for violence against Rohingya communities and for the exclusion of Rohingya from Myanmar
- False claims about Rohingya violence and criminal behavior (fabricated stories attributing mass rapes, murders, and attacks on Buddhist communities to Rohingya)
- Coordinated pages spreading misinformation about Rohingya and inciting communal violence
- Content from military accounts and affiliated pages promoting the military's narrative of Rohingya as security threats
Several organizations, including the human rights organization Fortify Rights and researchers at Harvard University and the Oxford Internet Institute, documented extensive anti-Rohingya hate speech on Facebook from at least 2013 onwards and raised concerns with Facebook.
Facebook's Moderation Failures: A Detailed Account
The failure of Facebook's content moderation in Myanmar was not a single event but a pattern of failures extending over several years.
Inadequate Burmese-Language Capacity
The most fundamental failure was the absence of adequate Burmese-language content moderation capacity. Facebook's moderation systems — both automated and human — were developed primarily for English-language content. When Facebook expanded into Myanmar, the company did not scale its moderation infrastructure proportionally.
Reports from journalists and researchers in 2014-2017 consistently described Facebook as having only a handful of Burmese-speaking staff involved in content review. In a market where tens of millions of users were communicating in Burmese daily, this was an indefensibly small investment. Automated systems for Burmese were also underdeveloped: Burmese text in Myanmar was split between Unicode and the widely used non-standard Zawgyi encoding, which broke standard text-processing tools, and classifiers for detecting hate speech had not been developed for Burmese.
This meant that hate speech, incitement, and coordinated inauthentic behavior in Burmese could circulate without effective review. Content that would have been removed under the Community Standards as applied to English-language posts circulated unchecked because no system existed to identify it.
Unresponsiveness to Civil Society Warnings
Beginning at least in 2013, civil society organizations, journalists, and researchers raised concerns with Facebook about anti-Rohingya content and its potential contribution to real-world violence. Organizations including Fortify Rights, Human Rights Watch, and the International State Crime Initiative documented their communications with Facebook.
Facebook's responses were consistently slow and inadequate. The company acknowledged concerns but did not commit resources commensurate with the documented risk. Internal communications from Facebook employees (disclosed through subsequent litigation) showed that some employees were aware of concerns and advocated for greater investment in Myanmar moderation, but these requests did not translate into proportionate action.
Facebook's posture in this period reflected a broader organizational culture that prioritized growth metrics and expansion into new markets over the safety infrastructure necessary to make those markets safe.
Platform Architecture Effects
Beyond content moderation specifically, Facebook's recommendation algorithm played a role in amplifying problematic content. Content generating strong emotional engagement — including fear and outrage — received more algorithmic amplification than less engaging content. In a context of ethnic tension, this meant that the most provocative anti-Rohingya posts reached larger audiences than they would have through organic sharing alone.
The algorithm did not distinguish between engagement driven by outrage and dehumanization versus engagement driven by legitimate interest. In this sense, the platform's core business logic — maximize engagement — contributed to the problem even without any individual moderation failure.
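The dynamic described above can be illustrated with a toy ranking function. This is a hypothetical sketch, not Facebook's actual scoring model, and the weights and engagement figures are invented for illustration: if a post's score is a weighted sum of engagement signals with no term for harm, a rumor that provokes outrage reliably outranks sober reporting.

```python
# Toy illustration of engagement-based ranking (hypothetical sketch;
# not Facebook's actual algorithm). Posts are scored purely by
# engagement signals, with no term for content harm.

def engagement_score(post: dict) -> float:
    # Weights are illustrative assumptions, not real platform values.
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"])

posts = [
    {"id": "neutral_news",  "likes": 900, "comments": 40,  "shares": 30},
    {"id": "outrage_rumor", "likes": 400, "comments": 350, "shares": 500},
]

# The outrage-driven post scores 3950 vs. 1170 and is ranked first,
# even though the neutral post attracted more individual "likes".
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # → ['outrage_rumor', 'neutral_news']
```

The point of the sketch is structural: nothing in the objective function distinguishes why people are engaging, so content optimized for outrage is systematically advantaged.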
The State Actor Problem
Some of the most consequential hate speech and incitement on Facebook was produced by state actors: the Myanmar military (Tatmadaw) and entities affiliated with it. Military-affiliated Facebook pages disseminated disinformation about Rohingya attacks on Buddhist communities, glorified military operations, and promoted the narrative of Rohingya as invaders.
State actor content creates particular challenges for content moderation. Platforms have historically been reluctant to take aggressive action against state actors for several reasons: state-actor content may fall under newsworthiness considerations, platforms depend on states for regulatory compliance and market access, and states may retaliate by restricting the platform itself. Facebook's policies on "dangerous organizations" did not clearly cover state military forces.
In 2018, following the genocide, Facebook did remove the accounts of senior Myanmar military commanders, including Commander-in-Chief Senior General Min Aung Hlaing, in one of the largest state actor enforcement actions in the platform's history. But this enforcement came years after the documented incitement.
The UN Fact-Finding Mission's Findings on Facebook
The UN Fact-Finding Mission's 2018 report addressed Facebook's role explicitly and at unusual length for a human rights report focused on governmental atrocities:
"The Mission is particularly concerned about the role of social media, in particular Facebook, in inflammatory and divisive discourse. [...] Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the internet. [...] The speed with which rumours and fake news spread in a context of weak and biased traditional media, as well as widespread illiteracy, creates a particularly fertile environment for divisive content. [...] Facebook and the authorities need to do more."
The Mission found that Facebook:
- Was used to spread hate speech and incitement targeting Rohingya
- Provided a platform for coordinated inauthentic behavior campaigns promoting military narratives
- Had failed to act adequately on reports of harmful content
- Had failed to invest in Burmese-language moderation capacity proportionate to the risk
The Mission recommended that Facebook take remedial action, engage civil society in Myanmar, and invest substantially in safety infrastructure for the country.
Facebook's Response
Facebook's public response evolved significantly as the documentation of its failures accumulated.
In 2018, following the UN Mission report and extensive media coverage, Facebook acknowledged that it had not done enough to prevent hate speech in Myanmar. The company:
- Removed military-affiliated accounts and pages (including the Commander-in-Chief's account)
- Announced significant hiring of Burmese-speaking reviewers
- Invested in Burmese-language content moderation tools
- Engaged external organizations to help develop content review guidelines for Myanmar-specific context
- Commissioned an independent human rights impact assessment (completed in 2018) that found Facebook had not met its responsibility to respect human rights under the UN Guiding Principles on Business and Human Rights
The response was widely characterized as too late — coming after the mass atrocities had already occurred — and as insufficient in its treatment of the underlying architectural and investment decisions that had produced the failure.
The Lawsuit: Rohingya Survivors vs. Meta
In December 2021, Rohingya survivors filed legal actions against Facebook/Meta in the United States and the United Kingdom. The claims alleged that Facebook's algorithms knowingly amplified hate speech and incitement, contributing to the genocide, and sought billions of dollars in damages.
In the United States, Meta argued that Section 230 of the Communications Decency Act immunized it from liability for user-generated content and from claims based on its algorithmic decisions. US courts have broadly accepted this argument in early proceedings.
In the UK, the lawsuit faced different legal standards. Rohingya plaintiffs in the UK argued that Facebook's active algorithmic amplification of hate speech went beyond passive hosting of user content and should not receive the same liability protections. These cases remained in early proceedings as of 2024.
The cases raise fundamental questions about the relationship between Section 230 immunity and algorithmic amplification: should a platform be protected from liability not only for hosting harmful user content but also for actively promoting it to wider audiences through algorithmic recommendation?
Broader Implications: The Non-English Market Problem
The Myanmar case is the most extreme documented example of a pattern that researchers have identified across multiple markets: platforms are systematically under-invested in non-English language content moderation, creating disparate safety outcomes for non-English speaking users.
Researchers and civil society organizations have documented similar moderation gaps — though with less extreme real-world consequences — in:
- Ethiopia: Anti-Tigrinya and anti-Amhara incitement circulating during the Tigray conflict
- India: Anti-Muslim and anti-Dalit content in Hindi, Bengali, and other Indian languages
- Pakistan: Political manipulation through social media in Urdu
- Sahel region: Disinformation in French and local languages contributing to political instability
The common structural explanation across these cases is that platform investment in moderation — human reviewers with language and cultural competency, automated system development, civil society engagement — is not proportional to the risk profile of specific markets. Investment follows advertising revenue, which follows economic development. High-risk markets in the Global South often combine large, politically volatile user populations with limited advertising revenue, producing systematic under-investment.
This market structure problem does not have a purely market solution. Platforms that treat moderation investment as a cost center, determined by advertiser revenue, will systematically underinvest in markets where the harm potential most exceeds the economic motivation to prevent harm. Addressing this requires either regulatory requirements mandating investment proportional to user population and risk, or a fundamental shift in platform governance philosophy that treats safety as infrastructure rather than as a margin item.
Discussion Questions
- The UN Fact-Finding Mission found that Facebook "played a determining role" in the incitement environment contributing to the Rohingya genocide. How should "determining role" be understood? Is this equivalent to saying Facebook caused the genocide? What causal account is most accurate?
- Facebook's moderation investment in Myanmar was inadequate because Myanmar was not a high-revenue advertising market, even though it had a large and at-risk user population. Is this an acceptable basis for resource allocation in a company that provides essential public communication infrastructure? What ethical framework should govern how platforms allocate moderation resources?
- Section 230 has been argued to immunize Facebook from liability for its role in the Rohingya crisis. Should algorithmic amplification of user-generated incitement content be treated the same as passive hosting for Section 230 purposes? What legal or policy reform would address this?
- Facebook's response — investing in Burmese-language moderation, removing military accounts, commissioning a human rights impact assessment — came after the mass atrocities. What would earlier warning systems have needed to look like to trigger adequate response before the violence? What role could external actors (UN, civil society, governments) have played?
- Researchers have documented similar moderation gaps in multiple high-risk, low-revenue markets. What global regulatory framework — national law, international treaty, or other mechanism — could most effectively address the structural under-investment problem? What obstacles would each approach face?
- The Myanmar case involved both content moderation failure (failure to remove incitement) and algorithmic amplification (active promotion of incitement content through recommendation). How should responsibility be allocated between these two failure modes? Does the answer affect legal or regulatory analysis?
Key Takeaways
- Facebook's inadequate content moderation in Myanmar — particularly the absence of Burmese-language capacity — allowed hate speech and incitement against the Rohingya Muslim minority to circulate unchecked for years.
- The UN Fact-Finding Mission on Myanmar (2018) found that Facebook had played a "determining role" in creating an environment for incitement, contributing to conditions for mass atrocities characterized by the UN as genocide.
- The specific moderation failures included: inadequate Burmese-speaking reviewer workforce, underdeveloped Burmese-language automated tools, unresponsiveness to civil society warnings, and failure to address state-actor (military) accounts disseminating disinformation and incitement.
- Facebook's algorithmic recommendation system, which amplified emotionally engaging content regardless of its harm potential, compounded the moderation failure by giving incitement content greater reach than it would have achieved organically.
- Facebook's remedial response (hiring more reviewers, removing military accounts, commissioning a human rights audit) came only after the mass atrocities and therefore could not prevent them.
- The case illustrates a structural problem in platform governance: moderation investment follows advertising revenue, creating systematic under-investment in high-risk, low-revenue markets in the Global South. Addressing this requires regulatory intervention or a fundamental shift in platform governance philosophy.
- Lawsuits by Rohingya survivors raised unresolved questions about whether Section 230 immunizes platforms from liability for algorithmic amplification of hate speech — a legal question with significant implications for platform accountability.