Case Study 16-1: Facebook and the Rohingya Genocide in Myanmar
Chapter 16: Digital Media, Social Networks, and Viral Spread
Overview
Between 2016 and 2017, the Myanmar military (Tatmadaw) conducted what the United Nations later determined was a campaign of ethnic cleansing against the Rohingya Muslim minority population in Rakhine State. More than 700,000 Rohingya fled to Bangladesh. UN investigators documented mass killings, systematic rape used as a weapon of war, and the burning of hundreds of villages. In August 2018, the UN Fact-Finding Mission on Myanmar published its report. Among its findings was one that concentrated attention on a technology company headquartered in Menlo Park, California: Facebook had played "a determining role" in spreading the hate speech and dehumanizing propaganda that provided the ideological environment for the genocide.
This case study examines how that happened, what Facebook knew and when, and what it did and did not do. It is not a case study about whether Facebook caused the genocide — the Tatmadaw's violence had roots in decades of state-sponsored discrimination and ethnic nationalist ideology. It is a case study about how a social media platform's architecture, inadequate moderation infrastructure, and algorithmic amplification system functioned as a force multiplier for genocide-enabling propaganda.
Country Context: Facebook as the Internet
To understand why Facebook's role in Myanmar was uniquely consequential, it is necessary to understand the specific context of Myanmar's digital transition.
Myanmar was, until 2011, one of the most communication-isolated countries in the world. Under military rule, internet access was restricted, expensive, and heavily monitored. Civilian mobile phone ownership was negligible — SIM cards in 2010 cost as much as $2,000 on the black market. The country's media landscape was entirely state-controlled, and independent journalism was illegal.
Beginning in 2012, as part of the political liberalization process under President Thein Sein's quasi-civilian government, Myanmar began opening its telecommunications sector. In 2014, the government issued licenses to foreign telecommunications providers, and within two years, millions of Burmese citizens obtained their first mobile phones. The price of SIM cards collapsed from thousands of dollars to less than $1.50. By 2016, Myanmar had gone from near-zero internet penetration to approximately 10 million active Facebook users in a population of roughly 54 million.
The critical fact: for most of these new internet users, Facebook was not one application among many. Facebook was the internet. In a country with no established digital media ecosystem, no tradition of online news publishing, and a population encountering smartphones for the first time, Facebook's design as a unified communications and information environment (news, messaging, community groups, all in one place) meant that its content was effectively the entire digital information landscape.
Facebook was well aware of this dynamic from its own market research. The company actively pursued the Southeast Asian market as a growth opportunity, deploying its Free Basics initiative (which zero-rated access to Facebook while access to the rest of the internet incurred normal data charges) to entrench its dominant position. The 2021 genocide lawsuit against Meta specifically cited the company's deliberate effort to maximize Facebook penetration in Myanmar despite being warned about the consequences.
The Anti-Rohingya Propaganda Campaign
The propaganda that circulated on Facebook in Myanmar from approximately 2012 onward drew on a pre-existing ideological tradition of anti-Rohingya Buddhist nationalism, but it was qualitatively amplified by Facebook's architecture.
The Nationalist Buddhist Monk Network
The most prominent single figure in the online anti-Rohingya propaganda campaign was Ashin Wirathu, a Buddhist monk from Mandalay who led a nationalist movement called Ma Ba Tha (the Association for the Protection of Race and Religion). Wirathu had previously served time in prison for inciting anti-Muslim violence. After his release, he used Facebook as his primary platform. Time magazine's 2013 cover story on him bore the headline "The Face of Buddhist Terror," and his followers adopted the phrase "the Buddhist bin Laden" as a badge of honor.
Wirathu's Facebook content deployed the full range of propaganda techniques examined throughout this course:
Dehumanization: Rohingya people were consistently described as "kalar" (a deeply offensive Burmese slur for South Asian Muslims) and compared to dogs, snakes, and vermin. The dehumanization was categorical; it made no distinction among individual Rohingya.
Fabricated atrocity stories: posts and shared content routinely included false stories of Rohingya men raping Buddhist women. These fabricated stories followed a precise template: named victim (sometimes real people whose photographs were stolen from unrelated contexts), named perpetrators described as Rohingya, graphic details designed to maximize outrage, and a call to action ("share this with every Buddhist you know"). Multiple human rights organizations documented specific fabricated rape allegations, traced the origin of photographs used to "support" them, and found the photographs had been taken in different countries at different times. The corrections rarely reached the audience that had received the original fabricated content.
Community identification: content identifying specific geographic communities as "Muslim-controlled" and providing location information, combined with calls to drive Muslims out or worse.
The Amplification Dynamic
Facebook's algorithm did not create this content. It amplified it. The same engagement-optimization logic that the chapter's main text examines in the American context applied equally in Myanmar: content that generated high emotional arousal (fear, outrage, disgust, righteous indignation) received more algorithmic distribution than content that was informational, measured, or corrective. In Myanmar, the content that generated the highest arousal was anti-Rohingya content. The algorithm amplified what it was designed to amplify: what people engaged with.
The result was that users who engaged with anti-Rohingya content — even if only to argue against it — received more anti-Rohingya content in their feeds. Users who shared it saw their social networks receive it. The virality mechanics that Chapter 16 describes in the American context were identical in Myanmar, with consequences that were not primarily electoral but lethal.
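The logic is simple enough to state in a few lines of code. The sketch below is a minimal, hypothetical illustration of engagement-optimized ranking, not a description of Facebook's actual News Feed system; the posts, the scoring weights, and the interaction counts are all invented.

```python
# A deliberately minimal illustration of engagement-optimized ranking.
# The posts, weights, and scoring function are hypothetical; this is not
# Facebook's actual News Feed system.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    comments: int   # comments written to rebut a post still count as engagement
    shares: int
    reactions: int

def engagement_score(post: Post) -> float:
    """Rank purely by interaction volume, with no term for accuracy or harm."""
    return 3.0 * post.shares + 2.0 * post.comments + 1.0 * post.reactions

feed = [
    Post("Measured local news report", comments=4, shares=1, reactions=20),
    Post("Fact-check correcting a viral rumor", comments=2, shares=1, reactions=8),
    Post("Outrage-inducing fabricated atrocity story", comments=90, shares=60, reactions=300),
]

# Sorting by engagement alone puts the most arousing content first,
# whether the engagement expressed approval or objection.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

The point of the sketch is that the objective contains no term for truth or harm: a comment written to rebut a fabricated story raises its score exactly as an approving comment does, which is why arguing against such content tended to spread it further.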
What Facebook Knew and When
The documentary record of what Facebook knew about anti-Rohingya content in Myanmar, and when it knew it, is damning.
Early Warnings (2012–2014)
Civil society organizations and human rights workers in Myanmar began raising concerns about Facebook hate speech in 2012, coinciding with the first major anti-Muslim violence in Rakhine State that year. Aung San Suu Kyi's own party — the National League for Democracy — raised concerns about the platform's role in spreading anti-Muslim content. Burmese civil society groups wrote letters to Facebook. Journalists covering Myanmar documented specific examples of false content spreading through the platform.
Facebook's response in this period was essentially absent. The company had no Burmese-language content moderation capability. It had no team with specific Myanmar expertise. It was, by the company's later public admissions, effectively flying blind in a market it had deliberately entered.
The Moderation Capacity Gap
The single most significant documented failure was not a policy decision but a capacity failure: for years into the crisis, Facebook had no full-time Burmese-speaking content moderators. Burmese text online circulated largely in Zawgyi, a font encoding incompatible with standard Unicode, so the same word could be represented by different code point sequences and automated keyword detection was particularly unreliable. The human capacity to review flagged Burmese content was negligible: reportedly, a handful of part-time contractors reviewing content for a country of 54 million people in an active ethnic conflict.
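The encoding problem can be made concrete with a schematic sketch. The strings, the blocklist, and the converter below are placeholders rather than real Burmese code points, and the conversion function stands in for purpose-built tools (Google's open-source myanmar-tools library is one real example); the point is only that a keyword rule written in one encoding silently misses the identical word written in the other.

```python
# Schematic illustration of why keyword detection fails when one language
# circulates in two incompatible encodings (as Burmese did with Zawgyi vs.
# Unicode). The strings and the conversion table are placeholders, not real
# Burmese code points.

BLOCKLIST = {"<slur, Unicode form>"}          # moderation rules written in Unicode

def zawgyi_to_unicode(text: str) -> str:
    """Placeholder for a real Zawgyi-to-Unicode converter."""
    conversion_table = {"<slur, Zawgyi form>": "<slur, Unicode form>"}
    return conversion_table.get(text, text)

def flagged_without_normalization(post: str) -> bool:
    return post in BLOCKLIST

def flagged_with_normalization(post: str) -> bool:
    return zawgyi_to_unicode(post) in BLOCKLIST

zawgyi_post = "<slur, Zawgyi form>"           # same visible word, different bytes
print(flagged_without_normalization(zawgyi_post))  # False: the slur slips through
print(flagged_with_normalization(zawgyi_post))     # True: caught only after conversion
```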
Facebook's global moderation infrastructure was, throughout this period, heavily weighted toward English-language markets. The engineering and policy resources devoted to the Myanmar crisis were not commensurate with either the platform's penetration in that market or the documented risks of ethnic violence.
By the time Facebook hired its first dedicated Myanmar policy staff and began the process of building Burmese-language moderation capacity in 2015 and 2016, the anti-Rohingya propaganda ecosystem on the platform was several years old and had hundreds of thousands of active participants.
The 2017 Crisis and the 2018 Response
The August–September 2017 military crackdown in Rakhine State — the operation that the UN would later characterize as bearing the hallmarks of genocide — was preceded by, and coincided with, a spike in anti-Rohingya content on Facebook that multiple researchers subsequently documented. The content included specific location information about Rohingya communities, calls for violence, and disinformation claiming Rohingya attacks on Buddhist communities that had not occurred.
Facebook's public response to the UN report came in 2018. The company acknowledged "we weren't doing enough to help prevent our platform from being used to foment division and incite offline violence." It announced a series of measures: expanded Burmese-language moderation, expanded use of technology to detect hate speech in Burmese, removal of Wirathu's page and the pages of several other nationalist figures, and creation of a transparency center to document actions taken in Myanmar.
By August 2018, when these measures were implemented at scale, approximately 700,000 Rohingya had already fled to Bangladesh.
The 2021 Lawsuit
In December 2021, Rohingya refugees — individuals and a class of plaintiffs — filed a lawsuit against Meta in multiple jurisdictions, including the United States and United Kingdom. The lawsuit alleged that Meta's algorithm had amplified anti-Rohingya hate speech and that the company had been warned repeatedly about the consequences but had failed to act due to prioritization of its business interests.
The U.S. complaint, filed in California, seeks $150 billion in damages. The actions rely in part on documents, including internal Facebook communications, suggesting that the company was aware of specific risks and failed to act in proportion to them.
As of this writing, the litigation is ongoing. The legal questions are complex: Section 230 of the Communications Decency Act may provide Facebook with immunity in U.S. courts, while other jurisdictions have different liability frameworks. But the factual record assembled by plaintiffs, UN investigators, and academic researchers is not in serious dispute: Facebook was a primary infrastructure of anti-Rohingya propaganda in Myanmar, and its response to documented warnings was inadequate to the scale of the crisis.
Analytical Framework: What This Case Demonstrates
The Myanmar case demonstrates several things that are not unique to Myanmar but are illustrated with unusual clarity there.
Platform Reach Plus Ethnic Conflict
Facebook's architecture in Myanmar converged with an active ethnic conflict that had deep historical roots. The platform did not create the conflict; it dramatically accelerated and intensified the propaganda environment that sustained it. The combination of near-universal platform penetration (Facebook was the internet), an active ethnic nationalist movement with existing propaganda capacity, and a military with documented genocidal intent produced a catastrophic outcome. Any one of these factors without the others would have been less dangerous.
Algorithmic Amplification Without Moderation
The Myanmar case is, among other things, a natural experiment in what happens when engagement-optimization algorithms operate in an ethnic conflict environment without adequate moderation. The result was predictable from first principles: systematic algorithmic amplification of the most emotionally intense content, which in that context was the most dehumanizing anti-minority content. This is not hindsight; researchers and civil society organizations predicted exactly this dynamic and communicated those predictions to Facebook.
The Inadequate Moderation Problem
The moderation capacity gap in Myanmar was not primarily a technical problem. It was a resource allocation problem. Facebook chose not to invest in Burmese-language moderation at scale during the years when anti-Rohingya content was documented as dangerous. This choice is explicable through the same business logic that the chapter's main text documents in the American context: the costs of moderation are direct and reduce revenue; the benefits of moderation are diffuse and accrue primarily to people who are not Facebook's core commercial market.
The Speed Asymmetry
Perhaps the most enduring lesson of the Myanmar case is speed. The propaganda spread in real time, at the speed of algorithmic amplification. The correction, the investigation, the policy response, the lawsuits — all of these operated at the speed of institutional processes. By the time the institutional responses arrived, the genocide had already happened. This asymmetry between the speed of algorithmically amplified propaganda and the speed of any corrective response is not a fixable bug; it is a structural feature of the architecture. It suggests that post-hoc responses to propaganda in active conflict situations are insufficient, and that pre-emptive architecture and capacity investments are the only interventions that can operate at the relevant speed.
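The asymmetry can be put in rough numbers. The sketch below is a toy model with invented parameters (an assumed hourly growth rate and an assumed review delay); only the shape of the comparison matters: multiplicative spread against a fixed institutional response time.

```python
# Toy model of the speed asymmetry: reach grows multiplicatively each hour,
# while review and removal arrive after a fixed institutional delay.
# All parameters are hypothetical; only the shape of the result matters.

initial_viewers = 100            # people who see the post in hour zero (assumed)
growth_per_hour = 1.8            # reach multiplies by 1.8 each hour (assumed)
review_delay_hours = 48          # time until a moderation decision lands (assumed)
potential_audience = 10_000_000  # roughly Myanmar's Facebook user base circa 2016

reach = float(initial_viewers)
hours_to_saturation = None
for hour in range(1, review_delay_hours + 1):
    reach = min(reach * growth_per_hour, potential_audience)
    if hours_to_saturation is None and reach >= potential_audience:
        hours_to_saturation = hour

print(f"Hours until the post saturates its potential audience: {hours_to_saturation}")
print(f"Reach when the {review_delay_hours}-hour review completes: {reach:,.0f}")
# Under these assumptions the post reaches everyone it can reach roughly a day
# before the review completes; a correction issued afterward competes with a
# belief that is already in place.
```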
Discussion Questions
- Facebook's leaders have stated that the company "wasn't doing enough" in Myanmar. Is this an adequate characterization of what the documentary record shows? What distinctions would you draw between negligence, recklessness, and intent in assessing corporate responsibility for platform-enabled harm?
- The Myanmar case involved a relatively small market (by global standards) and a language with limited Facebook moderation capacity. Facebook invested heavily in moderation for English-language content and for major Western markets. What does this resource allocation pattern tell you about whose safety is prioritized by platforms' design and investment choices?
- The UN Fact-Finding Mission found that Facebook played a "determining role" in spreading hate speech. "Determining role" is a specific legal and analytical term. What evidence would you need to evaluate that characterization? Does the evidence in this case study support it?
- How does the Myanmar case complicate simple narratives about social media as a democratizing force? Facebook's initial entry into Myanmar genuinely increased access to information and communication for millions of people who had been isolated under military censorship. How do you hold both of these things, genuine benefit and documented contribution to catastrophic harm, simultaneously?
- If you were a Facebook policy director in 2014, with the information available at that time, what actions would you have taken? What constraints (legal, technical, financial, political) would you have faced? Does imagining those constraints change your assessment of the company's moral responsibility?