Case Study 35.1: Facebook and the Rohingya Genocide in Myanmar (2017-2018)

Background

The Rohingya are a Muslim minority ethnic group concentrated in the Rakhine (Arakan) state of Myanmar, a predominantly Buddhist country. The Rohingya have faced systematic discrimination, statelessness, and periodic violence for decades. Myanmar's government stripped the Rohingya of citizenship in 1982, rendering them effectively stateless. Ultra-nationalist Buddhist movements, particularly the 969 movement and later MaBaTha (the Association for the Protection of Race and Religion), had been organizing against the Rohingya and other Muslim minorities since the early 2010s, framing them as existential threats to Buddhist Burmese identity and culture.

This pre-existing infrastructure of anti-Rohingya hatred was the context into which Facebook arrived. Beginning in earnest in 2014, a government telecommunications liberalization program ended the previous state monopoly, bringing affordable SIM cards and smartphones to tens of millions of Burmese citizens for the first time and driving explosive growth in mobile internet connectivity.

The convergence of these two factors — pre-existing organized hatred and a new, algorithmically optimized communication and content distribution platform — produced conditions for catastrophe. The platform was Facebook, which by 2018 had approximately 18 million users in a country of 54 million people. For most of those 18 million users, Facebook was not a website they visited on the internet; it was the internet. They received their news, their entertainment, their community information, and their political communication through Facebook. And Facebook's algorithmic systems — designed and trained in contexts with no experience of Myanmar's specific conditions — rewarded the most engaging content: content that was emotional, novel, surprising, and outrage-inducing.

Timeline

2014: Myanmar government telecommunications liberalization brings affordable smartphones and SIM cards to mass market. Facebook grows rapidly as the primary digital platform. The platform has essentially no Burmese-language content moderation infrastructure.

2015 — 2016: Anti-Rohingya content spreads on Facebook without meaningful moderation. Ultra-nationalist Buddhist leaders, including the monk Wirathu (who was featured on the cover of Time magazine as the "Face of Buddhist Terror" in 2013), build large Facebook followings by posting content that the engagement algorithm rewards: emotionally charged, communal, outrage-generating. False claims about Rohingya attacks on Buddhist women circulate widely.

October 2016: The Arakan Rohingya Salvation Army (ARSA), a Rohingya militant group, conducts attacks on Myanmar border police posts. The attacks are used to justify military operations in Rakhine state that the UN and human rights organizations characterize as collective punishment of the Rohingya civilian population. Anti-Rohingya content on Facebook intensifies.

October 2016: UN human rights officials begin warning of the potential for genocide in Myanmar, citing the systematic dehumanization of the Rohingya in public discourse, including social media. The warnings are issued to governments and platform companies.

2016 — 2017: Facebook receives detailed briefings from researchers and civil society organizations documenting the spread of anti-Rohingya content and the absence of adequate Burmese moderation. Facebook acknowledges the problem but moderation infrastructure investment remains inadequate. The platform has a handful of Burmese speakers working on content issues for a market of 18 million users.

August 25, 2017: ARSA conducts coordinated attacks on roughly 30 police posts and a military base in Rakhine state, killing 12 members of the security forces. The Myanmar military's response is immediate and disproportionate: the Tatmadaw (military) launches a "clearance operation" against Rohingya villages in northern Rakhine state.

August — September 2017: The military operation, which UN investigators subsequently characterize as "genocidal acts," proceeds with extreme violence. Rohingya villages are burned, mass killings occur, and sexual violence is used as a weapon. Social media — particularly Facebook — is used in multiple ways: to coordinate and incite violence, to spread false narratives justifying the military operation, and (in some cases) by Rohingya survivors to document what is happening. Facebook's algorithmic systems continue operating without adequate Burmese moderation.

September 2017: The UN High Commissioner for Human Rights, Zeid Ra'ad Al Hussein, calls the military operation a "textbook example of ethnic cleansing." Over 400,000 Rohingya have crossed into Bangladesh, with the number eventually reaching 725,000.

October 2017: International media investigations detail the specific role of Facebook in spreading anti-Rohingya content. The Guardian, Reuters, and other outlets publish documentation of Facebook posts calling for violence against the Rohingya and of the platform's failure to remove them.

November 2017: Facebook announces it has hired additional Burmese-speaking content reviewers and improved Burmese language capabilities in its automated detection systems. Critics note that this investment, which should have been made before the crisis, comes only after the ethnic cleansing has largely been completed.

March 2018: The UN Human Rights Council establishes the Independent International Fact-Finding Mission on Myanmar to investigate human rights violations since 2011.

April 2018: Facebook CEO Mark Zuckerberg is questioned about Myanmar by U.S. senators during congressional hearings. Zuckerberg acknowledges that Facebook was "too slow" to prevent the use of its platform to spread hate speech and says the company "needs to do more."

August 2018: The UN Fact-Finding Mission publishes its report. The findings on Facebook are among the most significant. The report states: "Facebook has been a useful instrument for those seeking to spread hate, in a context where for most users Facebook is the internet." The mission concludes that Facebook played a "determining role" in spreading hate speech that contributed to ethnic cleansing. The mission recommends that Facebook take immediate action and calls for international investigation of Myanmar military commanders for crimes against humanity and genocide.

November 2018: Facebook publishes an independent human rights impact assessment it commissioned from Business for Social Responsibility (BSR), specifically addressing its role in Myanmar. The assessment finds that "Facebook has become a useful instrument for those seeking to spread hate speech and incite violence" in Myanmar and identifies specific failures in content moderation investment.

2019 — Present: The Rohingya diaspora and human rights organizations pursue multiple legal actions against Facebook (and its successor Meta) in multiple jurisdictions. A class action in California, a coordinated claim in the United Kingdom, and complaints grounded in the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles on Business and Human Rights are among the proceedings seeking accountability for the platform's role in the crisis.

Analysis

The Structural Failure

The Myanmar case represents a failure not of individual judgment or even of specific policy decisions but of the structural assumptions built into Facebook's global deployment model. Those assumptions were:

  1. Scale before safety: The model was to expand to new markets rapidly, driven by network effects and competitive first-mover advantage, and to address safety and moderation issues as they arose. This sequence — which internal Facebook culture described as "moving fast" — meant that rapidly growing markets spent an extended period in the window of maximum vulnerability: a large user base paired with minimal moderation capacity.

  2. English-first moderation: Content moderation infrastructure, both automated and human, was developed for the platform's core English-speaking markets and scaled globally with inadequate adaptation. The assumption that moderation systems adequate for one language and culture would be adequate for others was never tested before deployment; it was tested by deployment, with irreversible consequences.

  3. Engagement optimization without context: The recommendation and feed algorithm was designed to maximize engagement. In Myanmar's context — deep ethnic tensions, history of anti-Muslim violence, new and media-illiterate user base, no competing information sources — engagement optimization reliably surfaced inflammatory ethnic and religious content because that content generated the strongest engagement signals. The algorithm had no mechanism to recognize context-specific catastrophic risk; it was doing exactly what it was designed to do.

  4. Revenue-driven investment: Moderation investment followed advertising revenue. Myanmar was not a major advertising market. Burmese moderation investment was economically unattractive and was not made until after international pressure and public catastrophe required it.
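The third failure above — engagement optimization without context — can be made concrete with a toy model. The sketch below is purely illustrative (all names, signals, and weights are hypothetical; real feed-ranking systems use learned models over many more signals), but it shows the structural point: when the ranking objective contains only engagement terms, inflammatory content that drives comments and shares outranks everything else, and nothing in the objective can register context-specific risk.

```python
# Toy model of an engagement-optimized feed ranker (illustrative only;
# all signals and weights are hypothetical, not Facebook's actual system).

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int

def engagement_score(p: Post) -> float:
    # Comments and shares are weighted above clicks because they predict
    # further engagement -- and outrage reliably drives both.
    return 1.0 * p.clicks + 5.0 * p.comments + 10.0 * p.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The objective has no term for societal risk, so it cannot
    # distinguish "engaging because useful" from "engaging because
    # inflammatory".
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local weather update", clicks=900, comments=10, shares=5),
    Post("Inflammatory ethnic rumor", clicks=400, comments=300, shares=250),
])
print(feed[0].text)  # the rumor ranks first: 400 + 1500 + 2500 = 4400 vs 1000
```

In this sketch the rumor wins despite fewer clicks, because the signals the objective rewards most are exactly the ones outrage generates. "Fixing" such a system requires changing the objective itself, not moderating individual posts after the fact.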

The Role of Facebook-as-Internet

The Myanmar case is distinguished from other misinformation crises by the specific condition that Facebook was not merely dominant in Myanmar's information environment but was effectively the totality of that environment for most users. There was no independent Burmese digital journalism to provide competing narratives. There was no widely used search engine providing alternative sources. There was no Wikipedia-equivalent in Burmese for fact-checking. There was no equivalent of developed-country media literacy infrastructure.

In this context, the consequences of Facebook's algorithmic choices — which content was surfaced, which was suppressed, which was recommended — were not merely choices about content distribution within a broader information ecosystem. They were choices about what was knowable, what was believed, and what was possible. When Facebook's algorithm systematically surfaced dehumanizing content about the Rohingya, that content was not competing with corrective journalism; it was competing with nothing.

The UN Fact-Finding Mission's finding — that Facebook played a "determining role" in spreading content that contributed to ethnic cleansing — is the most significant finding of institutional responsibility for platform-mediated violence in the history of social media. But it has not yet been translated into legal accountability.

Section 230 of the Communications Decency Act protects Facebook from legal liability for user-generated content in the United States. The Rohingya legal proceedings testing whether this protection applies extraterritorially, or whether the platform's algorithmic amplification of content (as opposed to the content itself) can be the basis for liability, represent important frontier questions in platform law.

The absence of adequate international legal frameworks for holding platform companies accountable for harms in countries outside their home jurisdictions is one of the most significant gaps in contemporary platform governance. The Myanmar case has accelerated development of regulatory thinking in the EU's Digital Services Act and in the UN Guiding Principles context, but accountability for the harms that have already occurred remains elusive.

What Was Known and When

A critical dimension of the Myanmar case is the documented evidence that Facebook was warned repeatedly and specifically about the risks in Myanmar — by researchers, civil society organizations, and UN officials — before the 2017 crisis. A 2018 report by Fortify Rights documented specific warnings given to Facebook and the company's inadequate response.

The question of what Facebook knew and when it knew it is legally and morally significant. If Facebook received specific, credible warnings of genocide risk in Myanmar and failed to make adequate investment to address those warnings, the failure cannot be characterized as a good-faith mistake.

Discussion Questions

  1. The UN Fact-Finding Mission found that Facebook played a "determining role" in spreading hate speech that contributed to ethnic cleansing. Does "determining role" constitute sufficient grounds for legal liability? What legal framework would be necessary to establish platform liability for harms of this nature?

  2. Facebook received specific warnings from researchers and civil society organizations about the genocide risk in Myanmar before the 2017 crisis. What ethical obligation does such a warning create? What response would have been adequate?

  3. The Myanmar case is characterized by the specific condition that Facebook was the totality of the accessible information environment for most users. How does this "Facebook IS the internet" condition change the platform's ethical responsibility compared to contexts where it is one of many available information sources?

  4. Investment in Burmese content moderation was made after the ethnic cleansing rather than before, largely because Myanmar was not a significant advertising revenue market. What governance mechanisms — regulatory, market-based, or organizational — could change this investment calculus to require adequate safety investment before deployment rather than after catastrophe?

What This Means for Users

The Myanmar case has implications that extend far beyond Myanmar itself:

Context determines harm: The same platform and the same algorithmic design that produces manageable harms in one context — a media-literate, high-income environment with competing information sources and some regulatory accountability — can produce catastrophic harms in another. The harm potential of any platform is not fixed by its design but by the interaction of its design with its deployment context.

Attention economy harms are not equally distributed: The most severe harms from engagement-optimization algorithms fall on communities with the least resources to manage them: communities with high pre-existing social tensions, communities where the platform is the only information source, communities with no regulatory protection, and communities whose languages are underserved by content moderation systems. Users in high-income, English-speaking markets benefit from platform investment in safety infrastructure that they assume is universal. It is not.

Advocacy matters: The evidence that Facebook received specific warnings before the Myanmar crisis — and that inadequate response to those warnings was a factor in the catastrophe — suggests that external pressure, from researchers, civil society, and governments, can and does affect platform behavior. The advocacy did not prevent the crisis; it came too late and with insufficient force. But the counterfactual — earlier, louder, better-organized advocacy for adequate Myanmar moderation investment — might have produced different outcomes.

Data on risk is not automatically acted upon: Organizations that collect evidence of platform risk — academic researchers, civil society organizations, journalists — face the challenge that gathering evidence and surfacing it to platforms is necessary but not sufficient to produce platform action. Understanding the conditions under which platforms respond to evidence of risk is essential for effective platform accountability advocacy.