Case Study 29-1: Facebook, Algorithms, and the Myanmar Genocide

Overview

Between 2016 and 2018, Myanmar experienced a rapid escalation of ethnic and religious violence targeting the Rohingya Muslim minority, culminating in military operations that the United Nations, US State Department, and multiple international bodies have characterized as genocide or ethnic cleansing. The violence drove approximately 700,000 Rohingya to flee to Bangladesh in one of the largest forced migration events in recent history.

Facebook was the primary internet platform in Myanmar during this period — for many users, Facebook was the internet, reached through cheap smartphones that became widely available after telecommunications liberalization. On that platform, a coordinated campaign of hate speech, incitement, and disinformation targeting the Rohingya spread widely, amplified by the platform's recommendation and sharing algorithms. Military and nationalist actors used Facebook to spread fabricated atrocity accusations, dehumanizing imagery, and incitement to violence.

This case examines what Facebook knew about these dynamics, what it did and did not do, and what the Myanmar genocide reveals about the responsibilities of AI-powered platforms for harms that occur when their systems amplify harmful content.


The Context: Myanmar's Digital Transformation

Rapid Digitization Without Infrastructure

Myanmar's digital transformation in the mid-2010s was remarkably rapid and lopsided. Prior to 2012, Myanmar was among the least connected countries in Asia: SIM card prices were prohibitively high (reportedly $250 or more), the telecommunications sector was heavily state-controlled, and internet access was largely confined to urban elites. The political transition that followed reforms initiated by the military government opened the country to foreign telecommunications investment; by 2014-2015, cheap SIM cards and affordable smartphones (particularly from China) were flooding the market. The number of internet users went from near zero to over 25 million in a few years — almost entirely via mobile, and almost entirely via Facebook.

This digital transformation occurred without the infrastructure that older democracies had developed for managing mass media: professional journalism norms, media literacy education, independent content standards bodies, or regulatory frameworks calibrated to digital media. For many Myanmar users, Facebook was not a supplement to existing information sources — it was the information source. The capacity to critically evaluate information encountered on Facebook, to understand the distinction between posts by verified news organizations and posts by nationalist propaganda accounts, or to recognize coordinated inauthentic behavior was minimal.

The Political Context: Religious Nationalism and Military Power

The information environment on Facebook in Myanmar during 2016-2018 did not exist in a political vacuum. The Arakan Rohingya Salvation Army (ARSA) attacks on police posts in August 2017 provided the military with the pretext for "clearance operations" against Rohingya communities that went far beyond legitimate security responses. But the conditions for those operations — the societal acceptance of extreme violence against the Rohingya — had been prepared over years through a combination of historical discrimination, Buddhist nationalist movement organizing, and media disinformation campaigns.

Facebook's algorithmic amplification did not cause Myanmar's ethnic conflict; the conflict's roots are deep in Myanmar's colonial and post-colonial history. But Facebook's algorithmic amplification provided the infrastructure for actors deliberately seeking to radicalize the population against the Rohingya, and it did so efficiently and at scale.


The Facebook Failure: What Happened

The Hate Speech Campaign

Between 2016 and 2018, Facebook was used by organized groups — including, as later established, members of the Myanmar military — to spread content targeting the Rohingya. This content included: fabricated stories of attacks by Rohingya on Buddhists, often illustrated with photographs taken in other countries or contexts and misrepresented as evidence; posts describing Rohingya in dehumanizing language comparing them to animals, insects, or diseases; claims that Rohingya were systematically raping and murdering Buddhists; and explicit calls for violence and ethnic cleansing.

UN investigators and independent researchers documented this content extensively. A UN fact-finding mission specifically identified Facebook as having played a "determining role" in spreading the hate speech that preceded the violence, stating: "Facebook has been used to incite offline violence and hatred against the Rohingya or other minorities. We are convinced that Facebook needs to take more responsibility for the content that is on its platform."

Critically, this content was not merely present on the platform — it was amplified by Facebook's recommendation and sharing algorithms. Content that generated strong emotional reactions (outrage, fear, tribal anger) was systematically surfaced to more users, because those were the engagement signals that Facebook's algorithm was optimized to maximize. Hateful and inflammatory content about the Rohingya generated precisely those engagement signals; the algorithm amplified it accordingly.

What Facebook Knew and When

The internal history of Facebook's awareness of Myanmar is extensively documented through reporting, internal documents, external assessments, and the disclosure of internal Facebook research by whistleblower Frances Haugen in 2021.

Facebook's global operations team had been receiving reports about harmful content in Myanmar from civil society organizations and researchers since at least 2014. Researchers and journalists who specialized in Myanmar documented the spread of hate speech and asked Facebook to take action; the platform's response was slow and limited. Part of the problem was structural: Facebook had virtually no Burmese-language content moderation capacity. A small number of contractors with Burmese language skills were responsible for reviewing flagged content in a country with millions of active users.

In 2017, before the August military operations that resulted in mass atrocities, civil society groups and UN officials specifically warned Facebook that its platform was being used to spread content that could contribute to violence. Facebook's response was to acknowledge the concern and describe its investment in improving its Burmese-language capabilities — but these investments were insufficient and too slow.

Internal Facebook documents disclosed in 2021 revealed that the company's own researchers had, in 2019, acknowledged that Facebook's systems had contributed to civic harm in Myanmar. An internal presentation described the company's regret about its role and its retrospective acknowledgment that it had not taken sufficient action before the violence. Critically, internal documents also showed that Facebook had been studying its algorithmic amplification of harmful content globally and had found that the recommendation systems disproportionately amplified content that generated the "angry" reaction — with a weight five times greater than other reactions — precisely the content dynamic that made hate speech in Myanmar spread faster than corrective information.
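The reaction-weighting dynamic described above can be illustrated with a deliberately simplified sketch. Only the five-to-one weight on the "angry" reaction mirrors the figure reported from the disclosed documents; every other weight, field name, and example post here is a hypothetical stand-in, not a description of Facebook's actual ranking system.

```python
# Hypothetical sketch of reaction-weighted feed ranking. Only the 5x
# "angry" weight reflects the reported internal figure; all other
# weights, field names, and posts are illustrative assumptions.
REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "wow": 1, "sad": 1, "angry": 5}

def engagement_score(reactions: dict) -> int:
    """Score a post by summing its reaction counts under the weighting scheme."""
    return sum(REACTION_WEIGHTS.get(r, 1) * n for r, n in reactions.items())

posts = [
    {"id": "neutral_news", "reactions": {"like": 900, "sad": 50}},        # 950 points
    {"id": "inflammatory_rumor", "reactions": {"like": 100, "angry": 400}},  # 2100 points
]

# Rank the feed by score: the inflammatory post ranks first even though
# it received barely half as many total reactions as the neutral one.
ranked = sorted(posts, key=lambda p: engagement_score(p["reactions"]), reverse=True)
```

The point of the sketch is the asymmetry: any post that reliably provokes the heavily weighted reaction outcompetes calmer content in distribution, regardless of total audience response.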

The Structural Problem: No Burmese Language Capacity

The most damning specific failure was Facebook's lack of Burmese-language content moderation capacity at a time when Myanmar was experiencing a mass information crisis. With billions of users globally and operations in virtually every country with internet access, Facebook deployed content moderation resources in rough proportion to advertising revenue, which was concentrated in wealthy markets. Myanmar was a large user base but a small advertising market. Content moderation in Burmese was, for years, handled by a tiny team.

This meant that content reported by users as violating policies went unreviewed for days or weeks; that automated content detection systems tuned for English-language hate speech were unable to identify hate speech in Burmese; that organized campaigns of coordinated inauthentic behavior in Burmese went undetected; and that the civil society organizations in Myanmar trying to report harmful content had little effective pathway to get action.


Facebook's Response and Its Inadequacy

Retrospective Acknowledgments

Facebook publicly acknowledged its failures in Myanmar, though only retrospectively and after the violence had reached its worst period. In 2018, Facebook commissioned an independent human rights assessment by the Business for Social Responsibility (BSR) consultancy. The BSR report, released in November 2018, concluded: "We believe Facebook has become a useful instrument for those seeking to spread hate and cause harm, and that Facebook has, at times, allowed misinformation and hate speech to flourish."

The report made specific recommendations including dramatically increasing Burmese-language moderation capacity, partnering more effectively with civil society, improving transparent reporting on hate speech enforcement, and investing in external research access to enable independent analysis of the platform's role. Facebook accepted the findings and described remediation steps it was taking.

The Accountability Gap

Despite the acknowledgment, Facebook faced limited legal accountability for its role in Myanmar. In the United States, Section 230 of the Communications Decency Act provides platforms broad immunity from liability for content generated by users — a legal protection that shielded Facebook from most liability for the consequences of user-generated hate speech on its platform, even content that its recommendation algorithm specifically amplified.

Multiple legal actions have been filed against Facebook/Meta. In December 2021, Rohingya refugees filed a class-action lawsuit against Meta in the United States seeking $150 billion in damages, with lawyers lodging a coordinated claim on behalf of Rohingya in the United Kingdom; the plaintiffs argued that Facebook's systems were responsible for the spread of hate speech that facilitated the genocide. The US filings asserted that Facebook's own conduct — specifically, its algorithmic amplification decisions — fell outside Section 230 protection because amplification is platform conduct rather than third-party content. These cases remained in litigation as of 2024, with Section 230's applicability to algorithmic amplification a key legal question.


Analysis: What Myanmar Reveals About Platform Responsibility

The Algorithmic Amplification Problem

The Myanmar case crystallizes the core accountability question for AI-powered content platforms: when a platform's algorithm specifically amplifies content that causes harm, is the platform responsible for that harm?

The traditional content moderation framework treats platforms as neutral conduits that host user-generated content and remove content that violates policies — a framework that maps naturally onto Section 230's immunity structure. But this framework does not accurately describe how modern social media operates. Facebook's algorithm is not neutral; it actively selects, sequences, and amplifies content based on machine learning predictions of what will generate engagement. When that amplification mechanism selects and amplifies hate speech targeting a vulnerable minority — because hate speech generates high engagement — the platform is not a passive host. It is an active participant in spreading that content.

The ethical implication is that platforms bear meaningful moral responsibility for content they algorithmically amplify, not merely for content they fail to remove. This represents a significant expansion of platform accountability from the traditional "notice and takedown" framework — but it is also a framework that accurately maps onto the actual operation of algorithmic platforms.

The Global Equity Problem

Myanmar also illustrates a systematic pattern in global AI deployment: AI systems designed and optimized for English-language, Western contexts perform worse — sometimes dramatically worse — in other languages and cultural contexts. Facebook's content moderation AI was substantially more capable in English than in Burmese. Its hate speech detection trained primarily on English-language examples could not reliably identify Burmese-language hate speech. Its automated systems for detecting coordinated inauthentic behavior were tuned for patterns of manipulation visible in Western contexts.

The consequence is that communities in the Global South — where AI development and testing resources have historically been least concentrated — receive the least protection from algorithmic harms. This is not merely a technical limitation; it reflects decisions about where to invest in AI capabilities, and those decisions are driven by commercial considerations that systematically undervalue non-English-language markets.

Regulatory Implications

Myanmar has become a reference case in every significant regulatory process concerning platform accountability for algorithmic harms. The EU Digital Services Act's systemic risk assessment requirements — specifically for risks to "fundamental rights" and "civic discourse" — were shaped in significant part by the Myanmar experience. The DSA requires very large online platforms (VLOPs) to assess the risk that their recommendation systems could amplify harmful content in specific language and geographic contexts, and to implement mitigation measures specifically calibrated to those risks.

The US regulatory debate has similarly been shaped by Myanmar, with the case cited in Senate hearings on Section 230 reform and in testimony before the House Judiciary Committee on platform accountability. Whether Section 230 protection should extend to AI-driven algorithmic amplification — as distinct from the passive hosting of user content — is a live legal and policy question that Myanmar has helped to sharpen.


Business Implications

Due Diligence for Global Deployment

The Myanmar case provides a template for thinking about AI deployment due diligence in global contexts. Organizations deploying AI systems — particularly content moderation, recommendation, and information distribution systems — in new geographic contexts should:

Assess whether their AI's training data and capability levels are adequate for the language and cultural context of deployment. English-optimized systems deployed in non-English markets will systematically perform worse, with consequences that may include the amplification of harmful content.

Map the human rights risk context before deployment, not after harm has occurred. Myanmar was a context with documented ethnic conflict, military human rights violations, and a vulnerable minority population — all factors that should have flagged the need for more careful content governance.

Build civil society relationships in advance, rather than responding to crisis. The civil society organizations in Myanmar that were warning Facebook about harmful content had no effective relationship with the company; their warnings went largely unheeded. Organizations with established relationships and communication channels for surfacing problems are more likely to identify issues before they reach crisis levels.
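The first of these checks — whether a moderation model's capability holds up in the deployment language — can be sketched as a simple per-language evaluation gate. The classifier interface, the recall threshold, and the labeled samples below are all hypothetical placeholders for illustration, not any platform's actual process.

```python
# Hypothetical due-diligence gate: before launching in a new market, measure
# a hate-speech classifier's recall separately for each language and compare
# it against a minimum bar. The threshold and data format are assumptions.

MIN_RECALL = 0.80  # assumed launch threshold, not an industry standard

def recall_by_language(classify, labeled_samples):
    """labeled_samples: list of (text, language, is_hate_speech) tuples.
    Returns the fraction of true hate-speech items caught, per language."""
    hits, totals = {}, {}
    for text, lang, is_hate in labeled_samples:
        if not is_hate:
            continue  # recall considers only the true positives
        totals[lang] = totals.get(lang, 0) + 1
        hits[lang] = hits.get(lang, 0) + (1 if classify(text) else 0)
    return {lang: hits.get(lang, 0) / totals[lang] for lang in totals}

def launch_blockers(classify, labeled_samples, min_recall=MIN_RECALL):
    """Languages whose recall falls below the bar should block deployment."""
    return [lang for lang, r in recall_by_language(classify, labeled_samples).items()
            if r < min_recall]
```

A classifier trained mostly on English examples would pass the gate for English and fail it for Burmese — precisely the disparity that, in Myanmar, went unmeasured until after the harm had occurred.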

The Governance Gap

Myanmar illustrates the governance gap that exists when the entities most powerful in shaping information environments are global corporations accountable primarily to shareholders and subject to different legal frameworks in different jurisdictions. Facebook's decisions about Myanmar were made by a US company under US legal frameworks, but the consequences were experienced by people with no relationship to US law and no political standing to demand accountability from a US corporation.

This governance gap — between where AI systems operate and where the entities controlling them are accountable — is a recurring feature of global AI deployment and demands institutional responses at the international level that remain inadequately developed.


Discussion Questions

  1. Section 230 of the Communications Decency Act provides platforms immunity from liability for user-generated content. Should this immunity extend to content that platforms' algorithms specifically amplify? Why or why not?

  2. Facebook's Burmese-language content moderation capacity was dramatically inadequate for the scale of its operations in Myanmar. What due diligence standards should platforms be required to meet before deploying in new markets?

  3. The Myanmar genocide occurred before the EU Digital Services Act's systemic risk assessment requirements. How should the DSA framework be applied to the Myanmar scenario, and what would adequate risk mitigation have looked like?

  4. Facebook's internal researchers identified the role of algorithmic amplification in contributing to civic harm. What organizational structures and incentives could have enabled their findings to produce faster and more substantial changes to platform policy?

  5. Is the Myanmar case an argument for more aggressive content moderation, or does aggressive content moderation carry its own democratic risks? How should platforms navigate this tension in contexts with significant human rights risks?