Case Study 02: Facebook and the Myanmar Genocide — How Algorithmic Recommendation Helped Spread Anti-Rohingya Propaganda

Background

The genocide of the Rohingya Muslim minority in Myanmar is the most severe documented case of a social media platform contributing to mass atrocity. In August and September 2017, the Myanmar military conducted a campaign of violence against Rohingya communities in Rakhine State that the United Nations described in 2018 as bearing "the hallmarks of genocide." Over 700,000 Rohingya were forced to flee to Bangladesh; thousands were killed; villages were burned; rape was used as a systematic weapon of war. The International Criminal Court opened an investigation in 2019, and Myanmar's military leaders have since faced international legal proceedings over crimes against humanity.

Facebook was not the cause of anti-Rohingya sentiment in Myanmar—the roots of ethnic and religious conflict between the Buddhist majority and the Muslim Rohingya minority extend back decades, through military dictatorship, economic marginalization, and periodic violence. But the UN Fact-Finding Mission on Myanmar, which extensively investigated the violence and its enabling conditions, concluded with unusual directness that Facebook had played a "determining role" in spreading the hate speech and incitement that contributed to the violence. The platform's recommendation algorithm, combined with wholly inadequate content moderation infrastructure, turned Facebook into an engine for the spread of genocidal propaganda.

The Myanmar Context

Several features of Myanmar's media and technology environment made it uniquely vulnerable to the dynamics that produced the Facebook-facilitated atrocity.

Facebook as the internet: Myanmar had one of the world's lowest rates of internet access until 2014, when a combination of government policy changes, cheap Chinese smartphones, and mobile carrier plans that bundled free Facebook access transformed the country's information landscape in a matter of years. By 2018, Myanmar had approximately 18 million Facebook users in a country of 53 million people—many of whom had never used the internet before. For the majority of these users, Facebook was not a social network they used in addition to other internet services; it was essentially the internet. Accessing news, government services, business information, and social communication all happened through Facebook.

Absence of established digital media literacy: The transition from near-zero internet access to widespread Facebook use happened faster than any meaningful digital media literacy could develop. Myanmar users did not have years of experience navigating internet information environments, developing habits of source verification, or encountering the full range of false information that more experienced internet users learn to recognize. The Facebook information environment was encountered essentially fresh, with high trust in content shared by known contacts.

Pre-existing ethnic conflict and military incentives: Myanmar had a long history of military-orchestrated ethnic violence against minority communities including the Rohingya. The military had specific incentives to inflame anti-Rohingya sentiment to support a military campaign it was planning. The information operation on Facebook was not organic—it was deliberately conducted by military personnel operating fake accounts.

Weak regulatory and civil society environment: Myanmar's civil society was still developing following decades of military rule. Organizations with the capacity to monitor social media, report harmful content, and demand platform accountability were nascent and under-resourced. When civil society organizations did attempt to report hate speech to Facebook, the response was slow and often inadequate.

The Documented Information Operation

The UN Fact-Finding Mission documented a sophisticated Facebook-based information operation conducted by the Myanmar military (the Tatmadaw). The operation involved:

Fake account networks: Military personnel created hundreds of fake Facebook accounts posing as news organizations, celebrities, Buddhist religious figures, and ordinary Burmese citizens. These accounts were used to spread anti-Rohingya content including fabricated stories of Rohingya violence against Buddhists, inflammatory photographs (some taken from conflicts in other countries), and religious content framing Rohingya as a threat to Buddhist Myanmar.

Coordinated inauthentic amplification: Posts from the fake accounts were amplified through coordinated networks, giving the false content the appearance of organic spread and widespread popular sentiment. This coordinated amplification increased the content's visibility in Facebook's algorithm, which interpreted high engagement as a signal of relevance and distributed the content further.

Content tailored to engagement: The military information operation showed sophisticated understanding of what content would generate high engagement on Facebook. Content framing Rohingya as existential threats to Buddhism—a combination of religious identity threat and ethnic fear—was particularly effective at generating the sharing, commenting, and angry reactions that Facebook's algorithm rewarded.

Targeting of influential networks: The operation specifically targeted religious leaders, community figures, and political officials whose Facebook networks had high reach, ensuring that anti-Rohingya content spread not just through fake accounts but through the authentic social networks of real, influential people who shared content they believed was true.

Facebook's Content Moderation Failures

The Myanmar case revealed systematic failures in Facebook's content moderation infrastructure that were not unique to Myanmar but were particularly consequential in the Myanmar context.

Inadequate language capacity: In 2017, when the violence was occurring, Facebook employed only a small number of content moderators who could read and understand Burmese. Reports of hate speech made in Burmese often went unreviewed, or were reviewed by moderators without sufficient cultural context to recognize coded incitement. The UN Mission found that content that was clearly genocidal incitement in Burmese context—using specific terms for the Rohingya that carried dehumanizing connotations, making historically specific threats—was frequently not removed because reviewers did not have the cultural competence to recognize it.

Algorithm amplification of hate content: Facebook's recommendation algorithm, optimizing for engagement, amplified anti-Rohingya content because it generated intense engagement from users who were fearful and angry. The algorithm did not distinguish between benign high-engagement content and hateful high-engagement content—high engagement was high engagement, and it was rewarded with wider distribution regardless of content.
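The content-blindness described above can be made concrete with a minimal sketch. This is an illustrative toy model, not Facebook's actual ranking code; the weights, field names, and example posts are all hypothetical. The point it demonstrates is structural: when a feed score is computed purely from engagement counts, a post inciting hatred and a benign viral post are indistinguishable to the ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    comments: int
    reactions: int  # "angry" reactions counted the same as any other here

def engagement_score(post: Post) -> float:
    # Hypothetical weights: active engagement (shares, comments) weighted
    # above passive reactions, a common pattern in engagement ranking.
    # Note that post.text is never inspected.
    return 3.0 * post.shares + 2.0 * post.comments + 1.0 * post.reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Distribution is decided purely by engagement, regardless of content.
    return sorted(posts, key=engagement_score, reverse=True)

benign = Post("Local festival photos", shares=50, comments=40, reactions=300)
incitement = Post("[inflammatory rumor]", shares=200, comments=150, reactions=900)

feed = rank_feed([benign, incitement])
# The inflammatory post ranks first simply because it out-engages the
# benign one; the pipeline has no signal for *why* it is engaging.
```

A coordinated network inflating the `shares` and `reactions` counts, as the Tatmadaw operation did, feeds directly into such a score: manufactured engagement and organic engagement are arithmetically identical to the ranker.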

Slow response to civil society reports: Organizations working on ethnic conflict in Myanmar, including human rights organizations with deep local knowledge, repeatedly reported anti-Rohingya content to Facebook. The response was systematically slow and often resulted in content that was clearly incitement under Facebook's own community standards remaining on the platform for extended periods or not being removed at all.

Absence of proactive detection: Facebook's content moderation in Myanmar was primarily reactive—waiting for reports from users or civil society organizations. It did not have proactive systems to detect and remove coordinated inauthentic behavior (the fake account networks conducting the information operation) or to proactively monitor high-engagement content for hate speech. These capabilities existed in more developed markets but were not deployed in Myanmar.

What Facebook Knew and When

The gap between what Facebook knew about the situation in Myanmar and what it did is one of the most troubling aspects of this case.

By 2015, Fortify Rights, a human rights organization working in Myanmar, had sent Facebook multiple reports documenting specific instances of anti-Rohingya hate speech and calling for more robust content moderation in Burmese. By 2016, civil society organizations in Myanmar were reporting to researchers and journalists that Facebook was full of content calling for violence against the Rohingya.

Facebook's "Napalm Girl" decision in 2016—when the company briefly removed the iconic Vietnam War photograph before restoring it after public outcry—demonstrated that Facebook's content moderation was capable of responding to high-profile pressure. The systematic failure to respond to anti-Rohingya hate speech in Myanmar did not reflect a general inability to moderate content; it reflected the absence of commercial or reputational pressure significant enough to prompt investment in adequate moderation for Myanmar.

The UN Mission noted specifically that Facebook's explanations of its failures—that it lacked Burmese language capacity, that the scale of content made proactive detection difficult—were not adequate explanations for the company's failure to respond to specific, reported content. The reports from civil society organizations provided specific content and context. The failure to act on those reports was a choice, even if it was not a deliberate choice to permit genocide.

Facebook's Response and Remediation

After the UN Mission's conclusions were publicized in 2018, Facebook took several significant steps:

Account removals: Facebook removed hundreds of accounts, pages, and groups connected to the Myanmar military's information operation, in what the company described as its largest removal of state-sponsored content to that point. These removals came years after the content had been spreading and months after the violence had occurred.

Human rights impact assessment: Facebook commissioned and published an independent human rights impact assessment of its operations in Myanmar, conducted by BSR (Business for Social Responsibility). The report, published in November 2018, confirmed that Facebook had "contributed to offline harm" in Myanmar and provided recommendations for improvement. This was a notable act of public accountability relative to prevailing norms of corporate communication.

Investment in Burmese capacity: Facebook committed to increasing its Burmese-speaking content moderation capacity and to developing improved proactive detection systems for hate speech and coordinated inauthentic behavior in Myanmar.

Settlement: Facebook settled lawsuits related to its role in the Myanmar genocide in 2021, with terms that were not publicly disclosed.

Critics noted several limitations of Facebook's response. The account removals and capacity increases came years after the documented harms. The human rights impact assessment, while commendable, was commissioned by Facebook itself and did not have the independence of a regulatory investigation. The settlement terms were not public, preventing evaluation of whether they provided meaningful accountability or merely liability limitation. And the fundamental structural problem—that Facebook's engagement optimization algorithm amplified hateful content because it generated high engagement—was not addressed in a way that would prevent similar dynamics in other markets with similar vulnerability profiles.

Analysis Using Chapter Concepts

The Myanmar case illustrates several concepts from Chapter 32 with particular clarity.

Outrage optimization in its most extreme form: The mechanism described in the chapter—that engagement optimization algorithms reward outrage-generating content because outrage drives engagement—operated in Myanmar without the friction of a more developed information ecosystem. Anti-Rohingya propaganda was highly engaging because it combined religious identity threat with ethnic fear; the algorithm rewarded that engagement with wider distribution; and the wider distribution created more engagement, in a feedback loop that operated with deadly efficiency.

The structural vs. cultural explanation: The Myanmar case supports both structural and cultural explanations of social media-facilitated polarization and violence. The structural explanation: Myanmar's political economy (military control, ethnic conflict, weak civil society) created conditions where organized information operations could be executed with limited resistance. The cultural explanation: the specific dynamics of Facebook's information environment—trust in shared content, algorithm amplification of engagement, inadequate moderation—shaped how those structural conditions expressed themselves. Neither explanation is sufficient alone; the outcome required both.

The gap between intent and effect, and its limits: Throughout this textbook, we have emphasized the gap between platform designers' intentions (create connection, facilitate communication) and their effects (facilitate polarization, enable atrocity). The Myanmar case tests the limits of this framework. Facebook's intentions were not genocidal, and the information operation was conducted by Myanmar military actors who were the primary responsible parties. But Facebook had specific knowledge of the misuse—from civil society reports, from academic research published as early as 2014—and failed to act on that knowledge. At some point, failure to act on specific, documented knowledge of harm becomes something more troubling than "unintended effect."

Platform power and democratic fragility: The Myanmar case illustrates the extreme version of the power asymmetry theme throughout this textbook. In a context where a platform with inadequate localized content moderation becomes the primary information infrastructure, and where an organized state actor uses that platform to conduct an information operation supporting ethnic cleansing, the consequences of "just connecting people" are catastrophic. The platform's power—to determine what information flows, what gets amplified, what gets removed—is extraordinary, and the exercise of that power (including the failure to exercise it adequately) has extraordinary consequences.

What This Means for Users and Policymakers

Takeaway 1: Content moderation must be proportionate to context, not just to scale. The failure in Myanmar was not primarily a failure of scale—Facebook had moderation capacity globally. It was a failure of local knowledge and adequate investment in high-risk contexts. Platforms operating in conflict-affected areas with weak institutions need content moderation that is proportionate to local risk, not just to user numbers.

Takeaway 2: Civil society reporting mechanisms must be effective and accountable. Civil society organizations in Myanmar provided specific, actionable reports of harmful content and received inadequate responses. Effective content moderation must include meaningful response mechanisms for credible civil society reports, with accountability for failure to act.

Takeaway 3: Human rights impact assessments should be standard, not exceptional. Facebook's decision to commission a human rights impact assessment of its Myanmar operations, after the fact, should be a baseline requirement for platforms operating in high-risk environments, not an exceptional act of accountability. Prospective impact assessments, conducted before deployment in new markets with specific vulnerability profiles, would be more valuable than post-hoc assessments of documented harm.

Takeaway 4: The engagement-amplification mechanism is global. The same dynamic that amplified anti-Rohingya propaganda in Myanmar—engagement optimization rewarding outrage-generating content—operates in every market where Facebook and comparable platforms operate. The genocide that resulted in Myanmar reflects extreme conditions, but the underlying mechanism is not Myanmar-specific.

Discussion Questions

  1. The UN Fact-Finding Mission concluded that Facebook played a "determining role" in the Myanmar genocide. What does "determining role" mean in the context of assigning responsibility across multiple causal actors (the military, specific perpetrators, the platform, users who shared content)? Is "determining role" a legal concept, a moral concept, or both?

  2. Facebook had specific knowledge of anti-Rohingya hate speech through civil society reports before the genocide occurred and did not act adequately. At what point does a platform's failure to act on specific knowledge of harm become legally or morally equivalent to causing the harm directly? How should law and ethics draw this line?

  3. The Myanmar case represents an extreme outcome of dynamics (engagement optimization, inadequate moderation, coordinated inauthentic behavior) that operate in many markets. What should the lessons of Myanmar mean for how platforms operate in other countries with vulnerable information environments—particularly countries in sub-Saharan Africa, South Asia, and Southeast Asia with similar combinations of high Facebook penetration, weak regulatory capacity, and intense identity conflict?

  4. Facebook's human rights impact assessment was commissioned by Facebook itself. Design a more independent and robust accountability mechanism for evaluating social media platforms' human rights impacts in high-risk environments. Who should conduct it? What authority should it have? How should its findings be enforced?

  5. The platform's engagement-amplification mechanism is often discussed in the context of US or European politics, where the consequences are significant but rarely catastrophic in the Myanmar sense. Does understanding the Myanmar case change how you think about the same mechanism operating in your own country's political information environment? What is different about the conditions that made the Myanmar outcome so extreme?