In This Chapter
- Learning Objectives
- Introduction
- Section 40.1: The Western Bias Problem
- Section 40.2: The Global South Information Landscape
- Section 40.3: India — WhatsApp Misinformation Epicenter
- Section 40.4: Sub-Saharan Africa — Mobile-First Verification in Low-Bandwidth Environments
- Section 40.5: Latin America — WhatsApp Elections and Narco-Misinformation
- Section 40.6: Southeast Asia — Keyboard Armies and State-Sponsored Suppression
- Section 40.7: Post-Soviet Space — Russian Disinformation and Resilience
- Section 40.8: Cross-Cultural Differences in Trust and Credulity
- Section 40.9: Multilingual Challenges for Detection
- Section 40.10: Global Coordination
- Key Terms
- Discussion Questions
- Summary
Chapter 40: Global and Cross-Cultural Perspectives on Misinformation
Learning Objectives
By the end of this chapter, students will be able to:
- Identify the Western and WEIRD (Western, Educated, Industrialized, Rich, Democratic) bias in the existing misinformation research literature and articulate what dimensions of the global misinformation problem this bias causes us to miss.
- Describe the information ecosystem characteristics of the Global South, including mobile-first access patterns, WhatsApp and encrypted messaging platform dominance, and oral information culture dynamics.
- Analyze specific national cases — India, Nigeria, Kenya, Brazil, Philippines, and post-Soviet states — to identify context-specific misinformation drivers and the locally adapted responses that have emerged.
- Explain how cultural dimensions (collectivism/individualism, power distance, uncertainty avoidance) interact with information processing and misinformation susceptibility.
- Assess the technical challenges of multilingual misinformation detection, including NLP performance gaps in low-resource languages and the code-switching problem.
- Evaluate the role of international coordination mechanisms — the IFCN, the Global Fact Check Fund, and the International Partnership on Information and Democracy — in addressing cross-border misinformation.
- Apply a cross-cultural lens to the design of media literacy interventions, identifying what must be adapted for specific cultural and infrastructural contexts.
Introduction
The academic literature on misinformation is overwhelmingly a product of a small slice of the world's information environments. The majority of high-citation empirical studies on misinformation beliefs, fact-checking effectiveness, and platform-based manipulation have been conducted in the United States, the United Kingdom, and Western Europe, primarily in English, and primarily on populations with reliable high-bandwidth internet access, Western educational backgrounds, and democratic political systems. The theories, methodologies, and policy recommendations that emerge from this literature reflect those conditions, often implicitly.
This is a problem not merely of academic completeness but of practical consequence. Misinformation is not primarily a Western phenomenon, and the most severe documented cases of misinformation contributing to real-world violence have occurred in contexts quite unlike the ones most studied: rural India, where WhatsApp audio clips of kidnapping rumors triggered lynchings; Myanmar, where Facebook posts contributed to ethnic cleansing; West African election periods, where rumors spread through mobile networks in low-literacy populations; Latin American messaging ecosystems, where political disinformation moves through WhatsApp groups faster than any fact-checker can track.
Understanding misinformation as a global phenomenon requires confronting the diversity of information environments — different platforms, different literacy levels, different languages, different relationships to institutional trust, different political contexts, and different cultural frameworks for evaluating credibility. It also requires understanding the attempts, often under-resourced and underappreciated, of fact-checkers, researchers, and civil society organizations in the Global South to adapt media literacy tools to their specific contexts.
This chapter provides that global perspective, region by region, before drawing cross-cutting lessons about cultural dimensions of susceptibility, the technical challenges of multilingual detection, and the emerging architecture of international coordination.
Section 40.1: The Western Bias Problem
40.1.1 What WEIRD Means and Why It Matters
The acronym WEIRD — Western, Educated, Industrialized, Rich, Democratic — was introduced by psychologist Joseph Henrich and colleagues in a 2010 paper documenting that behavioral science research draws overwhelmingly on samples from a narrow slice of global humanity while making claims meant to generalize to all humans. The WEIRD critique has been particularly influential in psychology, where fundamental assumptions about cognition, perception, and social behavior have been shown to vary substantially across cultures.
The WEIRD critique applies with equal force to misinformation research. The dominant models of how misinformation spreads (through social media networks with high internet penetration), how it is believed (through dual-process models calibrated to Western educated populations), how it should be countered (through fact-checking by credentialed institutions and platform labeling), and how media literacy should be taught (in formal educational settings with individual critical thinking as the goal) are all artifacts of their research environments.
What do we miss when we focus on WEIRD populations and platforms? Several things:
Oral information cultures: Much of the world's information circulation is verbal — spoken in person, transmitted through audio messages, embedded in community storytelling traditions. Misinformation research focused on written social media posts misses the oral dimension of information transmission entirely.
WhatsApp and encrypted messaging: In many Global South countries, WhatsApp is the primary platform for news consumption and political discussion — not Facebook, Twitter, or Instagram. WhatsApp's encryption makes content invisible to platform moderation systems and to researchers. The research tools developed for visible social media platforms do not translate.
Low-literacy audiences: Media literacy interventions designed for literate populations with formal education fail in contexts where substantial portions of the target audience have limited literacy. Alternative approaches — visual, audio-based, community-mediated — are needed but less studied.
Weak institutional trust environments: Fact-checking models that rely on audiences trusting credentialed fact-checking organizations assume a baseline institutional trust that does not exist in many contexts where institutions — government, media, academia — have historically served partisan or colonial interests.
Multilingual and code-switching environments: Populations that move fluidly between multiple languages, or that use hybrid registers mixing two or more languages (code-switching), cannot be served by monolingual detection and intervention systems.
40.1.2 The Research Gap in Numbers
An analysis of publications in leading misinformation research journals reveals the scale of the geographic imbalance. Studies focused on US populations constitute approximately 60–70% of the empirical literature on misinformation beliefs and fact-checking effectiveness; European studies add another 15–20%. Studies from Asia, Africa, and Latin America together constitute a small fraction, despite these regions containing the majority of the world's population and some of the most severe documented misinformation harms.
Language compounds the gap: research published in English is indexed, cited, and policy-relevant in ways that research published in other languages often is not. This creates a feedback loop in which the research base is WEIRD, the theories developed are WEIRD, the interventions designed are WEIRD, and the policy recommendations are WEIRD — even as the most acute misinformation harms occur elsewhere.
Section 40.2: The Global South Information Landscape
40.2.1 Mobile-First, Messaging-First
The phrase "mobile-first" understates the reality in much of the Global South: for hundreds of millions of people, the smartphone is the only internet access device they have or expect to have. This has profound implications for information consumption patterns. Smartphone users in low-bandwidth environments cannot easily consume long-form journalism, image-heavy websites, or video content that requires reliable high-speed connections. They can and do consume text messages, audio clips, and short video — and they share them through messaging applications.
WhatsApp — owned by Meta but with a distinctive usage pattern in the Global South — functions in these environments not as a supplement to mainstream news consumption but as the primary news ecosystem. In India, Brazil, Nigeria, Indonesia, and many other countries, WhatsApp groups (which can hold up to 1,024 members) serve as the channels through which news, rumors, political content, and community information circulate. The typical shared news item in these environments is not an article from a verified news organization but a forwarded message — an image with text overlay, an audio clip, a voice note — whose origin and veracity are impossible to verify from within the app.
40.2.2 Oral Information Culture Dynamics
In societies with significant portions of the population with limited formal literacy, information cultures have historically privileged oral transmission: spoken authority, communal storytelling, trust networks built on personal relationships and community standing. These dynamics do not disappear with the introduction of digital technology; they adapt to it. WhatsApp voice notes fit naturally into oral communication traditions, carrying the credibility signals associated with a trusted sender's voice. Visual content — images with text — can be processed by partially literate audiences in ways that long-form written text cannot.
The oral information culture context has specific implications for misinformation. False information embedded in voice notes from trusted community figures has significantly higher credibility than the same content delivered in writing from an anonymous source. The social authority of the sender — a village elder, a religious leader, a family patriarch — overrides the content of the message in credibility evaluation in ways that formal media literacy training, focused on evaluating source credentials, does not address.
40.2.3 Colonial Media Legacy
In many countries that were colonized by European powers, the legacy of colonialism shapes contemporary media trust dynamics. National mainstream media in many postcolonial states were historically controlled by colonial administrations or immediately post-independence governments that used them for regime legitimacy rather than public information. This history generates a durable institutional distrust that makes audiences skeptical of mainstream media fact-checking — an institution they associate with power rather than truth.
In some contexts, this means that fact-checking organizations perceived as connected to state power or to international (Western) NGO funding are treated with the same suspicion as state media. The organizational independence and funding transparency of fact-checking organizations become critical credibility factors in these environments in ways they do not in contexts where journalistic institutions have longer and less compromised histories.
Section 40.3: India — WhatsApp Misinformation Epicenter
40.3.1 Scale and Context
India is the world's most populous democracy, with approximately 500 million WhatsApp users — the largest national user base of any country. It is also, as of this writing, the country with the most documented cases of WhatsApp-mediated misinformation contributing to offline violence. The combination of these two facts is not coincidental.
Several structural factors make India a particularly challenging misinformation environment:
- Linguistic diversity: India has 22 officially recognized languages and hundreds of dialects. Content moderation systems and fact-checking organizations cannot cover this linguistic diversity adequately.
- WhatsApp dominance: WhatsApp is not just popular; for many users it is the primary interface with digital information entirely.
- High-stakes political polarization: India's political environment is intensely polarized along religious, caste, and partisan lines, creating strong motivated reasoning pressures.
- Low digital media literacy: Despite high smartphone penetration, digital media literacy levels vary enormously. Rural users who have recently adopted smartphones often lack the contextual experience to evaluate digital content critically.
40.3.2 The BJP's "IT Cell" and Political Misinformation
The Bharatiya Janata Party (BJP), India's ruling party since 2014, has been extensively documented as operating a sophisticated social media presence known internally as the "IT Cell" — a network of party workers, volunteers, and professional social media managers producing and distributing BJP-favorable political content and attacks on the BJP's opponents across WhatsApp, Twitter, and Facebook.
The IT Cell's operations as documented by journalists, researchers, and political opponents include: the creation and management of hundreds of WhatsApp groups reaching millions of members; the production of image-based content (often with disputed or false claims) designed for easy sharing; the use of coordinated accounts to amplify content into mainstream media attention; and the deployment of content in multiple regional languages to reach diverse linguistic communities.
What makes the IT Cell case analytically interesting is that it represents a scalable political communication infrastructure that blends legitimate political messaging with misinformation, making the boundary between them difficult to draw. Not all IT Cell content is false; the misinformation is embedded in a larger volume of partisan but factually grounded content, complicating identification and response.
40.3.3 Lynching Cases: When WhatsApp Rumors Kill
The most severe documented harms from WhatsApp misinformation in India have been lynchings — mob killings triggered by false rumors circulated via messaging applications. Between 2017 and 2020, at least 30 deaths attributable to WhatsApp-mediated rumors were documented by Indian media and researchers.
The pattern in these cases is consistent: a voice note or image message — typically claiming that strangers have been seen kidnapping children in the area — circulates rapidly through a WhatsApp community network. Recipients who cannot verify the message but do not know how to evaluate its credibility forward it to their contacts. The message's claimed local specificity (the next village, the town market, a recognizable landmark) makes it feel immediately relevant and credible. A mob forms and attacks the next stranger or strangers who appear, killing them.
In response, WhatsApp implemented forwarding limits in India in 2018 — capping at five the number of chats to which a message can be forwarded at one time — and later extended the restriction globally. Research on the effectiveness of forwarding limits suggests they reduced the viral spread of messages without eliminating the circulation of false content through slower, more deliberate forwarding.
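The intuition behind forwarding caps can be illustrated with a toy branching-process calculation (every parameter below is an illustrative assumption, not a measured WhatsApp value): the cap changes the expected number of new recipients each message-holder generates, and that per-hop growth rate compounds across forwarding generations.

```python
def expected_reach(forward_cap, chat_size=5, p_forward=0.05, hops=6):
    """Expected audience after `hops` forwarding generations in a toy
    branching-process model: each holder forwards with probability
    p_forward to `forward_cap` chats of `chat_size` members each.
    Purely illustrative; parameters are assumptions, not measured values."""
    r = forward_cap * chat_size * p_forward  # mean new recipients per holder
    total, frontier = 1.0, 1.0
    for _ in range(hops):
        frontier *= r      # each generation multiplies by the growth rate
        total += frontier
    return round(total)

# Lowering the cap from 50 chats to 5 cuts the growth rate r tenfold,
# turning explosive compounding into near-flat spread in this toy model.
low_cap = expected_reach(forward_cap=5)    # r = 1.25: slow growth
high_cap = expected_reach(forward_cap=50)  # r = 12.5: explosive growth
```

The qualitative point matches the research finding quoted above: a cap does not stop forwarding, it lowers the exponent on the cascade.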
40.3.4 AltNews: Fact-Checking in a Multilingual Democracy
AltNews, founded in 2017 by Mohammed Zubair and Pratik Sinha, is India's most prominent fact-checking organization and the one most cited internationally for its work on politically motivated misinformation. AltNews's methodology involves reverse image search, source tracing, and linguistic analysis across multiple Indian languages — fact-checking content in Hindi, Urdu, Tamil, Telugu, and other languages.
AltNews's experience highlights specific challenges of fact-checking in India: the sheer volume of false content, the language diversity, the politicization of the fact-checking enterprise itself (AltNews's founders have been targeted by legal action by BJP-linked complainants), and the difficulty of reaching the audiences who consumed the original misinformation with corrections. AltNews publishes corrections primarily in English and Hindi, which means corrections may not reach audiences who consumed misinformation in regional languages.
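Reverse image search of the kind described above is commonly built on perceptual hashing: an image is reduced to a short fingerprint that survives recompression, resizing, and added captions, so recirculated copies can be matched against previously checked images. A minimal sketch of the "average hash" technique, assuming the image has already been resized to 8x8 grayscale (real pipelines do that step with libraries such as Pillow):

```python
def average_hash(pixels):
    """Compute a 64-bit average hash ("aHash") of an 8x8 grayscale image,
    given as a flat list of 64 brightness values (0-255). Each bit records
    whether a pixel is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes; a small distance means
    the images are likely near-duplicates (e.g. reshared with a new caption)."""
    return bin(h1 ^ h2).count("1")

# Synthetic example: a bright-left/dark-right image, a lightly edited copy,
# and an unrelated checkerboard-like image.
original = [200] * 32 + [30] * 32
recaptioned = [195] * 32 + [40] * 30 + [220, 220]  # minor edits in one corner
unrelated = [30, 200] * 32

d_near = hamming(average_hash(original), average_hash(recaptioned))
d_far = hamming(average_hash(original), average_hash(unrelated))
# Near-duplicate pairs yield small distances; unrelated images, large ones.
```

Matching by hash distance rather than exact bytes is what lets a fact-checker recognize an old atrocity photo even after it has been cropped and re-captioned for a new rumor.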
Section 40.4: Sub-Saharan Africa — Mobile-First Verification in Low-Bandwidth Environments
40.4.1 Mobile-First Information Consumption
Sub-Saharan Africa has one of the fastest-growing mobile internet user bases in the world, but connectivity remains heavily mobile and often low-bandwidth. Feature phones (non-smartphones) remain in wide use in many countries; in some rural areas, voice calls and SMS messages are the primary digital communication channels. Where smartphones are available, WhatsApp is dominant — often accessed through "zero-rated" data plans that make it available without consuming data allowances.
These infrastructure realities mean that misinformation in many African contexts circulates in formats adapted to low bandwidth: audio clips, image messages with text overlays, forwarded SMS chains, voice notes. These formats are difficult to fact-check programmatically (requiring audio transcription and image analysis) and reach populations without reliable high-bandwidth access to verification websites.
40.4.2 Election Misinformation in Kenya, Nigeria, and Zimbabwe
Elections in Africa have become significant misinformation hotspots. Three documented cases illustrate the patterns:
Kenya 2017 and 2022: The 2017 Kenyan presidential election, contested between Uhuru Kenyatta and Raila Odinga, generated substantial documented misinformation including fabricated polling data, false attribution of statements to political figures, and inflammatory ethnic content designed to stoke violence along Kenya's historical fault lines. The 2022 election saw similar patterns with AI-assisted content production added. Africa Check (Nairobi) documented hundreds of false claims circulating via social media and messaging apps during both election periods.
Nigeria 2023: Nigeria's February 2023 presidential election, the largest in Africa, was accompanied by a sustained misinformation campaign targeting all major candidates. False content included fabricated election result screenshots, fake government announcements about election postponement, and religiously inflammatory content targeting Muslim-Christian divisions. The election's disputed result generated further waves of misinformation about vote-counting processes.
Zimbabwe 2023: The Zimbabwean government's information control apparatus has historically generated both state-sponsored propaganda and suppressed opposition information. The 2023 elections saw information control operate through both state media dominance and the circulation of false content about opposition candidates through pro-government social media networks.
40.4.3 Africa Check: Verification in Low-Resource Environments
Africa Check, founded in 2012 and operating across Anglophone Africa with additional bureaus for French and Portuguese-speaking countries, has developed fact-checking methodologies specifically adapted to African information environments. These adaptations include:
Source diversification: Verification methodologies that work with African government data, UN data, and regional databases rather than Western institutional sources that may not cover African subjects.
Language adaptation: Fact-checking in multiple African languages, including Swahili, Amharic, and West African French variants, recognizing that English-language verification misses the majority of the content that reaches most African audiences.
Multimedia verification: Investment in audio transcription and image forensics tools suited to the formats in which misinformation circulates (voice notes, image memes) rather than the written text formats most studied in the Western research literature.
Community trust building: Partnerships with local radio stations and community organizations for correction distribution, recognizing that corrective content needs to reach audiences through the same trusted channels that delivered the original misinformation.
Section 40.5: Latin America — WhatsApp Elections and Narco-Misinformation
40.5.1 Brazil's WhatsApp Election Ecosystem
Brazil's 2018 and 2022 presidential elections have been among the most extensively studied in the world for WhatsApp's role in political information ecosystems. The 2018 election, which Jair Bolsonaro won, saw a massive and well-documented WhatsApp disinformation campaign on his behalf — including a later-documented scheme in which Bolsonaro-supporting businesses purchased bulk anti-PT (Workers' Party) messaging services.
Research by Brazilian academics at FGV DAPP and by journalists at The New York Times, Folha de S.Paulo, and The Intercept documented the structure of pro-Bolsonaro WhatsApp networks: thousands of groups, millions of members, automated content distribution, and content ranging from genuine political commentary to fabricated scandals, false health claims (anti-vaccine content), and religious misinformation targeting evangelical Christian communities.
The Brazilian case illustrates several dynamics generalizable to other contexts:
- Closed network coordination: WhatsApp's encryption makes coordinated campaigns invisible to platform moderation until they are reported by human insiders.
- Community gatekeepers: Group administrators function as information gatekeepers whose personal credibility shapes the credibility of shared content.
- Religious community exploitation: Religious communities (evangelical churches, Catholic lay organizations) have established WhatsApp networks that are exploited for political content distribution.
40.5.2 Chequeado and Regional Fact-Checking
Chequeado, based in Buenos Aires, is Latin America's oldest and one of its most respected fact-checking organizations. Founded in 2010, Chequeado has developed a methodology for checking claims in the Argentine political context and has trained fact-checkers across Latin America through its Chequeado Enseña program.
The Latin American fact-checking ecosystem has developed several distinctive approaches:
- Pre-bunking electoral narratives: Several organizations, including Chequeado, have developed pre-election guides to anticipated false narratives, allowing audiences to encounter anticipated misinformation in a debunked frame before they see it in circulation.
- University partnerships: Partnerships with universities for training programs and to embed media literacy in undergraduate education.
- Cross-border coordination: The Latam Chequea network facilitates coordination across 20+ fact-checking organizations in Latin America, including shared databases of fact-checks, so that misinformation debunked in Colombia can quickly be published in adapted form in Mexico and Brazil.
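Shared fact-check databases of this kind typically rely on a common record format; the most widely used is schema.org's ClaimReview markup, which search engines and fact-check aggregators also ingest. A minimal sketch of such a record (all field values below are invented for illustration, not a real published fact-check):

```python
import json

def claim_review_record(claim, verdict, org, url, language):
    """Build a minimal schema.org ClaimReview record, the interchange
    format widely used to share fact-check verdicts across outlets.
    Only a subset of the schema's fields is shown here."""
    return {
        "@context": "https://schema.org",
        "@type": "ClaimReview",
        "claimReviewed": claim,
        "inLanguage": language,
        "author": {"@type": "Organization", "name": org, "url": url},
        "reviewRating": {
            "@type": "Rating",
            "alternateName": verdict,  # e.g. "False", "Misleading"
        },
    }

record = claim_review_record(
    claim="Video shows ballot boxes being burned in City X",  # hypothetical
    verdict="False",
    org="Example Checkers",            # hypothetical organization
    url="https://example.org",
    language="es",
)
serialized = json.dumps(record, ensure_ascii=False, indent=2)
```

Because the format is language-tagged and machine-readable, a verdict published in Spanish in Colombia can be discovered, translated, and republished by a partner outlet in Brazil without re-doing the underlying verification.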
40.5.3 Narco-Misinformation in Mexico
Mexico presents a specific form of misinformation ecosystem shaped by the context of cartel violence and the failure of state institutions. In areas with significant cartel presence, official government information is treated with deep skepticism rooted in documented government corruption and collaboration with criminal organizations. Social media, including WhatsApp groups, Twitter/X, and Telegram, have become primary sources of security information in communities that cannot rely on official sources.
The narco-misinformation dynamic is distinctive: cartel organizations themselves use social media for territorial messaging, recruitment, and intimidation. Counter-messaging by rival cartels and by vigilante groups creates an ecosystem in which multiple actors produce false or misleading security-related content. Fact-checking in this context carries personal risk for journalists, several of whom have been killed in Mexico in connection with security reporting.
Section 40.6: Southeast Asia — Keyboard Armies and State-Sponsored Suppression
40.6.1 The Philippines' "Keyboard Army"
The Philippines under Rodrigo Duterte's administration (2016–2022) became a widely documented case of state-affiliated organized online harassment and disinformation. Duterte's political operation employed a network of paid social media workers — colloquially called a "keyboard army" — to amplify Duterte-supporting content, attack journalists and critics, and manufacture the appearance of popular consensus around administration positions.
Maria Ressa, the co-founder of the Philippine investigative news outlet Rappler and a co-recipient of the 2021 Nobel Peace Prize, documented the keyboard army's operations in her research and journalism, subsequently described in her memoir How to Stand Up to a Dictator. Rappler's research identified networks of fake accounts coordinating to amplify pro-Duterte and anti-journalist content, and documented how these networks interacted with Facebook's algorithm to maximize reach.
The Philippines case is significant partly because it documented, earlier than most comparable cases, the specific mechanism by which state-affiliated disinformation uses platform algorithmic amplification as a force multiplier. The keyboard army did not need to reach all Filipinos directly; it needed to reach the platform algorithm, which would then amplify high-engagement content to broader audiences organically.
40.6.2 Singapore's POFMA
Singapore represents the opposite end of the government response spectrum. The Protection from Online Falsehoods and Manipulation Act (POFMA), enacted in 2019, gives the Singapore government broad authority to order corrections, takedowns, and labeling of content it deems false or misleading. The law has been criticized by press freedom organizations as providing excessive government control over information, with the potential to be used against legitimate political opposition and journalism.
POFMA represents a governance model that is distinct from both the Western market-liberal approach (relying primarily on platform self-regulation and individual media literacy) and the authoritarian content suppression model (which blocks access entirely). Singapore's model uses legal compulsion with nominal rights to appeal, giving the government significant but not absolute control. The law's critics argue that the appeal mechanism, which goes to courts that are historically deferential to the government, provides inadequate protection.
40.6.3 Thailand's Lèse-Majesté Overlap
In Thailand, the misinformation problem is inseparable from the criminal law context of lèse-majesté — laws criminalizing criticism of the monarchy. Content that discusses the monarchy critically can be prosecuted as criminal speech, regardless of its truthfulness, creating an environment in which the distinction between "false" and "dangerous to power" is collapsed in law. This dynamic means that fact-checking organizations operate under significant legal constraint when content involves the monarchy or military governance.
The Thai case illustrates a broader point about context dependence: what "fighting misinformation" means, and what media literacy education should teach, depends on the political-legal environment. In some contexts, teaching people to seek official corrections is appropriate; in contexts where official speech is authoritarian, teaching people to seek official corrections can actually lead them toward state propaganda.
Section 40.7: Post-Soviet Space — Russian Disinformation and Resilience
40.7.1 Russian-Language Disinformation Targeting Diaspora and Neighbors
The Russian government's information operations targeting Russian-speaking populations outside Russia — in Ukraine, the Baltic states, Moldova, Kazakhstan, and diaspora communities in Western Europe — represent some of the best-documented state-sponsored disinformation campaigns in the world. The operations use a combination of state media (RT, Sputnik), social media amplification, and a network of proxy websites and influencers to distribute narratives favorable to Russian state interests.
The targeting of Russian-speaking populations outside Russia is strategically rational: these communities have Russian as a primary language, are reachable through Russian-language media infrastructure, and in some cases have political and cultural affinities with Russia that make them more receptive to Russian narratives. In the Baltic states (Estonia, Latvia, Lithuania), Russian-speaking minorities constitute significant portions of the population and have historically received their news primarily from Russian-language sources based in Russia.
40.7.2 EUvsDisinfo Database
EUvsDisinfo, operated by the European External Action Service (EEAS), maintains an ongoing database of disinformation narratives attributed to Russian state or state-affiliated sources. The database has catalogued thousands of cases of identified pro-Kremlin disinformation across multiple languages and platforms, providing the most comprehensive systematic public record of Russian information operations.
The EUvsDisinfo methodology — identifying recurring narratives, tracing their propagation across platforms and languages, and documenting the specific false claims — has established important precedents for systematic disinformation tracking. The database is valuable not only as a record of historical operations but as a tool for identifying recurring narrative templates that may be repurposed across different contexts.
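Spotting a recurring narrative template of the kind EUvsDisinfo catalogues can be approximated with near-duplicate text matching: claims that reuse a template share most of their word n-grams even when one "slot" is swapped. A minimal sketch using word-shingle Jaccard similarity (the claims below are invented examples; real cross-platform tracking would also require translation and normalization):

```python
def shingles(text, n=3):
    """Set of word n-grams for a claim; lowercased for rough matching."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Set overlap in [0, 1]; high overlap flags a likely recycled template."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Invented claims: two instances of one template, plus an unrelated narrative.
c1 = "the election was rigged by foreign laboratories funded by nato"
c2 = "the election was rigged by foreign laboratories funded by brussels"
c3 = "vaccines contain microchips for population tracking"

s1, s2, s3 = shingles(c1), shingles(c2), shingles(c3)
same_template = jaccard(s1, s2)   # high: only one slot changed
different = jaccard(s1, s3)       # near zero: unrelated narrative
```

Clustering claims by this kind of similarity is what turns a flat list of debunks into a catalogue of reusable narrative templates that can be recognized when they resurface in a new country or language.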
40.7.3 Baltic Resilience
The Baltic states — Estonia, Latvia, and Lithuania — have developed distinctive approaches to Russian disinformation resilience, shaped by their histories as Soviet-occupied states and their concerns about Russian interference in domestic politics.
Estonia's Propastop, Latvia's Re:Check, and Lithuania's Demaskuok are fact-checking and disinformation awareness organizations specifically focused on Russian-language disinformation targeting their respective populations. The Baltic states have also invested substantially in media literacy education as a national security priority, embedding critical media literacy in school curricula.
Estonia is particularly notable for its digital governance achievements and its "data embassy" infrastructure, which maintains government data backups outside Estonian territory as protection against Russian cyber operations. The Estonian approach treats information resilience as part of national security, with investments in digital literacy alongside military defense — a model that has attracted attention from NATO partners.
Section 40.8: Cross-Cultural Differences in Trust and Credulity
40.8.1 Cultural Dimensions and Information Processing
Geert Hofstede's cultural dimensions framework — developed from survey data across more than 50 countries — provides one systematic approach to cross-cultural comparison of psychological orientations relevant to information processing. Three dimensions are particularly relevant to misinformation susceptibility:
Individualism/Collectivism: In collectivist cultures, group identity and social harmony are prioritized over individual judgment. Information that threatens group solidarity or challenges in-group authority figures may be resisted regardless of its factual accuracy. Conversely, information from in-group sources may be accepted with lower scrutiny than information from out-group sources. This dynamic is relevant to understanding why WhatsApp community misinformation has high credibility — it comes from in-group members in high-collectivism social environments.
Power Distance: In high power-distance cultures, hierarchical authority is respected and deferred to. Information from authority figures — government officials, religious leaders, community elders — receives higher credibility than information from peers or unknown sources. This can be protective (authority figures who promote accurate information are heeded) or a source of vulnerability (authority figures who promote false information are heeded just as readily). High power-distance dynamics may make top-down communication of fact-checked information more effective — but they also make authoritative disinformation more dangerous.
Uncertainty Avoidance: In high uncertainty-avoidance cultures, ambiguity is uncomfortable and clear answers (even if wrong) are preferred over calibrated uncertainty. Misinformation that provides clear, unambiguous explanations for complex or threatening events may be more psychologically satisfying and therefore more credible in high uncertainty-avoidance contexts than in low uncertainty-avoidance ones.
40.8.2 Institutional Trust Variations
The Edelman Trust Barometer, an annual global survey, documents substantial variation in trust in media, government, and NGOs across countries. Countries in Northern Europe and Australia consistently show high media trust; countries in the Middle East, Latin America, and parts of Asia show much lower trust in mainstream media institutions.
These trust differences are not simply psychological; they often reflect documented histories of media corruption, political capture of media institutions, or suppression. In contexts where mainstream media has been historically untrustworthy, the rational response is skepticism of mainstream media — which then creates vulnerability to alternative information sources that may be less accurate. The low-trust media environment is not primarily a problem of irrational credulity but of rational adaptation to unreliable institutional information.
40.8.3 Social Network Effects on Credulity
Research on social influence and information credibility suggests that in many cultures the sender's position in the recipient's social network matters more for credibility than institutional source characteristics. This finding is particularly pronounced in high-collectivism, high-relationship-orientation contexts. A message shared by a person's trusted relative or community member receives higher credibility than the same message from an institutional source, regardless of institutional authority.
This observation has important implications for fact-checking design: fact-checks distributed through trusted social network channels are likely more effective than fact-checks distributed through institutional websites or official announcements, in contexts where interpersonal trust exceeds institutional trust.
Section 40.9: Multilingual Challenges for Detection
40.9.1 NLP's Performance Gap in Low-Resource Languages
Natural language processing (NLP) — the computational field underlying automated text analysis, content moderation, and misinformation detection tools — has developed primarily in and for high-resource languages, especially English. The gap in NLP performance between high-resource and low-resource languages is substantial. A misinformation detection classifier trained primarily on English data may perform adequately on English content while performing near chance-level on content in languages with less training data.
Low-resource languages — those with little digitized text and few annotated datasets available for training — include most African languages, most South and Southeast Asian languages other than English and Hindi, and many languages of the Middle East, Central Asia, and Oceania. In aggregate, these languages are spoken by billions of people. The NLP performance gap is not merely an academic problem; it means that platform content moderation systems, which are most developed for English, leave speakers of these languages with significantly less protection.
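The gap can be made concrete with a deliberately minimal sketch: a bag-of-words classifier trained only on a handful of English examples has no vocabulary overlap with a Swahili paraphrase of a similar claim, so it can do no better than abstain or guess. All training strings below, and the Swahili phrase (roughly, "miracle medicine that doctors hide"), are invented for illustration; real detectors are far more sophisticated but fail along the same vocabulary-coverage axis.

```python
# Toy illustration (not a real detector): a bag-of-words classifier trained
# only on English examples degrades to abstention on out-of-language input.
from collections import Counter

def train(examples):
    """examples: list of (text, label). Returns per-label word counts."""
    counts = {"misinfo": Counter(), "reliable": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Score by counting in-vocabulary words under each label."""
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    if scores["misinfo"] == scores["reliable"]:
        return "unknown"  # no in-vocabulary evidence either way
    return max(scores, key=scores.get)

english_train = [
    ("miracle cure doctors hate this secret remedy", "misinfo"),
    ("shocking secret they refuse to tell you", "misinfo"),
    ("health ministry publishes vaccination schedule report", "reliable"),
    ("official report confirms election results certified", "reliable"),
]
counts = train(english_train)

# English input overlaps the training vocabulary and gets a label;
# the Swahili input shares no vocabulary, so the classifier is blind to it.
print(classify("secret miracle remedy doctors suppress", counts))
print(classify("dawa ya miujiza ambayo madaktari wanaficha", counts))
```

The second call returning "unknown" is the optimistic case; a production classifier forced to emit a label would instead produce near-random output on such input.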
40.9.2 The Code-Switching Problem
Code-switching — the practice of moving between two or more languages within a conversation, sometimes within a single sentence — is common in multilingual communities and presents specific challenges for automated language analysis. A message that begins in English, shifts to Tagalog for a culturally specific reference, and ends in an English-Tagalog hybrid (Taglish) is not adequately analyzed by systems designed for monolingual input.
Code-switching patterns are culturally specific and evolving, making them particularly difficult to address through static training data. Machine learning systems trained on one code-switching pattern may fail to generalize to a different community's code-switching practices. The code-switching problem is an instance of the general challenge that digital communities adapt language faster than NLP systems can be retrained.
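As a toy illustration of why monolingual assumptions break, the sketch below tags each token of an invented Taglish message against tiny hand-built English and Tagalog lexicons. The lexicons, the message, and the majority-vote heuristic are all hypothetical, but they show how a single message-level language label can route mixed content away from the moderation model that could actually read part of it.

```python
# Toy word-level language ID over a Taglish message, using tiny lexicons.
# A pipeline that assumes one language per message mislabels or drops the
# tokens that fall outside its chosen model's vocabulary.
ENGLISH = {"the", "is", "very", "please", "share", "this", "news", "fake"}
TAGALOG = {"ang", "balita", "ito", "ay", "hindi", "totoo", "huwag", "ipasa"}

def tag_tokens(text):
    """Label each token 'en', 'tl', or 'unk' by lexicon lookup."""
    tags = []
    for tok in text.lower().split():
        if tok in ENGLISH:
            tags.append((tok, "en"))
        elif tok in TAGALOG:
            tags.append((tok, "tl"))
        else:
            tags.append((tok, "unk"))  # slang / novel code-switch forms
    return tags

def dominant_language(tags):
    """Naive message-level label: majority vote over recognized tokens."""
    votes = [lang for _, lang in tags if lang != "unk"]
    return max(set(votes), key=votes.count)

mixed = "grabe huwag ipasa this fake balita"
tags = tag_tokens(mixed)
print(tags)
# Majority vote labels the whole message Tagalog, so an English-only
# moderation model would never see the English tokens it could analyze.
print(dominant_language(tags))
```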
40.9.3 The Resource Allocation Problem
The disproportion between where AI and NLP research capacity is concentrated and where NLP capability gaps are most consequential for misinformation is an instance of a general resource allocation problem in the global information ecosystem. English NLP tools are plentiful, well-funded, and competitive; Yoruba NLP tools are scarce, underfunded, and in many cases nonexistent.
The resource allocation problem extends beyond technical tools to human capacity: the number of linguists, NLP researchers, and digital rights advocates working in low-resource language communities is small relative to need. Training and retaining this capacity in countries with weaker research infrastructure is challenging. International funding for NLP research in low-resource languages has been increasing but remains far short of what would be needed to close the capability gap.
Section 40.10: Global Coordination
40.10.1 The International Fact-Checking Network (IFCN)
The International Fact-Checking Network, established at the Poynter Institute in 2015, serves as an accreditation body and coordination mechanism for fact-checking organizations worldwide. IFCN accreditation requires adherence to a code of principles including nonpartisanship, transparency of funding, and commitment to methodological standards. As of this writing, over 100 organizations in more than 60 countries hold IFCN accreditation.
IFCN accreditation has become a significant gatekeeping function: Facebook (Meta), Google, and other platforms have established partnerships with IFCN-accredited organizations for third-party fact-checking, giving IFCN accreditation commercial as well as reputational significance. The ClaimReview schema — a structured data format, hosted at Schema.org, that makes fact-checks machine-readable — enables search engines and platforms to surface fact-checks alongside related search results, creating a systematic (if partial) integration of fact-checking into the information ecosystem.
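A minimal ClaimReview item might look like the following sketch. The property names (`claimReviewed`, `reviewRating`, and so on) follow the publicly documented Schema.org ClaimReview type; the organization, URL, claim text, and rating values are hypothetical placeholders.

```python
# Illustrative ClaimReview record, emitted as JSON-LD. Field names follow
# the public Schema.org ClaimReview type; all concrete values below
# (organization, URL, claim, rating) are hypothetical placeholders.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/reviews/1234",   # hypothetical
    "claimReviewed": "Drinking hot water cures the virus.",  # hypothetical claim
    "datePublished": "2024-01-15",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        # Human-readable verdict that search engines can surface directly:
        "alternateName": "False",
    },
}

# Embedding this JSON-LD in a fact-check page lets crawlers parse the
# verdict without scraping prose.
print(json.dumps(claim_review, indent=2))
```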
40.10.2 The Global Fact Check Fund
The Global Fact Check Fund, administered by the International Fact-Checking Network, provides funding specifically to fact-checking organizations in the Global South — organizations that serve large populations and face acute misinformation problems but lack the stable revenue models available to fact-checkers in wealthier markets.
The fund reflects recognition that the market for fact-checking is structurally inadequate in many low-income country contexts: fact-checking requires investment in skilled journalists, technological infrastructure, and outreach, but generates direct revenue primarily through platform partnerships that are less developed outside the US and EU. Philanthropic funding, while essential, is uncertain; the Global Fact Check Fund attempts to provide more stable, needs-assessed support for capacity building in underserved regions.
40.10.3 The International Partnership on Information and Democracy
The International Partnership on Information and Democracy (IPID), signed by governments including France, Canada, the UK, and others, represents an attempt to establish intergovernmental commitments to information integrity norms. The partnership commits signatories to principles including supporting pluralistic and independent media, protecting journalists, promoting media literacy, and developing accountability mechanisms for digital platforms.
The IPID faces the fundamental challenge of intergovernmental coordination: its norms are voluntary, its enforcement mechanism is primarily reputational, and its membership does not include the governments most responsible for state-sponsored disinformation campaigns (Russia, China) or the most egregious cases of government manipulation of information (several IPID member governments have themselves been documented restricting press freedom).
Callout Box: The Forwarding Limit Experiment
In 2018, WhatsApp introduced a forwarding limit in India in direct response to lynching cases attributed to viral misinformation, capping forwards at five chats at a time, down from a much higher limit. In 2020 the restriction was tightened globally: messages that had already been forwarded five or more times could be forwarded to only one chat at a time. These were the first major platform policy interventions specifically targeting forwarding velocity as a misinformation mechanism. Research on the effectiveness of forwarding limits found that they reduced the velocity of viral spread without eliminating the spread of false content. The policy illustrates both the potential of platform structural interventions to reduce harm and their limitations when motivated forwarders route around restrictions by using alternative channels.
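The velocity effect can be illustrated with a deliberately crude branching model, in which every chat holding the message forwards it once per step, and the per-step fanout drops from five to one after a forward-count threshold is crossed. The numbers are illustrative, not measurements, and real forwarding limits track per-message hop counts rather than the single global counter used here.

```python
# Toy branching model of forward velocity. Each frontier chat forwards the
# message to `fanout` new chats per step; once total forwards exceed
# `threshold`, fanout drops to `limited_fanout` (the WhatsApp-style limit).
# All parameters are illustrative, not empirical.
def reach_per_step(steps, fanout=5, limited_fanout=1, threshold=5):
    """Return cumulative chats reached after each forwarding step."""
    reached, frontier, forwards = 1, 1, 0
    history = []
    for _ in range(steps):
        rate = fanout if forwards < threshold else limited_fanout
        new = frontier * rate
        forwards += frontier  # each frontier chat performs one forward action
        reached += new
        frontier = new
        history.append(reached)
    return history

# With the limit, growth flips from exponential to linear after the
# threshold; without it, growth stays exponential.
print(reach_per_step(4))
print(reach_per_step(4, threshold=10**9))
```

The contrast between the two runs mirrors the empirical finding quoted above: the limit slows spread substantially but does not stop it, since limited forwarding still reaches new chats every step.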
Callout Box: WEIRD Assumptions in Media Literacy Design
Standard media literacy curricula teach students to: identify the source of information, check credentials, look for corroborating sources, and evaluate the evidence. These heuristics assume that: (1) sources have identifiable credentials to evaluate, (2) credentialed sources are more trustworthy than non-credentialed ones, (3) corroborating sources are accessible, and (4) evidence evaluation is an individual cognitive exercise. Each of these assumptions fails in significant ways in many non-Western contexts. When WhatsApp audio clips come from anonymous community members, credential verification is impossible. When institutions have documented histories of corruption, credentials may indicate bias rather than trustworthiness. When the information environment is primarily oral and audio-based, text-based verification tools are inaccessible. When information evaluation is a social rather than individual practice, individual critical thinking training misses the relevant mechanism. Effective global media literacy must adapt its heuristics to these contexts.
Key Terms
- WEIRD: Western, Educated, Industrialized, Rich, Democratic — describes the characteristics of most research subject populations in behavioral science, whose over-representation biases research conclusions.
- Low-resource language: A language for which limited annotated training data is available for NLP systems, resulting in substantially lower performance of automated text analysis tools compared to high-resource languages like English.
- Code-switching: The practice of moving between two or more languages within a conversation or text, common in multilingual communities.
- IT Cell: Informal term for the social media operation of India's Bharatiya Janata Party (BJP); more generally, any organized political social media influence operation.
- POFMA: Protection from Online Falsehoods and Manipulation Act; Singapore's 2019 law empowering the government to order corrections and takedowns of content deemed false.
- IFCN: International Fact-Checking Network; the Poynter-hosted body that accredits fact-checking organizations against a code of principles.
- ClaimReview: A structured data type hosted at Schema.org, used by fact-checking organizations to make fact-checks machine-readable for integration into search engines and platform systems.
- Lèse-majesté: Laws criminalizing criticism of monarchs, relevant in Thailand and some other countries where such laws intersect with media freedom and misinformation governance.
- Oral information culture: An information culture in which verbal transmission — spoken conversation, storytelling, audio messages — plays the primary role, as distinct from text-based or visual information cultures.
- EUvsDisinfo: The European External Action Service database tracking Russian-attributed disinformation narratives across languages and platforms.
- Power distance: A cultural dimension (Hofstede) measuring the degree to which less powerful members of society accept unequal distribution of power; relevant to how authority signals shape information credibility.
- Global Fact Check Fund: A fund administered by the IFCN providing financial support to fact-checking organizations in the Global South.
Discussion Questions
- The chapter argues that media literacy curricula based on WEIRD assumptions may be ineffective or counterproductive in non-Western contexts. Design a media literacy intervention specifically for a rural WhatsApp community in a country with high illiteracy rates and low institutional trust. What would be different from a standard curriculum?
- India's AltNews fact-checks primarily in English and Hindi. Most misinformation in India circulates in regional languages. How should a well-resourced fact-checking ecosystem address this gap, and what are the practical constraints on doing so?
- Singapore's POFMA gives the government authority to order corrections and takedowns of content it deems false. Critics argue this is a censorship tool; supporters argue it is an effective anti-misinformation mechanism. What criteria would you apply to evaluate whether a government anti-misinformation law protects or threatens information integrity?
- The Hofstede cultural dimensions framework offers one approach to thinking about cross-cultural differences in misinformation susceptibility. What are the limitations of applying this framework to individual-level misinformation behavior? What alternative frameworks might capture relevant cross-cultural differences?
- Russian disinformation campaigns target Russian-speaking diaspora communities as well as populations in Russia's geographic neighborhood. What obligations, if any, do origin countries (Russia) and diaspora destination countries have to protect these audiences from state-sponsored disinformation? What mechanisms would give effect to those obligations?
- The chapter discusses the resource allocation problem in NLP: English NLP tools are well-developed while tools for low-resource languages are not. Who should bear responsibility for closing this gap: platform companies, governments, international organizations, or academic researchers? What practical steps would your chosen actors take?
Summary
This chapter has examined misinformation as a genuinely global phenomenon with significant variation across information environments, cultural contexts, and political systems. The Western and WEIRD bias in the existing research literature causes us to miss the most severe misinformation harms, which occur in contexts characterized by mobile-first connectivity, oral information culture dynamics, multilingual environments, weak institutional trust, and political systems ranging from vibrant-if-stressed democracies to authoritarian information-control states.
Regional case studies across India, Sub-Saharan Africa, Latin America, Southeast Asia, and the post-Soviet space reveal both the diversity of misinformation problems and the creativity of locally adapted responses — from India's AltNews multilingual fact-checking to Africa Check's audio-based verification methodologies to the Baltic states' national security-level media literacy investment.
Cross-cultural analysis reveals that cultural dimensions including collectivism, power distance, and uncertainty avoidance interact with information processing in ways that standard media literacy education does not address. Technical solutions face a compounded challenge: NLP tools perform substantially worse in low-resource languages, while code-switching patterns create additional complexity for automated analysis.
International coordination mechanisms — the IFCN, the Global Fact Check Fund, the IPID — provide partial but genuinely valuable infrastructure for cross-border cooperation. Their limitations reflect the limits of voluntary coordination among actors with diverse interests, without enforcement mechanisms sufficient to constrain state actors with strong incentives to continue disinformation operations.
The chapter's overarching lesson is that addressing the global misinformation problem requires context-specific design: interventions calibrated to specific information ecosystems, cultural orientations, language environments, and political-legal contexts, rather than the exportation of WEIRD-developed solutions to non-WEIRD contexts.