Case Study: Social Credit in Practice: China's System Analyzed

"The reality of China's social credit system is far more complex — and in some ways more concerning — than the dystopian caricature suggests." — Genia Kostka, Freie Universität Berlin

Overview

When the EU AI Act prohibits "social scoring" by public authorities, it has a specific reference point: the widespread perception that China has built — or is building — a unified system that assigns a behavioral score to every citizen, rewards the "trustworthy," and punishes the "untrustworthy" with restrictions on travel, education, and public services. This perception, amplified by Western media coverage, has become a primary justification for the AI Act's strongest prohibitions.

But what is China's social credit system actually? How does it work? And how well does the Western caricature match the reality?

This case study examines China's social credit system as it actually operates — a fragmented, evolving, and institutionally complex set of programs that defies simple description. Understanding the reality is essential for evaluating both the EU's regulatory response and the broader question of how risk-based regulation engages with governance systems that reflect fundamentally different political values.

Skills Applied:

  • Analyzing the gap between perception and reality in technology governance
  • Evaluating governance systems within their political and cultural context
  • Comparing regulatory responses across jurisdictions
  • Assessing how fears about foreign systems shape domestic regulation


The Origins: Trust Infrastructure

The Problem of Trust

China's social credit system did not emerge from an Orwellian desire for total control — though control is certainly part of its function. It emerged from a practical governance problem: the absence of a comprehensive credit reporting system in a rapidly modernizing economy.

In the early 2000s, China's economic growth had outpaced its institutional infrastructure. Commercial fraud was rampant. Contracts were routinely broken without consequence. Food safety scandals — including the 2008 melamine-contaminated milk crisis that poisoned 300,000 children — revealed deep failures in regulatory compliance. The government concluded that China needed a "trust infrastructure" — a system that would create consequences for dishonest behavior in commercial, legal, and social domains.

In 2014, the State Council published the "Planning Outline for the Construction of a Social Credit System (2014-2020)." The document described a system that would "allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step." This language — poetic and ominous in translation — set the frame for Western interpretations.

What the 2014 Outline Actually Proposed

The Planning Outline described four domains of social credit:

  1. Government affairs honesty (zhengwu chengxin): Ensuring government agencies fulfill their commitments — a notable inclusion, since it implies that the state itself is subject to credit evaluation.
  2. Commercial integrity (shangwu chengxin): Ensuring businesses honor contracts, comply with regulations, and maintain product quality.
  3. Societal integrity (shehui chengxin): Encouraging honest behavior in daily social interactions.
  4. Judicial credibility (sifa gongxin): Ensuring the legal system operates fairly and that court judgments are enforced.

The Outline was a policy direction, not a technical specification. It did not describe a single unified scoring system. It described a vision for building institutional trust across multiple domains.


The Reality: A Fragmented System

No Single Score

Contrary to the most common Western narrative, there is no unified "social credit score" assigned to every Chinese citizen. What exists instead is a complex, fragmented ecosystem of overlapping programs at national, provincial, municipal, and commercial levels:

National blacklist/redlist system. The central government maintains databases of individuals and companies that have been placed on "blacklists" (for serious violations like court judgment default, tax evasion, or food safety violations) or "redlists" (for exemplary compliance). Being blacklisted can result in restrictions on high-speed rail and air travel, limits on luxury purchases, and public naming. Being redlisted can facilitate access to government services and preferential regulatory treatment.

The most significant national-level mechanism is the court judgment defaulter list (shixin beizhixingren), managed by the Supreme People's Court. Individuals who fail to comply with court judgments — the Chinese equivalent of contempt of court — are placed on this list, which triggers automated restrictions on travel and certain financial transactions. As of 2023, over 8 million individuals had been placed on this list at some point.

Municipal pilot programs. Dozens of Chinese cities have developed their own social credit scoring systems, each with different criteria, scoring methodologies, and consequences. These programs vary enormously:

  • Rongcheng (Shandong Province): One of the earliest and most publicized municipal programs. Residents start with 1,000 points. Points are added for volunteer work, blood donation, and community service; points are deducted for traffic violations, failure to care for elderly parents, and spreading false information. High scorers receive benefits like free bus rides and discounted heating.
  • Suzhou (Jiangsu Province): The "Osmanthus Score" (guihua fen) emphasizes financial creditworthiness and civic participation. Participation is voluntary.
  • Shanghai: Focuses primarily on commercial compliance, targeting businesses rather than individuals.
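The Rongcheng-style mechanics can be sketched as a simple illustrative model. The starting total of 1,000 points comes from the description above; the event names and point values below are hypothetical placeholders, not Rongcheng's actual schedule:

```python
# Illustrative sketch of a Rongcheng-style municipal points ledger.
# The 1,000-point starting balance follows published descriptions of the
# program; event names and point values here are hypothetical examples.

STARTING_POINTS = 1000

# Hypothetical adjustment schedule: rewarded and penalized behaviors.
ADJUSTMENTS = {
    "volunteer_work": 5,
    "blood_donation": 5,
    "traffic_violation": -5,
    "spreading_false_information": -50,
}

class PointsLedger:
    """Tracks one resident's score in a Rongcheng-style points scheme."""

    def __init__(self):
        self.points = STARTING_POINTS

    def record(self, event: str) -> int:
        """Apply the adjustment for a recorded event; unknown events are ignored."""
        self.points += ADJUSTMENTS.get(event, 0)
        return self.points

ledger = PointsLedger()
ledger.record("blood_donation")     # 1005
ledger.record("traffic_violation")  # back to 1000
```

Even this toy model makes the design choices visible: who defines the adjustment schedule, which behaviors are observable enough to record, and what thresholds trigger benefits or penalties.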

Research by Genia Kostka and others has found that most municipal programs are small-scale, experimental, and poorly integrated. Many have limited data inputs and rely heavily on existing government records (traffic violations, court judgments, tax records) rather than novel surveillance technologies. Some programs have been scaled back or abandoned due to public indifference or administrative burden.

Commercial credit systems. Private companies, particularly Alibaba (through its Sesame Credit system) and Tencent, have developed their own credit scoring systems. Sesame Credit assigns scores based on purchasing behavior, bill payment, social connections, and other factors. High Sesame Credit scores unlock benefits like waived rental deposits and expedited visa processing. These commercial systems are distinct from government programs, though the boundary between public and private data in China is more porous than in Western systems.


The Surveillance Infrastructure

The Technology Layer

While the social credit system itself is fragmented, it sits atop a surveillance infrastructure that is not. China has deployed an estimated 700 million surveillance cameras — more than any other country. Facial recognition technology is pervasive in public spaces, payment systems, and building access. Digital payment platforms (WeChat Pay, Alipay) generate comprehensive transaction records. And the Great Firewall ensures that digital activity within China is subject to government monitoring.

This surveillance infrastructure provides the data that social credit systems can draw upon, even when the systems themselves are not technically sophisticated. The combination of comprehensive data collection and a political system without independent judicial oversight creates risks that transcend any individual social credit program.

The Enforcement Mechanism

The most consequential element of China's social credit system is not scoring but joint punishment (lianhe chengjie). This mechanism allows multiple government agencies to impose coordinated restrictions on blacklisted individuals. A person blacklisted by a court for failing to pay a judgment may simultaneously lose access to high-speed rail (enforced by the transportation ministry), be denied government contracts (enforced by procurement agencies), and be restricted from holding corporate directorships (enforced by market regulators).

This joint punishment mechanism is what gives the system its teeth. It transforms a single violation into a cascade of consequences that affect multiple domains of life — creating the kind of comprehensive, inescapable consequences that the EU AI Act's prohibition on social scoring is designed to prevent.
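The fan-out structure of joint punishment can be sketched abstractly: one blacklist entry triggers restrictions administered by separate agencies. The agency names and restriction lists below are illustrative examples drawn from the text, not a reproduction of any actual inter-agency agreement:

```python
# Illustrative model of the joint punishment (lianhe chengjie) fan-out.
# Agencies and restrictions are examples from the discussion above, not an
# actual inter-agency memorandum.

JOINT_PUNISHMENT = {
    "transportation_ministry": ["high_speed_rail_ban", "air_travel_ban"],
    "procurement_agencies": ["government_contract_ban"],
    "market_regulators": ["directorship_restriction"],
}

def cascade(blacklist_entry: str) -> list[str]:
    """Collect every restriction triggered across agencies by one blacklist entry."""
    return [measure
            for measures in JOINT_PUNISHMENT.values()
            for measure in measures]

# A single court-ordered blacklist entry cascades into four restrictions
# enforced by three different agencies.
restrictions = cascade("judgment_default")
```

The structural point is that the severity of the system lies in the fan-out itself: adding one agency to the table multiplies the consequences of every existing blacklist entry.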


Western Perceptions vs. Reality

The Dystopian Narrative

Western media coverage of China's social credit system has overwhelmingly emphasized the dystopian interpretation: a unified system that monitors every citizen's behavior, assigns a single score, and determines access to all aspects of public life. Headlines like "China's Orwellian Social Credit System" and "Big Brother with Chinese Characteristics" have shaped public understanding.

This narrative is not entirely wrong — the surveillance infrastructure, the joint punishment mechanism, and the political system's lack of independent checks create genuine and serious concerns. But it is significantly oversimplified:

  • There is no single unified score.
  • Many municipal programs are voluntary, poorly funded, or inactive.
  • The most consequential restrictions apply to individuals who have defied court orders — a category of enforcement that exists in Western legal systems as well (contempt of court, credit reporting, sex offender registries).
  • Public opinion research within China shows surprisingly high support for social credit systems, particularly commercial credit scoring, which many Chinese citizens view as addressing real problems of commercial fraud and institutional trust.

The Analytical Problem

The oversimplification matters for regulatory purposes. If the EU AI Act's prohibition on social scoring is based on a caricature of the Chinese system, it may be poorly calibrated to address the actual risks:

  • Western governments already operate systems with social-scoring characteristics: credit scores, no-fly lists, sex offender registries, welfare fraud databases, and school discipline records all assign consequences based on behavioral evaluation. The line between these systems and "social scoring" is blurrier than the AI Act's binary prohibition suggests.
  • The actual danger in the Chinese system may lie less in scoring itself and more in the absence of independent judicial review, the comprehensiveness of the surveillance infrastructure, and the joint punishment mechanism's cascading consequences — features that are not exclusive to scoring systems.

Implications for Risk-Based Regulation

What the AI Act Prohibits

The AI Act prohibits "the placing on the market, the putting into service or the use of an AI system by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (a) detrimental or unfavourable treatment of certain natural persons or groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (b) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity."

This prohibition is carefully drafted. It targets: (1) public authority use, (2) AI-based evaluation of trustworthiness, (3) based on social behavior, (4) with consequences in unrelated contexts or disproportionate consequences. A system that restricts air travel for individuals who defaulted on court judgments might or might not fall within this prohibition, depending on how "unrelated context" and "disproportionate" are interpreted.
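The four cumulative conditions can be expressed as a checklist. This is a reading aid only, not legal advice; the field names are our own shorthand, not terms from the Act:

```python
# Reading aid for the AI Act's social scoring prohibition: all four
# conditions must hold for a system to fall within scope. Field names are
# our own shorthand, not language from the Act.
from dataclasses import dataclass

@dataclass
class ScoringSystem:
    used_by_public_authority: bool
    evaluates_trustworthiness: bool
    based_on_social_behaviour: bool
    unrelated_or_disproportionate_consequences: bool

def prohibited(s: ScoringSystem) -> bool:
    """True only when every cumulative condition of the prohibition is met."""
    return (s.used_by_public_authority
            and s.evaluates_trustworthiness
            and s.based_on_social_behaviour
            and s.unrelated_or_disproportionate_consequences)

# The court-judgment travel-ban example turns entirely on the final prong:
# whether the restriction is "unrelated" or "disproportionate" is a matter
# of interpretation, so the classification flips on that one judgment call.
travel_ban = ScoringSystem(
    used_by_public_authority=True,
    evaluates_trustworthiness=True,
    based_on_social_behaviour=True,
    unrelated_or_disproportionate_consequences=False,  # contested
)
```

The checklist makes the drafting visible: the first three prongs are factual, while the fourth imports a normative judgment that enforcement and courts will have to supply.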

The Boundary Problem

The AI Act's prohibition raises a fundamental question: Where does legitimate behavioral consequence end and prohibited social scoring begin?

Consider the following systems that exist in EU member states:

  • A government welfare fraud database that triggers enhanced scrutiny of future benefit claims
  • A tax authority risk-scoring system that flags individuals for audit based on behavioral patterns
  • A municipal program that rewards residents with lower parking fees for civic participation

None of these systems is as comprehensive as China's joint punishment mechanism. But each involves government evaluation of individual behavior with consequences that could, in certain configurations, resemble social scoring. The AI Act's prohibition marks the outer limit, but where permissible government behavioral evaluation ends and prohibited social scoring begins remains to be defined through enforcement and judicial interpretation.


Discussion Questions

  1. Does understanding the fragmented reality of China's social credit system change your assessment of its risks? Is a fragmented system of multiple overlapping programs more or less concerning than a unified national score?

  2. Western governments operate systems (credit scores, no-fly lists, welfare fraud databases) that share characteristics with social credit. Where should the line be drawn between legitimate behavioral consequences and prohibited social scoring? Is the AI Act's definition adequate?

  3. Research within China shows significant public support for social credit systems. Does public support make a governance system more legitimate? Can a system be both popular and rights-violating?

  4. The AI Act's social scoring prohibition was partly inspired by Western fears about the Chinese model. Is it appropriate for one jurisdiction's regulatory choices to be shaped by perceptions of another jurisdiction's practices? What are the risks of legislating based on caricature?


Your Turn: Mini-Project

Option A: Research one Chinese municipal social credit pilot program (Rongcheng, Suzhou, or another) in depth. Write a 1,000-word analysis of how the program actually works, who it affects, and how it compares to the Western narrative.

Option B: Identify three government-operated behavioral evaluation systems in the US or EU that share characteristics with social credit (e.g., credit scoring, no-fly lists, predictive policing). For each, analyze whether it would fall within the AI Act's prohibition on social scoring. Write a 1,000-word comparative analysis.

Option C: The AI Act's social scoring prohibition targets public authorities. Private social scoring (e.g., Sesame Credit, employer reputation systems, platform trust scores) is not prohibited. Write a 1,000-word essay arguing for or against extending the prohibition to private-sector social scoring systems.


References

  • Creemers, Rogier. "China's Social Credit System: An Evolving Practice of Control." SSRN Working Paper, May 2018.

  • Kostka, Genia. "China's Social Credit Systems and Public Opinion: Explaining High Levels of Approval." New Media & Society 21, no. 7 (2019): 1565–1593.

  • Liang, Fan, et al. "Constructing a Data-Driven Society: China's Social Credit System as a State Surveillance Infrastructure." Policy & Internet 10, no. 4 (2018): 415–453.

  • State Council of the People's Republic of China. "Planning Outline for the Construction of a Social Credit System (2014-2020)." Guofa No. 21 (2014). Translated by Rogier Creemers.

  • Dai, Xin. "Toward a Reputation State: The Social Credit System Project of China." SSRN Working Paper, 2018.

  • Mac Síthigh, Daithí, and Mathias Siems. "The Chinese Social Credit System: A Model for Other Countries?" Modern Law Review 82, no. 6 (2019): 1034–1071.

  • Ahmed, Shazeda. "The Messy Truth About Social Credit." Logic Magazine, Issue 7 (May 2019).