Case Study 17.2: "GDPR in Practice"
How Article 22 Has (and Hasn't) Been Enforced
Overview
When GDPR took effect in May 2018, advocates of algorithmic accountability hoped it would prove transformative: a legal framework with real enforcement teeth that would force organizations to make their AI decision systems legible to the people they affected. The right of individuals not to be subject to solely automated decisions, the right to meaningful information about the logic involved, and the right to human review represented, in theory, a powerful toolkit for demanding accountability from algorithmic systems.
Five-plus years later, the reality is considerably more complex. Enforcement of Article 22 specifically — as distinct from GDPR enforcement generally — has been limited and uneven, and it has revealed significant gaps between the regulation's promise and its practical implementation. This case study examines what enforcement actions have been brought under or related to Article 22, what they revealed about the gap between legal right and practical remedy, and what this experience means for the future of explanation rights.
The Enforcement Landscape: Slow Start, Growing Momentum
GDPR enforcement generally was slower in its early years than advocates expected. Several factors contributed to this: the complexity of individual complaints and investigations, the resource constraints of national data protection authorities (DPAs), the strategic prioritization of enforcement toward the largest platforms, and the legal complexity of cross-border cases involving major technology companies.
Enforcement specifically targeting Article 22 violations was even slower, for additional reasons. Article 22 cases require DPAs to assess whether a decision was "solely automated" — a technical and factual determination that requires understanding the organization's actual decision processes, not just its stated policies. They require assessing whether the decision had "legal or similarly significant effects" — a legal determination that may be contested. And they require evaluating the adequacy of whatever explanation was provided — which requires technical expertise about what meaningful explanation actually looks like for the AI system in question.
These are substantial analytical demands, and most DPAs are not resourced to meet them routinely. The result has been enforcement actions that reach Article 22 issues in the context of broader investigations, rather than Article 22-specific enforcement.
Key Enforcement Actions and Investigations
The Dutch Tax Authority Case (Toeslagenaffaire)
The most significant Article 22-adjacent enforcement action to date is the Dutch Data Protection Authority's investigation of the Dutch Tax Authority (Belastingdienst) and its childcare benefit fraud detection system. The system — which used a model to flag applications for childcare benefits as potentially fraudulent — discriminated against applicants of non-Dutch nationality, flagging them for additional scrutiny at significantly higher rates. Thousands of families, predominantly with non-Dutch backgrounds, had their childcare benefits incorrectly terminated and were required to repay large sums.
The Dutch DPA's investigation, concluded in 2022 with a fine of 3.7 million euros (later adjusted), found multiple GDPR violations including: use of nationality as a processing criterion without legal basis; failure to be transparent about the use of algorithmic decision-making; and failure to conduct a data protection impact assessment. The investigation found that human review of algorithmic fraud flags was not meaningful — reviewers routinely approved the algorithm's recommendations without independent assessment — implicating Article 22's requirement for genuine human oversight.
The toeslagenaffaire became a national political crisis, forcing the resignation of the Dutch Cabinet in January 2021. The political magnitude of the case — affecting tens of thousands of families, requiring years of remediation, and eventually generating billions in compensation — illustrates what is at stake when algorithmic government decisions are made without adequate transparency or human oversight.
The Austrian Social Benefits Agency Case
The Austrian Data Protection Authority in 2020 issued a decision finding that the Employment Service Austria (AMS) had violated GDPR through its "AMS algorithm" — a system that assigned job seekers to one of three categories (high, medium, or low employment prospects) and used this categorization to determine the intensity of job placement support. The DPA found that the algorithm involved automated processing that produced significant effects on individuals, triggering Article 22 obligations. The DPA did not ultimately prohibit the system but required modifications to ensure genuine human review and adequate explanation.
The Austrian case is notable for its clarity about what "significant effects" means: being categorized as having low employment prospects by an algorithm that determines the level of government support you receive is unquestionably a decision with significant effects on your life, triggering the full protections of Article 22. The case also illustrated a practical challenge: the AMS argued that every categorization involved human review (a counselor reviewed the algorithm's categorization), but the DPA found that this review was not genuinely independent of the algorithmic output.
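The question of whether review is genuinely independent can be made measurable. One simple diagnostic, sketched below with hypothetical data and an illustrative threshold, is the override rate: the fraction of cases in which the human reviewer departs from the algorithm's recommendation. A rate near zero is evidence of rubber-stamping rather than independent judgment.

```python
from dataclasses import dataclass

@dataclass
class Case:
    algorithm_flag: bool   # algorithm's recommendation (True = flag as risky)
    final_decision: bool   # decision after human review

def override_rate(cases):
    """Fraction of cases where the reviewer departed from the algorithm."""
    overrides = sum(1 for c in cases if c.final_decision != c.algorithm_flag)
    return overrides / len(cases)

# Hypothetical audit sample: reviewers almost always confirm the flag.
sample = [Case(True, True)] * 97 + [Case(True, False)] * 3

rate = override_rate(sample)
print(f"override rate: {rate:.1%}")  # prints "override rate: 3.0%"

# An override rate near zero suggests the "human review" is a rubber
# stamp, not the meaningful oversight Article 22 contemplates.
RUBBER_STAMP_THRESHOLD = 0.05  # illustrative, not a legal standard
print("review likely not meaningful" if rate < RUBBER_STAMP_THRESHOLD
      else "review shows independent judgment")
```

No single number settles the legal question, but an override rate this low shifts the burden onto the organization to show that near-universal confirmation reflects judgment rather than deference.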
The Clearview AI Investigations
Clearview AI — a company that scraped billions of facial images from the internet to build a facial recognition database sold to law enforcement — has been the subject of GDPR enforcement actions in multiple EU jurisdictions. These cases have primarily addressed unlawful data collection rather than Article 22 specifically. But they are relevant to the explanation framework because facial recognition used in law enforcement produces decisions with significant effects — being identified as present at a crime scene, being flagged as a person of interest — without any mechanism for affected individuals to know that the system was used or to challenge its output.
The Clearview cases illustrate a gap in Article 22's framework: the provision applies to decisions made using automated processing of personal data, but the person subject to the decision may not know they are in the system or that it has produced a result affecting them. Explanation rights that require the affected person to request an explanation presuppose that the person knows they have been affected — which may not be the case for surveillance and identification systems.
The UK Information Commissioner's Office (ICO): Enforcement Gaps
The UK ICO — which enforces UK GDPR after Brexit — has issued extensive guidance on automated decision-making and explanation but has brought relatively few enforcement actions specifically under Article 22. Investigations into credit reference agencies, employment AI, and financial services AI have raised Article 22 issues without resulting in Article 22-specific enforcement decisions.
Critics have argued that the ICO has been too deferential to industry in its Article 22 enforcement, accepting organizations' claims of human involvement in decision processes without rigorously testing whether that involvement is meaningful. The ICO's 2020 guidance on "explaining decisions made with AI" represents best-practice guidance that is significantly more demanding than what enforcement actions have actually required. The gap between the ICO's guidance and its enforcement suggests that the explanation standard in practice may be substantially lower than the guidance would imply.
Individual Experiences: The Right in Practice
Academic and journalistic investigation has examined the experiences of individuals who have attempted to invoke their Article 22 rights. The picture that emerges is of rights that exist in law but are difficult to access in practice.
Research by AlgorithmWatch, which submitted requests for explanation to a range of European companies using automated decision systems, found that most companies provided responses that were technically GDPR-compliant in form but inadequate in substance. Responses typically described the general categories of factors considered by the decision system without providing: the specific data about the individual that had been processed; the weights assigned to different factors; the specific reasons the individual's outcome differed from the general population; or any information about error rates or the possibility that the system's output was wrong.
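The substance of that gap can be expressed as a data structure: the fields a generic logic description carries versus the individual-level fields the studies found missing. All names below are hypothetical, a sketch rather than any regulator's or company's actual schema.

```python
from dataclasses import dataclass

# What companies typically returned: a generic, model-level description.
@dataclass
class GenericLogicResponse:
    factor_categories: list   # e.g. ["payment history", "income"]
    model_family: str         # e.g. "credit scoring model"

# What would let a person understand and contest the decision:
# the individual-level elements the studies found missing.
@dataclass
class IndividualExplanation:
    inputs_used: dict         # the person's own data as processed
    factor_weights: dict      # contribution of each factor to the outcome
    deviation_reasons: list   # why this outcome differed from the norm
    error_rate: float         # known error rate of the system
    contest_procedure: str    # how to challenge the output

generic = GenericLogicResponse(
    factor_categories=["payment history", "income", "address stability"],
    model_family="credit scoring model",
)

specific = IndividualExplanation(
    inputs_used={"missed_payments_12m": 2, "income_band": "B"},
    factor_weights={"missed_payments_12m": 0.6, "income_band": 0.25},
    deviation_reasons=["missed payments above portfolio median"],
    error_rate=0.08,
    contest_procedure="written appeal with human re-assessment",
)

# The first response is "compliant in form"; only the second enables
# the informed challenge that Article 22(3) is supposed to support.
print(generic)
print(specific)
```

The point of the contrast is that every field in the second record is information the organization already holds; withholding it is a choice the regulation's ambiguity currently permits.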
A systematic analysis of Article 22 exercise letters sent by researchers at the Vrije Universiteit Amsterdam found that even companies that acknowledged Article 22 obligations typically provided responses that would not enable a sophisticated user — let alone an ordinary consumer — to understand why the specific decision was made about them or to identify any basis for challenging it.
The researchers concluded that the gap between the Article 22 right and what individuals actually receive when they invoke it reflects a structural feature of the regulation: it creates the right to explanation without specifying with sufficient precision what an adequate explanation looks like, leaving organizations to determine for themselves what "meaningful information about the logic involved" requires. Until enforcement actions establish a more demanding standard through specific findings of inadequacy, organizations have limited incentive to provide explanations that go beyond what they calculate to be legally defensible.
The Enforcement Gap: Why Article 22 Has Underperformed
The gap between GDPR Article 22's promise and its practice reflects multiple interacting factors.
Resource constraints. Most national DPAs are substantially under-resourced relative to the volume of complaints they receive and the complexity of the cases that warrant investigation. Article 22 investigations are resource-intensive: they require technical expertise to assess AI decision systems, factual investigation to determine whether human review is genuine, and legal analysis to determine whether effects are "legal or similarly significant." DPAs that are already struggling with basic complaint volume and data breach reporting cannot prioritize Article 22 enforcement without significant additional resources.
Definitional ambiguity. The core terms of Article 22 — "solely automated," "legal or similarly significant effects," "meaningful information about the logic involved" — are all subject to interpretive dispute. Organizations that are not clearly over the line can argue they are compliant without DPAs having clear authority to find otherwise. The EDPB's guidance, while authoritative, does not have the binding force of law and is not uniformly applied across member states.
Industry resistance and legal capacity. Major companies that use AI for consequential decisions have substantial legal resources and can contest Article 22 enforcement actions through extensive litigation, creating cost and delay that DPAs — with limited enforcement budgets — must manage strategically. The cost of fighting enforcement creates implicit limits on how aggressively DPAs can pursue Article 22 cases.
The opacity of opacity. Paradoxically, the opacity that Article 22 is designed to address makes it harder to enforce. Organizations that do not disclose their decision logic are difficult to investigate without extensive document demands and forensic analysis. The resources required to establish that a decision was "solely automated" and that the effects were "significant" may exceed what DPAs can invest in individual cases without a sufficiently large sanction to justify the cost.
Ireland as bottleneck. Most major technology companies with EU operations are headquartered in Ireland for regulatory purposes, making the Irish Data Protection Commission the lead supervisory authority under GDPR's one-stop-shop mechanism. The Irish DPC has been extensively criticized for slow and deferential enforcement, and several cases in which other DPAs brought proceedings against major platforms have been complicated by conflicts with the Irish DPC's lead authority status.
What Effective Article 22 Enforcement Would Require
The enforcement experience of GDPR's first five years points to several requirements for more effective Article 22 enforcement:
Clearer definition of adequate explanation. The EDPB should issue binding guidance specifying with more precision what "meaningful information about the logic involved" requires for different categories of AI decision systems. This guidance should specify: what individual-level information must be provided (not just general model descriptions); what the adequate format for explanation is; and what information about error rates and uncertainty must be included. With clearer standards, organizations know what is required and DPAs have clearer authority to enforce.
Dedicated enforcement resources. DPAs need dedicated AI enforcement units with technical expertise — including data scientists and AI engineers who can analyze AI decision systems and evaluate explanation adequacy. The complexity of Article 22 cases requires specialized capacity that generalist enforcement teams cannot adequately provide.
Proactive auditing. Reactive enforcement — responding to individual complaints — is insufficient for a systemic accountability problem. DPAs should conduct proactive audits of AI decision systems in high-risk sectors, similar to financial regulatory examinations of bank lending practices. These audits would apply Article 22 requirements to systems before affected individuals make complaints, enabling earlier identification and correction of systemic problems.
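What such an audit might compute is straightforward. The sketch below, using a hypothetical decision log and group labels, shows the aggregate flag-rate comparison that, applied proactively, could have surfaced the kind of nationality skew seen in the Dutch case long before individual complaints accumulated.

```python
from collections import defaultdict

def flag_rates_by_group(records):
    """records: iterable of (group, flagged) pairs from a decision log.
    Returns the per-group flag rate, the kind of aggregate statistic a
    proactive audit would examine before anyone complains."""
    totals = defaultdict(int)
    flags = defaultdict(int)
    for group, flagged in records:
        totals[group] += 1
        flags[group] += bool(flagged)
    return {g: flags[g] / totals[g] for g in totals}

# Hypothetical decision log: groups "A" and "B", 100 decisions each.
log = ([("A", True)] * 30 + [("A", False)] * 70
       + [("B", True)] * 9 + [("B", False)] * 91)

rates = flag_rates_by_group(log)
print(rates)  # prints {'A': 0.3, 'B': 0.09}
disparity = rates["A"] / rates["B"]
print(f"flag-rate ratio A/B: {disparity:.1f}")  # prints 3.3
```

A disparity ratio is only a starting point, not a finding of discrimination, but it tells an auditor exactly where to demand the decision logic and the human-review records.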
Civil society standing. In practice, GDPR enforcement is largely driven by individual complaints. Civil society organizations — advocacy groups, research organizations — that identify systemic Article 22 problems on behalf of large populations of affected individuals should have standing to bring representative complaints and enforcement actions. Several EU member states have explored this through collective redress mechanisms, but the approach is uneven across jurisdictions.
Coordination with AI Act enforcement. As the EU AI Act's explanation requirements come into force, coordination between GDPR enforcement (DPAs) and AI Act enforcement (national market surveillance authorities) will be essential to avoid regulatory fragmentation and to ensure that explanation obligations under both frameworks are coherently applied.
Lessons and Implications
The GDPR Article 22 enforcement experience offers important lessons for the broader project of AI explanation rights.
Rights without enforcement are aspirational. The most sophisticated legal framework for explanation rights in the world has not, in its first five years, consistently delivered meaningful explanation to the individuals it protects. Rights require enforcement to be real — enforcement that is adequately resourced, technically capable, and willing to establish demanding standards through specific findings.
Specificity matters. The ambiguity of Article 22's key terms has significantly reduced its practical impact. Future AI regulation — including the EU AI Act — must be more specific about what adequate explanation requires, to reduce organizations' ability to define compliance for themselves and to give enforcement authorities clearer grounds for action.
Technical capacity is essential. Enforcing explanation rights requires technical understanding of AI systems that most regulatory agencies currently lack. Building technical capacity in regulatory agencies is a necessary investment for the future of AI accountability.
Individual rights are insufficient for systemic accountability. Even robust enforcement of Article 22's individual rights would not reveal systemic patterns of discriminatory or erroneous AI decision-making. Individual rights must be complemented by systemic transparency requirements — aggregate reporting, algorithmic auditing, and research access — to produce genuine algorithmic accountability.
Reflection Questions
- The GDPR Article 22 enforcement experience suggests that ambiguity in legal text significantly reduces practical impact. How specific must a legal explanation requirement be to be effective? What are the risks of over-specification?
- The Dutch Tax Authority case became a national political crisis, forcing a cabinet resignation. Does political accountability substitute for legal enforcement in some cases? Or does political accountability depend on prior legal accountability to create the transparency that makes political accountability possible?
- Several researchers have proposed that DPAs should conduct proactive AI audits rather than waiting for individual complaints. What legal authority would DPAs need to conduct proactive audits? What would the appropriate scope and frequency of such audits be?
- GDPR enforcement has been significantly hampered by the concentration of major tech companies' EU headquarters in Ireland and the Irish DPC's slow enforcement. What reforms to GDPR's enforcement structure would address this bottleneck?
- The gap between what Article 22 requires in theory and what individuals actually receive when they invoke it suggests that organizations are complying with the letter but not the spirit of the provision. What changes in organizational culture — not just legal requirements — would be needed for organizations to provide genuinely meaningful explanations?