Case Study: The Right to Explanation — GDPR Article 22 in Practice
"The right to explanation is like a door. The GDPR put the door in the wall. But we're still arguing about whether it's locked, and who holds the key." — European data protection lawyer, speaking at a 2023 privacy conference
Overview
The European Union's General Data Protection Regulation (GDPR), which took effect in May 2018, is the most comprehensive data protection law in the world. Among its most debated provisions is Article 22, which addresses automated decision-making. Some scholars and advocates have argued that Article 22, together with related provisions, creates a "right to explanation" — a legal entitlement to understand how algorithmic systems make decisions about individuals. Others argue that the GDPR creates only a right to general information, not a right to a specific explanation.
This case study examines what Article 22 actually says, how it has been interpreted and enforced, and what it reveals about the challenges of regulating algorithmic transparency in practice.
Skills Applied:
- Analyzing legal text and its practical implications
- Evaluating the gap between legal rights and practical enforcement
- Connecting regulatory frameworks to the concepts of transparency and explainability
- Assessing the adequacy of current governance approaches
The Legal Text
What Article 22 Says
Article 22(1) states:
"The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
This creates a right not to be subject to solely automated decisions with significant effects — not, on its face, a right to an explanation. The right applies when two conditions are met:
- The decision is based solely on automated processing, with no meaningful human involvement (the phrase "including profiling" makes clear that profiling-based decisions are covered, but profiling is an example, not an additional requirement)
- The decision produces legal effects or similarly significantly affects the individual
The Exceptions
Article 22(2) provides three exceptions where solely automated decisions are permitted:
- (a) The decision is necessary for entering into or performing a contract
- (b) The decision is authorized by EU or member state law
- (c) The decision is based on the individual's explicit consent
When an exception applies, Article 22(3) requires that the data controller implement "suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention, to express his or her point of view and to contest the decision."
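The structure of Article 22(1)-(3) can be read as a decision procedure: first test applicability, then test the exceptions, then attach safeguards. The sketch below encodes that logic; the data model and field names are illustrative inventions, not terms from the Regulation, and real applicability analysis turns on legal judgment that a boolean flag cannot capture.

```python
from dataclasses import dataclass

# Illustrative data model for an Article 22 walkthrough. The field names
# are hypothetical; the tests themselves track Article 22(1)-(3) as
# quoted above.
@dataclass
class Decision:
    solely_automated: bool    # no meaningful human involvement
    significant_effect: bool  # legal or similarly significant effect
    contract_necessity: bool  # Art. 22(2)(a)
    authorized_by_law: bool   # Art. 22(2)(b)
    explicit_consent: bool    # Art. 22(2)(c)

def article_22_assessment(d: Decision) -> str:
    # Article 22(1): the prohibition applies only when the decision is
    # solely automated AND has legal or similarly significant effects.
    if not (d.solely_automated and d.significant_effect):
        return "outside Article 22 scope"
    # Article 22(2): any of the three exceptions permits the decision...
    if d.contract_necessity or d.authorized_by_law or d.explicit_consent:
        # ...but Article 22(3) then requires safeguards: human
        # intervention, the right to express a view, and the right
        # to contest the decision.
        return "permitted with Article 22(3) safeguards"
    return "prohibited by Article 22(1)"
```

Note how much weight the first branch carries: an organization that can plausibly negate `solely_automated` exits the provision entirely, which is exactly the "human in the loop" pressure point discussed later in this case study.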
The "Right to Explanation" Debate
The text of Article 22 does not explicitly mention a "right to explanation." So where does the claim come from?
Recital 71 of the GDPR — a non-binding explanatory statement accompanying the legislation — states that data subjects should have the right to obtain:
"...an explanation of the decision reached after [automated] assessment and to challenge the decision."
Additionally, Articles 13(2)(f) and 14(2)(g) require data controllers to provide:
"...meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."
These provisions, read together, have generated a vigorous scholarly and legal debate.
The Two Sides of the Debate
The Expansive Reading: A Right to Explanation Exists
Scholars including Bryce Goodman and Seth Flaxman (2017) argue that the GDPR creates a meaningful right to explanation, based on:
- Recital 71 explicitly mentions "an explanation of the decision reached"
- Articles 13 and 14 require "meaningful information about the logic involved"
- The GDPR's overarching purpose is to protect individuals from the power of data-processing entities
- A right without an explanation is meaningless — if you cannot understand why a decision was made, you cannot meaningfully contest it (as Article 22(3) requires)
Under this reading, an individual denied a loan by an algorithm would have the right to know: which factors contributed to the denial, how they were weighted, and what the individual could do to change the outcome.
The Restrictive Reading: Only a Right to General Information
Scholars including Sandra Wachter, Brent Mittelstadt, and Luciano Floridi (2017) argue that the GDPR creates only a right to general information about automated decision-making systems, not a right to explanation of specific decisions:
- Recitals are non-binding — they inform interpretation but do not create legal rights
- Articles 13 and 14 require information about "the logic involved" at the time data is collected — before any specific decision is made. This is ex ante general information, not ex post specific explanation
- "Meaningful information about the logic involved" can be satisfied by describing the system's general methodology without explaining individual decisions
- Requiring specific explanations could compromise trade secrets, system security, and computational feasibility
Under this reading, an individual denied a loan would be entitled to know that algorithmic processing is used, what categories of data are considered, and the general logic of the system — but not why their specific application was denied.
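The gap between the two readings can be made concrete with a toy model. In the sketch below, a linear credit score (weights, factor names, and threshold are all invented for illustration) supports both kinds of disclosure: the restrictive reading is satisfied by the general, ex ante description of "the logic involved," while the expansive reading would additionally require the ex post, per-applicant factor contributions.

```python
# Toy linear credit-scoring model. All weights, factor names, and the
# threshold are invented; real scoring models are far more complex.
WEIGHTS = {"income": 0.5, "years_employed": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0  # scores at or above this are approved

def score(applicant: dict) -> float:
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def general_information() -> list:
    # Restrictive reading: an ex ante description of the system's
    # logic -- the categories of data considered, but not the weights
    # and not any individual outcome.
    return sorted(WEIGHTS)

def specific_explanation(applicant: dict) -> dict:
    # Expansive reading: an ex post, decision-specific explanation --
    # which factors contributed to this applicant's outcome, and by
    # how much.
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return {
        "approved": score(applicant) >= THRESHOLD,
        "contributions": contributions,
    }
```

For a linear model the specific explanation is trivially computable, which is why the debate sharpens around opaque model families: for a deep neural network, `specific_explanation` has no equally faithful analogue, and that technical fact feeds the "locked room" analysis later in this case study.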
Implementation in Practice
How Companies Have Responded
The practical implementation of Article 22 has been uneven. Research by several organizations has documented how companies respond to requests for information about automated decision-making:
Credit decisions. Some EU banks provide standardized letters listing the factors considered in credit decisions (income, employment status, credit history) without specifying which factors were decisive for a particular application. Others provide numerical credit scores with brief factor summaries (similar to the U.S. adverse action notice requirement). Few provide anything approaching a specific explanation of the model's reasoning for an individual case.
Hiring. Automated resume screening tools used in the EU are subject to Article 22 when they produce significant effects on applicants. In practice, most companies either argue that a human reviews the algorithmic output (removing the "solely automated" element) or provide generic descriptions of the screening process. Applicants rarely receive specific explanations of why their resume was screened out.
Insurance. Algorithmic pricing and eligibility decisions in insurance fall squarely within Article 22. Some insurers have responded by providing "key factors" summaries — listing the main variables that influenced pricing without revealing the model's exact logic. Others have argued that their models involve human oversight, removing the "solely automated" trigger.
Enforcement Actions
Enforcement of Article 22 has been limited but growing:
Italy (2021). The Italian Data Protection Authority (Garante) fined a food delivery platform for using an algorithm to rank riders for shift allocation without providing adequate transparency about the algorithm's logic. The Garante found that the system produced significant effects on workers' livelihoods and that the platform failed to provide meaningful information about the automated decision-making process.
Netherlands (2020). The District Court of The Hague struck down SyRI (System Risk Indication), a Dutch government algorithm used to detect welfare fraud, partly on transparency grounds. The court found that the system's opacity — the government had not disclosed how it worked, what data it used, or how decisions were made — violated the right to privacy under the European Convention on Human Rights.
France (2020). The French Council of State addressed algorithmic decision-making in university admissions (Parcoursup), ruling that the system's transparency was insufficient and that applicants were entitled to more information about how their applications were assessed.
The "Human in the Loop" Problem
A critical implementation challenge involves the "solely automated" requirement. Many organizations argue that their algorithmic systems include human review — a "human in the loop" — and therefore fall outside Article 22's scope. But the quality of human review varies enormously:
- At one extreme, a human carefully evaluates the algorithm's recommendation, considers additional context, and makes an independent decision. This is genuine human involvement.
- At the other extreme, a human rubber-stamps the algorithm's output without meaningful review — clicking "approve" or "deny" based solely on the score. This is a human in the loop in name only.
The GDPR does not clearly specify what counts as meaningful human involvement, creating a loophole that organizations can exploit by inserting a nominal human reviewer without providing genuine oversight.
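One way to probe whether human review is genuine is empirical rather than definitional: a reviewer who never disagrees with the algorithm and spends seconds per case is probably rubber-stamping. The sketch below is an illustrative heuristic only — the thresholds are invented and no regulator has endorsed this as a legal test — but it shows how override rates and review time could be audited in practice.

```python
# Illustrative rubber-stamp detector. The thresholds (2% override rate,
# 30 seconds median review time) are invented for demonstration and are
# not drawn from the GDPR or any regulatory guidance.
def review_looks_meaningful(outcomes, review_seconds,
                            min_override_rate=0.02,
                            min_median_seconds=30):
    """outcomes: list of (algorithm_decision, human_decision) pairs.
    review_seconds: time each reviewer spent on the corresponding case.
    Returns True only if reviewers both sometimes override the
    algorithm and spend non-trivial time per case."""
    overrides = sum(1 for algo, human in outcomes if algo != human)
    override_rate = overrides / len(outcomes)
    median_time = sorted(review_seconds)[len(review_seconds) // 2]
    return (override_rate >= min_override_rate
            and median_time >= min_median_seconds)
```

A heuristic like this could not settle the legal question on its own — a careful reviewer might legitimately agree with an accurate algorithm most of the time — but it illustrates that "meaningful human involvement" is at least partly measurable, which matters for closing the loophole described above.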
Analysis Through Chapter Frameworks
The Black Box Problem Applied to Law
The GDPR's transparency provisions attempt to address the black box problem through legal requirements. But the law faces the same challenges the chapter identifies:
The locked room problem. If a company uses a deep neural network for credit decisions, it may be technically unable to provide a specific explanation of any individual decision — even if it wants to comply with the law. The GDPR does not address the question of whether legal transparency requirements should constrain the choice of model architecture. Should a company be required to use an interpretable model if its deployment domain falls under Article 22?
The locked safe problem. Some companies resist explanation not because of technical impossibility but because of trade secret concerns. The GDPR does not clearly resolve the tension between the individual's right to explanation and the company's right to protect commercial secrets. Recital 63 states that the right of access "should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property" — but provides no clear guidance on how to balance these competing rights.
Transparency Theater in Regulatory Compliance
The chapter's concept of transparency theater applies directly to GDPR compliance. Companies can satisfy the letter of Articles 13 and 14 — providing information about "the logic involved" — with boilerplate language that provides no meaningful understanding:
"We use automated systems to assess your application. These systems consider relevant factors including but not limited to your financial history, employment information, and other data you have provided. The assessment is conducted using advanced analytical methods designed to ensure accurate and consistent evaluation."
This satisfies the formal requirement (information was provided) while conveying almost nothing. The individual cannot determine: which specific factors mattered, how they were weighted, what the threshold was, or what they could change. This is transparency theater — and the GDPR, as currently enforced, has not consistently distinguished it from genuine transparency.
The Broader Significance
GDPR as a Global Standard
The GDPR has influenced data protection law worldwide. Brazil's LGPD, India's DPDP Act, and numerous other national laws draw on GDPR concepts. The EU AI Act (2024) builds on Article 22's foundation, imposing more specific transparency requirements for "high-risk" AI systems. How Article 22 is interpreted and enforced will shape algorithmic transparency requirements globally.
The Gap Between Rights and Remedies
The GDPR creates a legal right (or at minimum, a legal expectation) regarding automated decision-making. But a right without an effective remedy is merely aspirational. For the right to explanation to be meaningful, individuals need: (a) awareness that they are subject to automated decisions, (b) knowledge of how to exercise their rights, (c) resources to challenge decisions, and (d) enforcement agencies willing and able to act. Each of these conditions is currently only partially met.
Discussion Questions
- The interpretation question. Based on the legal text provided in this case study, which interpretation do you find more persuasive — the expansive reading (a right to specific explanation) or the restrictive reading (a right to general information)? What principles of legal interpretation support your position?
- The enforcement gap. Even if a right to explanation exists in theory, enforcement has been limited. Why? Consider: the resources required for individuals to challenge algorithmic decisions, the difficulty of proving that an explanation is inadequate, and the power imbalance between individuals and the organizations that deploy algorithmic systems.
- The technical feasibility question. Can companies be legally required to explain systems that are technically inexplicable (deep neural networks with millions of parameters)? Should the law require companies to use interpretable models in high-stakes domains? What are the trade-offs?
- The "human in the loop" question. How should the GDPR define "meaningful human involvement" to prevent the rubber-stamp problem? Design a test that distinguishes genuine human review from perfunctory sign-off.
Your Turn: Mini-Project
Option A: Subject Access Request. If you interact with an EU-regulated service (or a company with EU operations), submit a request under GDPR Articles 13-15 asking for information about any automated decision-making that affects you. Document: what you asked for, what you received, how long it took, and whether the information was meaningful. Write a one-page analysis. (If you cannot submit a request, find published examples online and analyze those.)
Option B: Regulatory Comparison. Research how at least two jurisdictions (e.g., EU GDPR Article 22, the EU AI Act's transparency requirements, New York City Local Law 144, or proposed U.S. federal legislation) address the right to explanation for algorithmic decisions. Write a two-page comparative analysis: What does each require? What are the gaps? Which approach is most protective of individuals?
Option C: Model Explanation Specification. You are advising a bank that uses algorithmic credit scoring in the EU. Draft a one-page specification for the explanations the bank should provide to applicants who are denied credit, ensuring compliance with the most expansive reasonable interpretation of Article 22. Specify: What information should be included? In what format? At what level of detail? How should the explanation balance transparency with trade secret protection?
References
- European Parliament and Council. "Regulation (EU) 2016/679 (General Data Protection Regulation)." Official Journal of the European Union, 2016. Articles 13, 14, 22; Recital 71.
- Goodman, Bryce, and Seth Flaxman. "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation.'" AI Magazine 38, no. 3 (2017): 50-57.
- Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law 7, no. 2 (2017): 76-99.
- Selbst, Andrew D., and Julia Powles. "Meaningful Information and the Right to Explanation." International Data Privacy Law 7, no. 4 (2017): 233-242.
- European Parliament and Council. "Regulation (EU) 2024/1689 (Artificial Intelligence Act)." Official Journal of the European Union, 2024.
- District Court of The Hague. NJCM et al. v. The State of the Netherlands (SyRI), Case No. C/09/550982/HA ZA 18-388, February 5, 2020.
- Garante per la Protezione dei Dati Personali (Italy). Decision on Foodinho s.r.l., Register of Measures No. 234, 2021.
- Kaminski, Margot E. "The Right to Explanation, Explained." Berkeley Technology Law Journal 34, no. 1 (2019): 189-218.