Case Study 17.1: "The Arkansas Medicaid Case"
Automated Benefits Cuts and the Right to Know Why
Overview
The intersection of algorithmic decision-making and constitutional due process rights is nowhere more clearly visible — or more consequential — than in the administration of government benefits. When an algorithm determines how many hours of home care a disabled person receives, and when that algorithm's outputs cannot be explained to the person whose life depends on them, the abstract right to explanation becomes a question of survival.
The Arkansas Medicaid home care case — litigated under the case names Ledgerwood v. Jobe and its companion cases — is the foundational US legal authority on the right to explanation in algorithmic government benefit administration. Chapter 15's companion case study examines it in detail from the communication perspective; this case study focuses on the constitutional dimension: the right to explanation as a matter of due process, and what the case reveals about the limits of algorithmic decision-making in government contexts where citizens' constitutional entitlements are at stake.
This case study should be read alongside Chapter 15's case study for a complete picture. Together, they illustrate how the same set of facts — automated welfare benefit cuts without adequate explanation — can be analyzed through the lens of communication (Chapter 15) and through the lens of constitutional rights (Chapter 17).
The Constitutional Framework: Goldberg v. Kelly and Its Successors
The constitutional analysis of algorithmic benefit decisions begins with the Supreme Court's 1970 decision in Goldberg v. Kelly. In Goldberg, the Court held that welfare recipients have a constitutionally protected property interest in their benefits, and that this interest cannot be terminated without the procedural protections of due process. At minimum, due process in this context requires:
Notice. Prior to termination of benefits, the recipient must receive notice that contains the reasons for the proposed action with enough specificity to enable the recipient to mount a response.
Opportunity to be heard. The recipient must have a genuine opportunity to present their case before a neutral decision-maker, including the right to appear personally, to confront witnesses, and to present evidence.
Reasoned decision. The decision-maker must explain why the recipient's arguments were not persuasive.
These requirements were designed for human decision-makers who could articulate their reasoning in human terms. The question that the Arkansas case raised — and that courts are still working out — is how these requirements apply when the decision-maker is an algorithm.
The Pre-Algorithm System: Professional Judgment and Its Variability
Before Arkansas implemented the algorithmic home care determination system in 2016, individual nurse assessors exercised professional judgment to recommend care hour allocations for Medicaid recipients. Nurses visited recipients, conducted assessments, reviewed medical records, and recommended care levels based on their professional evaluation of each person's needs.
This system had genuine virtues: it was individualized, it could take into account factors not captured in standardized assessments, and it produced recommendations that nurses could explain in ordinary professional terms. A recipient whose hours were reduced could ask the assessing nurse why and receive a meaningful response based on the nurse's clinical assessment.
It also had genuine limitations: professional judgment is variable, and different nurses made different assessments of similar situations. Arkansas's adoption of an algorithmic system was motivated in part by a legitimate goal of consistency — ensuring that similarly situated recipients received similar allocations regardless of which nurse happened to assess their case.
The algorithmic system traded individualized professional judgment for algorithmic consistency. What it did not do was preserve — or even attempt to preserve — the capacity to explain decisions in individual terms. The algorithm's outputs could be produced, but they could not be explained.
The Algorithm's Explanation Problem
The ALC tool used by Arkansas calculated care hour allocations based on scores on the ARIA assessment instrument. The calculation involved a mapping from ARIA subscores to care tiers, with each tier corresponding to a range of authorized care hours. The specifics of this mapping — the tier boundaries, the weights assigned to different subscores, the adjustments applied for particular conditions — were contained in the algorithm's logic, which was treated as proprietary information by the vendor, Brocade.
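The opacity problem is easier to see with a concrete sketch. The following is purely hypothetical — the subscore names, weights, and tier boundaries below are invented, since the actual ALC logic was proprietary — but it illustrates the kind of score-to-tier mapping the text describes, and why a recipient who sees only the final hour range cannot reconstruct what drove it:

```python
# Hypothetical illustration only: the real ALC methodology was never disclosed.
# Invented subscore names, weights, and tier boundaries.

def assign_care_tier(subscores: dict[str, int]) -> tuple[int, tuple[int, int]]:
    """Map assessment subscores to a care tier and a weekly hour range."""
    # Invented weights for illustrative subscores
    weights = {"mobility": 2.0, "self_care": 1.5, "cognition": 1.0}
    composite = sum(weights[k] * subscores.get(k, 0) for k in weights)

    # Invented tier table: (minimum composite score, tier, (min_hours, max_hours))
    tiers = [(0, 1, (0, 8)), (10, 2, (9, 16)), (20, 3, (17, 32)), (30, 4, (33, 56))]
    tier, hours = tiers[0][1], tiers[0][2]
    for threshold, t, h in tiers:
        if composite >= threshold:  # keep the highest tier whose threshold is met
            tier, hours = t, h
    return tier, hours
```

Under this sketch, a one-point change in a single subscore — or a quiet adjustment to a tier threshold — can move a recipient across a tier boundary and cut their hours, and nothing in the bare output reveals which of those happened.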
When a recipient's care hours changed under the new system, the change could have multiple possible causes:
- The recipient's functional status had changed (their ARIA scores were different from the previous assessment).
- The ARIA assessment instrument had been revised.
- The tier boundaries in the ALC tool had been adjusted.
- The ALC tool had been modified in some other way.
Recipients who received notice of care hour changes had no way to determine which of these causes had produced the change, because the notice did not disclose what the current ARIA scores were, how they compared to previous scores, what the tier boundaries were, or whether the algorithm itself had changed. The notice provided neither the algorithm's inputs, nor its logic, nor a comparison that would enable the recipient to understand why the output had changed.
This was not merely a communication failure; it was a constitutional failure. Due process notice that is insufficient to enable the recipient to understand the basis for the decision and to mount a response is constitutionally inadequate.
The Federal Court's Analysis
In ruling on preliminary injunctions requested by affected recipients, the federal district court applied Goldberg's due process framework and found multiple constitutional failures.
Notice adequacy. The court found that the notices recipients received were constitutionally inadequate because they did not provide the reasons for the change in care hours with sufficient specificity to enable a meaningful response. The court emphasized that the constitutional standard for notice is not whether the notice identifies the program under which the decision was made, but whether it gives the recipient enough information to know why the decision was made about their particular case and to challenge it if wrong. The algorithmic termination notices failed this standard.
Genuine opportunity to be heard. The court found that the appeal process was constitutionally inadequate because appeals were reviewed using the same ALC tool that had made the original decision, and the appeals reviewer had no authority to deviate from the tool's output based on the information the appellant presented. This is the constitutional core of the algorithmic due process problem: due process requires a genuine opportunity to contest the decision, which means a decision-maker who can actually be persuaded by the facts the appellant presents. An appeal process in which the outcome is determined by the same algorithm before the hearing begins is not a genuine opportunity to be heard.
Access to the algorithm. The court expressed significant concern about the state's refusal to disclose the ALC tool's methodology, noting that a recipient who cannot examine the basis for the decision that cut their benefits cannot effectively challenge it. The court did not issue a holding requiring disclosure — the preliminary injunction was based on notice and hearing inadequacy — but its analysis suggested that the constitutional right to a meaningful hearing may require access to the algorithmic logic that drove the contested decision.
Implications for Algorithmic Government Decision-Making
The Arkansas case has generated extensive academic commentary and has influenced both policy discussions and subsequent litigation. Several implications are particularly significant.
Algorithm opacity is constitutionally problematic in government benefits contexts. The constitutional protections of due process — which have required reasoned explanation of government benefit decisions for decades — do not disappear because the decision is made by an algorithm. An algorithm that cannot explain its decisions cannot satisfy the constitutional standard for those decisions.
Vendor confidentiality cannot override constitutional rights. The state's argument that it could not disclose the algorithm's methodology because of its contractual obligations to the vendor was constitutionally unpersuasive. The state's constitutional obligations to its citizens take precedence over its contractual obligations to vendors. Government agencies that procure AI systems for use in benefit administration must ensure that those systems can be disclosed to the extent necessary to satisfy constitutional requirements.
Human review must be genuine. The appeal process that recycled cases through the same algorithm without giving the appeals reviewer authority to deviate from its outputs was constitutionally insufficient. Any government decision-making process that uses AI must maintain genuine human review — not nominal review — for cases where the AI's output is challenged.
The "solely automated" problem has constitutional dimensions. GDPR's "solely automated" concern is mirrored in constitutional law: a decision process that is nominally supervised by humans but functionally determined by an algorithm raises constitutional due process concerns even if a human formally signs the decision. Courts applying due process analysis to algorithmic benefit decisions are developing doctrine that parallels the European regulatory approach.
The Broader Welfare Algorithm Problem
The Arkansas case is not an isolated incident. Similar algorithmic benefit determination systems have been deployed across the United States, with similar constitutional problems.
Indiana deployed a privatized Medicaid eligibility system in the late 2000s that made automated denials at a high rate, many of which were found to be erroneous. After extensive litigation and public controversy, Indiana terminated the vendor contract and returned to human caseworkers.
The Los Angeles County child welfare agency deployed an algorithmic tool to predict child abuse risk, which was used to prioritize caseworker investigations. The tool's methodology was not disclosed to parents whose cases were investigated on its basis, and researchers found significant racial disparities in its predictions.
Michigan's unemployment insurance system deployed an automated fraud detection tool that wrongly flagged over 40,000 claimants for fraud, issuing automatic penalties without adequate individual review. Class action litigation established that the system violated due process; Michigan ultimately paid over $20 million in settlements and repaid benefits to wrongly accused claimants.
The pattern across these cases is consistent: algorithmic government benefit systems are implemented with inadequate attention to due process requirements, producing systems that cannot explain their decisions, do not preserve genuine human oversight, and generate systematic errors that fall disproportionately on the most vulnerable recipients.
Reform Pathways
The constitutional and policy analysis of the Arkansas case and related cases suggests several reform requirements for algorithmic government benefit systems.
Pre-procurement: Constitutional compliance review. Before procuring an AI system for use in benefit determination, government agencies should conduct a constitutional compliance review asking: Can this system's decision logic be disclosed to affected citizens? Can this system produce individual-level explanations sufficient for due process notice? Can the system preserve genuine human override capacity? Systems that cannot answer yes to these questions should not be deployed in constitutionally sensitive contexts.
Contractual requirements: Transparency and disclosure. Procurement contracts for government AI systems should include requirements that the vendor disclose methodology sufficient for constitutional compliance, waive trade secrecy claims that would prevent required disclosure, and support individual-level explanation generation.
System design: Explanation capacity. Algorithmic benefit systems should be designed from the ground up to produce individual-level explanations — records of which inputs drove each output, capable of being translated into plain-language notices. This may favor interpretable model architectures over more complex ones; the performance tradeoff, if any, must be evaluated against the constitutional requirement.
Appeal process: Genuine human override. Appeals of algorithmic benefit decisions must be reviewed by human decision-makers with genuine authority to override the algorithmic output and access to sufficient information to evaluate the original decision on its merits.
Monitoring: Aggregate error detection. Algorithmic benefit systems should be monitored for aggregate error patterns — error rates disaggregated by demographic characteristics, error patterns that emerge over time — with mechanisms to trigger system review when error patterns are detected.
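One simple form such monitoring could take is comparing appeal-reversal rates (a proxy for error) across demographic groups and flagging groups whose rate exceeds the overall rate by some margin. This is a hedged sketch, not a recommended statistical methodology — the margin, the error proxy, and the data shape are all assumptions:

```python
# Hypothetical monitoring sketch: flag groups whose appeal-reversal rate
# exceeds the overall rate by a chosen margin. Margin value is arbitrary.
from collections import defaultdict

def flag_error_disparities(decisions, margin: float = 0.05) -> list:
    """decisions: iterable of (group, was_reversed_on_appeal: bool) pairs.
    Returns the groups whose reversal rate exceeds overall rate + margin."""
    totals: dict = defaultdict(int)
    errors: dict = defaultdict(int)
    for group, reversed_on_appeal in decisions:
        totals[group] += 1
        errors[group] += int(reversed_on_appeal)
    overall = sum(errors.values()) / sum(totals.values())
    return sorted(g for g in totals
                  if errors[g] / totals[g] > overall + margin)
```

A production system would need proper significance testing and minimum group sizes, but even this crude check would have surfaced the kind of systematic error patterns seen in the Michigan and Indiana systems before tens of thousands of claimants were harmed.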
Reflection Questions
- The Arkansas court found that the appeal process was constitutionally inadequate because the appeals reviewer was bound by the algorithm's output. How much discretion must a human reviewer have to satisfy due process? Is it sufficient that the reviewer can exercise discretion in a narrow range of cases, or must they be able to override the algorithm in any case where they believe it is wrong?
- The state argued that it could not disclose the algorithm's methodology because of vendor confidentiality requirements. What legal mechanisms should be available to government agencies and affected citizens when vendor confidentiality conflicts with constitutional disclosure requirements?
- The pre-algorithm system — individual nurse judgment — had its own problems: variability, inconsistency, and potential bias. How should these limitations of the alternative be weighed against the constitutional problems with the algorithmic system? Does the choice between imperfect human judgment and algorithmically consistent but constitutionally problematic automated decision-making have a clear answer?
- The Arkansas case involves a government benefits context where constitutional protections are clearly applicable. Do similar explanation rights exist for private-sector AI decisions that have similar consequences for people's lives — employment, housing, credit, insurance? If not, should they?
- Michigan, Indiana, Arkansas, and other states have deployed algorithmic benefit systems that were subsequently found to violate due process. Who should bear the cost of these errors — the state, the vendor, or the affected recipients? How should liability be structured to create appropriate incentives for constitutional compliance in government AI procurement?