Case Study: The Algorithmic Accountability Act and the Legislative Response to the Accountability Gap
"If you can't audit it, you can't govern it." — Senator Ron Wyden, co-sponsor of the Algorithmic Accountability Act
Overview
When algorithmic systems make consequential decisions about people's lives — their creditworthiness, their job prospects, their housing applications, their insurance premiums, their parole eligibility — and those decisions are opaque, untested for bias, and unaccountable, the question eventually moves from the technical and ethical domain to the legislative one. Someone must write the rules.
The Algorithmic Accountability Act, first introduced in the United States Congress in 2019 and reintroduced in revised form in 2022, represents one of the most significant legislative attempts to close the accountability gap described in this chapter. It proposes to require large companies that deploy automated decision systems in high-stakes domains to conduct impact assessments — evaluating their systems for accuracy, fairness, bias, and privacy before and during deployment.
This case study examines the Act's origins, its key provisions, the debate it has generated, and what it reveals about the challenges of legislating algorithmic accountability in a rapidly evolving technological landscape.
Skills Applied:
- Evaluating legislative approaches to the accountability gap
- Connecting AIA frameworks (Section 17.3) to concrete policy proposals
- Analyzing stakeholder positions on algorithmic regulation
- Assessing the adequacy of proposed governance mechanisms
Background: The Path to Legislation
The Problem Congress Confronted
By the late 2010s, a growing body of evidence documented the harms of unaccountable algorithmic systems. ProPublica's 2016 investigation of the COMPAS recidivism algorithm revealed that Black defendants were nearly twice as likely as white defendants to be falsely flagged as high-risk. Researchers at MIT demonstrated that commercial facial recognition systems had error rates of up to 34.7% for darker-skinned women compared to 0.8% for lighter-skinned men. Studies of online advertising showed that searches for traditionally Black names were more likely to trigger ads for arrest records. Housing platforms and hiring algorithms displayed patterns of racial and gender discrimination.
These were not isolated failures. They represented a systemic pattern: consequential algorithmic systems were being deployed at scale without adequate testing, without independent oversight, and without clear accountability when they caused harm.
The existing regulatory landscape was fragmented. The Fair Housing Act prohibited housing discrimination but was written before algorithmic pricing existed. The Equal Employment Opportunity Commission addressed workplace discrimination but lacked specific authority over automated hiring tools. The Fair Credit Reporting Act governed credit decisions but did not clearly extend to the machine learning models increasingly used by lenders. Sector-specific regulators had neither the technical expertise nor the clear statutory authority to evaluate algorithmic systems within their jurisdictions.
"The law was playing catch-up," observed a Congressional Research Service report in 2021, "and it was losing."
The Legislative Response
Senator Ron Wyden (D-OR), Senator Cory Booker (D-NJ), and Representative Yvette Clarke (D-NY) introduced the Algorithmic Accountability Act of 2019. The bill was reintroduced in substantially revised form in 2022, reflecting lessons learned from the intervening three years of debate, scholarship, and real-world algorithmic controversies.
The 2022 version of the Act was notably more specific, more demanding, and more influenced by the emerging consensus in the algorithmic accountability research community.
Key Provisions of the Act
Who Is Covered
The Act applies to "covered entities" — defined as companies that meet specific size and data-processing thresholds. Generally, this means companies with:
- More than $50 million in annual revenue, or
- Data on more than one million individuals, or
- Data broker operations that buy or sell data on more than 100,000 individuals
This threshold was designed to capture large technology companies, major financial institutions, healthcare organizations, and significant government contractors — the entities most likely to deploy consequential algorithmic systems — while exempting small businesses for whom the compliance burden would be disproportionate.
What Is Required: Impact Assessments
The core requirement of the Act is that covered entities must conduct impact assessments of their "automated decision systems" and "augmented critical decision processes." The terminology is significant: the Act covers both fully automated decisions and human decisions that are substantially informed by algorithmic outputs.
An impact assessment under the Act must evaluate:
- The system's purpose and design: What decisions does the system make or inform? What data does it use? What model architecture and training methodology were employed?
- Performance and accuracy: How accurate is the system? How was accuracy measured? Are there differential error rates across demographic groups?
- Fairness and bias: Does the system produce disparate outcomes across protected characteristics (race, gender, age, disability, etc.)? If so, has the entity assessed whether those disparities are justified by legitimate business necessity?
- Privacy and data protection: What personal data does the system collect, store, and process? Are data minimization principles observed? How is the data secured?
- Transparency and explainability: Can the system's decisions be explained to affected individuals? What notice is provided to individuals about the system's existence and its role in decisions affecting them?
- Stakeholder consultation: Has the entity consulted with affected communities, civil rights organizations, or domain experts in designing and evaluating the system?
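The "differential error rates" question at the heart of the performance and fairness criteria can be made concrete. The sketch below — with made-up data and hypothetical field names, not anything prescribed by the Act — computes per-group false-positive rates, the kind of disparity ProPublica documented in COMPAS:

```python
# Illustrative sketch of one fairness check an impact assessment might run:
# differential false-positive rates across demographic groups.
# All data and names here are hypothetical.

from collections import defaultdict

def false_positive_rates(records):
    """Per-group false-positive rate: P(flagged positive | actually negative)."""
    negatives = defaultdict(int)   # actual negatives seen per group
    false_pos = defaultdict(int)   # of those, how many the model flagged positive
    for group, predicted, actual in records:
        if not actual:             # only actual negatives enter the FPR denominator
            negatives[group] += 1
            if predicted:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

# Hypothetical audit sample: (group, model_prediction, true_outcome)
sample = [
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
]
rates = false_positive_rates(sample)
print(rates)  # in this toy sample, group B's FPR (0.5) is double group A's (0.25)
```

An assessment would then have to answer the harder, non-computational question the Act poses: whether any disparity found is justified by legitimate business necessity.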
Oversight and Enforcement
The Act designates the Federal Trade Commission (FTC) as the primary enforcement agency. The FTC would:
- Establish detailed rules for conducting impact assessments
- Require covered entities to submit assessment summaries
- Conduct or commission independent audits
- Investigate complaints of algorithmic harm
- Impose civil penalties for non-compliance
The Act also creates a Bureau of Technology within the FTC — a new division with dedicated staff and expertise for algorithmic oversight. This provision reflects the recognition that effective enforcement requires technical capacity, not just legal authority.
The Debate
Arguments in Favor
Closing the accountability gap. Proponents argue that the Act directly addresses the core problem identified throughout Chapter 17: algorithmic systems make consequential decisions without adequate accountability. Impact assessments create a structured process for identifying and mitigating harms before they occur.
"This is not about banning algorithms," Senator Wyden stated. "It's about requiring the same basic accountability for automated decisions that we expect for human decisions."
Establishing a floor, not a ceiling. Supporters emphasize that the Act establishes minimum standards without prohibiting innovation. Companies remain free to deploy algorithmic systems — they simply must assess their impacts. This mirrors the Environmental Impact Assessment model: EIAs do not ban development projects, but they require proponents to evaluate and mitigate environmental harm.
Empowering regulators. The creation of a Bureau of Technology within the FTC would address the widely acknowledged gap in regulatory technical capacity. As Sofia Reyes of the DataRights Alliance observed: "You can't enforce algorithmic accountability with regulators who don't understand algorithms. The Bureau of Technology is the single most important provision in the bill."
Building an evidence base. Impact assessments, even imperfect ones, generate documentation. Over time, a corpus of assessments creates an evidence base that researchers, regulators, and policymakers can analyze to identify patterns of harm, best practices, and areas needing further regulation.
Arguments Against
Compliance burden. Industry groups, including the Information Technology Industry Foundation and the Chamber of Commerce's Technology Engagement Center, argued that the Act imposes costly compliance requirements that would disproportionately burden mid-size companies and stifle innovation. They estimated compliance costs of $10,000 to $100,000 per impact assessment, depending on system complexity.
Vagueness. Critics from both industry and civil society raised concerns about definitional vagueness. What exactly constitutes an "automated decision system"? Does a spam filter count? A content recommendation algorithm? The Act's definitions, while broader than some proposals, still require extensive rulemaking to operationalize.
Risk of capture. Scholars including Meredith Whittaker (AI Now Institute) warned that the impact assessment model is vulnerable to the same capture dynamics that have plagued Environmental Impact Assessments: companies may treat the process as a compliance exercise, producing lengthy documents that satisfy the letter of the law without meaningfully preventing harm. "The question is not whether impact assessments are conducted," Whittaker wrote, "but whether they are conducted with teeth."
Insufficient scope. Civil rights organizations, including the ACLU and the Lawyers' Committee for Civil Rights Under Law, argued that the Act's coverage excludes too many entities. Government agencies fall outside the FTC's jurisdiction entirely, so a county that deploys a predictive policing algorithm would not be covered. A startup with less than $50 million in revenue that sells a hiring algorithm used by hundreds of companies might not be covered either.
Preemption concerns. Some state legislators worried that a federal law could preempt stronger state-level protections. Illinois's Biometric Information Privacy Act (BIPA), for example, provides individuals with a private right of action for biometric data violations — a stronger enforcement mechanism than anything in the federal bill. Federal legislation that sets a national floor but preempts state laws that exceed it could actually weaken protections in some jurisdictions.
Comparative Context
The Algorithmic Accountability Act does not exist in isolation. It is part of a global trend toward algorithmic governance:
- The EU AI Act (adopted 2024) takes a risk-based approach, categorizing AI systems by risk level and imposing proportional requirements. High-risk systems face mandatory conformity assessments, human oversight requirements, and post-market surveillance.
- Canada's Directive on Automated Decision-Making (2019) requires federal government agencies to conduct Algorithmic Impact Assessments before deploying automated systems that affect individuals' rights, benefits, or services.
- New York City's Local Law 144 (2023) requires employers using automated employment decision tools to commission annual independent bias audits and disclose the results to job candidates.
- Brazil's AI Bill (under consideration) proposes a graduated regulatory framework with mandatory impact assessments for high-risk AI systems.
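The bias audits required under Local Law 144 center on an "impact ratio": each category's selection rate divided by the selection rate of the most-selected category. A minimal sketch, with hypothetical numbers and category names:

```python
# Sketch of the impact-ratio metric used in NYC Local Law 144 bias audits.
# Inputs (counts, category names) are hypothetical.

def impact_ratios(selected, total):
    """selected/total: dicts of category -> counts. Returns category -> impact ratio,
    i.e., each category's selection rate relative to the most-selected category."""
    rates = {cat: selected[cat] / total[cat] for cat in total}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical hiring-tool audit: candidates selected vs. assessed, by category.
ratios = impact_ratios(
    selected={"group_1": 40, "group_2": 24},
    total={"group_1": 100, "group_2": 100},
)
print(ratios)  # group_2's ratio of 0.6 falls below the traditional four-fifths benchmark
```

Note that the law mandates computing and disclosing these ratios; it does not itself set a numerical pass/fail threshold such as the four-fifths rule.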
The Algorithmic Accountability Act is broader in scope than NYC's Local Law 144 (which covers only employment decisions) but less comprehensive than the EU AI Act (which includes risk classification, prohibited practices, and detailed technical standards). It occupies a middle ground that reflects the United States' traditionally lighter regulatory approach to technology.
Connections to Chapter 17
The AIA Framework
The Act's impact assessment requirement is a direct implementation of the Algorithmic Impact Assessment framework discussed in Section 17.3. The Act's assessment criteria — accuracy, fairness, privacy, transparency, stakeholder consultation — map closely to the chapter's recommended AIA components. The Act adds a legislative mandate and enforcement mechanism, transforming the AIA from a best practice into a legal obligation.
The Many Hands Problem
The Act implicitly addresses the many hands problem by making the deploying entity primarily responsible for conducting assessments. While the Act does not resolve all questions of upstream liability (e.g., responsibility of training data providers or model developers), it establishes that the entity that chooses to use an algorithmic system in a consequential context bears the obligation to assess its impacts. This is a partial but meaningful step toward assigning accountability within the distribution chain.
The Audit Ecosystem
The Act's provision for FTC-commissioned independent audits directly supports the emerging audit ecosystem discussed in Section 17.5. By creating demand for independent algorithmic auditing — and backing that demand with regulatory authority — the Act would accelerate the professionalization and standardization of algorithmic auditing practices.
Discussion Questions
- The threshold question. The Act applies only to companies meeting specific size and data thresholds. A small startup that develops a hiring algorithm used by dozens of mid-size employers falls below the threshold. Is this a reasonable accommodation for small businesses, or a dangerous loophole? How would you redesign the threshold, if at all?
- Capture and teeth. Meredith Whittaker warns that impact assessments may become compliance exercises without "teeth." What specific design features would give the Act's assessment process real teeth? Consider: Who conducts the assessment? Who reviews it? What happens when an assessment reveals harm? Is there a private right of action for individuals?
- The FTC's capacity. The Act designates the FTC as the primary enforcement agency and creates a Bureau of Technology. As of this writing, the FTC has approximately 1,100 employees to oversee a technology industry worth trillions of dollars. Is institutional capacity the binding constraint on algorithmic accountability? What alternatives exist?
- Comparative advantage. Compare the Algorithmic Accountability Act's approach to the EU AI Act's risk-based framework. Which approach is more likely to prevent algorithmic harm? Which is more likely to be adopted in a politically polarized legislature? Are the two approaches compatible, or do they reflect fundamentally different governance philosophies?
Your Turn: Mini-Project
Option A: Legislative Drafting. Identify one gap in the Algorithmic Accountability Act (e.g., the size threshold, the lack of a private right of action, the absence of criminal penalties for egregious violations). Draft a 500-word amendment to the Act that addresses this gap. Explain the rationale for your amendment, anticipate one counterargument, and respond to it.
Option B: Stakeholder Testimony. Choose one of the following roles: (a) a civil rights attorney, (b) a technology industry lobbyist, (c) a small business owner, or (d) a community organizer from a neighborhood affected by predictive policing. Write a 600-word statement to be delivered at a Congressional hearing on the Act. Argue your position with specificity, referencing the Act's provisions and concepts from Chapter 17.
Option C: Comparative Analysis. Research New York City's Local Law 144 (bias audits for automated employment decision tools), which took effect in 2023. Write a 600-word comparison with the Algorithmic Accountability Act, addressing: scope, assessment methodology, enforcement mechanism, and effectiveness. Which approach is more likely to produce meaningful accountability?
References
- U.S. Congress. "Algorithmic Accountability Act of 2022." S. 3572, 117th Congress, 2nd Session. February 3, 2022.
- Raji, Inioluwa Deborah, et al. "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*), 33-44. ACM, 2020.
- Selbst, Andrew D. "An Institutional View of Algorithmic Impact Assessments." Harvard Journal of Law & Technology 35, no. 1 (2021): 117-191.
- Whittaker, Meredith, et al. "AI Now 2018 Report." AI Now Institute, New York University, 2018.
- Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77-91. PMLR, 2018.
- Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 23, 2016.
- New York City Department of Consumer and Worker Protection. "Automated Employment Decision Tools (Local Law 144)." Final Rule, April 2023.
- European Parliament. "Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)." Official Journal of the European Union, 2024.
- Congressional Research Service. "Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress." CRS Report R47644, 2023.