Case Study 29-2: The Illinois Artificial Intelligence Video Interview Act

Overview

In 2019, Illinois became the first state in the United States to specifically regulate artificial intelligence in the hiring process, passing the Artificial Intelligence Video Interview Act (AIVIA), which took effect on January 1, 2020. The law is narrow — it applies only to AI analysis of video job interviews — but it represents the first significant legislative attempt in the U.S. to regulate algorithmic hiring surveillance, and its structure and limits illustrate the broader challenges of using law to constrain algorithmic hiring bias.

This case study examines what AIVIA does, what it does not do, and what its passage and implementation reveal about the politics and limitations of algorithmic hiring regulation.


What the Illinois AIVIA Requires

The Artificial Intelligence Video Interview Act has four core requirements:

1. Disclosure: Employers that use AI to analyze video interviews must notify applicants before the interview that AI will be used, what general aspects of the interview the AI will be analyzing, and how AI is being used in the hiring process.

2. Consent: Employers must obtain the applicant's consent before using AI analysis of the video interview.

3. Limitations on sharing: Employers may not share video interviews analyzed by AI with third parties except when "necessary for evaluating the applicant" or to comply with law. The AI analysis vendor and its employees fall within this permitted-sharing exception.

4. Deletion: Upon an applicant's request, employers must delete the video interview and associated AI analysis data within 30 days, except where deletion would conflict with legal requirements.


What the Illinois AIVIA Does Not Require

Understanding AIVIA's limits is as important as understanding its requirements:

No accuracy or validity requirement: AIVIA does not require that the AI analysis actually work — that it accurately predict job performance or that its assessments be scientifically validated. An employer can use a facial expression analysis tool that has never been validated for the specific job type and comply fully with AIVIA.

No bias audit requirement: AIVIA does not require that employers or AI vendors audit their systems for discriminatory impact. An AI tool that produces severe racial disparate impact complies with AIVIA as long as disclosure and consent requirements are met.

No right to an explanation: Applicants are not entitled to an explanation of their AI score, what factors contributed to it, or why they were rejected. The law requires disclosure that AI is being used, not disclosure of how it works.

No right to human review: Unlike GDPR Article 22, AIVIA does not give applicants the right to request that a human review the AI assessment or override its recommendation.

No protection against coerced consent: An applicant who refuses to consent to AI analysis under AIVIA can simply be excluded from consideration for the position. The consent is formally real — the applicant can say no — but the consequence of saying no is that their application is not processed.


Enforcement and Limitations

AIVIA is enforced through the Illinois Human Rights Act: applicants who believe their rights under AIVIA have been violated can file a complaint with the Illinois Department of Human Rights.

In practice, enforcement has been limited, for several reasons:

Detection problem: Applicants who were not informed that AI analysis was being used cannot easily discover the violation. Companies that fail to disclose AI use are unlikely to be reported unless an applicant specifically investigates.

Remedy limitations: Remedies for AIVIA violations are not clearly specified and have not been established through significant enforcement actions.

Scope limitations: AIVIA only covers video interviews analyzed by AI — not AI-analyzed audio-only interviews, gamified assessments, resume screening algorithms, or any other form of AI hiring analysis.


Why AIVIA Was Passed

The legislative history of AIVIA reveals the political dynamics of algorithmic hiring regulation.

The bill was sponsored by Illinois state representative Jaime Andrade, Jr., who was motivated by constituent reports of video interviewing practices that felt invasive and opaque. The bill passed with bipartisan support — a rarity for privacy legislation — partly because the specific concern (AI analyzing your face to make a hiring decision) was intuitive and viscerally troubling to legislators across the political spectrum.

The bill's passage was also facilitated by what it did not do: it did not prohibit AI hiring analysis, did not impose significant compliance burdens on employers, and did not create strong private rights of action that plaintiffs' attorneys might use against employers. The law created disclosure requirements that employers could comply with through a standard contract clause and a checkbox.

This political dynamic — disclosure requirements as the compromise position between prohibition and no regulation — is typical of technology regulation. Disclosure requirements are politically achievable because they burden employers less than substantive restrictions on AI use; they may fail to address the underlying harm, but they represent the available compromise in a political environment where technology companies and employers are influential stakeholders.


Consent in Practice

AIVIA's consent requirement created an interesting natural experiment in how applicants respond when given a choice about AI analysis.

Several Illinois employers reported that after implementing AIVIA consent processes, a meaningful percentage of applicants declined AI analysis — and then were excluded from further consideration. This outcome frustrated the law's apparent intent: the consent requirement was designed to give applicants power over their data; in practice, it gave applicants the power to exclude themselves from employment opportunities.

Some applicants reported feeling coerced: "I don't want my face analyzed, but I also need this job." This is the consent problem identified throughout this chapter: consent is only meaningful when the alternative to consenting is acceptable. For job applicants who need employment, declining AI analysis is often not acceptable.


The Broader Regulatory Landscape Post-AIVIA

AIVIA's passage in 2019 preceded a wave of state-level activity on algorithmic hiring regulation. Key subsequent developments:

Maryland HB 1202 (2020): Prohibits employers from using facial recognition services during a job interview to create a facial template unless the applicant consents by signing a waiver. Like AIVIA, it is a disclosure-and-consent law rather than a substantive restriction on how the analysis may be used.

New York City Local Law 144 (2023): As noted in the chapter, requires bias audits and disclosure. This is substantively stronger than AIVIA: the bias audit requirement creates an accountability mechanism that AIVIA lacks.

Washington State: Several bills requiring algorithmic transparency in hiring have been introduced but not enacted, including provisions that would require employers to disclose whether automated systems were used in screening.

Federal-level activity: The EEOC's AI guidance and proposed federal legislation (including the "Algorithmic Accountability Act," which has been introduced in multiple congressional sessions without passage) indicate federal attention to the issue but no enacted federal framework.


Lessons for Algorithmic Hiring Regulation

AIVIA and its successors suggest several lessons for designing effective algorithmic hiring regulation:

Disclosure alone is insufficient. Without accuracy requirements, bias audit mandates, or rights to explanation, disclosure of AI use does not protect applicants from discriminatory AI systems.

Consent in coercive conditions is not meaningful protection. Applicants who need employment cannot meaningfully decline algorithmic hiring processes without accepting exclusion from opportunities.

Substantive requirements are politically difficult but necessary. The most effective regulations — New York City's bias audit requirement, GDPR's human review right — impose substantive requirements on AI systems' operation, not just their disclosure. These requirements are harder to achieve legislatively but address the underlying harm.

Algorithmic hiring bias is a civil rights issue. The most powerful legal framework for addressing algorithmic hiring discrimination is probably existing civil rights law, specifically the disparate impact doctrine under Title VII. The challenge is enforcement: bringing a disparate impact claim requires access to data about how the algorithm performs across demographic groups — data that is systematically held by employers and vendors.
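
The enforcement challenge in the last lesson is worth making concrete: the arithmetic of a basic disparate impact screen is trivial once selection data by group is available — the barrier is access to that data, not analytical difficulty. A minimal sketch of the EEOC's four-fifths rule of thumb (group names and figures here are hypothetical, for illustration only):

```python
def selection_rates(outcomes):
    """Compute each group's selection rate (selected / total applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is
    treated as evidence of adverse (disparate) impact warranting scrutiny.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical screening outcomes: group -> (selected, total applicants)
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
ratios = impact_ratios(outcomes, reference_group="group_a")
flagged = {g for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_b: 0.30 / 0.48 = 0.625
print(flagged)  # group_b falls below the four-fifths threshold
```

The computation is a few lines; what a Title VII plaintiff typically lacks is the `outcomes` table itself, which is held by the employer or vendor.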


Discussion Questions

  1. AIVIA requires disclosure and consent but does not require accuracy validation or bias auditing. Is disclosure an adequate remedy for the harms identified in this chapter? What theory of harm does disclosure address? What harms does it not address?

  2. Some applicants who declined AI analysis under AIVIA were subsequently excluded from consideration. What does this reveal about the limits of consent-based privacy regulation in employment contexts? How should regulation respond to this problem?

  3. New York City's Local Law 144 requires bias audits — but applicants cannot see the audit results without specifically requesting them, and the audit methodology is defined by the vendor being audited. Is this a meaningful accountability mechanism or compliance theater? What would a more rigorous auditing framework look like?

  4. The federal government has not enacted equivalent regulation. What political and economic factors have prevented federal legislation on algorithmic hiring? Who benefits from the regulatory vacuum?

  5. Jordan Ellis was not in Illinois. Even if Jordan had been in Illinois, AIVIA would only have protected them from undisclosed AI video analysis — not from resume screening algorithms, not from gamified assessments, not from culture fit algorithms. What does the gap between Jordan's actual experience and what AIVIA covers reveal about the challenges of regulating algorithmic hiring comprehensively?