Case Study 1: EU AI Act Compliance — What a High-Risk AI Hiring Tool Requires

The Scenario

Consider HireRight Analytics, a fictional but representative mid-sized European HR technology company that has developed an AI-powered recruiting platform. The platform uses a combination of natural language processing, machine learning-based scoring, and behavioral analytics to help employers screen job applications. Its core features include: automated resume parsing and scoring against job requirements; AI-generated ranking of candidates; video interview analysis that assesses candidates' speech patterns, vocabulary, and what the vendor describes as "professional competency signals"; and a predictive model that estimates candidates' likely performance and retention.

The platform is deployed by 200 employer clients across Germany, France, the Netherlands, Belgium, Spain, and Poland. Its customers range from large multinationals to mid-sized regional employers. The platform processes roughly 1.5 million job applications per year, automatically screening out approximately 400,000 candidates before any human review.

With the EU AI Act's high-risk requirements applying from August 2026, HireRight Analytics needs to understand what compliance requires — and so do the employers who deploy its platform.

Step 1: Risk Classification

The first compliance task is determining where HireRight Analytics' platform sits in the EU AI Act's risk hierarchy. This analysis is consequential: if the system qualifies as high-risk, the compliance burden is substantial. If it falls into a lower tier, requirements are minimal.

The EU AI Act's Annex III, which lists the categories of high-risk AI systems, covers under its employment heading AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates. It also covers AI systems used for decisions affecting work-related relationships, including promotion, dismissal, and task allocation. HireRight Analytics' platform fits squarely within this description.

The video interview analysis component raises an additional issue. The EU AI Act separately addresses AI systems for emotion recognition and biometric categorization, and Article 5 prohibits outright both AI systems that infer the emotions of natural persons in the workplace (outside narrow medical and safety exceptions) and biometric categorization systems that infer protected characteristics. Depending on the specific technical design of HireRight Analytics' video analysis, what signals it analyzes and how it processes them, this component may fall under these prohibitions rather than merely under the high-risk regime. This requires careful technical and legal analysis before the platform is deployed.

The conclusion of Step 1 is clear: HireRight Analytics' recruiting platform is high-risk AI under Annex III. The compliance journey begins.

Step 2: Preparing Technical Documentation

The EU AI Act requires that providers of high-risk AI systems prepare comprehensive technical documentation before placing their systems on the market. This documentation must enable competent authorities to assess compliance and must be kept up to date. For HireRight Analytics, preparing this documentation requires deep collaboration between legal, technical, and product teams.

The technical documentation must include, at minimum: a general description of the system including its intended purpose; the system's version history and update process; the technical specifications including hardware requirements; a description of the development process; the data used in training, validation, and testing, including information about data origin, scope, annotation methods, collection procedures, and known biases or limitations; the performance metrics and how they were validated; the known limitations and conditions under which the system may fail or underperform; the risk management measures implemented; and the post-market monitoring plan.

For HireRight Analytics, preparing compliant technical documentation reveals several uncomfortable realities. The company's development process was not designed with documentation in mind — early versions of the model were trained on proprietary data whose provenance is not fully documented. The validation datasets used to test the system do not adequately represent the demographic diversity of the populations the system will be applied to. Performance metrics have been reported to customers in terms of overall accuracy without disaggregated analysis by gender, age, ethnicity, or disability status. And the video interview analysis component uses a proprietary scoring approach whose relationship to validated occupational psychology constructs is unclear.

These gaps are not uncommon. Many AI products were developed with technical rigor in terms of model performance but without the documentation and governance infrastructure that compliance requires. The process of preparing EU AI Act-compliant technical documentation frequently reveals these gaps — which is, in part, the point of the requirement.

Remediation of these gaps requires substantial work: reconstructing and documenting the data provenance for training data; conducting disaggregated bias analysis across demographic groups; validating the video interview component against published occupational psychology research; and establishing ongoing documentation processes for future model updates. This work takes time — typically six to twelve months for a system of this complexity — and requires investment in technical resources that smaller companies may find challenging.
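The disaggregated bias analysis at the core of this remediation can be sketched simply. The following is a minimal illustration, not HireRight Analytics' actual methodology: it computes per-group selection rates from screening outcomes and flags groups whose rate falls well below the best-performing group's. (The 0.8 threshold borrowed from the US "four-fifths rule" is a common screening heuristic, not an EU legal standard.)

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Per-group selection rates from (group, advanced) screening records."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's rate to the best-performing group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Synthetic data for illustration only.
records = ([("group_a", True)] * 300 + [("group_a", False)] * 200
           + [("group_b", True)] * 210 + [("group_b", False)] * 290)
rates = selection_rates(records)
ratios = impact_ratios(rates)
flagged = [g for g, r in ratios.items() if r < 0.8]
print({g: round(r, 2) for g, r in rates.items()})  # {'group_a': 0.6, 'group_b': 0.42}
print(flagged)                                      # ['group_b']
```

A flagged ratio is a trigger for closer statistical analysis and job-relatedness review, not by itself proof of unlawful discrimination.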

Step 3: Risk Management System

The EU AI Act requires a documented risk management system covering the AI system's entire lifecycle. For HireRight Analytics, this means identifying and assessing the risks the system presents across several dimensions.

The most significant risk category is discriminatory impact. If the system systematically produces worse outcomes for candidates of a particular gender, ethnicity, age group, or disability status, it may violate not only the EU AI Act but also EU employment anti-discrimination directives and national anti-discrimination law. The risk management documentation must describe how HireRight Analytics identifies this risk, what testing has been conducted to detect discriminatory impacts, what mitigation measures are in place, and how discriminatory impact would be detected and addressed post-deployment.

A second significant risk is the use of the system for decisions it was not designed for. If employers use HireRight Analytics' ranking scores as the sole basis for hiring decisions without any human review, this goes beyond the system's intended purpose and creates risks the provider cannot fully control. The risk management system must address this, including through the instructions for use provided to deployers.

A third risk category is data quality failures — cases where candidates' applications are processed inaccurately (e.g., due to OCR failures parsing documents, or NLP failures misinterpreting non-native English writing) in ways that disadvantage them. The risk management documentation must address how the system handles such failures, what accuracy guarantees are appropriate, and how errors can be detected and corrected.
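One common mitigation for such data quality failures is to gate automated scoring on parsing confidence, so that a poorly parsed application is routed to a human rather than silently scored against garbled input. The sketch below is illustrative only; the metric names and thresholds are assumptions that a real system would calibrate through validation studies.

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would come from validation studies.
OCR_CONFIDENCE_FLOOR = 0.90
PARSE_COVERAGE_FLOOR = 0.80  # fraction of expected resume fields extracted

@dataclass
class ParsedApplication:
    candidate_id: str
    ocr_confidence: float   # mean character-level OCR confidence, 0..1
    field_coverage: float   # expected fields successfully extracted, 0..1

def route(app: ParsedApplication) -> str:
    """Route low-confidence parses to human review instead of auto-scoring."""
    if app.ocr_confidence < OCR_CONFIDENCE_FLOOR:
        return "human_review"
    if app.field_coverage < PARSE_COVERAGE_FLOOR:
        return "human_review"
    return "auto_score"

print(route(ParsedApplication("c-101", 0.97, 0.95)))  # auto_score
print(route(ParsedApplication("c-102", 0.72, 0.95)))  # human_review
```

The design choice worth noting is that the gate fails toward human review: a candidate is never disadvantaged by the model when the system knows its own inputs are unreliable.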

The risk management system is not a one-time exercise. It must be updated when new risks are identified (including through post-market monitoring), when the system is updated, and when new use cases are identified.

Step 4: Human Oversight Implementation

The EU AI Act's human oversight requirement is one of the most practically demanding, and one that has significant implications not just for HireRight Analytics as the provider but for the employers who deploy the platform.

For HireRight Analytics, the human oversight requirement means that the system must be designed so that deployers can meaningfully exercise oversight over its outputs. This includes: providing interfaces that make the system's outputs interpretable (not just "Candidate ranked 47th" but "Candidate ranked 47th because of X, Y, and Z factors"); providing documentation that enables deployers to understand the system's limitations and conditions under which its outputs are less reliable; and enabling deployers to override the system's outputs when their human judgment differs.

Practically, this means HireRight Analytics must invest in explainability features. Rather than simply providing a rank score, the platform must explain — in terms interpretable by HR professionals, not just data scientists — what factors drove each candidate's assessment. This explanation must be honest about uncertainty: if the model's confidence is low for a particular candidate, that uncertainty must be communicated.
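To make this concrete, here is one possible shape for such an explanation payload. The field names and the attribution scheme are hypothetical; the Act mandates interpretability and honesty about uncertainty, not any particular format.

```python
from dataclasses import dataclass

@dataclass
class CandidateExplanation:
    """Explanation payload shown to HR reviewers alongside a ranking.

    Illustrative only: a real system would derive `contributions` from
    its own attribution method (e.g. a per-feature score decomposition).
    """
    candidate_id: str
    rank: int
    contributions: dict  # human-readable factor -> signed contribution
    confidence: float    # model confidence in this assessment, 0..1

    def summary(self, low_confidence_threshold: float = 0.6) -> str:
        # Surface the three largest drivers, positive or negative.
        top = sorted(self.contributions.items(),
                     key=lambda kv: abs(kv[1]), reverse=True)[:3]
        factors = "; ".join(f"{name} ({value:+.2f})" for name, value in top)
        caveat = ("  NOTE: low model confidence; treat this ranking "
                  "as indicative only."
                  if self.confidence < low_confidence_threshold else "")
        return f"Ranked {self.rank}. Main factors: {factors}.{caveat}"

exp = CandidateExplanation(
    candidate_id="c-204",
    rank=47,
    contributions={"years of relevant experience": +0.31,
                   "skills match to job posting": +0.22,
                   "employment gap handling": -0.05},
    confidence=0.55,
)
print(exp.summary())
```

Note that the low-confidence caveat is part of the same payload the reviewer sees, rather than buried in documentation: uncertainty is communicated at the point of decision.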

For the employer-deployers, the human oversight requirement creates its own obligations. They cannot treat HireRight Analytics' rankings as the sole basis for hiring decisions without any human review. They must design hiring workflows in which qualified human reviewers actually review AI-assisted candidate assessments and exercise genuine judgment about whether to accept or override them. The Act requires deployers to monitor the operation of the high-risk AI system on the basis of the provider's instructions for use, and to assign human oversight to natural persons who have the necessary competence, training, and authority to exercise it.

Implementing this in practice requires employers to train HR staff on both how the system works and how to exercise meaningful oversight. Staff who simply rubber-stamp AI rankings without genuine engagement are not providing the "human oversight" the Act requires — and this rubber-stamping has been documented in research on how humans actually interact with AI systems in hiring contexts.

Step 5: Bias Audit and Fundamental Rights Impact Assessment

The EU AI Act (Article 27) requires certain deployers of high-risk AI systems to conduct fundamental rights impact assessments before putting those systems into use. The obligation falls on deployers that are bodies governed by public law or private entities providing public services, together with deployers of certain other Annex III systems such as credit scoring and life and health insurance risk assessment; it does not extend to private employers generally, though conducting an equivalent assessment is prudent practice that many adopt. The assessment must consider the potential impact on the rights of affected persons, in this case job candidates, must be documented, and should incorporate input from groups potentially affected by the system.

For HireRight Analytics' clients, this means commissioning a fundamental rights impact assessment that examines: whether the system produces discriminatory impacts on protected groups under applicable EU and national law; whether the data protection rights of candidates are adequately respected; whether candidates are adequately informed about the use of AI in their evaluation; and whether candidates have meaningful access to human review and the ability to contest AI-assisted decisions.

Conducting this assessment reveals that HireRight Analytics' clients face exposure on several fronts. Candidates are not currently informed that an AI system is evaluating their video interviews, which may violate GDPR transparency requirements. Candidates have no mechanism to contest an AI-generated rejection. Disaggregated performance analysis, when conducted for the assessment, reveals statistically significant differences in pass rates between male and female candidates that could not be explained by legitimate job-related criteria. These findings must be documented and addressed before deployment continues.
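The kind of statistical check that surfaces a disparity like this is straightforward. Below is a minimal two-sided, two-proportion z-test using only the Python standard library, run on synthetic counts rather than any real audit figures; a production audit would add confidence intervals and multiple-comparison corrections.

```python
import math

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Two-sided z-test for a difference in pass rates between two groups.

    Uses the pooled normal approximation, adequate for the large samples
    typical of a platform-scale bias audit. Returns (z, p_value).
    """
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Synthetic counts for illustration: a 52% vs 49% pass rate.
z, p = two_proportion_z(pass_a=5200, n_a=10000, pass_b=4900, n_b=10000)
print(f"z = {z:.2f}, p = {p:.1e}")  # z ~ 4.2: highly significant at this scale
```

The point the audit step makes is visible here: a 3-point gap that looks small in a dashboard is overwhelmingly significant at platform sample sizes, which is why disaggregated analysis must be run rather than eyeballed.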

Step 6: Registration in the EU Database

Providers of high-risk AI systems must register those systems in the EU AI Act database before placing them on the market. This registration is publicly accessible, creating a transparency commitment: anyone can see what high-risk AI systems are on the EU market, who developed them, and what they are used for.

For HireRight Analytics, registration requires providing: the provider's identity and contact information; the AI system's name and version; its intended purpose and categories of natural persons affected; information about training data; the categories of personal data processed; the applicable conformity assessment procedure; whether the system was subject to third-party conformity assessment; and a URL to additional public information.

Registration also requires ongoing maintenance: updates must be registered when systems change in ways that affect their compliance status.

Step 7: Post-Market Monitoring

Once deployed, high-risk AI systems must be actively monitored for continued compliance. For HireRight Analytics, this requires establishing monitoring systems that track: whether the system is being used within its intended purpose; whether actual performance metrics in real-world deployment match those established in pre-deployment validation; whether new risks or failure modes are emerging; and whether serious incidents occur.
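The performance-tracking element of this monitoring can be sketched as a periodic comparison of live metrics against pre-deployment validation baselines. The metric names and tolerances below are illustrative assumptions, not prescribed by the Act.

```python
# Hypothetical baselines from pre-deployment validation, and the maximum
# degradation tolerated before an alert is raised for investigation.
BASELINE = {"overall_accuracy": 0.86, "selection_rate_ratio_worst": 0.92}
TOLERANCE = {"overall_accuracy": 0.03, "selection_rate_ratio_worst": 0.05}

def drift_alerts(live_metrics: dict) -> list:
    """Return alerts for any metric that degraded beyond its tolerance."""
    alerts = []
    for name, baseline in BASELINE.items():
        live = live_metrics.get(name)
        if live is None:
            alerts.append(f"{name}: metric missing from live monitoring")
        elif baseline - live > TOLERANCE[name]:  # alert on degradation only
            alerts.append(f"{name}: {live:.2f} vs baseline {baseline:.2f}")
    return alerts

print(drift_alerts({"overall_accuracy": 0.81,
                    "selection_rate_ratio_worst": 0.93}))
# ['overall_accuracy: 0.81 vs baseline 0.86']
```

Treating a missing metric as an alert, not a silent pass, matters: a monitoring gap is itself a compliance finding that must be visible to the people accountable for the system.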

Under the Act, a serious incident is an incident or malfunction of an AI system that directly or indirectly leads to the death of a person or serious harm to their health, a serious and irreversible disruption of the management or operation of critical infrastructure, an infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment. Providers must report serious incidents to the market surveillance authorities of the Member States where the incident occurred immediately after establishing a causal link between the AI system and the incident, and in any event within 15 days of becoming aware of it; the deadline shortens to two days for a widespread infringement or a serious disruption of critical infrastructure, and to ten days in the event of a death. For an employment AI system, a serious incident might include: systematic discrimination affecting large numbers of candidates, discovered post-deployment; a data breach exposing sensitive candidate data; or a documented failure of human oversight resulting in widespread discriminatory hiring decisions.

The monitoring system must be documented in advance, and the monitoring data must be preserved for inspection by competent authorities. This creates an ongoing compliance obligation that does not end when the system is deployed.

The Compliance Cost and the Compliance Value

The compliance journey described above is substantial — not a brief paperwork exercise but a multi-month, resource-intensive process that requires legal, technical, and organizational investment. For a mid-sized company like HireRight Analytics, the direct costs of EU AI Act compliance for a single high-risk product may run into hundreds of thousands of euros, depending on the work needed to bring existing systems into compliance and the ongoing monitoring costs.

These costs are real and worth acknowledging honestly. The EU AI Act imposes a genuine burden on AI companies, particularly smaller ones, and that burden is a policy choice whose costs and benefits deserve honest evaluation.

But the process of compliance also has value that is easy to underestimate. The technical documentation process that HireRight Analytics found burdensome revealed genuine gaps in the company's understanding of its own system — data provenance gaps, validation gaps, and discriminatory impact risks that could have produced significant legal, reputational, and human harm if discovered through an enforcement action rather than a compliance process. The bias audit that compliance required revealed disparate outcomes by gender that the company had not previously detected. The human oversight implementation process forced a conversation between HireRight Analytics and its clients about how the system was actually being used — conversations that revealed patterns of use far outside the system's intended design.

Organizations that approach EU AI Act compliance as a governance investment rather than a compliance burden are in a significantly better position than those that approach it as a box-checking exercise. The documentation, monitoring, and oversight processes that compliance requires are not just regulatory requirements — they are good governance practices that produce AI systems that work better, fail less catastrophically, and generate less legal and reputational exposure.


Discussion Questions: (1) What aspects of EU AI Act compliance would be most challenging for a large enterprise software company that has embedded AI features throughout its HR product suite rather than offering a stand-alone AI tool? (2) The chapter discusses how the compliance process revealed discriminatory impact risks that HireRight Analytics had not previously detected. What organizational processes should have detected this before regulatory compliance became the trigger? (3) How should the costs of EU AI Act compliance be allocated between AI providers (like HireRight Analytics) and their business clients (the employers)? What contractual provisions would be needed to implement your answer?