Quiz: The EU AI Act and Risk-Based Regulation

Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.


Section 1: Multiple Choice (1 point each)

1. The EU AI Act's regulatory approach is best described as:

  • A) A technology-based approach that regulates specific AI technologies (neural networks, deep learning, etc.) regardless of their application.
  • B) A risk-based approach that classifies AI systems into tiers based on their potential for harm, with obligations calibrated to the risk level.
  • C) A blanket prohibition on all AI systems that process personal data.
  • D) A voluntary code of conduct that AI developers may choose to follow.
Answer **B)** A risk-based approach that classifies AI systems into tiers based on their potential for harm, with obligations calibrated to the risk level. *Explanation:* Section 21.2 describes the AI Act's foundational design principle: rather than regulating AI technology itself, the Act classifies AI *systems* (specific applications and uses) into four risk tiers — unacceptable risk, high risk, limited risk, and minimal risk — and imposes obligations proportional to the tier. This means the same underlying technology (e.g., a large language model) may face different obligations depending on how it is used.

2. Which of the following AI practices is prohibited under the EU AI Act?

  • A) Using AI to recommend products to consumers based on their browsing history.
  • B) Deploying AI systems that manipulate persons through subliminal techniques beyond their consciousness, causing or likely to cause physical or psychological harm.
  • C) Using AI to generate synthetic text for creative writing purposes.
  • D) Deploying AI-powered chatbots for customer service.
Answer **B)** Deploying AI systems that manipulate persons through subliminal techniques beyond their consciousness, causing or likely to cause physical or psychological harm. *Explanation:* Section 21.3 lists prohibited practices — those assigned to the "unacceptable risk" tier. Subliminal manipulation that causes harm is explicitly prohibited because it undermines individual autonomy and informed decision-making. Product recommendations (A), creative text generation (C), and customer service chatbots (D) are not prohibited, though they may fall under limited-risk transparency requirements.

3. Under the AI Act, "social scoring" by public authorities is:

  • A) Permitted if the scoring is transparent and the individual can contest their score.
  • B) Classified as high-risk and subject to conformity assessment requirements.
  • C) Prohibited outright, because it evaluates individuals based on social behavior in ways that lead to unjustified or disproportionate detrimental treatment.
  • D) Permitted only during a transition period ending in 2027.
Answer **C)** Prohibited outright, because it evaluates individuals based on social behavior in ways that lead to unjustified or disproportionate detrimental treatment. *Explanation:* Section 21.3 identifies social scoring by public authorities as a prohibited practice under the unacceptable-risk tier. The prohibition reflects the EU's judgment that systematic government evaluation of citizens' trustworthiness based on behavioral data is fundamentally incompatible with human dignity and non-discrimination. The Act specifically targets scoring systems that lead to unfavorable treatment in unrelated contexts or treatment that is disproportionate to the behavior scored.

4. VitraMed's predictive analytics system, which identifies patients at high risk of cardiac events, would most likely be classified as:

  • A) Minimal risk, because it assists rather than replaces medical professionals.
  • B) Limited risk, requiring only transparency obligations.
  • C) High risk, because AI systems used in healthcare and as safety components of medical devices are listed in Annex III.
  • D) Unacceptable risk, because errors could result in patient death.
Answer **C)** High risk, because AI systems used in healthcare and as safety components of medical devices are listed in Annex III. *Explanation:* Section 21.4 specifies that AI systems intended for use as safety components of medical devices, or as medical devices themselves, are classified as high-risk under Annex III. VitraMed's cardiac risk prediction system — which informs clinical decisions that directly affect patient outcomes — would fall squarely within this category. This classification triggers the full suite of high-risk requirements: conformity assessment, risk management system, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

5. The AI Act's requirement for "human oversight" of high-risk AI systems means:

  • A) A human must personally review and approve every individual decision the AI makes.
  • B) High-risk AI systems must be designed to allow natural persons to effectively oversee the system's functioning, understand its capabilities and limitations, and intervene or override as appropriate.
  • C) The AI system must be supervised by a team of at least three human operators at all times.
  • D) Human oversight is satisfied by having a human available to answer questions about the AI system's outputs upon request.
Answer **B)** High-risk AI systems must be designed to allow natural persons to effectively oversee the system's functioning, understand its capabilities and limitations, and intervene or override as appropriate. *Explanation:* Section 21.4 describes human oversight not as a requirement for human approval of every decision but as a design requirement: the system must be built so that humans *can* meaningfully oversee it. This includes understanding what the system does, recognizing its limitations, monitoring its operation for anomalies, and having the ability to intervene, override, or halt the system. The standard is functional effectiveness, not procedural formality — a distinction that matters when evaluating whether "rubber stamp" review constitutes genuine oversight.

6. The AI Act's treatment of "general-purpose AI models" (GPAI) was driven primarily by:

  • A) Concerns about GPAI models being used for social scoring in EU member states.
  • B) The emergence of large language models and foundation models (like ChatGPT) after the original legislative proposal was drafted, forcing negotiators to address a category of AI the proposal did not contemplate.
  • C) Industry lobbying to exempt GPAI models from all regulatory requirements.
  • D) A pre-existing EU regulation on foundation models that needed to be harmonized.
Answer **B)** The emergence of large language models and foundation models (like ChatGPT) after the original legislative proposal was drafted, forcing negotiators to address a category of AI the proposal did not contemplate. *Explanation:* Section 21.5 explains that the Commission's 2021 proposal did not address foundation models or GPAI because ChatGPT and comparable systems did not yet exist when the proposal was drafted. The rapid emergence of these models in 2022-2023 created urgent pressure to address them, resulting in new provisions negotiated during the trilogue. This illustrates a fundamental challenge for technology regulation: the legislative process is slower than the technology it seeks to govern.

7. A GPAI model is classified as posing "systemic risk" under the AI Act if:

  • A) It is developed by a company with annual revenue exceeding €500 million.
  • B) It has high-impact capabilities, assessed based on criteria including computational power used for training (exceeding a specified threshold in floating-point operations).
  • C) It is used by more than 1 million individual users.
  • D) It has been involved in at least one documented incident of harm.
Answer **B)** It has high-impact capabilities, assessed based on criteria including computational power used for training (exceeding a specified threshold in floating-point operations). *Explanation:* Section 21.5 describes the systemic risk classification as based on the model's capabilities, with training compute (measured in floating-point operations, or FLOPs) serving as a key threshold indicator. Models exceeding the specified compute threshold are presumed to pose systemic risk, triggering additional obligations including adversarial testing, incident reporting, and cybersecurity measures. Revenue (A), user count (C), and prior incidents (D) are not the classification criteria, though they may be relevant to risk assessment.
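
The compute threshold lends itself to a quick illustration. The following sketch is illustrative only and not part of the Act's text: the helper name is hypothetical, and the 10^25 FLOP constant reflects the presumption threshold in the Act as adopted (which the Commission may adjust over time).

```python
# Illustrative sketch: check whether a GPAI model's cumulative training compute
# crosses the Act's presumption threshold for "systemic risk".
# Assumption: 1e25 FLOPs is the threshold named in the Act as adopted; the
# Commission can revise it, so treat this constant accordingly.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model is presumed to have high-impact capabilities."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# A model trained with 3e25 FLOPs exceeds the threshold and is presumed to pose
# systemic risk, triggering the additional obligations described above.
print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(5e24))  # False
```

Because the Act frames this as a presumption, the compute check is a starting point rather than the whole test.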

8. The "Brussels Effect" in the context of the AI Act refers to:

  • A) The AI Act's requirement that all AI training data must be stored in Brussels.
  • B) The likelihood that the AI Act will influence AI governance practices and legislation globally, as companies and countries align with EU standards.
  • C) A provision allowing Brussels-based companies to self-certify their AI systems.
  • D) The Act's exemption for AI systems developed within EU institutions.
Answer **B)** The likelihood that the AI Act will influence AI governance practices and legislation globally, as companies and countries align with EU standards. *Explanation:* Section 21.6 applies the Brussels Effect concept (introduced in Chapter 20) to AI regulation. Because the AI Act applies to any AI system placed on the EU market — regardless of where the provider is headquartered — global AI companies will need to comply with its requirements for their EU operations. Many will find it more efficient to implement Act-compliant practices globally. Additionally, other jurisdictions may use the AI Act as a template for their own AI legislation, amplifying its influence.

9. Regulatory sandboxes under the AI Act are designed to:

  • A) Allow AI companies to operate with no regulatory oversight for five years.
  • B) Provide controlled environments in which innovative AI systems can be developed and tested under regulatory supervision, with certain requirements relaxed during the testing period.
  • C) Create physical testing facilities in each EU member state where AI hardware is evaluated.
  • D) Exempt startups from all AI Act obligations permanently.
Answer **B)** Provide controlled environments in which innovative AI systems can be developed and tested under regulatory supervision, with certain requirements relaxed during the testing period. *Explanation:* Section 21.7 describes regulatory sandboxes as a mechanism to balance innovation and regulation. They allow developers to test AI systems under relaxed requirements while maintaining regulatory oversight. Sandboxes are time-limited and supervised — not a permanent exemption. They reflect the concern, prominent during negotiations, that overly rigid regulation could push AI development outside the EU.

10. Real-time remote biometric identification in public spaces by law enforcement is:

  • A) Completely prohibited under the AI Act with no exceptions.
  • B) Permitted without restriction for any law enforcement purpose.
  • C) Prohibited in principle but permitted under narrow exceptions for specific serious offenses, subject to prior judicial authorization and other safeguards.
  • D) Classified as minimal risk and subject only to transparency requirements.
Answer **C)** Prohibited in principle but permitted under narrow exceptions for specific serious offenses, subject to prior judicial authorization and other safeguards. *Explanation:* Section 21.3 explains that this was one of the most contested provisions in the AI Act negotiations. The Parliament sought a complete ban; the Council wanted broader law enforcement exceptions. The compromise prohibits real-time remote biometric identification in public spaces as a general matter but allows it for a limited list of serious offenses (e.g., terrorism, trafficking) with prior judicial authorization. This compromise reflects the tension between fundamental rights (privacy, non-discrimination) and public safety concerns.

Section 2: True/False with Justification (1 point each)

11. "The AI Act applies only to AI systems developed by companies headquartered in the European Union."

Answer **False.** *Explanation:* Like the GDPR, the AI Act has extraterritorial reach. Section 21.6 explains that the Act applies to providers placing AI systems on the EU market or putting them into service in the EU, and to deployers of AI systems located in the EU — regardless of where the provider is established. A US or Chinese AI company whose system is used in the EU must comply. This extraterritorial scope is a key mechanism of the Brussels Effect.

12. "AI systems classified as 'minimal risk' under the AI Act have no regulatory obligations whatsoever."

Answer **True (with nuance).** *Explanation:* Section 21.2 confirms that the vast majority of AI systems — those classified as minimal risk — face no mandatory obligations under the AI Act. The Act encourages voluntary adoption of codes of conduct for these systems, but compliance is not required. However, these systems remain subject to other applicable laws (the GDPR, consumer protection law, product safety directives, etc.). The AI Act adds no *additional* requirements for minimal-risk systems, but they are not in a regulatory vacuum.

13. "The AI Act's conformity assessment for high-risk AI systems requires third-party certification in all cases."

Answer **False.** *Explanation:* Section 21.4 explains that the AI Act allows for both self-assessment and third-party assessment, depending on the type of high-risk system. For most high-risk AI systems listed in Annex III, providers may conduct an internal conformity assessment based on harmonized standards. Third-party assessment (by a "notified body") is required for certain categories, particularly those related to biometric identification and critical infrastructure. The self-assessment option was a concession to industry concerns about compliance costs.

14. "The AI Act regulates the use of AI systems (their applications) rather than the underlying technology (neural networks, machine learning algorithms, etc.)."

Answer **True.** *Explanation:* Section 21.2 describes this as a fundamental design choice. The same neural network technology might power a minimal-risk music recommendation system, a limited-risk chatbot, or a high-risk medical diagnostic tool. The Act classifies based on application and context, not technology. This approach avoids the problem of technology-specific regulation (which becomes obsolete as technology evolves) but creates classification challenges when the same model is used for multiple purposes.

15. "The European Parliament and the Council agreed on every provision of the AI Act without significant disagreement."

Answer **False.** *Explanation:* Section 21.1.2 describes intense disagreement along several fault lines. The Parliament pushed for stricter biometric surveillance bans; the Council sought law enforcement exceptions. The emergence of foundation models created urgent new disputes about how to regulate GPAI. Industry competitiveness concerns produced fierce debate about the stringency of requirements for SMEs. The final text was a product of thirty-seven hours of continuous negotiation, reflecting extensive compromise on all sides.

Section 3: Short Answer (2 points each)

16. The AI Act's risk tiers range from "unacceptable" to "minimal." Explain the logic behind this tiered approach. Why not simply impose the same requirements on all AI systems?

Sample Answer: The tiered approach reflects a proportionality principle: regulatory burden should match the level of risk. Imposing high-risk requirements on all AI systems would be disproportionate — a spam filter does not pose the same dangers as a criminal sentencing algorithm — and would create compliance costs that could stifle innovation, particularly for startups and SMEs. Conversely, treating all AI systems as minimal risk would leave dangerous applications unregulated. The tiered approach concentrates regulatory attention on the systems that pose the greatest threats to fundamental rights and safety, while leaving low-risk systems to operate with minimal regulatory overhead. The challenge is drawing the boundaries between tiers — a classification that is inevitably imprecise and that requires periodic revision as technology and applications evolve.

*Key points for full credit:*
  • Explains the proportionality logic (obligations should match risk)
  • Identifies at least one advantage (avoids over-regulating low-risk systems)
  • Identifies at least one challenge (classification boundary problems)
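
To make the proportionality logic concrete, the sketch below encodes a simplified tier-to-obligations mapping. It is illustrative only: the tier names follow the Act, but the obligation phrases are paraphrased summaries from this chapter, not the legal text, and the function name is hypothetical.

```python
# Illustrative, non-exhaustive mapping from risk tier to the kinds of
# obligations the Act attaches to each tier (paraphrased, not legal text).

TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited: may not be placed on the EU market"],
    "high": [
        "conformity assessment",
        "risk management system",
        "data governance",
        "technical documentation",
        "transparency and instructions for use",
        "human oversight",
        "accuracy, robustness, and cybersecurity",
    ],
    "limited": ["transparency (e.g. disclosing that users are interacting with AI)"],
    "minimal": ["no mandatory AI Act obligations (voluntary codes of conduct encouraged)"],
}


def obligations_for(tier: str) -> list[str]:
    """Look up the paraphrased obligations attached to a risk tier."""
    return TIER_OBLIGATIONS[tier]


# The same underlying technology can land in different tiers depending on use:
print(obligations_for("minimal"))  # e.g. a spam filter
print(obligations_for("high"))     # e.g. a medical diagnostic tool
```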

17. Why did the emergence of ChatGPT and other large language models create a regulatory problem for the AI Act's original proposal? How did negotiators address this problem?

Sample Answer: The Commission's 2021 proposal classified AI systems based on their specific applications — but general-purpose AI models like ChatGPT are not designed for a single application. They can be used for customer service, medical advice, legal analysis, creative writing, or any number of tasks, potentially including high-risk ones. The original proposal had no mechanism for regulating a model whose risk depends entirely on how downstream users deploy it. Negotiators addressed this by creating a new regulatory category: "general-purpose AI models" (GPAI). GPAI providers must meet transparency obligations (including disclosure of training data and model capabilities) regardless of use case, and GPAI models that exceed a computational threshold (measured in FLOPs) are presumed to pose "systemic risk," triggering additional obligations including adversarial testing, systemic risk assessment, and incident reporting. This approach regulates the model itself — not just its applications — recognizing that the capabilities of the model create risks regardless of any specific deployment.

*Key points for full credit:*
  • Explains why general-purpose models did not fit the original application-based classification
  • Describes the GPAI category and its obligations
  • Mentions the systemic risk threshold and additional obligations

18. Explain the concept of a "fundamental rights impact assessment" (FRIA) as required by the AI Act. How does it differ from a standard technical risk assessment?

Sample Answer: A fundamental rights impact assessment evaluates how a high-risk AI system might affect the fundamental rights of individuals and groups — including the right to non-discrimination, privacy, dignity, freedom of expression, and access to effective remedies. It goes beyond technical risk assessment (which focuses on system failures, accuracy, and cybersecurity) to examine the social, political, and ethical consequences of deployment. A technical risk assessment might ask: "What is the system's error rate?" A FRIA asks: "Do those errors fall disproportionately on particular racial, gender, or socioeconomic groups?" and "Does the system's deployment create chilling effects on fundamental freedoms?" The FRIA is required of deployers (not just providers) of high-risk AI, recognizing that the same system can have different fundamental rights implications depending on the context in which it is used.

*Key points for full credit:*
  • Defines FRIA as focused on fundamental rights, not just technical performance
  • Distinguishes it from technical risk assessment with specific examples
  • Notes that the deployer (not just provider) is responsible

19. The AI Act represents a political compromise between innovation and protection. Using a specific example from the Act (e.g., regulatory sandboxes, SME provisions, biometric surveillance exceptions), explain how this compromise works in practice and evaluate whether it achieves an appropriate balance.

Sample Answer: Regulatory sandboxes illustrate the compromise directly. Innovation advocates argued that strict requirements would discourage experimentation and push AI development to less regulated jurisdictions. Protection advocates argued that unregulated AI testing could harm individuals, particularly in high-risk domains. Sandboxes resolve this by providing a controlled environment where innovative AI systems can be developed and tested under regulatory supervision, with certain compliance requirements temporarily relaxed. Participants receive guidance from regulators and can test novel approaches without facing the full weight of compliance obligations. The balance is reasonable but imperfect: sandboxes protect innovation, but the individuals whose data is used in sandbox testing may face real risks without the full protections that would otherwise apply. The Act mitigates this by requiring that sandbox participants still comply with fundamental rights protections and that testing involves informed consent where applicable. Whether this balance is "appropriate" depends on whether one prioritizes the dynamic benefits of innovation or the immediate protections of rights.

*Key points for full credit:*
  • Identifies a specific compromise mechanism from the Act
  • Explains the interests on both sides
  • Evaluates the balance with nuance rather than a one-sided assessment

Section 4: Applied Scenario (5 points)

20. Read the following scenario and answer all parts.

Scenario: CityView AI

A European city government plans to deploy "CityView AI," an AI-powered surveillance system, across its public transit network. The system uses real-time video analysis to: (a) detect unattended bags that may pose a security threat, (b) identify individuals on a national wanted-persons list using facial recognition, (c) count passenger numbers for capacity management, and (d) detect "aggressive behavior" patterns to alert security personnel.

The system was developed by a US-based AI company, TransitGuard Inc., and will be operated by the city's transit authority. TransitGuard trained the system on a dataset of 2 million video clips from transit systems in the US, UK, and South Korea.

(a) Classify each of the four functions (a-d) under the AI Act's risk tiers. Explain your reasoning for each classification. (1 point)

(b) Identify whether TransitGuard Inc. is the "provider" or "deployer" in this scenario. Who bears which obligations under the Act? (1 point)

(c) The facial recognition function (b) triggers specific provisions regarding real-time remote biometric identification in public spaces. Under what conditions, if any, could this function be lawfully deployed under the AI Act? (1 point)

(d) The training dataset consists of video from the US, UK, and South Korea — none from the EU. Identify at least two concerns this raises under the Act's data governance requirements for high-risk AI systems. (1 point)

(e) Propose a compliance roadmap for the city government that would allow it to deploy the system lawfully under the AI Act. Address each function separately and identify which functions, if any, cannot be deployed in their current form. (1 point)

Sample Answer:

**(a)** Classifications:
  • **(a) Unattended bag detection: High risk** — an AI system used for safety-critical purposes in public infrastructure, affecting physical security. Falls under Annex III as a system used in the management and operation of critical infrastructure.
  • **(b) Facial recognition against wanted-persons list: Unacceptable risk / Prohibited (with narrow exceptions)** — this is real-time remote biometric identification in a publicly accessible space. Prohibited in principle under the AI Act, with narrow exceptions for law enforcement investigating specific serious crimes, subject to prior judicial authorization.
  • **(c) Passenger counting: Minimal risk** — anonymous counting of passenger volumes does not involve personal data processing or create significant risks to individuals. No AI Act obligations apply, though general data protection law may apply if any personal data is incidentally processed.
  • **(d) Aggressive behavior detection: High risk** — an AI system used by law enforcement or public authorities to assess individuals' behavior, potentially triggering interventions that affect their rights and safety. The subjective nature of "aggressive behavior" creates significant risks of bias and disproportionate targeting.

**(b)** TransitGuard Inc. is the **provider** — it developed and trained the AI system. The city's transit authority is the **deployer** — it will operate the system in its transit network. Under the Act, TransitGuard bears obligations for conformity assessment, technical documentation, data governance, accuracy, and robustness. The transit authority bears obligations for human oversight, fundamental rights impact assessment, monitoring for risks during operation, and ensuring the system is used in accordance with its instructions for use. Both share transparency obligations.

**(c)** The facial recognition function constitutes real-time remote biometric identification in a publicly accessible space. Under the AI Act, this is prohibited except for a narrowly defined list of serious offenses (terrorism, trafficking, etc.), and only when: (i) law enforcement is conducting the identification, (ii) there is prior judicial authorization or another form of independent authorization as specified by member state law, (iii) the use is strictly necessary and proportionate, and (iv) the identification targets specific individuals sought in connection with specific serious crimes. A transit authority conducting general surveillance against a "wanted-persons list" would likely not meet these conditions unless the list is limited to individuals sought for qualifying offenses and prior authorization has been obtained for each deployment.

**(d)** Training data concerns include:
  • **Representativeness:** A dataset drawn from the US, UK, and South Korea may not be representative of the demographic composition of the European city where the system will be deployed, risking biased performance — particularly for the facial recognition and behavior detection functions, which may perform differently across racial and ethnic groups.
  • **Data governance compliance:** The Act requires that training data be subject to appropriate data governance and management practices, including examination for biases. Training on data from non-EU jurisdictions raises questions about whether the data was collected in compliance with GDPR-equivalent standards and whether data subjects were informed that their images would be used for AI training.

**(e)** Compliance roadmap:
  • **Function (a) — Bag detection:** Can proceed as high-risk. Require TransitGuard to complete conformity assessment, provide technical documentation, and demonstrate accuracy. Transit authority must conduct FRIA and implement human oversight.
  • **Function (b) — Facial recognition:** Cannot be deployed in current form. Must be either (i) limited to specific serious offenses with prior judicial authorization and operated by law enforcement (not the transit authority), or (ii) removed from the system entirely.
  • **Function (c) — Passenger counting:** Can proceed with minimal obligations. Verify that no personal data is incidentally collected or, if it is, that GDPR requirements are met.
  • **Function (d) — Behavior detection:** Can proceed as high-risk, but requires significant additional work: bias testing on representative European populations, conformity assessment, FRIA, clear definition of "aggressive behavior" with human oversight to prevent discriminatory targeting, and a complaint mechanism for affected individuals.
  • **Cross-cutting:** TransitGuard must address training data representativeness and GDPR compliance for all functions. The transit authority should commission an independent bias audit before deployment.

Scoring & Review Recommendations

| Score Range | Assessment | Next Steps |
| --- | --- | --- |
| Below 50% (< 14 pts) | Needs review | Re-read Sections 21.1-21.3, redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas, focus on Part B exercises |
| 70-85% (20-23 pts) | Solid understanding | Ready to proceed to Chapter 22 |
| Above 85% (24-28 pts) | Strong mastery | Proceed to Chapter 22: Data Governance Frameworks and Institutions |

| Section | Points Available |
| --- | --- |
| Section 1: Multiple Choice | 10 points (10 questions × 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions × 1 pt) |
| Section 3: Short Answer | 8 points (4 questions × 2 pts) |
| Section 4: Applied Scenario | 5 points (5 parts × 1 pt) |
| **Total** | **28 points** |
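
As a quick check on the arithmetic, the band boundaries above follow from the 28-point total and the percentage ranges; the small helper below is purely illustrative and its names are hypothetical.

```python
# Illustrative scoring helper: convert a raw score out of 28 points into the
# assessment bands used in the table above.

TOTAL_POINTS = 28


def assessment(points: int) -> str:
    """Map a raw point total to the percentage-based assessment band."""
    pct = 100 * points / TOTAL_POINTS
    if pct < 50:
        return "Needs review"
    if pct < 70:
        return "Partial understanding"
    if pct <= 85:
        return "Solid understanding"
    return "Strong mastery"


print(assessment(19))  # ~68%: Partial understanding
print(assessment(20))  # ~71%: Solid understanding (meets the 70% target)
```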