Quiz: Autonomous Systems and Moral Machines

Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.


Section 1: Multiple Choice (1 point each)

1. According to the SAE International framework described in Section 19.1.1, at which level of driving automation does the vehicle system handle all driving tasks in certain conditions, even if the human does not respond to a takeover request?

  • A) Level 2 (Partial Automation)
  • B) Level 3 (Conditional Automation)
  • C) Level 4 (High Automation)
  • D) Level 5 (Full Automation)
Answer **C)** Level 4 (High Automation). *Explanation:* Section 19.1.1 defines Level 4 as "high automation" — the system handles all driving tasks within its operational design domain, and can respond safely even if the human occupant does not take over when prompted. This distinguishes Level 4 from Level 3, where the human *must* be ready to intervene when the system requests it. Level 5 involves full automation in all conditions — no operational design domain limitations. The transition from Level 3 to Level 4 is significant for governance because it shifts the system from "human must be available" to "system must handle it independently."

2. The "vigilance problem" described in Section 19.1.2 refers to:

  • A) The difficulty of programming autonomous systems to remain alert to all possible threats
  • B) The tendency of human monitors to become inattentive when overseeing systems that work well most of the time
  • C) The technical challenge of maintaining sensor accuracy in adverse weather conditions
  • D) The problem of hackers gaining unauthorized access to autonomous vehicle systems
Answer **B)** The tendency of human monitors to become inattentive when overseeing systems that work well most of the time. *Explanation:* Section 19.1.2 describes the vigilance problem as a human factors challenge: when a system operates correctly most of the time, the human monitoring it becomes less attentive over time. This is particularly dangerous at Level 3 autonomy, where the human is formally responsible for intervening when the system cannot handle a situation — but the human may not be prepared because the system has been handling everything successfully. Research in aviation and other domains has consistently demonstrated that monitoring a reliable automated system is one of the most difficult cognitive tasks humans are asked to perform.

3. Section 19.2.1 argues that the trolley problem is a misleading guide to autonomous vehicle ethics. Which of the following is NOT a reason the chapter gives for this claim?

  • A) The forced-binary-choice scenario is extremely rare in real autonomous driving.
  • B) The framing obscures the real ethical questions about safety standards, equity, and liability.
  • C) The trolley problem has been definitively solved by philosophers, so it no longer poses an interesting question.
  • D) It individualizes a structural problem — the real ethical issues are systemic, not about single crisis moments.
Answer **C)** The trolley problem has been definitively solved by philosophers, so it no longer poses an interesting question. *Explanation:* The chapter does NOT claim the trolley problem has been solved. Section 19.2.1 gives three reasons the trolley problem is misleading: (1) the forced-binary-choice scenario is extremely rare — real incidents involve perception failures and system errors, not impossible dilemmas; (2) it obscures the real ethical questions about safety thresholds, acceptable error rates, equity in risk distribution, and liability; and (3) it individualizes what is fundamentally a structural problem about infrastructure, regulation, and community consent. The chapter acknowledges the trolley problem raises genuine philosophical questions but argues it has consumed disproportionate attention relative to more pressing governance challenges.

4. Eli raises a concern about autonomous vehicles that connects to the themes of bias and equity from earlier chapters. His primary concern is:

  • A) Autonomous vehicles will be too expensive for residents of low-income neighborhoods to purchase
  • B) Testing and training data may not adequately represent Black pedestrians in various conditions, and deployment decisions may not include community consent
  • C) Autonomous vehicles will increase traffic congestion in urban areas
  • D) The trolley problem has not been adequately resolved for deployment in diverse communities
Answer **B)** Testing and training data may not adequately represent Black pedestrians in various conditions, and deployment decisions may not include community consent. *Explanation:* Section 19.2.2 quotes Eli directly: "When autonomous vehicles are deployed in my neighborhood, will they be tested as rigorously in Black neighborhoods as in white ones? Will the training data include enough examples of Black pedestrians in different lighting conditions? Who decided that my neighborhood is the testing ground?" Eli's concern is about equity in both the technical development (biased perception systems) and the governance process (lack of community consent for deployment). This connects directly to the bias themes from Chapter 14 and the fairness frameworks from Chapter 15.

5. Which of the following is the strongest ethical argument AGAINST lethal autonomous weapons systems (LAWS) as presented in Section 19.3?

  • A) LAWS are too expensive for most militaries to develop
  • B) Delegating the decision to kill a human being to a machine is incompatible with the moral significance of that decision — machines lack the capacity for moral judgment
  • C) LAWS are less accurate than human soldiers in targeting decisions
  • D) LAWS violate existing international trade agreements on military technology
Answer **B)** Delegating the decision to kill a human being to a machine is incompatible with the moral significance of that decision — machines lack the capacity for moral judgment. *Explanation:* Section 19.3 presents the deontological argument that the decision to take a human life carries intrinsic moral weight that requires moral agency — the capacity for moral reasoning, empathy, and judgment — that machines do not possess. This is the argument advanced most forcefully by the Campaign to Stop Killer Robots and by ethicists like Peter Asaro. Option A is empirically questionable (LAWS may actually reduce military costs). Option C is contested (LAWS may be more precise in some contexts). Option D is not supported by the chapter.

6. The MIT Moral Machine experiment (Section 19.4) found that participants' moral preferences for autonomous vehicle dilemmas varied significantly across cultures. Which of the following was a key finding?

  • A) All cultures universally prioritized saving the maximum number of lives in every scenario
  • B) Preferences varied by cultural cluster — for example, participants in some cultures showed stronger preferences for protecting the elderly, while others showed stronger preferences for protecting the young
  • C) Participants in all countries refused to make trade-offs, stating that autonomous vehicles should never be deployed
  • D) Cultural variation was minimal — the experiment found a universal moral framework that could be directly encoded in software
Answer **B)** Preferences varied by cultural cluster — for example, participants in some cultures showed stronger preferences for protecting the elderly, while others showed stronger preferences for protecting the young. *Explanation:* Section 19.4 describes the Moral Machine experiment's finding that moral preferences varied significantly across three major cultural clusters (identified by Awad et al., 2018). Some cultures showed strong preferences for sparing younger individuals, while others showed more equal treatment of age groups. Some showed stronger preferences for sparing higher-status individuals (executives over homeless persons), while others showed weaker status effects. The chapter argues this cultural variation means there is no single "correct" moral algorithm — and that directly encoding crowd-sourced preferences into autonomous systems would be ethically and practically problematic.

7. "Meaningful human control" as discussed in Section 19.5 refers to:

  • A) A requirement that a human physically operate the controls of any autonomous system at all times
  • B) A standard requiring that humans have sufficient understanding, authority, and capacity to intervene in and override autonomous system decisions in a timely manner
  • C) A legal requirement that only human decisions, never machine decisions, can have legal force
  • D) The philosophical position that only humans are capable of meaningful action
Answer **B)** A standard requiring that humans have sufficient understanding, authority, and capacity to intervene in and override autonomous system decisions in a timely manner. *Explanation:* Section 19.5 defines meaningful human control as more than nominal human presence — it requires that the human has adequate information about the system's operation, the authority to override or stop the system, sufficient time to make a decision, and the training and cognitive capacity to exercise judgment. The concept was developed in the autonomous weapons context but applies across domains. It rejects both the extreme of full human operation (which eliminates the benefits of autonomy) and the fiction of nominal oversight (where a human is technically "in the loop" but functionally unable to intervene).

8. The EU AI Act's risk-based classification system (Section 19.6) categorizes AI systems into which of the following risk levels?

  • A) Low, medium, high, and critical
  • B) Unacceptable risk, high risk, limited risk, and minimal risk
  • C) Prohibited, regulated, monitored, and unregulated
  • D) Levels 1-5, corresponding to the SAE autonomy framework
Answer **B)** Unacceptable risk, high risk, limited risk, and minimal risk. *Explanation:* Section 19.6 describes the EU AI Act's four-tier risk classification: unacceptable risk (prohibited — e.g., social scoring systems, real-time biometric surveillance in public spaces with limited exceptions), high risk (subject to stringent requirements including conformity assessments, transparency, human oversight, and documentation — e.g., AI in critical infrastructure, education, employment, law enforcement), limited risk (subject to transparency requirements — e.g., chatbots that must disclose they are AI), and minimal risk (no specific requirements — e.g., spam filters, video game AI). The proportionality principle ensures that governance requirements scale with the potential for harm.
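
To make the proportionality principle concrete, the sketch below maps each tier to a simplified set of obligations. The tier names follow Section 19.6, but the obligation lists and the example system are illustrative assumptions, not the Act's legal text.

```python
# Illustrative sketch of the EU AI Act's proportionality idea: obligations
# scale with the risk tier. Tier names follow Section 19.6; the obligation
# lists here are simplified assumptions, not the Act's actual requirements.

OBLIGATIONS = {
    "unacceptable": ["prohibited from deployment"],
    "high": ["conformity assessment", "transparency", "human oversight", "documentation"],
    "limited": ["disclose to users that they are interacting with an AI system"],
    "minimal": [],  # no specific requirements
}

def obligations_for(tier: str) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    if tier not in OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {tier}")
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # Example: a hypothetical AI hiring-screening tool would sit in the high-risk tier.
    for duty in obligations_for("high"):
        print(duty)
```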

9. Section 19.5 discusses "automation bias." This concept refers to:

  • A) The tendency of AI developers to automate processes that should remain under human control
  • B) The tendency of humans to defer to automated system recommendations even when they have reason to believe the system is wrong
  • C) Statistical bias in the training data of autonomous systems
  • D) The regulatory preference for automated solutions over human decision-making
Answer **B)** The tendency of humans to defer to automated system recommendations even when they have reason to believe the system is wrong. *Explanation:* Section 19.5 describes automation bias as a well-documented cognitive tendency: humans systematically over-trust automated systems, accepting their outputs even when contradictory evidence is available. Studies in aviation, medicine, and criminal justice have demonstrated that human decision-makers frequently defer to algorithmic recommendations, effectively making the "human oversight" nominal rather than substantive. This is distinct from data bias (C) or policy preferences (D). Automation bias is particularly relevant at Levels 2-3 of autonomy, where humans are formally responsible for oversight but cognitively inclined to defer to the system.

10. Dr. Adeyemi's statement — "Every increase in autonomy is also an increase in the governance demand" — means:

  • A) Autonomous systems are inherently more dangerous than human-controlled systems
  • B) As systems become more autonomous, the need for governance structures — accountability frameworks, oversight mechanisms, liability rules — grows proportionally
  • C) Governments should prohibit autonomous systems above Level 3
  • D) Autonomous systems should be self-governing, requiring governance proportional to their intelligence
Answer **B)** As systems become more autonomous, the need for governance structures — accountability frameworks, oversight mechanisms, liability rules — grows proportionally. *Explanation:* Section 19.1.2 presents Dr. Adeyemi's statement as a principle: greater autonomy means the human decision-maker is further removed from the decision, which means the governance infrastructure must compensate. At Level 0, existing human accountability frameworks suffice. At Level 5, entirely new frameworks are needed because there is no human decision-maker to hold accountable. The statement does not claim autonomous systems are inherently dangerous (A) — they may be safer than human alternatives — but rather that safety is not self-ensuring and requires institutional support proportional to the degree of autonomy.

Section 2: True/False with Justification (1 point each)

11. "The trolley problem is the most important ethical framework for governing autonomous vehicles."

Answer **False.** *Explanation:* Section 19.2.1 explicitly argues against this claim. The trolley problem addresses an extremely rare scenario (forced binary choice with perfect information) while the actual ethical challenges of autonomous vehicles are structural: safety testing standards, acceptable error rates, equitable deployment, liability frameworks, labor displacement for professional drivers, community consent for testing, and the equity of training data. The chapter argues that the trolley problem has consumed disproportionate public attention and diverted focus from these more consequential governance questions.

12. "Under the EU AI Act, a social scoring system used by a government to rate citizens' behavior would be classified as high-risk AI requiring a conformity assessment."

Answer **False.** *Explanation:* Section 19.6 classifies government social scoring systems as *unacceptable risk* — the highest category, which means they are *prohibited*, not merely subject to conformity assessments. The EU AI Act bans AI systems that score individuals based on social behavior or personal characteristics for general-purpose government assessment, considering them fundamentally incompatible with European values of human dignity and non-discrimination. High-risk systems (the next tier down) are permitted but subject to stringent requirements.

13. "The MIT Moral Machine experiment demonstrated that there is a universal set of moral preferences that could serve as the basis for programming autonomous vehicle decision-making."

Answer **False.** *Explanation:* Section 19.4 reports the opposite finding: the Moral Machine experiment revealed significant cross-cultural variation in moral preferences. While some preferences were broadly shared (a general preference for saving more lives over fewer), others varied dramatically across cultural clusters — including preferences about age, social status, and the relative value of passengers versus pedestrians. The chapter argues that this variation makes it inappropriate to directly encode crowd-sourced moral preferences into autonomous systems, because doing so would impose one culture's moral preferences on others.

14. "Human-on-the-loop oversight means that a human monitors the autonomous system's operation and can intervene if necessary, but the system operates independently by default."

Answer **True.** *Explanation:* Section 19.5 distinguishes three oversight models. Human-in-the-loop requires human approval before the system acts. Human-on-the-loop means the system acts independently but a human monitors its operation and can intervene, veto, or override if problems arise. Human-out-of-the-loop means the system operates without any human monitoring or intervention capacity. The "on-the-loop" model is considered appropriate for systems where speed is important but the consequences of error are significant enough to warrant human oversight capability.
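
The difference between the three models is, at bottom, a difference in control flow, which the minimal sketch below makes concrete. The function names and the approval/halt hooks are hypothetical placeholders, not terms from the chapter.

```python
# Minimal sketch of the three oversight models distinguished in Section 19.5.
# The action/approval/halt hooks are hypothetical placeholders for illustration.
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "human-in-the-loop"          # human must approve before the system acts
    ON_THE_LOOP = "human-on-the-loop"          # system acts by default; human monitors and may veto
    OUT_OF_THE_LOOP = "human-out-of-the-loop"  # no human monitoring or intervention capacity

def execute(action, model, human_approves, human_requests_halt):
    """Run one proposed action under the given oversight model."""
    if model is Oversight.IN_THE_LOOP:
        # Nothing happens until a human explicitly approves the action.
        return action() if human_approves(action) else "blocked: no human approval"
    if model is Oversight.ON_THE_LOOP:
        # The system proceeds unless the monitoring human intervenes.
        return "halted by human override" if human_requests_halt(action) else action()
    # OUT_OF_THE_LOOP: the system acts with no human checkpoint at all.
    return action()
```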

15. "The 'responsibility gap' in autonomous systems refers to the situation where an autonomous system causes harm but no existing legal framework clearly assigns responsibility to any human or organization."

Answer **True.** *Explanation:* Section 19.5 describes the responsibility gap (a concept introduced by philosopher Andreas Matthias) as the condition that arises when an autonomous system makes decisions that its developers could not have specifically predicted, that its operators did not specifically authorize, and for which no existing legal framework clearly assigns responsibility to any party. This builds on the accountability gap from Chapter 17 but is specific to autonomous systems that act in the physical world with consequences that may be irreversible. The responsibility gap is particularly acute at Levels 4-5, where no human is meaningfully "in the loop" at the moment of decision.

Section 3: Short Answer (2 points each)

16. Explain the difference between moral agency and moral patiency in the context of autonomous systems. Why does the chapter argue that the question of moral patiency is currently less relevant than the question of moral responsibility?

Sample Answer: A moral agent is an entity capable of making moral decisions and being held responsible for them — traditionally, this requires consciousness, intentionality, and the capacity for moral reasoning. A moral patient is an entity that can be wronged — that has interests or experiences that deserve moral consideration (such as the capacity for suffering). In the autonomous systems context, the moral agency question asks: Can a machine be held morally responsible for its actions? The moral patiency question asks: Can a machine be wronged — does it have interests that deserve protection?

The chapter argues that moral patiency is currently less relevant because existing autonomous systems — self-driving cars, diagnostic AI, weapons systems — do not have subjective experiences, consciousness, or the capacity for suffering. They cannot be wronged in any meaningful sense. The moral agency question, by contrast, has immediate practical consequences: if machines cannot be moral agents, then responsibility for their actions must reside with humans — developers, deployers, or operators. This makes moral agency directly relevant to the design of accountability frameworks, while moral patiency remains a philosophical question without immediate governance implications.

*Key points for full credit:*
- Clear distinction between moral agency and moral patiency
- Application to autonomous systems
- Explanation of why responsibility is the more pressing governance question

17. Section 19.2.2 discusses the real engineering constraints of autonomous vehicles, including perception reliability, edge cases, and operational design domains. Explain why the concept of "operational design domain" (ODD) is important for governance and why a vehicle's behavior at the boundary of its ODD is a critical safety challenge.

Sample Answer: An operational design domain (ODD) is the specific set of conditions under which an autonomous vehicle system is designed to operate — including road types, weather conditions, speed ranges, geographic areas, and lighting conditions. No current autonomous system can operate safely in all conditions; each is validated within a defined ODD.

The ODD concept is critical for governance because it determines where and when autonomous vehicles can safely operate — and where they cannot. Regulators must understand a system's ODD to decide where deployment is permitted, what disclosures are required to passengers, and what safety standards apply. If an autonomous vehicle is approved for highway driving in clear weather (its ODD), it must not be operated on rural roads in snow (outside its ODD) without additional validation.

The boundary of the ODD is the most dangerous zone because the vehicle must transition from autonomous operation to either human control or a safe stop. If the transition is not handled properly — the system encounters conditions outside its ODD but does not respond appropriately, or the human is not prepared to take over (the vigilance problem) — the result can be catastrophic. The Uber fatality in Tempe, Arizona, involved a system that encountered a situation its perception system could not handle: a pedestrian crossing in an unexpected location under low-light conditions.

*Key points for full credit:*
- Defines ODD and its governance significance
- Explains the boundary problem (transition from autonomous to human control or safe stop)
- Connects to the vigilance problem or a real-world example
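
A minimal sketch of the boundary logic described in this answer is shown below, assuming a deliberately simplified ODD (highway, clear weather, daylight) and a fixed takeover window; all thresholds and signal names are illustrative assumptions rather than values from any real vehicle stack.

```python
# Sketch of operational design domain (ODD) boundary handling. The ODD checks,
# the 10-second takeover window, and the condition fields are all illustrative
# assumptions used to show the Level 3 vs. Level 4 difference at the boundary.
from dataclasses import dataclass

@dataclass
class Conditions:
    road_type: str   # e.g. "highway", "rural"
    weather: str     # e.g. "clear", "snow"
    lighting: str    # e.g. "daylight", "low_light"

def within_odd(c: Conditions) -> bool:
    """Is the vehicle inside its validated operating envelope?"""
    return c.road_type == "highway" and c.weather == "clear" and c.lighting == "daylight"

def handle_boundary(c: Conditions, driver_takes_over: bool, seconds_waited: float,
                    takeover_window: float = 10.0) -> str:
    """Decide what happens when conditions approach or leave the ODD."""
    if within_odd(c):
        return "continue autonomous operation"
    # Outside the ODD: request a takeover first.
    if driver_takes_over:
        return "hand control to human driver"
    if seconds_waited < takeover_window:
        return "takeover requested; waiting"
    # A Level 3 system relies on the human responding in time; a Level 4 system
    # must instead perform a minimal-risk maneuver (e.g. a controlled safe stop).
    return "execute minimal-risk maneuver (safe stop)"
```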

18. The Campaign to Stop Killer Robots argues for a preemptive ban on lethal autonomous weapons. Summarize their strongest argument and identify one counterargument from Section 19.3.

Sample Answer: The Campaign's strongest argument is the "human dignity" argument: the decision to take a human life is of such fundamental moral significance that it must be made by a being capable of moral judgment — of understanding the gravity of the act, exercising compassion, and bearing moral responsibility for the consequences. Delegating this decision to a machine, regardless of its accuracy, treats human life as a technical optimization problem rather than a moral concern. This is fundamentally incompatible with international humanitarian law's requirement for human judgment in the application of force.

A counterargument from Section 19.3: Proponents of LAWS argue that human soldiers commit war crimes — driven by fear, rage, fatigue, and desire for revenge — that autonomous systems would not. If a LAWS could reliably distinguish combatants from civilians and apply proportional force, it might cause fewer civilian casualties than human soldiers do in practice. The moral question, in this view, is not whether machines have moral agency but whether the outcomes they produce are more consistent with humanitarian values than the outcomes humans actually produce. If autonomous weapons reduce civilian deaths, then rejecting them on principle — while accepting the higher civilian toll of human decision-making — may itself be morally questionable.

*Key points for full credit:*
- Accurately summarizes the Campaign's strongest argument
- Identifies a specific counterargument from the chapter
- Presents both positions fairly

19. The EU AI Act requires human oversight for high-risk AI systems. Using the concepts of the vigilance problem and automation bias, explain why a legal requirement for human oversight does not automatically guarantee meaningful human control.

Sample Answer: A legal requirement for human oversight ensures that a human is formally designated to monitor an autonomous system — but it does not ensure that the oversight is substantive. Two cognitive phenomena undermine the assumption that designated oversight equals effective oversight.

First, the vigilance problem: humans are poor at maintaining sustained attention to systems that operate correctly most of the time. A human overseeing a high-risk AI system that works well 99.5% of the time will, over time, reduce their monitoring intensity. When the 0.5% failure occurs, the human may not be cognitively prepared to intervene.

Second, automation bias: even when humans are monitoring, they tend to defer to the system's recommendations. Studies consistently show that human reviewers accept automated outputs at rates far exceeding what would be expected if they were exercising independent judgment. The human is present and formally "in the loop" but functionally serves as a rubber stamp.

Together, these phenomena mean that a legal oversight requirement can create a misleading appearance of accountability: the system is technically supervised, but the supervision is not meaningful. The EU AI Act's requirement must be accompanied by institutional design that supports genuine oversight — including training, rotation of oversight personnel, monitoring of override rates, and system designs that actively solicit human engagement rather than permitting passive monitoring.

*Key points for full credit:*
- Explains the vigilance problem and automation bias
- Connects both to why legal oversight requirements are insufficient alone
- Proposes or references institutional design solutions
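
One of the institutional measures mentioned above, monitoring of override rates, can be sketched very simply. The 1% floor and the minimum sample size below are illustrative assumptions, not figures from the chapter or the Act.

```python
# Sketch of an override-rate monitor, following the sample answer's suggestion
# that override rates be tracked as an oversight-quality signal. The threshold
# and minimum sample size are illustrative assumptions.

def override_rate(decisions: list[dict]) -> float:
    """Fraction of AI recommendations that the human reviewer overrode."""
    if not decisions:
        return 0.0
    overridden = sum(1 for d in decisions if d["human_overrode"])
    return overridden / len(decisions)

def flag_possible_automation_bias(decisions: list[dict],
                                  min_sample: int = 200,
                                  floor: float = 0.01) -> bool:
    """A near-zero override rate across many reviewed decisions is a warning sign
    that oversight has become a rubber stamp, not proof that the system is perfect."""
    return len(decisions) >= min_sample and override_rate(decisions) < floor
```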

Section 4: Applied Scenario (5 points)

20. Read the following scenario and answer all parts.

Scenario: MedAssist Level 4

City General Hospital deploys MedAssist, a Level 4 diagnostic AI system, in its emergency department. MedAssist monitors patient vitals in real time, analyzes imaging and lab results, and can initiate treatment protocols for time-critical conditions (stroke, heart attack, sepsis) without waiting for physician approval. A physician is "on the loop" — monitoring MedAssist's decisions via a dashboard and able to override, but not required to approve each action in advance.

At 3:17 a.m. on a Saturday, MedAssist detects signs of a stroke in Patient A and initiates a thrombolytic (clot-dissolving) treatment protocol. The on-duty physician, Dr. Rivera, is simultaneously managing two other critical patients and does not review MedAssist's decision before it is executed. The treatment is the correct protocol for ischemic stroke — but Patient A is actually experiencing a hemorrhagic stroke (bleeding, not clotting). Thrombolytic treatment worsens the hemorrhage. Patient A suffers permanent brain damage.

MedAssist's imaging analysis misclassified the hemorrhagic stroke as ischemic — an error that a human radiologist would likely have caught by examining the full imaging context. Dr. Rivera would have caught the error if she had reviewed the images, but she was occupied with other patients. The hospital's staffing model was designed around the assumption that MedAssist would reduce the need for physician involvement in routine protocols.

(a) Map the accountability chain. Identify at least four actors, their roles, and their likely claims of non-liability. (1 point)

(b) Classify the oversight model in this scenario using the framework from Section 19.5. Was the oversight meaningful? Apply the vigilance problem and automation bias concepts. (1 point)

(c) Analyze the hospital's staffing decision — reducing physician staffing based on the assumption that MedAssist would handle routine cases. What governance principle from the chapter does this violate? (1 point)

(d) Apply the liability frameworks from Chapter 17 (strict liability, negligence, product liability) to this scenario. Which framework best fits, and who should bear primary liability? (1 point)

(e) Propose three governance measures that would prevent this scenario from recurring. Each measure should address a different element of the problem (technical, institutional, and regulatory). (1 point)

Sample Answer:

**(a)** Accountability chain:

- **MedAssist developer:** Built the diagnostic system. Claim: "MedAssist is a decision-support tool. The physician is on the loop and responsible for oversight. We validated the system and achieved 97% accuracy on ischemic stroke classification."
- **City General Hospital (deployer):** Deployed the system and set the staffing model. Claim: "We followed the vendor's deployment guidelines and maintained a physician on the loop as required."
- **Dr. Rivera (on-duty physician):** Was formally responsible for oversight. Claim: "I was managing two other critical patients. The hospital's staffing model assumed MedAssist would handle routine cases. I could not review every decision in real time."
- **Hospital administration (policy-maker):** Set the staffing model that reduced physician oversight capacity. Claim: "We followed industry best practices and the vendor's recommendations for staffing levels with Level 4 AI."
- **Radiology AI component (sub-system):** Misclassified the stroke type. No human actor directly responsible for this specific error — it emerged from the training data and model limitations.

**(b)** The scenario uses a human-on-the-loop model: MedAssist acts independently, and Dr. Rivera monitors and can override but does not approve each action. The oversight was not meaningful for two reasons. First, the vigilance problem: Dr. Rivera was occupied with other patients and could not monitor MedAssist's decisions in real time — a predictable consequence of the hospital's staffing model. Second, automation bias: the staffing model was designed around the assumption that MedAssist's decisions would generally be correct, creating institutional pressure against active physician review. The oversight was formally present but functionally absent at the moment it mattered most.

**(c)** The hospital's staffing decision violates Dr. Adeyemi's principle: "Every increase in autonomy is also an increase in the governance demand." By reducing physician staffing in response to MedAssist's deployment, the hospital did the opposite — it reduced governance capacity as autonomy increased. The assumption that AI would reduce the need for human oversight inverts the relationship between autonomy and governance. A Level 4 system operating in a life-or-death domain requires *more* oversight infrastructure, not less — including sufficient physician staffing to actually review AI decisions, not just nominally monitor them.

**(d)** Three frameworks apply:

- **Strict liability** would hold the developer (or deployer) responsible regardless of fault. This is the strongest fit for the patient's perspective: Patient A suffered catastrophic harm from a system that was deployed to serve them. Strict liability incentivizes the developer to ensure the system's error rate is as low as possible and incentivizes the hospital to maintain adequate oversight.
- **Negligence** would require showing that someone failed to exercise reasonable care. The hospital's staffing decision — reducing physician coverage based on AI capability — could constitute negligence if a reasonable hospital administrator would have maintained higher staffing levels. The developer could face negligence claims if the system was deployed without adequate warning about misclassification rates for hemorrhagic vs. ischemic stroke.
- **Product liability** under a design defect theory could apply to MedAssist if the imaging classification system was unreasonably prone to the specific error that occurred.

Primary liability should be shared between the hospital (for the staffing decision that made meaningful oversight impossible) and the MedAssist developer (for deploying a system that initiates irreversible treatment without adequate safeguards for misclassification). This dual assignment addresses both the institutional and technical dimensions of the failure.

**(e)** Three governance measures:

1. **Technical:** MedAssist should implement a mandatory confirmation step for irreversible treatments — requiring a brief physician review window (even 90 seconds) before initiating thrombolytic therapy, with an escalation alert if the physician does not respond. The system should also display a confidence score and the specific imaging evidence supporting its classification, making it easier for the physician to identify potential errors quickly.
2. **Institutional:** The hospital should establish a minimum physician-to-AI-action ratio — ensuring that staffing levels always allow physicians to meaningfully review AI-initiated treatments. Staffing models should be validated through simulation exercises that test whether physicians can actually perform oversight under realistic workload conditions. Override rates should be monitored as an institutional metric: if physicians never override MedAssist, that is a signal of automation bias, not system perfection.
3. **Regulatory:** Health regulators should require that Level 4 medical AI systems undergo specific validation for high-consequence edge cases (such as hemorrhagic vs. ischemic stroke distinction) before deployment, with mandatory reporting of misclassification incidents. Regulators should also set minimum human oversight standards for AI-initiated treatments, including staffing requirements that the deploying institution must meet as a condition of deployment.
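
The "mandatory confirmation step" proposed in measure (e)(1) can be sketched as follows; the 90-second window, the escalation hook, and the protocol list are illustrative assumptions rather than a specification for a real clinical system.

```python
# Sketch of the confirmation-window measure from (e)(1): hold irreversible
# treatments for physician review and escalate if no one responds. The window
# length, polling interval, and protocol list are illustrative assumptions.
import time

IRREVERSIBLE_PROTOCOLS = {"thrombolytic_therapy"}  # assumed list for illustration

def initiate_protocol(protocol: str, physician_confirms, escalate,
                      review_window_s: float = 90.0, poll_s: float = 5.0) -> str:
    """Hold irreversible treatments for physician review; escalate if no response."""
    if protocol not in IRREVERSIBLE_PROTOCOLS:
        return f"{protocol}: initiated autonomously"

    waited = 0.0
    while waited < review_window_s:
        decision = physician_confirms(protocol)  # returns "approve", "reject", or None
        if decision == "approve":
            return f"{protocol}: initiated after physician confirmation"
        if decision == "reject":
            return f"{protocol}: blocked by physician"
        time.sleep(poll_s)
        waited += poll_s

    # No response within the window: do NOT proceed silently with an
    # irreversible treatment; page a backup clinician instead.
    escalate(protocol)
    return f"{protocol}: held, escalated to backup physician"
```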

Scoring & Review Recommendations

| Score Range | Assessment | Next Steps |
| --- | --- | --- |
| Below 50% (< 14 pts) | Needs review | Re-read Sections 19.1-19.3 carefully, redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas, focus on Part B exercises for applied practice |
| 70-85% (20-23 pts) | Solid understanding | Ready to proceed to Part 4; review any missed topics briefly |
| Above 85% (24-28 pts) | Strong mastery | Proceed to Part 4: Governance and Regulation |

| Section | Points Available |
| --- | --- |
| Section 1: Multiple Choice | 10 points (10 questions x 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions x 1 pt) |
| Section 3: Short Answer | 8 points (4 questions x 2 pts) |
| Section 4: Applied Scenario | 5 points (5 parts x 1 pt) |
| **Total** | **28 points** |