Chapter 33 Quiz: Ethics of AI Use — Disclosure, Attribution, and Fairness

15 questions covering disclosure, attribution, fairness, deception, and organizational ethics.


Question 1

The core ethical question about AI disclosure is:

A) Whether AI use violates intellectual property law
B) When using AI assistance without disclosing it constitutes a form of deception, given the context and the recipient's reasonable expectations
C) Whether AI tools are accurate enough to be used in professional work
D) Whether organizations have the right to restrict AI use

Answer **B — When using AI assistance without disclosing it constitutes a form of deception, given the context and the recipient's reasonable expectations.** The disclosure question is fundamentally about whether non-disclosure in a specific context creates a misleading impression about the work's origin, the professional's contribution, or the nature of what is being delivered. The answer varies by context and depends on what reasonable parties expect. This is not a single rule — it requires contextual judgment guided by the principles in Section 1.

Question 2

The disclosure sliding scale (AI-polished → AI-structured → AI-drafted → AI-generated) suggests that:

A) Only fully AI-generated content requires disclosure
B) The appropriate level of disclosure should be calibrated to the degree of AI's substantive contribution — minimal for mechanical polishing, increasing as AI's role becomes more generative
C) All AI involvement requires the same level of disclosure
D) Disclosure is only required when the recipient specifically asks

Answer **B — The appropriate level of disclosure should be calibrated to the degree of AI's substantive contribution — minimal for mechanical polishing, increasing as AI's role becomes more generative.** The sliding scale captures that AI involvement exists on a spectrum. Using AI to catch grammar errors is categorically different from AI generating the analysis. The former typically requires no disclosure in most professional contexts; the latter typically does. The relevant variable is whether AI's contribution to the substantive content would be material to how others assess the work.

Question 3

Under current law in most major jurisdictions (as of 2026), AI-generated content:

A) Is protected by copyright with the AI model as the author
B) Is protected by copyright with the AI tool's developer as the author
C) Cannot be protected by copyright because copyright requires human authorship
D) Is in the public domain but cannot be used commercially

Answer **C — Cannot be protected by copyright because copyright requires human authorship.** The current consensus in US, EU, and most other major legal systems: copyright requires human creative authorship. AI output without sufficient human creative contribution is not protectable as intellectual property. This does not mean you can't use AI output — you can — but you cannot copyright it as if it were purely your own intellectual creation, and the implications for attribution and responsibility are significant.

Question 4

The responsibility principle in personal AI ethics states:

A) AI developers are responsible for errors in AI-generated content
B) Organizations using AI are more responsible than individuals
C) You remain fully responsible for AI-assisted work product — AI involvement does not transfer, dilute, or share your professional accountability
D) Responsibility is shared between the practitioner and the AI tool

Answer **C — You remain fully responsible for AI-assisted work product — AI involvement does not transfer, dilute, or share your professional accountability.** Responsibility follows the person, not the tool. If AI generates an error in a report you submit, you are responsible. If AI generates code with a security vulnerability that you deploy, you are responsible. "AI wrote it" does not reduce professional, legal, or ethical accountability. This principle is the foundation of the accountability structures that make professional services work.

Question 5

The FTC's guidance on AI-generated marketing content is most clearly relevant to:

A) Whether AI can be used to write advertising copy
B) Whether AI-generated fake reviews and testimonials that consumers believe are genuine constitute deception
C) Whether AI should be used to personalize ads to individual consumers
D) Whether marketers need to disclose when they use AI to optimize ad placement

Answer **B — Whether AI-generated fake reviews and testimonials that consumers believe are genuine constitute deception.** The FTC's existing endorsement guidelines require disclosure of material connections between endorsers and brands. AI-generated fake reviews that represent non-existent or fabricated customer experiences — presented as genuine endorsements — are deceptive under this framework. This is not an area of genuine ambiguity: fake reviews are fraudulent regardless of whether AI or humans generate them.

Question 6

"But everyone does it" (the argument that universal AI use without disclosure makes disclosure unnecessary) is considered ethically insufficient because:

A) AI use is not actually universal, so the premise is false
B) The relevant ethical question is what the context requires and what reasonable expectations are — not what others do. Widespread practice doesn't create ethical license
C) Professional associations have explicitly rejected this argument
D) Only clearly harmful practices are governed by ethical norms, not accepted practices

Answer **B — The relevant ethical question is what the context requires and what reasonable expectations are — not what others do. Widespread practice doesn't create ethical license.** "Everyone speeds on the highway" doesn't make speeding ethical or legal. "Everyone uses AI without disclosure in this context" would mean disclosure norms have changed — which is possible — but requires asking whether the context's disclosure requirements have actually changed, not just whether others are complying. Norms evolve through explicit community consensus, not through individual non-compliance becoming widespread.

Question 7

Ghost-writing traditions are relevant to AI attribution ethics because:

A) Ghost-writing and AI writing are identical in all ethical respects
B) Ghost-writing should be eliminated now that AI can do the same work
C) Ghost-writing shows that human writing assistance is already accepted in some contexts, and the relevant question is whether AI assistance is materially different from existing norms for assistance in each specific context
D) Ghost-writing is always unethical and AI writing is always acceptable

Answer **C — Ghost-writing shows that human writing assistance is already accepted in some contexts, and the relevant question is whether AI assistance is materially different from existing norms for assistance in each specific context.** Ghost-writing exists on a spectrum — from acknowledged co-authorship to behind-the-scenes assistance — and has different norms in different contexts (political speeches, celebrity books, executive communications). AI writing assistance is appropriately evaluated in relation to those existing norms: in contexts where human ghost-writing is accepted, AI writing assistance may not be categorically different. In contexts where individual voice and contribution are the point, it may be.

Question 8

The fairness concern about uneven AI access in competitive contexts is most acute when:

A) AI tools are used in large organizations
B) Competitive contexts assume roughly equal resource access and AI differential represents a significant advantage that some competitors cannot match
C) AI use gives any advantage whatsoever
D) AI-assisted work is higher quality than non-AI-assisted work

Answer **B — Competitive contexts assume roughly equal resource access and AI differential represents a significant advantage that some competitors cannot match.** The fairness concern is sharpest where the competitive framework assumes a level playing field and AI access is significantly unequal. Standardized assessments, academic admissions, grant competitions, and RFP processes all assume some baseline of equal competition. When AI provides large advantages to some competitors but not others, the competition may not be measuring what it intends to measure. This is less concerning in contexts where tool advantages are explicit and expected.

Question 9

Which of the following is a clear deception bright line rather than a nuanced disclosure question?

A) Not mentioning in a cover letter that AI helped with grammar and phrasing
B) Using AI to help draft a client report that you then substantially revise
C) Running a social media account attributed to a human persona that is actually entirely AI-generated
D) Having AI suggest some ideas for a presentation that you developed and presented yourself

Answer **C — Running a social media account attributed to a human persona that is actually entirely AI-generated.** This is active deception: representing AI content as human content with the intent to maintain the audience's false belief that they are reading from a real person. This is not a nuanced disclosure question — the intent to deceive and the deliberate misrepresentation put it on the other side of the bright line from ordinary non-disclosure. Options A, B, and D are all in the disclosure nuance zone, with varying appropriate responses based on context.

Question 10

The disclosure-resolution test asks:

A) Whether disclosing AI use increases or decreases trust with the recipient
B) Whether disclosure to the relevant audience would resolve the ethical problem — if disclosure resolves it, the issue is transparency; if not, there is a deception problem that disclosure cannot fix
C) Whether AI disclosures are required by law in a given context
D) Whether the recipient wants to know about AI use

Answer **B — Whether disclosure to the relevant audience would resolve the ethical problem — if disclosure resolves it, the issue is transparency; if not, there is a deception problem that disclosure cannot fix.** The test distinguishes between two types of ethical concerns: those about transparency (the fix is disclosure) and those about deception (the harm is inherent in what was done, and disclosure doesn't undo it). A fake review disclosed as AI-generated is still a fake review — the problem isn't the non-disclosure, it's the fabrication. Applying this test to ambiguous cases helps identify whether you're dealing with a transparency question or a deeper ethical problem.

Question 11

Academic institutions' varying AI use policies reflect:

A) A lack of seriousness about AI ethics in academic institutions
B) The appropriate range of contextual norms — different institutions have different assessments of how AI involvement affects what assessments are designed to evaluate
C) Confusion that will soon be resolved into a single universal standard
D) Whether students in those institutions are allowed to use computers

Answer **B — The appropriate range of contextual norms — different institutions have different assessments of how AI involvement affects what assessments are designed to evaluate.** Academic AI policies vary because legitimate questions about AI use in academic contexts have different answers in different educational contexts — a coding bootcamp that teaches AI tools is different from a law school exam in legal reasoning. The variation is not arbitrary; it reflects genuine differences in educational purpose. The practical implication for students is to know their specific institution's current policy.

Question 12

Organizational AI ethics includes the employer's right to know because:

A) Employers are legally entitled to detailed information about all employee work practices
B) Material AI involvement in work that employers evaluate as personal professional effort creates a misleading representation about the nature of the employee's contribution
C) Organizations must track all AI use for regulatory compliance
D) Employees who use AI are doing less work and should be paid less

Answer **B — Material AI involvement in work that employers evaluate as personal professional effort creates a misleading representation about the nature of the employee's contribution.** Employers evaluating performance based on output quality and quantity are making assessments that, implicitly, involve the employee's professional capability. When AI substantially generates the output and the employer doesn't know, those assessments are based on a false premise. This is not an argument for disclosing every spell-check use — it is an argument for transparency about material AI involvement in contexts where it affects how employers understand the employee's work.

Question 13

Team fairness concerns about uneven AI usage within organizations most commonly arise when:

A) Some team members have newer computers than others
B) Performance evaluations based on output metrics favor AI users in ways that may not reflect underlying contribution differences, and team norms around AI use haven't been explicitly established
C) AI users produce lower quality work than non-AI users
D) Organizations require all employees to use the same AI tools

Answer **B — Performance evaluations based on output metrics favor AI users in ways that may not reflect underlying contribution differences, and team norms around AI use haven't been explicitly established.** When some team members use AI extensively and others don't, output-based performance evaluation may reward AI tool access rather than genuine skill or effort differences. This is a fairness concern within teams that is largely invisible until explicitly examined. The appropriate response is not to prohibit AI use but to establish explicit team norms about AI use and to ensure evaluation frameworks account for these dynamics.

Question 14

Why is developing a personal AI ethics framework described as producing "principled frameworks rather than waiting for comprehensive rules"?

A) Rules are inherently inferior to frameworks as a form of guidance
B) The ethical landscape is too dynamic and context-specific for comprehensive rules to be adequate — practitioners need principles they can apply to situations the rules don't cover
C) AI ethics rules are written by people with conflicts of interest
D) Comprehensive AI ethics rules will never be possible

Answer **B — The ethical landscape is too dynamic and context-specific for comprehensive rules to be adequate — practitioners need principles they can apply to situations the rules don't cover.** Disclosure norms will evolve. New forms of AI use will create new ethical questions. Legal frameworks will develop. A rule that was adequate in 2024 may be inadequate in 2026. Practitioners who have internalized the principles behind the rules — why disclosure matters, what accountability requires, what deception means — can navigate new situations. Those who have only learned rules will find them failing in novel contexts.

Question 15

The key difference between "AI writing assistance" and "presenting AI work as your own" is:

A) The length of the document — shorter AI contributions are acceptable
B) Whether the AI or a human did the final editing
C) The degree of substantive contribution and whether that contribution, in context, would be material to how others assess the authorship and intellectual origin of the work
D) Whether the work was created for professional or personal purposes

Answer **C — The degree of substantive contribution and whether that contribution, in context, would be material to how others assess the authorship and intellectual origin of the work.** The meaningful distinction is not binary (AI touched it / AI didn't touch it) but contextual and substantive: Did AI make a material contribution to the intellectual content being attributed to you? Would that contribution be material to how the audience assesses the work's origin? In contexts where "this is my analysis" is the premise — academic, professional services, published thought leadership — substantial AI generation of the analysis without disclosure misrepresents the work's origin in a way that matters.