Chapter 34 Quiz: Legal and Intellectual Property Considerations

Reminder: This quiz is for educational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance on specific legal questions.

15 questions covering copyright, IP, privacy, liability, and regulatory considerations.


Question 1

Under current law in most major jurisdictions as of 2026, purely AI-generated content (with minimal human creative input):

A) Is automatically protected by copyright as the property of the AI tool user
B) Is protected by copyright as the property of the AI developer
C) Cannot be protected by copyright because copyright requires human authorship
D) Is protected by copyright for 10 years, after which it enters the public domain

Answer: **C — Cannot be protected by copyright because copyright requires human authorship.** The U.S. Copyright Office's position, affirmed in guidance and registration decisions, is that content generated autonomously by AI without sufficient human creative contribution is not eligible for copyright protection. The same principle applies in most other major jurisdictions. This means pure AI output can be used by anyone without seeking permission — a risk (your AI-generated assets can be freely copied) and an opportunity (you can freely use others' purely AI-generated, uncopyrighted material).

Question 2

The "sufficient human creativity" standard for AI-assisted copyright works:

A) Means any human involvement creates copyright protection
B) Means only fully human-authored work can be protected
C) Is a fact-specific threshold examining whether the human made substantial creative choices in directing, selecting, and arranging the expressive elements
D) Is irrelevant because all AI-assisted work is automatically protected

Answer: **C — Is a fact-specific threshold examining whether the human made substantial creative choices in directing, selecting, and arranging the expressive elements.** Courts and the Copyright Office look at the degree of human creative contribution — detailed creative prompts, significant selection and modification of outputs, combination with original human-authored elements — rather than simply asking whether a human was involved at all. The threshold is not yet definitively established and continues to be tested in litigation.

Question 3

What is the trade secret risk of entering proprietary business information into a consumer AI tool?

A) There is no risk because AI tools are secure and confidential
B) Consumer AI tools may use input data to train models, which could potentially expose trade secret information to others indirectly; this transmission may also constitute a confidentiality breach under employment and confidentiality agreements
C) Trade secrets are only at risk if the AI tool explicitly shares information with competitors
D) The risk only applies to patented inventions, not other confidential business information

Answer: **B — Consumer AI tools may use input data to train models, which could potentially expose trade secret information to others indirectly; this transmission may also constitute a confidentiality breach under employment and confidentiality agreements.** The trade secret risk is twofold: (1) entering trade secret information into a consumer tool may constitute a failure to maintain reasonable confidentiality measures, potentially undermining trade secret protection, and (2) it may violate NDAs, employment IP agreements, or other confidentiality provisions. The risk is not hypothetical — it is a recognized legal concern that has been addressed in professional guidance on AI use.

Question 4

Why must PHI (Protected Health Information) never be entered into consumer AI tools?

A) Because consumer AI tools are not accurate enough for medical information
B) Because HIPAA requires Business Associate Agreements for any vendor handling PHI, and consumer AI tools do not provide HIPAA-compliant BAAs — making such use a compliance violation with potential civil and criminal penalties
C) Because medical information is copyrighted and cannot be reproduced
D) Because the AI may provide incorrect medical information to other users

Answer: **B — Because HIPAA requires Business Associate Agreements for any vendor handling PHI, and consumer AI tools do not provide HIPAA-compliant BAAs — making such use a compliance violation with potential civil and criminal penalties.** HIPAA's Security Rule and Privacy Rule require specific contractual protections (BAAs) for any vendor handling PHI on behalf of a covered entity. Consumer AI tools don't have these. Using PHI in consumer AI tools is not a gray area — it is a clear compliance violation with penalties up to $1.9 million per violation category per year (civil) and potential criminal penalties for willful disclosure.
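
To make the rule operational, some teams run a pre-submission check before any prompt leaves the organization. The sketch below is a minimal illustration, assuming a hypothetical `check_prompt` helper and a deliberately tiny pattern set; regex matching catches only PHI-shaped strings (SSNs, record numbers, dates of birth) and is no substitute for a BAA-covered workflow and real compliance tooling.

```python
import re

# Illustrative patterns only -- real PHI spans far more identifiers
# (names, addresses, device IDs, photos, etc.) than a regex can catch.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return labels of any PHI-shaped patterns found in the prompt text."""
    return [label for label, pattern in PHI_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize the visit notes for patient MRN: 00482913, DOB: 04/12/1987."
hits = check_prompt(prompt)
if hits:
    # Block the submission; route the task to an approved, BAA-covered tool.
    print(f"Blocked: possible PHI detected ({', '.join(hits)})")
```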

Question 5

Under the EU AI Act, which category of AI use is subject to strict compliance requirements including transparency, accuracy, human oversight, and documentation?

A) Minimal risk (most AI applications)
B) High risk (AI in employment, essential services, critical infrastructure, healthcare, etc.)
C) Limited risk (AI chatbots requiring disclosure)
D) All commercial AI applications regardless of context

Answer: **B — High risk (AI in employment, essential services, critical infrastructure, healthcare, etc.).** The EU AI Act's risk-based framework reserves strict requirements for high-risk applications: AI systems that could have significant consequences for people's rights, safety, or wellbeing. Employment applications, credit scoring, healthcare diagnosis support, educational assessment, law enforcement, and similar domains are high-risk under the Act. Most everyday productivity uses of AI are minimal-risk and have no specific compliance requirements.

Question 6

The open source license contamination risk in AI-generated code refers to:

A) The possibility that AI-generated code will accidentally include malware
B) The theoretical possibility that AI-generated code substantially derived from GPL-licensed training material could require derivative works to be licensed under copyleft terms
C) The risk that open source contributors will steal AI-generated code
D) The risk that AI tools will be trained on malicious open source code

Answer: **B — The theoretical possibility that AI-generated code substantially derived from GPL-licensed training material could require derivative works to be licensed under copyleft terms.** Copyleft licenses like GPL require derivative works to be licensed under the same terms. If AI-generated code is substantially derived from GPL-licensed training material, there is at least a theoretical risk that copyleft obligations attach to code that incorporates it. This risk is contested, not yet definitively adjudicated, and tool-specific (some tools offer IP indemnification). For commercial software, it warrants review by IP counsel.
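
A coarse first pass at the verbatim end of this risk can be automated. The sketch below is a minimal illustration, assuming a hypothetical `flag_copyleft_text` helper: it flags AI-suggested snippets that carry copyleft license text verbatim, which says nothing about the harder substantial-similarity question; that still calls for dedicated license-scanning tools and IP counsel.

```python
# Coarse first-pass check: flag AI-suggested snippets that contain verbatim
# copyleft license text. This catches only copied headers/identifiers, not
# substantial similarity to GPL-licensed code.
COPYLEFT_MARKERS = [
    "GNU General Public License",
    "GNU Lesser General Public License",
    "GNU Affero General Public License",
    "SPDX-License-Identifier: GPL",
]

def flag_copyleft_text(snippet: str) -> list[str]:
    """Return any copyleft markers that appear verbatim in the snippet."""
    lowered = snippet.lower()
    return [marker for marker in COPYLEFT_MARKERS if marker.lower() in lowered]

suggestion = "# SPDX-License-Identifier: GPL-3.0-or-later\ndef parse(data): ..."
print(flag_copyleft_text(suggestion))  # ['SPDX-License-Identifier: GPL']
```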

Question 7

The professional liability principle for AI-assisted work states:

A) Liability is shared between the professional and the AI tool developer
B) The AI tool developer bears primary liability for errors in AI-generated professional work
C) Professional responsibility follows the professional — AI involvement does not transfer, dilute, or share accountability for work product quality
D) Professionals are not liable for errors they could not reasonably have detected in AI output

Answer: **C — Professional responsibility follows the professional — AI involvement does not transfer, dilute, or share accountability for work product quality.** The reasonable professional standard is assessed against the professional, not the tool. A lawyer who submits AI-generated fabricated citations is responsible for the breach of professional duty. An engineer who deploys AI-generated specifications without adequate review is responsible for errors. "The AI made a mistake" is not a legal defense — the question is whether the professional exercised appropriate oversight and care.

Question 8

Fair use analysis for using copyrighted text in AI prompts weighs which of the following factors?

A) Only the length of the copyrighted material used
B) Purpose and character of use, nature of the copyrighted work, amount used, and market impact
C) Only whether the use is commercial or non-commercial
D) Only whether you obtained the material through legitimate means

Answer: **B — Purpose and character of use, nature of the copyrighted work, amount used, and market impact.** The US fair use analysis considers all four factors together. No single factor is determinative. Transformative, non-commercial purposes weigh in favor of fair use; market substitution weighs against. Short excerpts from factual works for commentary or research are generally more defensible; large portions of creative works for commercial reproduction are not.

Question 9

GDPR data minimization requires that when processing personal data:

A) Organizations minimize the number of employees who can access data
B) Organizations retain data for the minimum legally required period
C) Only data strictly necessary for the specified processing purpose should be collected and used
D) Organizations use the smallest possible AI models to process personal data

Answer: **C — Only data strictly necessary for the specified processing purpose should be collected and used.** Data minimization is a core GDPR principle: you should not collect or process more personal data than is necessary for the stated purpose. In AI contexts, this means inputting only the personal data strictly necessary for the AI task at hand — not providing full contact databases when only names are needed, or full records when summaries would suffice.
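
In application code, the principle translates directly into filtering before prompt construction. Below is a minimal sketch under assumed field names and an assumed per-purpose `ALLOWED_FIELDS` policy (both hypothetical), showing a record stripped to only what the AI task needs:

```python
# Data minimization sketch: pass through only the fields the stated
# purpose requires. The allow-list is a per-purpose policy decision.
ALLOWED_FIELDS = {"first_name", "preferred_language"}  # purpose: draft a greeting

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only purpose-necessary fields."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

crm_record = {
    "first_name": "Ana",
    "last_name": "Silva",
    "email": "ana.silva@example.com",
    "phone": "+351 21 000 0000",
    "preferred_language": "pt",
    "purchase_history": ["order-1029", "order-1174"],
}

print(minimize(crm_record))  # {'first_name': 'Ana', 'preferred_language': 'pt'}
```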

Question 10

What is the most important distinction between consumer and enterprise AI tool tiers for professional data handling purposes?

A) Enterprise tools are more expensive but not meaningfully different in data handling
B) Enterprise tiers typically provide Data Processing Agreements, data retention controls, and privacy commitments that consumer tiers do not — making them potentially appropriate for regulated data that consumer tools cannot handle
C) Enterprise tools have better AI capabilities and therefore make fewer errors
D) The distinction only matters for organizations with more than 100 employees

Answer: **B — Enterprise tiers typically provide Data Processing Agreements, data retention controls, and privacy commitments that consumer tiers do not — making them potentially appropriate for regulated data that consumer tools cannot handle.** The enterprise/consumer distinction matters because legal compliance for regulated data (HIPAA, GDPR, confidentiality obligations) requires specific contractual commitments from data processors. Enterprise tiers may provide these; consumer tiers do not. The decision to use a particular enterprise tier for regulated data still requires reviewing the actual contractual terms against the applicable legal requirements — suitability cannot be assumed.

Question 11

Client contracts with AI use clauses may include which of the following provisions?

A) Only restrictions prohibiting AI use
B) Only disclosure requirements
C) A range of provisions including disclosure requirements, prohibitions on certain types of AI use, approval requirements for material AI use, and liability allocation provisions
D) Contractual clauses about AI are not yet recognized as enforceable

Answer: **C — A range of provisions including disclosure requirements, prohibitions on certain types of AI use, approval requirements for material AI use, and liability allocation provisions.** Client contracts are increasingly including AI use provisions that range from transparency requirements to restrictions to liability allocation. The specific terms vary widely by client, industry, and project context. Professionals should review client contracts for AI provisions before using AI tools on engagements, and should consider proposing appropriate AI use language in their own standard contracts.

Question 12

The EU AI Act applies to organizations based outside the EU:

A) Never — it only applies to EU-based organizations
B) When those organizations deploy AI systems that affect people within the EU, regardless of where the organization is based
C) Only when those organizations have offices in EU member states
D) Only when those organizations generate more than €50 million in annual revenue from EU activities

Answer: **B — When those organizations deploy AI systems that affect people within the EU, regardless of where the organization is based.** The EU AI Act, like GDPR, has extraterritorial reach. Any organization deploying AI that affects people within the EU is subject to the Act's applicable provisions, regardless of where the organization is incorporated or based. This is an important consideration for US-based organizations deploying customer-facing AI that reaches EU users.

Question 13

For AI-generated code in commercial software, which practice best manages legal risk?

A) Using only AI tools that are designated as "open source"
B) Documenting which portions of the codebase are AI-generated, using tools with IP indemnification for commercial products, and reviewing high-risk portions for potential open source contamination
C) Replacing all AI-generated code with human-written code before release
D) Notifying customers that the software contains AI-generated code

Answer: **B — Documenting which portions of the codebase are AI-generated, using tools with IP indemnification for commercial products, and reviewing high-risk portions for potential open source contamination.** The risk management approach combines: documentation (for defensibility and audit trail), tool selection (IP indemnification commitments from enterprise tools), and targeted review (for components where IP concerns are highest). Complete avoidance of AI-generated code is not necessary for most applications; untargeted reliance without any IP review is not adequate for commercial software with licensing obligations.
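
The documentation piece can be as lightweight as a source-comment convention plus a script that inventories tagged regions for IP review. The `# ai-generated:` marker below is a hypothetical convention, not an industry standard; a minimal sketch:

```python
import re
from pathlib import Path

# Hypothetical provenance convention: AI-generated regions carry a tag like
#   # ai-generated: tool=<name> date=<YYYY-MM-DD>
# so they can be inventoried for targeted IP review before release.
TAG = re.compile(r"#\s*ai-generated:\s*(?P<meta>.+)")

def inventory(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, metadata) for every tagged line under root."""
    found = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            match = TAG.search(line)
            if match:
                found.append((str(path), lineno, match.group("meta").strip()))
    return found

for file, lineno, meta in inventory("src"):
    print(f"{file}:{lineno}  {meta}")
```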

Question 14

When should a professional immediately involve legal counsel regarding AI use?

A) Every time they use an AI tool for professional work
B) Only when a lawsuit has already been filed
C) When they believe they may have already violated a legal obligation, when a client or third party is raising a legal concern, or when deploying AI in a high-risk domain for the first time at scale
D) Only when working in regulated industries like healthcare or finance

Answer: **C — When they believe they may have already violated a legal obligation, when a client or third party is raising a legal concern, or when deploying AI in a high-risk domain for the first time at scale.** The "when to involve counsel" framework does not require counsel for every AI interaction — that would be impractical. It is about recognizing the specific situations where self-guidance is insufficient: an actual or potential violation has already occurred, a dispute is in progress, or a material new deployment is planned in a regulated or high-risk domain. The framework lets professionals manage most AI use independently while ensuring legal support is engaged when the stakes require it.

Question 15

The most accurate description of the US legal landscape for AI as of early 2026 is:

A) Comprehensive federal legislation comparable to the EU AI Act is in place
B) Sector-specific guidance from agencies like FTC, HHS, SEC, and EEOC, active litigation on multiple fronts, evolving state legislation, and a fragmented landscape awaiting possible comprehensive federal action
C) AI law in the US is completely settled and stable
D) The US has no AI-specific legal requirements of any kind

Answer: **B — Sector-specific guidance from agencies like FTC, HHS, SEC, and EEOC, active litigation on multiple fronts, evolving state legislation, and a fragmented landscape awaiting possible comprehensive federal action.** The US AI regulatory landscape as of early 2026 is characterized by fragmentation, not comprehensiveness: sector-specific agency guidance that varies by industry, ongoing copyright and privacy litigation that has not been finally resolved, state legislation that varies by state, and significant regulatory uncertainty. This contrasts with the EU AI Act's more comprehensive framework, and it is why monitoring developments in one's specific industry sector is more tractable than attempting to track all US AI regulation.