Chapter 35 Quiz: IP, Licensing, and Legal Considerations

Test your understanding of the legal concepts covered in this chapter. Each question has one best answer unless otherwise indicated. The answer and an explanation follow each question.

Reminder: This quiz tests educational understanding of legal concepts. It does not constitute legal advice.


Question 1

What is the current position of the U.S. Copyright Office regarding AI-generated works?

A) AI-generated works are fully copyrightable with the AI listed as author
B) Works generated by AI without sufficient human creative control are not copyrightable
C) All AI-generated code is automatically in the public domain
D) Copyright applies only if the AI tool provider agrees to transfer rights

**Answer: B) Works generated by AI without sufficient human creative control are not copyrightable.** The U.S. Copyright Office has stated that when an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. However, works containing AI-generated material may be copyrightable if they also contain sufficient human authorship (Section 35.2).

Question 2

Which jurisdiction uniquely includes a statutory provision for "computer-generated works" where there is no human author?

A) United States
B) European Union
C) United Kingdom
D) Japan

**Answer: C) United Kingdom.** Section 9(3) of the UK Copyright, Designs and Patents Act 1988 provides that for computer-generated works where there is no human author, the author is deemed to be "the person by whom the arrangements necessary for the creation of the work are undertaken" (Section 35.1).

Question 3

What type of open-source license requires that derivative works be distributed under the same or compatible license terms?

A) Permissive license
B) Copyleft license
C) Creative Commons license
D) Proprietary license

**Answer: B) Copyleft license.** Copyleft licenses (such as GPL, LGPL, and AGPL) require that derivative works be distributed under the same or compatible license terms. This is the "share-alike" requirement that makes copyleft licenses significant in the context of AI-generated code (Section 35.3).

Question 4

When an AI coding assistant produces code that is verbatim identical to code from a GPL-licensed project, what is the most prudent approach?

A) Use the code freely since the AI transformed it
B) Assume the GPL terms may apply and either comply or replace the code
C) Ignore the issue since you did not knowingly copy the code
D) Contact the AI provider and ask them to handle licensing

**Answer: B) Assume the GPL terms may apply and either comply or replace the code.** The prudent approach is to treat AI output with the same care you would treat code from an unknown source. If AI-generated code matches GPL-licensed code, the GPL's copyleft provisions may require you to license your entire project under the GPL. You should either comply with the license terms or replace the flagged code (Section 35.3).

Question 5

Which of the following is NOT typically a component of an enterprise AI usage policy?

A) Approved tools and versions
B) Data classification and handling rules
C) Specific AI model architecture requirements
D) Code review requirements for AI-generated code

**Answer: C) Specific AI model architecture requirements.** Enterprise AI usage policies typically address approved tools, data classification, code review, attribution, license compliance, and training. They do not typically specify the internal architecture of AI models, as this is a technical implementation detail of the tool provider (Section 35.4).

Question 6

Under GDPR, what is the role of an organization that sends developer code containing personal data to an AI coding service?

A) Data subject
B) Data processor
C) Data controller
D) Data protection officer

**Answer: C) Data controller.** The organization using the AI tool is typically the data controller, meaning it determines the purposes and means of processing personal data. The AI tool provider typically acts as a data processor. This distinction matters because it determines which GDPR obligations apply (Section 35.5).

Question 7

What document is required under GDPR Article 28 when an AI tool provider processes personal data on behalf of your organization?

A) Terms of Service agreement
B) Non-Disclosure Agreement
C) Data Processing Agreement
D) Service Level Agreement

**Answer: C) Data Processing Agreement (DPA).** GDPR Article 28 requires a Data Processing Agreement when a data processor handles personal data on behalf of a data controller. Many AI tool providers now offer DPAs as part of their enterprise agreements (Section 35.5).

Question 8

In the U.S. Federal Circuit case Thaler v. Vidal (2022), the court ruled that:

A) AI systems can be listed as inventors on patent applications
B) The Patent Act requires a human inventor
C) Patents on AI-generated inventions are automatically invalid
D) Companies that own AI systems are the default inventors

**Answer: B) The Patent Act requires a human inventor.** The Federal Circuit held that the Patent Act requires a human inventor, meaning an AI system cannot be listed as an inventor on a U.S. patent application. However, a human who uses AI as a tool in the inventive process may still qualify as the inventor (Section 35.7).

Question 9

What is "shadow AI" in the context of enterprise AI usage?

A) AI tools that operate in the background without user awareness
B) AI tools used by employees that have not been approved or managed by the organization
C) AI tools that produce code without proper attribution
D) AI tools that are trained on dark web data

**Answer: B) AI tools used by employees that have not been approved or managed by the organization.** Shadow AI refers to the use of AI tools that have not been approved or managed by the organization. This creates unmanaged risk because the organization loses visibility into data flows, cannot ensure license compliance, and may face security risks (Section 35.9).

Question 10

Which of the following types of data is LEAST likely to be inadvertently sent to an AI coding service during normal use?

A) Source code from open files used for context
B) File paths and project structure information
C) Encrypted database backups
D) Configuration files with API keys

**Answer: C) Encrypted database backups.** During normal use of AI coding tools, developers typically send source code, file paths, configuration files, comments, and other code-adjacent data. Encrypted database backups would not normally be part of the data sent during coding sessions. The other options are all commonly transmitted during AI tool usage (Section 35.5).
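The inadvertent-transmission risk in this question can be reduced client-side by scrubbing obvious credentials from context before it leaves the developer's machine. The sketch below is a toy illustration (the pattern and function name are hypothetical); real deployments rely on dedicated secret scanners rather than a single regex:

```python
import re

# Minimal, illustrative pattern for credential-style assignments.
# A real deployment would use a dedicated secret scanner, not one regex.
_SECRET = re.compile(
    r"(?i)\b(api[_-]?key|secret|token|password)(\s*[=:]\s*)(['\"]?)[^\s'\"]+\3"
)

def redact_secrets(text: str) -> str:
    """Replace likely credential values with a placeholder before
    code context is sent to an AI coding service."""
    return _SECRET.sub(r"\g<1>\g<2>\g<3>[REDACTED]\g<3>", text)
```

Running this over context such as `API_KEY = "sk-12345"` keeps the variable name but strips the value, which addresses option D above without changing how the rest of the file is sent.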

Question 11

The EU AI Act classifies AI systems into risk categories. Which category is subject to the strictest requirements (short of prohibition)?

A) Minimal risk
B) Limited risk
C) High risk
D) Moderate risk

**Answer: C) High risk.** The EU AI Act establishes four risk categories: unacceptable (prohibited), high risk (strict requirements including risk management, documentation, human oversight), limited risk (transparency requirements), and minimal risk (largely unregulated). There is no "moderate risk" category (Section 35.8).

Question 12

When evaluating an AI tool's terms of service for enterprise use, which of the following should be given the HIGHEST priority for an organization handling proprietary code?

A) The tool's user interface design
B) Whether the provider uses input data for model training
C) The tool's response speed
D) The number of programming languages supported

**Answer: B) Whether the provider uses input data for model training.** For organizations handling proprietary code, whether the AI tool provider uses input data (prompts, code context) for training is a critical consideration. If proprietary code is used for training, it could potentially be exposed to other users or incorporated into the model in ways that compromise trade secrets (Section 35.6).

Question 13

Which approach to staying current with AI law does the chapter recommend as a practical heuristic?

A) Assume legal guidance is valid indefinitely once published
B) Assume legal guidance has a shelf life of roughly one year
C) Only update legal understanding when sued
D) Rely entirely on AI tool providers for legal guidance

**Answer: B) Assume legal guidance has a shelf life of roughly one year.** The chapter recommends the "one-year rule" as a practical heuristic: assume that any specific legal guidance about AI-generated code has a shelf life of roughly one year, after which it should be verified for currency (Section 35.10).

Question 14

What is the primary difference between permissive licenses (like MIT) and copyleft licenses (like GPL) when it comes to AI-generated code?

A) Permissive licenses are free; copyleft licenses require payment
B) Permissive licenses allow broad use with minimal requirements; copyleft licenses require derivative works to use the same license
C) Permissive licenses apply only to source code; copyleft licenses apply to binaries
D) There is no practical difference for AI-generated code

**Answer: B) Permissive licenses allow broad use with minimal requirements; copyleft licenses require derivative works to use the same license.** Permissive licenses (MIT, BSD, Apache 2.0) allow broad use with minimal requirements like attribution. Copyleft licenses (GPL, AGPL) require derivative works to be distributed under the same license. This distinction is critical when AI generates code that matches licensed source material (Section 35.3).

Question 15

In the "work made for hire" doctrine, what challenge does AI-generated code present?

A) The doctrine requires code to be written in a specific programming language
B) The doctrine assumes the employee is the author, but with AI the authorship question is unclear
C) The doctrine only applies to code written during business hours
D) The doctrine prohibits employees from using AI tools

**Answer: B) The doctrine assumes the employee is the author, but with AI the authorship question is unclear.** The work-for-hire doctrine in the United States provides that code written by an employee within the scope of their employment is owned by the employer. But this assumes the employee is the author. When AI generates the code, the traditional work-for-hire framework may not apply in its usual way, creating uncertainty about ownership (Section 35.2).

Question 16

What is the recommended approach for handling AI-generated code in a compliance workflow?

A) Generate, ship, scan later
B) Generate, flag, scan, review, decide, document, monitor
C) Ban all AI-generated code
D) Trust AI output without scanning since the tool provider handles compliance

**Answer: B) Generate, flag, scan, review, decide, document, monitor.** The chapter recommends a seven-step compliance workflow: (1) Generate code with AI, (2) Flag AI-generated segments, (3) Scan for license matches, (4) Review flagged matches, (5) Decide on compliance approach, (6) Document decisions, (7) Monitor ongoing compliance (Section 35.3).
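The seven-step workflow can be sketched as a small pipeline. Everything below is illustrative: `KNOWN_SNIPPETS` stands in for a real snippet-matching service, and the data model is a hypothetical minimum, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Stand-in for a snippet-matching service; real fingerprints of known
# licensed code would come from a dedicated scanning tool.
KNOWN_SNIPPETS = {
    "return (n & (n - 1)) == 0": "GPL-3.0",
}

@dataclass
class Snippet:
    code: str
    ai_generated: bool                             # step 2: flag AI-generated segments
    license_match: Optional[str] = None
    decision: Optional[str] = None
    audit_log: list = field(default_factory=list)  # step 6: document decisions

def run_compliance_check(snippet: Snippet) -> Snippet:
    """Steps 3-6 of the workflow for one flagged snippet; step 7
    (ongoing monitoring) would re-run this as scanner data updates."""
    if not snippet.ai_generated:
        return snippet
    snippet.license_match = KNOWN_SNIPPETS.get(snippet.code)   # step 3: scan
    # Steps 4-5: matches go to human review; clean snippets are accepted.
    snippet.decision = "needs-review" if snippet.license_match else "accept"
    snippet.audit_log.append(
        f"match={snippet.license_match!r} decision={snippet.decision}"
    )
    return snippet
```

The point of the sketch is the ordering: scanning happens before the code ships (unlike option A), and every outcome leaves a documented trail.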

Question 17

Which of the following is a common policy pitfall identified in the chapter?

A) Making the policy too specific
B) Involving too many developers in policy creation
C) Being too restrictive, which may drive developers to use unauthorized tools covertly
D) Reviewing the policy too frequently

**Answer: C) Being too restrictive, which may drive developers to use unauthorized tools covertly.** The chapter identifies several common policy pitfalls, including being too restrictive. Blanket bans on AI tools may be unenforceable and counterproductive, as developers may use unauthorized tools covertly, creating greater risk than a well-managed approved toolset (Section 35.9).

Question 18

For patent purposes, the emerging consensus on AI's role in the inventive process is:

A) AI cannot be involved in any patentable invention
B) AI can be the sole inventor if the invention is sufficiently novel
C) AI can be a tool used in the inventive process, but a human must make a significant intellectual contribution
D) Only the AI tool provider can patent AI-generated inventions

**Answer: C) AI can be a tool used in the inventive process, but a human must make a significant intellectual contribution.** The emerging consensus treats AI as a tool analogous to a calculator or simulation software. A human who uses AI to assist in developing an invention can be the inventor, but the human must contribute to conceiving the inventive concept (Section 35.7).

Question 19

What regulatory framework applies to software used in airborne systems?

A) HIPAA
B) ISO 26262
C) DO-178C
D) SOC 2

**Answer: C) DO-178C.** DO-178C (Software Considerations in Airborne Systems and Equipment Certification) establishes software development standards for aviation applications. ISO 26262 applies to automotive, HIPAA applies to healthcare, and SOC 2 is a general security certification framework (Section 35.8).

Question 20

When negotiating enterprise AI tool agreements, which of the following is generally negotiable?

A) The fundamental architecture of the AI model
B) Custom data retention periods and data residency requirements
C) The programming languages the AI supports
D) The underlying training data used to build the model

**Answer: B) Custom data retention periods and data residency requirements.** Enterprise agreements offer opportunities to negotiate data protection commitments, custom data retention periods, enhanced indemnification, service level agreements, audit rights, and data residency requirements. The model architecture and training data are generally not negotiable (Section 35.6).

Question 21

Apache License 2.0 is compatible with which of the following licenses?

A) GPL v2 only
B) GPL v3 only
C) Both GPL v2 and GPL v3
D) Neither GPL v2 nor GPL v3

**Answer: B) GPL v3 only.** Apache 2.0 and GPL v3 are compatible, but Apache 2.0 and GPL v2 are not compatible due to differences in patent grant terms. This is an important distinction when managing license compatibility in projects with AI-generated code (Section 35.3).
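One-way compatibility rules like these are easy to get wrong from memory, so teams often encode them as data. A minimal sketch, covering only the pairs discussed in this chapter and defaulting unknown combinations to "incompatible until reviewed":

```python
# Illustrative one-way compatibility pairs: (source_license, project_license)
# means code under the first license may be incorporated into a project
# under the second. A real check would consult a maintained matrix.
COMPATIBLE = {
    ("MIT", "GPL-2.0"): True,
    ("MIT", "GPL-3.0"): True,
    ("Apache-2.0", "GPL-3.0"): True,
    ("Apache-2.0", "GPL-2.0"): False,   # patent-grant terms conflict
}

def can_incorporate(source_license: str, project_license: str) -> bool:
    # Unknown pairs default to False: treat as incompatible until a
    # human reviews the combination.
    return COMPATIBLE.get((source_license, project_license), False)
```

Defaulting to `False` is the conservative choice: a lookup miss forces review rather than silently permitting an incompatible mix.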

Question 22

What is the primary purpose of maintaining an audit trail for AI-generated code?

A) To improve AI model performance
B) To document code provenance for compliance, legal, and regulatory purposes
C) To track developer productivity
D) To reduce the cost of AI tool licenses

**Answer: B) To document code provenance for compliance, legal, and regulatory purposes.** Audit trails document the provenance of AI-generated code, supporting compliance with regulatory requirements, enabling license compliance audits, and providing evidence of human review and modification for copyright purposes (Sections 35.8, 35.9).
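A provenance entry of the kind described here might capture the snippet hash, the tool, the human reviewer, and a timestamp. The fields below are an illustrative assumption, not a standard schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(code: str, tool: str, model: str, reviewer: str) -> str:
    """Return one JSON audit-trail entry for an AI-generated snippet.
    The field set is illustrative; real schemas vary by organization."""
    entry = {
        "code_sha256": hashlib.sha256(code.encode("utf-8")).hexdigest(),
        "tool": tool,               # which assistant produced the code
        "model": model,             # model/version, if the tool reports it
        "reviewed_by": reviewer,    # evidence of human review
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry, sort_keys=True)
```

Hashing the code rather than storing it verbatim keeps the audit log useful for matching without turning it into another copy of proprietary source.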

Question 23

According to the chapter, what is the best defense against "shadow AI" in an organization?

A) Implementing strict network monitoring to block all AI tools
B) Providing approved alternatives that meet developers' needs while managing organizational risk
C) Requiring developers to sign pledges not to use unauthorized tools
D) Ignoring the problem since shadow AI is not a significant risk

**Answer: B) Providing approved alternatives that meet developers' needs while managing organizational risk.** The chapter states that the best defense against shadow AI is not prohibition but providing approved alternatives that meet developers' needs while managing organizational risk. This acknowledges that developers will seek AI tools regardless and focuses on channeling that demand productively (Section 35.9).

Question 24

Which of the following scenarios MOST likely results in copyrightable code under current U.S. guidance?

A) A developer types "write a web server" and uses the AI output unchanged
B) A developer writes 80% of the code by hand and uses AI to generate utility functions that they then modify
C) A developer generates an entire application from a one-sentence prompt
D) An AI autonomously generates code with no human involvement

**Answer: B) A developer writes 80% of the code by hand and uses AI to generate utility functions that they then modify.** Under current U.S. Copyright Office guidance, code where a developer writes significant original portions and uses AI to fill in small segments that are then modified is most likely to be copyrightable. The key factors are the proportion of human authorship and the degree of human creative input (Section 35.2).

Question 25

When the chapter recommends policy reviews "at least quarterly," what is the primary reason for this frequency?

A) To comply with specific legal requirements mandating quarterly reviews
B) To match the rapid pace of change in AI tools, regulations, and legal developments
C) To coordinate with quarterly financial reporting
D) To align with software release cycles

**Answer: B) To match the rapid pace of change in AI tools, regulations, and legal developments.** The chapter recommends at least quarterly policy reviews because the AI tool landscape, regulatory environment, and legal precedents are evolving rapidly. New tools, updated terms of service, new regulations, and court decisions can all necessitate policy updates within a quarter (Sections 35.9, 35.10).

Scoring Guide

| Score | Level | Recommendation |
| --- | --- | --- |
| 23-25 | Expert | You have a strong grasp of AI-related legal concepts. Focus on staying current as the law evolves. |
| 18-22 | Proficient | Good understanding of core concepts. Review the sections related to questions you missed. |
| 13-17 | Developing | Adequate foundation but some gaps. Re-read the chapter with a focus on Sections 35.2-35.5. |
| 8-12 | Beginning | Significant gaps in understanding. Re-read the full chapter and complete the Tier 1-2 exercises. |
| 0-7 | Needs Review | Start with the chapter introduction and work through each section methodically. |