Chapter 23: Quiz — Documentation and Technical Writing

Answer the following 25 questions to test your understanding of the concepts covered in this chapter.


Question 1

What is the primary reason documentation is MORE important for AI-generated codebases than for traditionally written code?

  • A) AI-generated code is lower quality and needs more explanation
  • B) AI-generated code lacks the implicit decision context that human developers build incrementally
  • C) AI-generated code does not have comments
  • D) AI-generated code is harder to read
Answer: **B) AI-generated code lacks the implicit decision context that human developers build incrementally**

When developers write code by hand, they build an understanding of the codebase incrementally, and each decision leaves a trace in memory. When AI generates code, those mental traces do not exist, making explicit documentation essential for preserving context.

Question 2

Which five questions should a good README answer in rapid succession?

  • A) Who, What, When, Where, Why
  • B) What, Why, Install, Use, Contribute
  • C) What, How, When, Where, Who
  • D) Purpose, Architecture, Deployment, Testing, Maintenance
Answer: **B) What, Why, Install, Use, Contribute**

A good README answers: What is this project? Why should I care? How do I install it? How do I use it? How do I contribute or get help?
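A minimal README skeleton that answers those five questions in order might look like the following (the project name and commands are placeholders, not a prescribed template):

```markdown
# project-name

One-sentence description of what this project does. (What)

It solves X without requiring Y, unlike existing tools. (Why)

## Installation

    pip install project-name

## Usage

    from project_name import main_feature

## Contributing

Open an issue or pull request; see CONTRIBUTING.md for guidelines. (Contribute / get help)
```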

Question 3

According to the Diátaxis framework, which category does a document titled "Why We Chose PostgreSQL Over MongoDB" fall into?

  • A) Tutorial
  • B) How-To Guide
  • C) Reference
  • D) Explanation
Answer: **D) Explanation**

Explanation documents are understanding-oriented and discuss concepts, decisions, and rationale. A document explaining why a technology choice was made is clearly in the Explanation category.

Question 4

What does the Field(description=...) parameter in a Pydantic model do for FastAPI documentation?

  • A) It creates a database column description
  • B) It flows into the generated OpenAPI schema as field documentation
  • C) It generates a docstring for the model
  • D) It creates a tooltip in the frontend
Answer: **B) It flows into the generated OpenAPI schema as field documentation**

FastAPI uses Pydantic model `Field` descriptions as part of the OpenAPI schema, which then renders in the auto-generated Swagger UI and ReDoc documentation.
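A minimal sketch of how this works, assuming Pydantic v2 (the model and field names here are hypothetical):

```python
from pydantic import BaseModel, Field

class UserCreate(BaseModel):
    """Request body for creating a user (hypothetical model)."""
    email: str = Field(description="Address used for login and notifications")
    age: int = Field(ge=0, description="Age in years; must be non-negative")

# FastAPI builds its OpenAPI schema from this same JSON Schema, so the
# descriptions surface in the Swagger UI and ReDoc pages automatically.
schema = UserCreate.model_json_schema()
print(schema["properties"]["email"]["description"])
```

The same `description` strings appear under each field in the `/docs` and `/redoc` pages of a FastAPI app that uses this model, with no extra documentation effort.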

Question 5

In the Nygard ADR format, what are the four main sections?

  • A) Problem, Solution, Implementation, Review
  • B) Status, Context, Decision, Consequences
  • C) Background, Analysis, Recommendation, Plan
  • D) Issue, Options, Selection, Outcome
Answer: **B) Status, Context, Decision, Consequences**

The Nygard format structures ADRs into Status (accepted, superseded, etc.), Context (the situation and constraints), Decision (what was decided), and Consequences (positive, negative, and neutral impacts).
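A short Nygard-format ADR might look like this (the number, technology choice, and details are illustrative):

```markdown
# ADR-0007: Use PostgreSQL for the primary datastore

## Status

Accepted

## Context

We need relational integrity and mature migration tooling, and the team
already operates PostgreSQL in other services.

## Decision

We will use PostgreSQL as the primary datastore.

## Consequences

- Positive: ACID guarantees, rich ecosystem, existing operational expertise.
- Negative: schema migrations are required for every model change.
- Neutral: document-style data will be stored in JSONB columns.
```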

Question 6

When should you write an Architecture Decision Record?

  • A) For every code change
  • B) Only when choosing between databases
  • C) When a decision is difficult to reverse, affects multiple components, or has multiple viable alternatives
  • D) Only at the start of a project
Answer: **C) When a decision is difficult to reverse, affects multiple components, or has multiple viable alternatives**

ADRs are warranted when you choose between viable alternatives, when the decision is hard to reverse, when it affects multiple team members or components, or when you find yourself explaining the same decision repeatedly.

Question 7

Which Python docstring style uses Parameters, Returns, and Raises with dashed underlines?

  • A) Google style
  • B) NumPy style
  • C) Sphinx style
  • D) PEP 257 style
Answer: **B) NumPy style**

NumPy style uses section headers like `Parameters`, `Returns`, and `Raises` followed by dashed underlines (e.g., `----------`). Google style uses `Args:`, `Returns:`, and `Raises:`. Sphinx style uses `:param:`, `:returns:`, and `:raises:`.
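An illustrative function with a NumPy-style docstring (the function itself is hypothetical):

```python
def scale(values, factor):
    """Multiply each value by a constant factor.

    Parameters
    ----------
    values : list of float
        The numbers to scale.
    factor : float
        The multiplier applied to each value.

    Returns
    -------
    list of float
        A new list with each input multiplied by ``factor``.

    Raises
    ------
    TypeError
        If ``factor`` is not a number.
    """
    if not isinstance(factor, (int, float)):
        raise TypeError("factor must be a number")
    return [v * factor for v in values]
```

The same content in Google style would use `Args:`, `Returns:`, and `Raises:` headers with indented entries instead of dashed underlines.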

Question 8

What is the recommended approach for simple functions with clear type hints?

  • A) No docstring needed
  • B) A one-line docstring describing the purpose
  • C) A full multi-section docstring with all parameters documented
  • D) Only inline comments
Answer: **B) A one-line docstring describing the purpose**

For simple functions where the behavior is clear from the function name and type hints, a one-line docstring suffices. Detailed multi-section docstrings should be reserved for functions where behavior, constraints, or edge cases are not obvious from the signature alone.
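For example, a function like this (hypothetical) needs no more than a single line, because the name and type hints already carry the rest:

```python
def is_adult(age: int) -> bool:
    """Return True if the given age meets the adult threshold of 18."""
    return age >= 18
```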

Question 9

According to the Keep a Changelog format, under which category would you list "The legacy_export() function will be removed in v3.0"?

  • A) Removed
  • B) Changed
  • C) Deprecated
  • D) Added
Answer: **C) Deprecated**

Deprecated is for features that will be removed in future versions. Removed is for features that have already been removed in the current release.
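In a Keep a Changelog file, the two categories might sit side by side like this (version numbers, dates, and the replacement function are illustrative):

```markdown
## [Unreleased]

### Deprecated

- `legacy_export()` will be removed in v3.0; migrate to `export()`.

## [2.4.0] - 2025-06-01

### Removed

- The `csv_dump()` helper, deprecated since v2.0.
```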

Question 10

What is "documentation drift"?

  • A) Writing documentation in a different language than the code
  • B) When code changes but documentation does not update to match
  • C) Moving documentation from one tool to another
  • D) When multiple team members write conflicting documentation
Answer: **B) When code changes but documentation does not update to match**

Documentation drift occurs when code evolves but the corresponding documentation is not updated, leading to discrepancies that can actively mislead readers.

Question 11

In documentation-driven development, what is the correct order of steps?

  • A) Code, Test, Document
  • B) Test, Code, Document
  • C) Document, Test, Code
  • D) Document, Code, Test
Answer: **C) Document, Test, Code**

In documentation-driven development, you write the documentation (docstrings, README sections, API descriptions) first, then write tests that verify the documented behavior, then implement the code to match both.

Question 12

Which Sphinx extension pulls docstrings directly from Python source code?

  • A) sphinx-rtd-theme
  • B) autodoc
  • C) mkdocstrings
  • D) napoleon
Answer: **B) autodoc**

The `autodoc` extension in Sphinx automatically generates documentation from Python docstrings. The `napoleon` extension is a companion that allows Sphinx to parse Google and NumPy style docstrings, but `autodoc` is the one that actually pulls them from the source code.
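A minimal `conf.py` sketch enabling both extensions (the project name is a placeholder):

```python
# Sphinx conf.py (fragment)
project = "myproject"
extensions = [
    "sphinx.ext.autodoc",   # pulls docstrings from the source code
    "sphinx.ext.napoleon",  # lets autodoc parse Google/NumPy style docstrings
]

# In an .rst page you would then write:
#
#   .. automodule:: myproject.core
#      :members:
```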

Question 13

What is the MkDocs equivalent of Sphinx's autodoc extension?

  • A) mkdocs-material
  • B) mkdocstrings
  • C) mkdocs-autodoc
  • D) mkdocs-api
Answer: **B) mkdocstrings**

The `mkdocstrings` plugin for MkDocs serves the same role as Sphinx's `autodoc`, pulling docstrings from Python source code into MkDocs-generated documentation pages.
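A minimal `mkdocs.yml` fragment enabling the plugin might look like this (the site name is a placeholder, and the options shown are a small subset of what `mkdocstrings` supports):

```yaml
site_name: My Project
plugins:
  - search
  - mkdocstrings:
      handlers:
        python:
          options:
            docstring_style: numpy
```

In a documentation page you would then write `::: myproject.core` to render that module's docstrings in place.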

Question 14

Code comments should primarily explain:

  • A) What the code does
  • B) How the code works step by step
  • C) Why the code is written a particular way
  • D) Who wrote the code
Answer: **C) Why the code is written a particular way**

Comments should explain the "why," not the "what." Code itself communicates what is happening. Comments add value by explaining the reasoning, constraints, or non-obvious decisions behind the implementation.
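A sketch of the difference, using a hypothetical retry helper; the comment records a constraint the code alone cannot convey:

```python
import time

def fetch_with_retry(fetch, retries=3):
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            # Why, not what: the upstream API rate-limits bursts, so an
            # immediate retry would only prolong the outage. Exponential
            # backoff gives the limiter time to reset.
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)
```

A "what" comment here, such as `# sleep for 2**attempt seconds`, would merely restate the code and add nothing.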

Question 15

What should you do with commented-out code in a codebase?

  • A) Keep it for reference
  • B) Remove it; version control preserves history
  • C) Move it to a separate file
  • D) Add a TODO comment above it
Answer: **B) Remove it; version control preserves history**

Commented-out code clutters the codebase and creates confusion about whether it should be restored. Version control (git) exists specifically to preserve old code, so there is no need to keep dead code in comments.

Question 16

Which of the following is NOT a valid badge category recommended for README files?

  • A) Build status
  • B) License
  • C) Lines of code
  • D) Python versions supported
Answer: **C) Lines of code**

The chapter recommends badges that answer user questions: build status (is CI passing?), version (latest release?), Python versions (compatibility?), and license (governance?). Lines of code is a vanity metric that does not help users decide whether to adopt a project.

Question 17

When an ADR's decision is reversed, what is the correct procedure?

  • A) Delete the original ADR
  • B) Edit the original ADR with the new decision
  • C) Write a new ADR that supersedes the old one and update the original's status
  • D) Add a comment to the original ADR
Answer: **C) Write a new ADR that supersedes the old one and update the original's status**

Once accepted, ADRs are immutable. If a decision is reversed, you write a new ADR that supersedes the original and update the original's status to "Superseded by ADR-NNN." This preserves the historical record of decisions and their evolution.
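The only edit made to the original ADR is its Status section; everything else stays untouched (the ADR numbers and filename below are illustrative):

```markdown
# ADR-0007: Use PostgreSQL for the primary datastore

## Status

Superseded by [ADR-0019](0019-adopt-distributed-database.md)
```

The new ADR-0019 then records the full context and consequences of the reversal.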

Question 18

What is the recommended location for storing ADRs in a repository?

  • A) In the root directory
  • B) In docs/adr/
  • C) In a separate repository
  • D) In a wiki
Answer: **B) In `docs/adr/`**

ADRs should be stored in the repository alongside the code they describe, typically in a `docs/adr/` directory. Numbering them sequentially (e.g., `0001-use-postgresql.md`) provides a clear chronological record.

Question 19

In the context of AI-assisted documentation, what is the most effective way to generate a README?

  • A) Ask the AI to create a README with no context
  • B) Provide the AI with pyproject.toml, directory listing, and a project description
  • C) Copy a README from a similar project and modify it
  • D) Generate it from code comments only
Answer: **B) Provide the AI with pyproject.toml, directory listing, and a project description**

Providing structured project metadata (pyproject.toml, directory structure, and a description of the project's purpose) gives the AI enough context to produce an accurate, detailed README. The AI combines this concrete information with its knowledge of README best practices to produce a solid first draft.

Question 20

What distinguishes a tutorial from a how-to guide in the Diátaxis framework?

  • A) Tutorials are shorter than how-to guides
  • B) Tutorials are learning-oriented for beginners; how-to guides are goal-oriented for experienced users
  • C) Tutorials use code; how-to guides use prose only
  • D) Tutorials are official; how-to guides are community-created
Answer: **B) Tutorials are learning-oriented for beginners; how-to guides are goal-oriented for experienced users**

Tutorials guide beginners through a complete experience (the reader does not yet know what questions to ask). How-to guides help experienced users accomplish a specific task (the reader knows what they want to do but not how).

Question 21

What is the main advantage of documentation-driven development when using AI coding assistants?

  • A) It reduces the number of AI prompts needed
  • B) The documentation serves as a precise specification for the AI to implement
  • C) It eliminates the need for tests
  • D) It makes the AI write faster code
Answer: **B) The documentation serves as a precise specification for the AI to implement**

In documentation-driven development with AI, docstrings and API documentation become the specification you provide to the AI assistant. This is more effective than natural language prompts because docstrings are precise, structured, and unambiguous, giving the AI a clear contract to fulfill.

Question 22

Which of the following is the best approach for maintaining an [Unreleased] changelog section?

  • A) Write it all at release time from git log
  • B) Update it with each merge to the main branch
  • C) Let the project manager maintain it
  • D) Generate it automatically from commit messages only
Answer: **B) Update it with each merge to the main branch**

Maintaining an `[Unreleased]` section and updating it with each merge prevents the frantic, error-prone changelog writing that happens right before release. Many teams enforce this with CI checks that reject pull requests without a changelog entry.

Question 23

When reviewing AI-generated code comments, what should you remove?

  • A) All comments
  • B) Comments that explain "why" a decision was made
  • C) Comments that merely restate what the code does
  • D) Comments that reference external documentation
Answer: **C) Comments that merely restate what the code does**

AI-generated code sometimes includes excessive comments that explain obvious operations (e.g., "# Increment the counter" above `counter += 1`). These comments add noise without information. Keep comments that explain "why" and remove those that merely restate the "what."

Question 24

What does the "documentation as code" philosophy mean?

  • A) Writing documentation in a programming language
  • B) Treating documentation with the same rigor as code: version control, code review, CI/CD, testing, and style guides
  • C) Generating all documentation automatically from code
  • D) Storing documentation in code comments only
Answer: **B) Treating documentation with the same rigor as code: version control, code review, CI/CD, testing, and style guides**

The "documentation as code" philosophy means documentation lives in the repository alongside code, changes go through pull requests and code review, documentation is built and validated in CI/CD, code examples are tested, and consistent style is enforced.

Question 25

Which of the following is the best prompt pattern for asking an AI to detect documentation drift?

  • A) "Check if my documentation is good"
  • B) "Write new documentation for my code"
  • C) "Here is the function implementation and its docstring. Identify discrepancies in parameters, return values, exceptions, and examples"
  • D) "Compare my README to my code"
Answer: **C) "Here is the function implementation and its docstring. Identify discrepancies in parameters, return values, exceptions, and examples"**

This prompt pattern works because it provides the AI with both the code and documentation, then gives specific criteria for comparison (parameters, return values, exceptions, examples). This structured comparison is a task AI assistants handle well because it has clear, verifiable criteria.
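A hypothetical example of the kind of drift such a prompt would catch; the `cc` parameter was added after the docstring was written:

```python
def send_email(recipient, subject, body, cc=None):
    """Send an email.

    Args:
        recipient: Address of the recipient.
        subject: Subject line.
        body: Plain-text body.

    Returns:
        True on success.
    """
    # The `cc` parameter exists in the signature but is missing from the
    # docstring above -- exactly the parameter discrepancy the prompt
    # asks the AI to flag.
    ...
```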