Chapter 26: Quiz — Refactoring Legacy Code with AI

Test your understanding of the concepts covered in this chapter. Each question has one best answer unless otherwise noted.


Question 1

According to Michael Feathers' widely accepted definition, what is the primary characteristic that makes code "legacy"?

  • A) It was written more than five years ago
  • B) It uses an outdated programming language
  • C) It lacks automated tests
  • D) It was written by developers who have left the company
Answer: **C) It lacks automated tests**

Michael Feathers defines legacy code as "code without tests." Without tests, you cannot change the code with confidence because there is no way to verify that existing behavior is preserved. While the other options are common characteristics of legacy code, the absence of tests is the defining factor because it directly prevents safe modification.

Question 2

Which of the following is NOT a recommended use of AI when assessing a legacy codebase?

  • A) Tracing data flow through the application
  • B) Identifying circular dependencies
  • C) Automatically fixing all identified code smells
  • D) Mapping public vs. private interfaces
Answer: **C) Automatically fixing all identified code smells**

AI is excellent for analysis and understanding: tracing data flow, identifying dependencies, and mapping interfaces. However, automatically fixing all code smells without human oversight is dangerous because fixes may change behavior, and the AI may not understand the full context of why certain patterns exist. Refactoring should be done incrementally, with human judgment guiding the process.

Question 3

What is the primary purpose of characterization tests?

  • A) To verify that code meets its original specification
  • B) To capture the current behavior of code, regardless of whether that behavior is correct
  • C) To achieve 100% code coverage
  • D) To test the performance characteristics of the system
Answer: **B) To capture the current behavior of code, regardless of whether that behavior is correct**

Characterization tests document what the code actually does, including any bugs or unintended behaviors. This is fundamentally different from specification tests, which verify intended behavior. The goal is to create a safety net that detects any changes in behavior during refactoring, so you can make deliberate decisions about which behaviors to preserve and which to change.
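The idea can be sketched in a few lines. Here is a minimal characterization test around a hypothetical legacy function (`format_invoice_id` is invented for illustration); note that the assertion pins down the behavior the code exhibits today, surprises included:

```python
# Hypothetical legacy function whose exact current behavior we want to pin down.
def format_invoice_id(customer_id, sequence):
    # Quirks we discover by running it: customer_id is upper-cased
    # (whitespace and all), and sequence is zero-padded to 4 digits.
    return f"{str(customer_id).upper()}-{sequence:04d}"

def test_format_invoice_id_current_behavior():
    # Characterization test: asserts what the code DOES today,
    # not what a specification says it should do.
    assert format_invoice_id("ac me", 7) == "AC ME-0007"  # space preserved: surprising, but current

test_format_invoice_id_current_behavior()
```

In practice you often discover the expected value by running the function first and copying the result into the assertion, rather than deriving it from a spec.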

Question 4

In the strangler fig pattern, what is the recommended approach for replacing a legacy system?

  • A) Rewrite the entire system from scratch and switch over in one deployment
  • B) Incrementally replace components while maintaining a working system at every step
  • C) Freeze feature development for six months and dedicate all resources to the rewrite
  • D) Create a complete parallel system and switch users over all at once
Answer: **B) Incrementally replace components while maintaining a working system at every step**

The strangler fig pattern replaces legacy components one at a time, using a facade or router to direct traffic to either the old or new implementation. At every step, the system remains functional, and any individual replacement can be rolled back. This eliminates the all-or-nothing risk of a Big Bang rewrite.
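The routing facade at the heart of the pattern can be sketched very simply. This is an illustrative toy (the handlers and the `MIGRATED` set are invented), but it shows the essential mechanism: migration state lives in one place, and growing the set migrates one endpoint at a time:

```python
# Strangler-fig facade sketch: route each request to the new implementation
# only for endpoints that have already been migrated.
def legacy_handler(path):
    return f"legacy:{path}"

def modern_handler(path):
    return f"modern:{path}"

# Grow this set one endpoint at a time; shrink it to roll back.
MIGRATED = {"/orders"}

def facade(path):
    handler = modern_handler if path in MIGRATED else legacy_handler
    return handler(path)
```

In a real system the facade is typically a reverse proxy or an API gateway rather than in-process code, but the rollback property is the same: removing an entry from the routing table restores the legacy behavior instantly.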

Question 5

What is "shadow mode" in the context of the strangler fig pattern?

  • A) Running the application with reduced logging to improve performance
  • B) Running both old and new implementations simultaneously and comparing their results
  • C) Deploying the new system but hiding it from users
  • D) Running the legacy system on backup servers
Answer: **B) Running both old and new implementations simultaneously and comparing their results**

Shadow mode executes both the legacy and modern implementations for each request, compares the results, and logs any discrepancies, but returns the legacy result to the user. This allows you to build confidence in the new implementation by observing its behavior on real production data without risking user-facing issues.
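A minimal shadow-mode wrapper might look like the following sketch (the two `*_total` implementations are placeholders). The key properties are that the modern path can neither change the response nor crash the request:

```python
import logging

def legacy_total(items):
    return sum(items)

def modern_total(items):
    return sum(items)  # new implementation under verification

def shadow_total(items):
    legacy = legacy_total(items)
    try:
        modern = modern_total(items)
        if modern != legacy:
            logging.warning("shadow mismatch: legacy=%r modern=%r", legacy, modern)
    except Exception:
        # A crash in the new code is logged, never surfaced to the user.
        logging.exception("shadow mode: modern implementation raised")
    return legacy  # users always get the legacy result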

Question 6

When performing extract method refactoring on legacy code, what should you do with the original method?

  • A) Delete it immediately
  • B) Rename it with a "legacy_" prefix
  • C) Preserve it as a thin wrapper that delegates to the extracted methods
  • D) Comment it out and leave it in the file
Answer: **C) Preserve it as a thin wrapper that delegates to the extracted methods**

Keeping the original method as a thin wrapper ensures that all existing callers continue to work without modification. This makes the refactoring safer and more incremental. The wrapper can be deprecated and eventually removed once all callers have been updated to use the new methods directly.
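As a sketch, assuming a hypothetical long `process_order` function: the logic moves into focused helpers, while the original name keeps its signature and behavior so no caller has to change:

```python
# Extracted helpers: each does one thing and can be tested in isolation.
def _validate(order):
    if not order.get("items"):
        raise ValueError("empty order")

def _compute_total(order):
    return sum(item["price"] for item in order["items"])

def process_order(order):
    # Thin wrapper: same name, signature, and behavior as the original
    # monolithic method, so all existing callers keep working unchanged.
    _validate(order)
    return _compute_total(order)
```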

Question 7

Which of the following is the BEST first step when starting to refactor a legacy codebase?

  • A) Upgrade all dependencies to their latest versions
  • B) Add type hints to every function
  • C) Understand the system by exploring it with AI and add characterization tests to critical paths
  • D) Split monolithic files into smaller modules
Answer: **C) Understand the system by exploring it with AI and add characterization tests to critical paths**

You must understand the system before changing it, and you must have tests before you can safely refactor. Upgrading dependencies, adding type hints, and splitting files are all valuable but carry risk without a safety net. The characterization tests provide that safety net.

Question 8

What is the primary risk of AI-assisted analysis of legacy codebases?

  • A) AI is too slow to analyze large codebases
  • B) AI may miss runtime dependencies, dynamic imports, and metaprogramming
  • C) AI cannot read Python code
  • D) AI always recommends complete rewrites
Answer: **B) AI may miss runtime dependencies, dynamic imports, and metaprogramming**

AI analysis is based on static code reading and may not detect dynamic behaviors like monkey-patching, runtime imports, metaprogramming, or dependencies that are only visible during execution. This is especially common in legacy Python code. AI analysis should be treated as a starting point, not the final word.

Question 9

When should you refactor legacy code, according to the principles discussed in this chapter?

  • A) Whenever you have free time between feature releases
  • B) When you need to change the code — refactor what you need to modify
  • C) Only when the entire team agrees on the refactoring approach
  • D) Only when there is a complete rewrite planned
Answer: **B) When you need to change the code — refactor what you need to modify**

The key principle is to refactor code that you need to change. Stable, working code that rarely changes should be left alone, even if it does not meet modern standards. Your refactoring effort should be proportional to the pain the code is causing in terms of bugs, development velocity, or security issues.

Question 10

What is dependency injection, and why is it valuable during refactoring?

  • A) A technique for automatically installing Python packages; it keeps dependencies up to date
  • B) A pattern where dependencies are passed to a component rather than created internally; it improves testability
  • C) A method for injecting test data into databases; it speeds up test execution
  • D) A way to inject logging into existing functions; it improves observability
Answer: **B) A pattern where dependencies are passed to a component rather than created internally; it improves testability**

Dependency injection means providing a class or function with its dependencies from the outside rather than having it create them internally. This makes code testable (you can pass mock dependencies in tests), flexible (you can swap implementations), and decoupled (components do not need to know how to construct their dependencies).
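A compact sketch of the pattern, with invented names (`ReportService`, `FakeDb`): the service receives its database gateway in the constructor, so a test can hand it a fake instead of a real connection:

```python
class ReportService:
    def __init__(self, db):
        # The gateway is injected, not constructed internally, so this
        # class never needs to know how to build a real connection.
        self.db = db

    def user_count(self):
        return self.db.count("users")

class FakeDb:
    # Test double: stands in for the real database in unit tests.
    def count(self, table):
        return 3

service = ReportService(FakeDb())
```

Compare this with the legacy anti-pattern of calling a connection factory inside the method body, which forces every test to spin up (or patch) a real database.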

Question 11

Which deployment strategy is most appropriate for deploying refactored code to production?

  • A) Deploy to all servers at once during off-peak hours
  • B) Deploy to a small subset of servers (canary), monitor metrics, then gradually expand
  • C) Deploy to staging and, if tests pass, immediately deploy to all production servers
  • D) Deploy to production with all new code behind feature flags permanently
Answer: **B) Deploy to a small subset of servers (canary), monitor metrics, then gradually expand**

Canary releases minimize blast radius by exposing only a small percentage of traffic to the refactored code. If metrics degrade, you can roll back quickly with minimal impact. This approach combines the safety of gradual rollout with the confidence of real production data validation.
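In practice the canary split is usually handled by the load balancer or deployment platform, but the bucketing logic can be sketched in a few lines. This toy version (names invented) hashes the user ID into a stable bucket, so each user consistently sees the same version while the rollout percentage grows:

```python
import zlib

def canary_route(user_id, canary_percent=5):
    # Stable bucket per user: crc32 is deterministic across processes,
    # so a given user always lands in the same bucket.
    bucket = zlib.crc32(user_id.encode()) % 100
    return "refactored" if bucket < canary_percent else "legacy"
```

Raising `canary_percent` from 5 toward 100 expands the rollout; setting it back to 0 is the rollback.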

Question 12

What is the Boy Scout Rule, and how does it apply to legacy code modernization?

  • A) Always delete code that is not being used
  • B) Leave the code better than you found it — make small improvements with every change
  • C) Write tests before writing code
  • D) Review every change with at least two other developers
Answer: **B) Leave the code better than you found it — make small improvements with every change**

The Boy Scout Rule, applied consistently by a team, produces steady incremental improvement. Each time a developer modifies legacy code, they make one small improvement: adding type hints, writing a test, extracting a method, or adding a docstring. Over time, these small improvements compound into significant modernization.

Question 13

When writing characterization tests, what should you do if you discover that a function has a bug?

  • A) Fix the bug immediately and write a test for the correct behavior
  • B) Write a test that captures the buggy behavior and add a comment noting the bug
  • C) Skip testing that function because the bug makes the tests unreliable
  • D) Report the bug and wait for it to be fixed before writing tests
Answer: **B) Write a test that captures the buggy behavior and add a comment noting the bug**

Characterization tests capture current behavior, including bugs. If you discover a bug, document it with a test that asserts the buggy behavior, and add a comment noting it. This ensures the bug is tracked and that you make a deliberate decision about whether to fix it during refactoring. Fixing bugs during refactoring adds risk because it changes behavior in addition to structure.
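For example, a characterization test over a hypothetical buggy function might look like this (both the function and the bug are invented for illustration):

```python
def days_in_month(month):
    # Legacy function with a known bug: it treats February as 30 days.
    if month in (4, 6, 9, 11):
        return 30
    if month == 2:
        return 30  # BUG: should be 28 (or 29 in leap years)
    return 31

def test_days_in_month_characterization():
    # NOTE: deliberately asserts the *buggy* value so any change in
    # behavior is caught; the bug itself is tracked separately.
    assert days_in_month(2) == 30
    assert days_in_month(1) == 31

test_days_in_month_characterization()
```

When the bug is eventually fixed as its own deliberate change, this assertion is updated in the same commit, making the behavior change explicit and reviewable.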

Question 14

What is the main advantage of running Flask and FastAPI simultaneously behind a reverse proxy during a framework migration?

  • A) It doubles the application's capacity
  • B) It allows incremental migration — each endpoint can be migrated independently
  • C) It provides automatic failover if one framework crashes
  • D) It is required by the HTTP specification for backward compatibility
Answer: **B) It allows incremental migration — each endpoint can be migrated independently**

Running both frameworks behind a reverse proxy is an application of the strangler fig pattern. Each endpoint can be migrated from Flask to FastAPI independently. The reverse proxy routes traffic to the appropriate framework based on the URL. At any point, the migration can be paused, and individual endpoints can be rolled back.

Question 15

Which of the following is the BEST approach for handling untestable legacy code (e.g., code with deeply embedded database calls)?

  • A) Do not refactor that code at all
  • B) Get the best test coverage you can and document the gaps
  • C) Mock everything, even if the mocks do not accurately represent real behavior
  • D) Rewrite the code from scratch so it is testable
Answer: **B) Get the best test coverage you can and document the gaps**

Pragmatism is key when dealing with legacy code. Some code is nearly impossible to test in isolation, but you can often test it at a higher level (integration tests) or test parts of it by extracting pure logic. Document what you cannot test so the team is aware of the risks and can compensate with manual testing or extra monitoring.

Question 16

What is the recommended allocation of sprint time for technical debt reduction?

  • A) 0% — technical debt should be addressed in dedicated cleanup sprints
  • B) 5% — just enough for minor fixes
  • C) 15-20% — consistent allocation for steady progress
  • D) 50% — half of all development time should address debt
Answer: **C) 15-20% — consistent allocation for steady progress**

Allocating 15-20% of each sprint to technical debt ensures steady progress without halting feature development. This is more effective than large "cleanup sprints" because it distributes the risk, maintains team familiarity with the refactoring effort, and avoids the all-or-nothing dynamics of dedicated refactoring periods.

Question 17

In the chapter's case study, what was the first thing the team addressed after the initial assessment?

  • A) Upgrading the Flask version
  • B) Splitting the monolithic views.py
  • C) Adding characterization tests to critical paths
  • D) Introducing dependency injection
Answer: **C) Adding characterization tests to critical paths**

Phase 2 of the case study focused on building a safety net by adding characterization tests. This had to come before any structural changes (splitting files, introducing patterns) because without tests, those changes could not be verified. The tests provided the foundation for all subsequent refactoring.

Question 18

Which technique helps you verify that a modern replacement produces the same results as the legacy code before switching users to it?

  • A) Unit testing
  • B) Shadow mode
  • C) Code review
  • D) Static analysis
Answer: **B) Shadow mode**

Shadow mode runs both the legacy and modern implementations for each request, compares their outputs, and logs any discrepancies while returning the legacy result to users. This provides verification with real production data and traffic patterns, which is more thorough than any testing approach alone.

Question 19

What is the recommended order for the modernization layers described in the chapter?

  • A) Polish, Architecture, Structure, Safety
  • B) Safety, Structure, Architecture, Polish
  • C) Architecture, Safety, Structure, Polish
  • D) Structure, Architecture, Safety, Polish
Answer: **B) Safety, Structure, Architecture, Polish**

The layers should be tackled in order: (1) Safety: add tests and CI; (2) Structure: break up monoliths, extract methods, break circular dependencies; (3) Architecture: apply design patterns, migrate frameworks, introduce proper configuration; (4) Polish: add type hints, improve naming, optimize performance. Each layer builds on the foundation laid by the previous one.

Question 20

When using AI to generate a migration plan from Flask to FastAPI, what important context should you provide?

  • A) Only the Flask route definitions
  • B) The entire Flask application including endpoints, middleware, authentication, and external integrations
  • C) Only the requirements.txt file
  • D) The FastAPI documentation
Answer: **B) The entire Flask application including endpoints, middleware, authentication, and external integrations**

AI needs comprehensive context to generate an accurate migration plan. A framework migration affects routes, middleware, authentication, database access patterns, background task integration, and more. Providing only partial context will result in an incomplete plan that misses critical migration tasks.

Question 21

What is a circular dependency, and why is it problematic?

  • A) When a module depends on too many other modules; it causes slow imports
  • B) When two or more modules depend on each other, forming a cycle; it prevents independent testing and deployment
  • C) When a function calls itself recursively; it causes stack overflows
  • D) When a class inherits from multiple parent classes; it causes method resolution conflicts
Answer: **B) When two or more modules depend on each other, forming a cycle; it prevents independent testing and deployment**

Circular dependencies (A imports B, B imports A) create tightly coupled modules that cannot be understood, tested, or deployed independently. They also cause import errors in some situations. Breaking circular dependencies is a key step in modernizing legacy code's dependency structure.
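One common way to break such a cycle is dependency inversion: instead of module B importing module A back, B receives the piece of A it needs as a parameter. The sketch below compresses two hypothetical modules (`orders` and `billing`) into one file for illustration:

```python
# Before (cycle): orders.py imports billing.py to create invoices, and
# billing.py imports orders.py back to look up order data.
# After: billing takes the lookup as a callable, so neither module
# needs to import the other.

def make_invoice(order_id, get_order):
    # billing-side code: the order lookup is injected, not imported.
    order = get_order(order_id)
    return {"order_id": order_id, "total": order["total"]}

# orders-side code: owns the data and passes its own lookup in.
ORDERS = {42: {"total": 99}}

def get_order(order_id):
    return ORDERS[order_id]

invoice = make_invoice(42, get_order)
```

With the cycle broken, `make_invoice` can be unit-tested with a stub lookup, and the two modules can be deployed and understood independently.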

Question 22

Which of the following is a valid reason to NOT refactor legacy code?

  • A) The code is ugly but stable, rarely modified, and low risk
  • B) The code has known security vulnerabilities
  • C) The code blocks new feature development
  • D) The code causes frequent bugs in production
Answer: **A) The code is ugly but stable, rarely modified, and low risk**

If code works correctly, is rarely changed, and presents low risk, the cost of refactoring exceeds the benefit. Refactoring should be driven by practical needs, improving code that is actively causing problems or blocking progress, not by aesthetic preferences. The other options all describe situations where refactoring is justified.

Question 23

What is the biggest risk during refactoring, according to the chapter?

  • A) Introducing a bug
  • B) Introducing a bug that goes undetected until it causes real damage
  • C) Spending too much time on the refactoring
  • D) Losing the original source code
Answer: **B) Introducing a bug that goes undetected until it causes real damage**

The chapter emphasizes that the biggest risk is not introducing a bug per se; it is introducing one that you do not detect quickly. This is why the risk management strategy focuses on detection speed: comprehensive monitoring, alerting, shadow mode verification, and rapid rollback capabilities.

Question 24

What is the recommended approach when characterization tests cannot cover certain code paths?

  • A) Delete the untestable code
  • B) Rewrite the untestable code from scratch
  • C) Document the gaps and compensate with manual testing or monitoring
  • D) Ignore those code paths during refactoring
Answer: **C) Document the gaps and compensate with manual testing or monitoring**

Perfect test coverage is rarely achievable in legacy code. The pragmatic approach is to document what cannot be tested automatically, understand the risks those gaps represent, and compensate through other means such as manual testing, enhanced monitoring, or extra caution during code review for those specific paths.

Question 25

In the chapter's case study, what were the results after 24 weeks of incremental refactoring?

  • A) The team rewrote the entire application from scratch
  • B) Test coverage went from 12% to 78%, the monolithic file became 45 focused modules, and the framework was migrated to FastAPI
  • C) The team abandoned the refactoring effort and decided to maintain the legacy code
  • D) Only the test coverage improved; no structural changes were made
Answer: **B) Test coverage went from 12% to 78%, the monolithic file became 45 focused modules, and the framework was migrated to FastAPI**

The case study demonstrated that 24 weeks of disciplined, incremental refactoring, done alongside normal feature development, produced dramatic improvements: 12% to 78% test coverage, proper modularization, framework migration to FastAPI with async support, security vulnerabilities addressed, 40% performance improvement, and approximately doubled developer productivity.