Chapter 38: Quiz

Test your understanding of multi-agent development systems, agent roles, orchestration patterns, inter-agent communication, conflict resolution, and scaling strategies. Each question has one best answer unless otherwise noted.


Question 1

What are the three fundamental constraints that limit a single AI coding agent's effectiveness on complex tasks?

  • A) Cost, latency, and accuracy
  • B) Context window saturation, attention diffusion, and role confusion
  • C) Memory limits, processing speed, and network bandwidth
  • D) Token limits, model size, and training data gaps
Answer: **B) Context window saturation, attention diffusion, and role confusion.** A single agent faces three fundamental limits: **context window saturation** (the task's information exceeds what the context window can hold), **attention diffusion** (juggling multiple concerns simultaneously degrades quality in each area), and **role confusion** (an agent that wrote the code cannot critically review its own work). These constraints motivate splitting work across multiple specialized agents. Reference: Section 38.1

Question 2

What are the four core agent roles in a standard multi-agent development team?

  • A) Planner, Builder, Validator, Deployer
  • B) Designer, Programmer, Debugger, Manager
  • C) Architect, Coder, Tester, Reviewer
  • D) Analyst, Developer, QA, DevOps
Answer: **C) Architect, Coder, Tester, Reviewer.** The four core roles mirror a traditional software team. The **Architect** designs the system and defines interfaces. The **Coder** implements the design. The **Tester** writes and runs tests adversarially to find bugs. The **Reviewer** evaluates code quality, security, performance, and standards compliance. Each role has a distinct perspective and focused set of tools. Reference: Section 38.2

Question 3

What is the primary purpose of including explicit "Do NOT" instructions in an agent's system prompt?

  • A) To reduce the agent's token usage by limiting output scope
  • B) To prevent role bleed where agents overstep their defined responsibilities
  • C) To make the agent run faster by eliminating unnecessary reasoning
  • D) To comply with AI safety regulations
Answer: **B) To prevent role bleed where agents overstep their defined responsibilities.** Without explicit negative constraints, agents naturally try to be helpful by expanding their scope -- an architect that starts writing implementation code, a tester that starts fixing bugs. Explicit "Do NOT" prohibitions keep each agent focused on its assigned role. This separation is what makes multi-agent systems effective, because each agent maintains the critical distance required for its perspective. Reference: Section 38.2

Question 4

Which orchestration pattern executes agents one after another, with each building on the output of the previous agent?

  • A) Parallel execution
  • B) Event-driven orchestration
  • C) Hierarchical delegation
  • D) Sequential pipeline
Answer: **D) Sequential pipeline.** In a sequential pipeline, agents execute in strict order: Architect produces a design, Coder implements it, Tester validates it, Reviewer checks quality. Each stage receives the full output of all previous stages. This is the simplest pattern to implement and debug, though it offers no parallelism and the total execution time equals the sum of all individual agent times. Reference: Section 38.3
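The stage-by-stage flow described above can be sketched in a few lines. This is a minimal illustration, not a real framework; the lambda agents are hypothetical stand-ins for LLM-backed calls:

```python
def run_pipeline(task, stages):
    context = {"task": task}
    for name, agent in stages:
        # each stage receives the accumulated output of every prior stage
        context[name] = agent(context)
    return context

# Hypothetical stand-in agents; real ones would be model invocations.
stages = [
    ("design", lambda ctx: f"design for {ctx['task']}"),
    ("code", lambda ctx: f"code implementing: {ctx['design']}"),
    ("tests", lambda ctx: f"tests covering: {ctx['code']}"),
    ("review", lambda ctx: f"review of: {ctx['code']}"),
]
result = run_pipeline("login feature", stages)
```

Note the trade-off the answer mentions: the loop is strictly serial, so total runtime is the sum of the individual stage times.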

Question 5

In the parallel execution orchestration pattern, when is parallelism most appropriate?

  • A) When one agent's output is required as input for the next agent
  • B) When multiple agents need to analyze the same input independently
  • C) When the task is too simple for multiple agents
  • D) When agents must negotiate a shared decision
Answer: **B) When multiple agents need to analyze the same input independently.** Parallel execution is most useful when agents can work on the same input without depending on each other's output. The classic example is running the Tester, Reviewer, and Security agents simultaneously against the same implementation code. Each produces an independent report, and the results are aggregated afterward. Reference: Section 38.3
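The fan-out/fan-in shape of this pattern can be sketched with the standard library; the agent functions are placeholders for real model calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(code, agents):
    # fan out: every agent analyzes the same input independently
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, code) for name, fn in agents.items()}
        # fan in: aggregate the independent reports afterward
        return {name: f.result() for name, f in futures.items()}

agents = {
    "tester": lambda c: f"test report for {c}",
    "reviewer": lambda c: f"review of {c}",
    "security": lambda c: f"security scan of {c}",
}
reports = run_parallel("auth.py", agents)
```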

Question 6

In hierarchical delegation, what is the role of the lead agent?

  • A) It implements the most complex component of the task
  • B) It decomposes a complex task into subtasks and assigns each to a specialist worker
  • C) It reviews all other agents' work for quality
  • D) It manages API rate limits and token budgets
Answer: **B) It decomposes a complex task into subtasks and assigns each to a specialist worker.** In hierarchical delegation, the lead agent analyzes a complex task, breaks it into sub-tasks suited for different specialists (e.g., backend, frontend, database), delegates each to the appropriate worker agent, and then assembles the results. This mirrors how a human tech lead delegates work to team members while maintaining the overall vision. Reference: Section 38.3
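The decompose-delegate-assemble cycle can be sketched as follows; the split produced by `decompose` and the worker functions are illustrative, since a real lead agent would plan the decomposition with a model:

```python
def decompose(task):
    # illustrative split; a real lead would reason about the task
    return {"backend": f"{task}: API endpoints", "frontend": f"{task}: UI"}

def lead_agent(task, workers):
    subtasks = decompose(task)                                   # decompose
    results = {area: workers[area](sub)                          # delegate
               for area, sub in subtasks.items()}
    return {"task": task, "parts": results}                      # assemble

workers = {
    "backend": lambda sub: f"done: {sub}",
    "frontend": lambda sub: f"done: {sub}",
}
out = lead_agent("checkout flow", workers)
```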

Question 7

What is the recommended starting orchestration pattern when building your first multi-agent system?

  • A) Event-driven orchestration for maximum flexibility
  • B) Hierarchical delegation for optimal task decomposition
  • C) Parallel execution for speed
  • D) Sequential pipeline for simplicity
Answer: **D) Sequential pipeline for simplicity.** The chapter recommends starting with a sequential pipeline because it is the simplest to build, debug, and reason about. You should add parallel execution when you identify independent tasks that bottleneck the pipeline, and move to hierarchical delegation when tasks become too complex for a single design-implement-test-review cycle. Most production systems end up using a hybrid approach. Reference: Section 38.3

Question 8

What are the three primary inter-agent communication mechanisms described in the chapter?

  • A) REST APIs, WebSockets, and gRPC
  • B) Shared context, message passing, and artifact exchange
  • C) File I/O, database queries, and environment variables
  • D) Event queues, pub-sub, and RPC calls
Answer: **B) Shared context, message passing, and artifact exchange.** The three primary communication mechanisms are: **Shared context**, where all agents read from and write to a shared workspace; **Message passing**, where agents send structured messages routed by the orchestrator; and **Artifact exchange**, where agents produce and consume files, documents, and reports. For software development, artifact exchange is usually the best starting point because it maps naturally to existing development workflows. Reference: Section 38.4

Question 9

Why is artifact exchange recommended as the starting communication mechanism for software development multi-agent systems?

  • A) It is the fastest communication method
  • B) It maps naturally to software development workflows where outputs are already files
  • C) It uses the least amount of token budget
  • D) It eliminates the need for an orchestrator
Answer: **B) It maps naturally to software development workflows where outputs are already files.** Software development is already organized around artifacts -- source files, test files, configuration files, documentation. Agents that produce and consume files integrate naturally with existing development tools like Git, CI/CD systems, and IDEs. You can layer message passing on top for coordination metadata without replacing the artifact-based core. Reference: Section 38.4
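Artifact exchange can be sketched as one agent writing a file that the next agent reads -- exactly what Git or a CI system would see. The file names and contents here are illustrative:

```python
import pathlib
import tempfile

workspace = pathlib.Path(tempfile.mkdtemp())    # shared working directory

def architect(task):
    # produce an artifact just as existing tooling expects: a file
    (workspace / "design.md").write_text(f"# Design\ntask: {task}\n")

def coder():
    design = (workspace / "design.md").read_text()  # consume upstream artifact
    (workspace / "app.py").write_text(f"# implements design:\n# {design!r}\n")

architect("login feature")
coder()
```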

Question 10

What is the purpose of context summarization in inter-agent communication?

  • A) To compress data for faster network transmission
  • B) To create role-appropriate summaries so downstream agents receive only the information relevant to their role
  • C) To encrypt sensitive content before sharing between agents
  • D) To convert code into natural language for non-technical agents
Answer: **B) To create role-appropriate summaries so downstream agents receive only the information relevant to their role.** Context summarization creates concise, targeted summaries of agent outputs for downstream consumption. For example, the architect's 5,000-word design document can be summarized differently for the coder (who needs interface definitions and constraints) versus the tester (who needs expected behaviors and edge cases). This keeps downstream agents focused on relevant information while managing context window limits. Reference: Section 38.4
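The coder-versus-tester example above can be sketched as role-keyed field selection. The document fields and role mappings are assumptions for illustration; a real system would summarize with a model rather than slice a dict:

```python
design_doc = {
    "interfaces": "POST /login -> {token}",
    "constraints": "rate-limit: 5 requests/min",
    "behaviors": "lock account after 5 failed attempts",
    "edge_cases": "empty password, unicode usernames",
    "rationale": "long background discussion ...",
}

# Each downstream role sees only the fields relevant to its work.
ROLE_FIELDS = {
    "coder": ("interfaces", "constraints"),
    "tester": ("behaviors", "edge_cases"),
}

def summarize_for(role, doc):
    return {field: doc[field] for field in ROLE_FIELDS[role]}

coder_view = summarize_for("coder", design_doc)
tester_view = summarize_for("tester", design_doc)
```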

Question 11

According to the chapter, how should conflicts between agents be viewed?

  • A) As bugs in the multi-agent system that need to be eliminated
  • B) As valuable information that reveals ambiguities, gaps, or unconsidered constraints
  • C) As a sign that agent system prompts are poorly written
  • D) As evidence that multi-agent systems are unreliable
Answer: **B) As valuable information that reveals ambiguities, gaps, or unconsidered constraints.** Conflicts are a feature, not a bug. When agents disagree, it often reveals an ambiguity in the requirements, a gap in the design, or a constraint that was not initially considered. A multi-agent system that never produces conflicts is probably not getting enough diverse perspectives. The goal is not to eliminate conflicts but to resolve them efficiently and learn from them. Reference: Section 38.5

Question 12

In the priority hierarchy conflict resolution strategy, which agent role typically has the highest priority?

  • A) Architect
  • B) Coder
  • C) Security
  • D) Reviewer
Answer: **C) Security.** In a typical priority hierarchy, security concerns take the highest priority because security issues can have catastrophic consequences. The full hierarchy shown in the chapter is: Security > Architect > Reviewer > Tester > Coder. However, this approach can be too rigid if applied blindly, so it is often combined with evidence-based resolution for complex conflicts. Reference: Section 38.5
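The hierarchy from the chapter can be encoded as an ordered list, with the highest-priority role's recommendation winning. The conflict contents are illustrative:

```python
# Ordering from the chapter: Security > Architect > Reviewer > Tester > Coder
PRIORITY = ["security", "architect", "reviewer", "tester", "coder"]

def resolve(conflict):
    # the recommendation from the highest-priority role wins
    winner = min(conflict, key=PRIORITY.index)
    return winner, conflict[winner]

winner, decision = resolve({
    "coder": "keep the unencrypted in-memory token cache",
    "security": "tokens must not be cached in plaintext",
})
```

As the answer notes, applying this rule blindly can be too rigid; a production system would fall back to evidence-based resolution or a mediator for complex disagreements.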

Question 13

What is the "Three-Strike Rule" in the context of multi-agent feedback loops?

  • A) An agent is permanently removed from the team after three total failures
  • B) Each agent gets three attempts to resolve feedback before escalating to a human
  • C) The pipeline must complete in three or fewer total iterations
  • D) A maximum of three conflicts are allowed per pipeline run
Answer: **B) Each agent gets three attempts to resolve feedback before escalating to a human.** The Three-Strike Rule prevents infinite loops while giving agents a fair chance to self-correct. If the coder cannot fix a failing test after three attempts, or the implementation cannot pass review after three rounds of feedback, the system escalates to a human. Bounding feedback loops is essential for practical workflow automation. Reference: Section 38.6
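The bounded loop can be sketched directly; `attempt_fix` and `verify` are hypothetical hooks standing in for the coder agent and the failing check:

```python
MAX_ATTEMPTS = 3   # the Three-Strike bound

def feedback_loop(attempt_fix, verify):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        candidate = attempt_fix(attempt)
        if verify(candidate):
            return {"status": "resolved", "attempts": attempt}
    # three strikes: hand off to a human instead of looping forever
    return {"status": "escalated_to_human", "attempts": MAX_ATTEMPTS}

# A fix that succeeds on the second try resolves; one that never
# succeeds escalates after three strikes.
resolved = feedback_loop(lambda n: f"patch v{n}", lambda p: p == "patch v2")
escalated = feedback_loop(lambda n: f"patch v{n}", lambda p: False)
```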

Question 14

What is cross-agent verification?

  • A) A technique where agents verify each other's credentials before communicating
  • B) Having one agent check another agent's work, leveraging their different perspectives
  • C) Running the same task through two identical agents and comparing outputs
  • D) A security measure that prevents unauthorized agent-to-agent communication
Answer: **B) Having one agent check another agent's work, leveraging their different perspectives.** Cross-agent verification is the most powerful quality assurance technique in multi-agent systems. Each agent brings a different perspective and different biases. For example, a verification agent can check that the coder's implementation matches every component, interface, and constraint in the architect's design -- catching drift that neither the architect nor coder would notice independently. Reference: Section 38.7
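The design-versus-implementation check mentioned above can be sketched as a simple presence test; a real verification agent would reason semantically rather than match substrings, and the component names here are made up:

```python
def verify_against_design(design_components, implementation):
    # the checker confirms every designed component appears in the code
    missing = [c for c in design_components if c not in implementation]
    return {"ok": not missing, "missing": missing}

design = ["login", "logout", "refresh_token"]
implementation = "def login(): ...\ndef logout(): ..."
report = verify_against_design(design, implementation)
```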

Question 15

What is the adversarial testing pattern?

  • A) Using one agent to generate deliberately malicious code to test security
  • B) Specifically designing a testing agent to find flaws in another agent's work, with success measured by bugs found rather than tests passed
  • C) Having two agents compete to produce the best solution
  • D) Running the pipeline against intentionally broken requirements
Answer: **B) Specifically designing a testing agent to find flaws in another agent's work, with success measured by bugs found rather than tests passed.** An adversarial tester is prompted to break the code rather than verify it. It specifically targets failure modes: null inputs, empty strings, extremely large inputs, Unicode characters, concurrent access, unavailable services. Its incentive structure is inverted -- it succeeds when code fails -- so it finds bugs that a standard tester would miss. Reference: Section 38.7
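The inverted incentive can be sketched with a deliberately fragile function and the failure-mode battery listed above (null, empty, whitespace, Unicode, huge inputs); everything here is illustrative:

```python
def normalize_username(name):
    return name.strip().lower()   # deliberately fragile: crashes on None

# failure modes from the answer, thrown at the target function
ADVERSARIAL_INPUTS = [None, "", " " * 50, "Ünïcode", "x" * 100_000]

bugs_found = []
for value in ADVERSARIAL_INPUTS:
    try:
        normalize_username(value)
    except Exception as exc:       # the adversarial tester *wants* this
        bugs_found.append(type(exc).__name__)
# inverted incentive: the tester succeeds when bugs_found is non-empty
```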

Question 16

What is the "defense in depth" concept as applied to multi-agent quality assurance?

  • A) Using progressively stronger encryption at each pipeline stage
  • B) Having multiple independent verification layers so that if one agent misses an issue, another catches it
  • C) Running the pipeline multiple times and accepting the most common output
  • D) Adding more agents to the team until zero bugs remain
Answer: **B) Having multiple independent verification layers so that if one agent misses an issue, another catches it.** Defense in depth uses multiple review layers, each focused on a different quality dimension (correctness, security, performance, maintainability). No single agent is responsible for all quality dimensions. The system's reliability comes from the combination of independent checks, just as security systems use multiple protection layers so that if one fails, others catch the threat. Reference: Section 38.7
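The layered structure can be sketched as independent checks, one per quality dimension. The checks here are crude string heuristics standing in for full agent reviews:

```python
# independent layers, each covering one quality dimension
LAYERS = {
    "correctness": lambda code: "TODO" not in code,
    "security": lambda code: "eval(" not in code,
    "maintainability": lambda code: len(code.splitlines()) < 500,
}

def run_layers(code):
    # every layer runs regardless of the others' results, so an issue
    # missed by one check can still be flagged by another
    return {name: check(code) for name, check in LAYERS.items()}

findings = run_layers("result = eval(user_input)")
```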

Question 17

What is the "coordination tax" in multi-agent systems?

  • A) The financial cost of running multiple AI agents simultaneously
  • B) The overhead of coordinating multiple agents, including messages, conflicts, context sharing, and monitoring
  • C) The API rate limit penalty for making too many concurrent requests
  • D) The time spent configuring agent system prompts
Answer: **B) The overhead of coordinating multiple agents, including messages, conflicts, context sharing, and monitoring.** Each additional agent adds coordination overhead: more messages to route, more potential conflicts to resolve, more context to share, and more failure points to monitor. This coordination tax follows a pattern familiar from human teams -- beyond a certain point, the overhead exceeds the benefit of the additional agent. For most tasks, the sweet spot is 3-5 agents. Reference: Section 38.8
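One common way to illustrate why the tax outpaces team size (the same combinatorial growth cited for human teams) is the count of potential pairwise communication channels, n(n-1)/2:

```python
def channels(n):
    # potential pairwise communication channels among n agents
    return n * (n - 1) // 2

growth = {n: channels(n) for n in (2, 3, 5, 8, 10)}
# 5 agents have 10 potential channels; 10 agents already have 45
```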

Question 18

What is the recommended sweet spot for agent team size for most development tasks?

  • A) 1-2 agents
  • B) 3-5 agents
  • C) 6-8 agents
  • D) 10+ agents
Answer: **B) 3-5 agents.** The relationship between team size and productivity follows a curve where small increases produce large improvements, but beyond a certain point, the coordination tax exceeds the benefit. For most development tasks, 3-5 agents provides the best balance of specialization benefits and coordination costs. Beyond that, you should use hierarchical teams or domain-based partitioning to manage complexity. Reference: Section 38.8

Question 19

What is the purpose of dynamic team composition?

  • A) To randomly assign agents to tasks for variety
  • B) To select agents based on task requirements, keeping team size small for simple tasks and scaling up for complex ones
  • C) To rotate agent roles between pipeline runs
  • D) To automatically upgrade agents to newer model versions
Answer: **B) To select agents based on task requirements, keeping team size small for simple tasks and scaling up for complex ones.** Dynamic team composition assembles the agent team based on what the specific task needs. A simple bug fix might only need a coder and tester. A feature touching security-sensitive code adds a security agent. A feature modifying the database adds a database agent. This keeps the coordination tax proportional to task complexity. Reference: Section 38.8
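The composition rules in the answer can be sketched as conditional team building; the task-attribute names are assumptions for illustration:

```python
def compose_team(task):
    team = ["coder", "tester"]                 # minimal core team
    if task.get("complexity") == "high":
        team.insert(0, "architect")            # complex work needs design
    if task.get("touches_security"):
        team.append("security")
    if task.get("touches_database"):
        team.append("database")
    return team

bugfix_team = compose_team({"complexity": "low"})
feature_team = compose_team({"complexity": "high", "touches_security": True})
```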

Question 20

Which resource management strategy involves using less expensive models for routine tasks and more capable models for complex tasks?

  • A) Token budgets
  • B) Concurrency limits
  • C) Model tiering
  • D) Caching
Answer: **C) Model tiering.** Model tiering assigns different quality (and cost) models to different agents based on task complexity. Routine tasks like linting and formatting can use less expensive models, while complex tasks like architecture design and security review use more capable models. This optimizes cost without sacrificing quality where it matters most. Reference: Section 38.8
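A tiering policy can be as simple as a role-to-tier lookup with a sensible default. The tier names are placeholders, not real model IDs:

```python
MODEL_TIERS = {
    "architect": "large",   # complex design work
    "security": "large",    # high-stakes review
    "coder": "medium",
    "tester": "medium",
    "linter": "small",      # routine, mechanical task
}

def model_for(role, default="medium"):
    return MODEL_TIERS.get(role, default)
```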

Question 21

What are the three categories of metrics to monitor in a multi-agent system?

  • A) Speed, cost, and accuracy
  • B) Agent-level metrics, pipeline-level metrics, and communication metrics
  • C) Input metrics, processing metrics, and output metrics
  • D) Model metrics, tool metrics, and user metrics
Answer: **B) Agent-level metrics, pipeline-level metrics, and communication metrics.** Agent-level metrics track individual agent performance (execution time, token usage, success rate). Pipeline-level metrics track overall workflow performance (total time, end-to-end success rate, cost per run). Communication metrics track inter-agent interactions (message volume, context sizes, conflict counts and resolution rates). Together they provide comprehensive observability. Reference: Section 38.9

Question 22

Why is structured logging important for multi-agent systems?

  • A) It is required by compliance regulations
  • B) It enables searchable, analyzable records of every agent action for debugging complex distributed interactions
  • C) It reduces storage costs compared to unstructured logs
  • D) It makes logs human-readable without any tooling
Answer: **B) It enables searchable, analyzable records of every agent action for debugging complex distributed interactions.** Multi-agent systems are significantly more complex than single agents. When something goes wrong, you need visibility into what each agent did, what it produced, how long it took, and where the failure occurred. Structured log entries with fields like agent_role, run_id, output_type, and conflict_type can be searched and analyzed systematically, making debugging feasible. Reference: Section 38.9
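A structured entry with the fields named above can be emitted as one JSON object per line, which log tooling can then filter and aggregate. This is a minimal sketch, not a logging framework:

```python
import json
import time

def log_event(agent_role, run_id, **fields):
    entry = {"ts": time.time(), "agent_role": agent_role,
             "run_id": run_id, **fields}
    print(json.dumps(entry, sort_keys=True))   # one JSON object per line
    return entry

event = log_event("tester", "run-42",
                  output_type="test_report", tests_failed=2)
```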

Question 23

What is a mediator agent?

  • A) An agent that translates between different programming languages
  • B) A dedicated agent that receives conflicting recommendations from two agents and produces an impartial resolution
  • C) An agent that mediates communication between the pipeline and external APIs
  • D) An agent that summarizes long documents for other agents
Answer: **B) A dedicated agent that receives conflicting recommendations from two agents and produces an impartial resolution.** A mediator agent is a conflict resolution strategy. When two agents disagree and automatic resolution (by severity or priority) does not apply, the mediator receives both positions with their evidence and context, then produces a resolution with rationale. The mediator must be impartial and consider both perspectives rather than defaulting to either position. Reference: Section 38.5
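The mediator's input contract -- both positions with their evidence, packaged into one impartial prompt -- can be sketched as follows. `mediator_model` and the positions are hypothetical stand-ins; a real mediator would be an LLM call:

```python
def mediate(position_a, position_b, mediator_model):
    # both sides and their evidence go to the mediator together,
    # so it cannot default to either position
    prompt = (
        "Resolve this conflict impartially, weighing both sides' "
        "evidence, and state your rationale.\n"
        f"A ({position_a['role']}): {position_a['claim']}"
        f" -- evidence: {position_a['evidence']}\n"
        f"B ({position_b['role']}): {position_b['claim']}"
        f" -- evidence: {position_b['evidence']}"
    )
    return mediator_model(prompt)

# Stub model for illustration: reports whether it saw both positions.
resolution = mediate(
    {"role": "coder", "claim": "keep the token cache",
     "evidence": "latency profile"},
    {"role": "security", "claim": "drop the plaintext cache",
     "evidence": "audit finding"},
    mediator_model=lambda p: {"saw_both": "coder" in p and "security" in p},
)
```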

Question 24

What principle should guide which tools each agent is allowed to access?

  • A) Maximum capability -- give every agent every tool for flexibility
  • B) Principle of least privilege -- each agent gets only the tools needed for its role
  • C) Shared access -- all agents share one universal tool set
  • D) Rotating access -- agents take turns using restricted tools
Answer: **B) Principle of least privilege -- each agent gets only the tools needed for its role.** Each agent should have access only to the tools required for its specific role. The architect gets file reading and design doc writing but not code editing. The coder gets file editing and linting but not test running. The tester gets test running but not code editing. This restriction reinforces role focus -- an agent that cannot edit code will produce clear feedback instead of trying to fix issues itself. Reference: Section 38.2
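The allow-lists described above can be sketched as per-role tool grants enforced before any tool call; the tool names are illustrative:

```python
TOOL_GRANTS = {
    "architect": {"read_file", "write_design_doc"},
    "coder": {"read_file", "edit_file", "run_linter"},
    "tester": {"read_file", "run_tests"},
}

def invoke_tool(role, tool, action):
    # enforce least privilege before the tool runs
    if tool not in TOOL_GRANTS.get(role, set()):
        raise PermissionError(f"{role} may not use {tool}")
    return action()

# A tester that cannot edit code must report findings instead of fixing.
report = invoke_tool("tester", "run_tests", lambda: "2 failures")
```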

Question 25

When building a production multi-agent pipeline, why is idempotency important?

  • A) It ensures all agents produce identical output every time
  • B) It prevents duplicated work if the pipeline is interrupted and restarted
  • C) It guarantees that the pipeline can run without an internet connection
  • D) It allows the pipeline to run on different operating systems
Answer: **B) It prevents duplicated work if the pipeline is interrupted and restarted.** If a pipeline is interrupted by a timeout, rate limit, or transient error and then restarted, idempotency ensures that each step executes at most once. Without it, the pipeline might re-run completed steps, producing duplicate artifacts or making redundant API calls. Checkpoints and artifact deduplication are the key mechanisms for achieving idempotency. Reference: Section 38.10
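The checkpoint mechanism can be sketched with an in-memory store standing in for a durable one: completed steps are recorded, and a restarted run skips them instead of re-executing:

```python
def run_with_checkpoints(steps, checkpoints):
    for name, step in steps:
        if name in checkpoints:        # already completed: skip on restart
            continue
        checkpoints[name] = step()     # record the checkpoint
    return checkpoints

calls = []   # tracks how many times each step actually executes
steps = [
    ("design", lambda: calls.append("design") or "design.md"),
    ("code", lambda: calls.append("code") or "app.py"),
]
store = run_with_checkpoints(steps, {})
store = run_with_checkpoints(steps, store)   # simulated restart: no re-runs
```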