Chapter 29: Quiz — DevOps and Deployment
Test your understanding of DevOps principles, containerization, CI/CD, cloud deployment, and monitoring.
Question 1
What is the primary purpose of Docker in a DevOps workflow?
A) To replace virtual machines entirely
B) To provide portable, reproducible runtime environments
C) To eliminate the need for testing
D) To manage source code repositories
Answer
**B) To provide portable, reproducible runtime environments** Docker packages an application with its entire runtime environment (OS libraries, language runtime, dependencies) into a container image. This ensures the application runs identically regardless of where the container is deployed, solving the classic "it works on my machine" problem.
Question 2
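For context, the ordering under discussion typically looks like this (base image, paths, and commands are illustrative):

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Dependency layer: copied and installed first
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source code layer: changes most often, so it comes last
COPY . .

CMD ["python", "app.py"]
```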
In a Dockerfile, why should `COPY requirements.txt .` and `RUN pip install` appear before `COPY . .`?
A) It is required by Docker syntax
B) It ensures requirements.txt is not overwritten
C) It optimizes layer caching so dependencies are not reinstalled when only source code changes
D) It prevents security vulnerabilities
Answer
**C) It optimizes layer caching so dependencies are not reinstalled when only source code changes** Docker caches each layer. If `requirements.txt` has not changed, the `pip install` layer is cached and reused, even if the application source code has changed. Placing `COPY . .` first would invalidate the cache for every code change, forcing a full dependency reinstall on every build.
Question 3
What is the key difference between Continuous Delivery and Continuous Deployment?
A) Continuous Delivery is faster
B) Continuous Delivery requires manual approval before production deployment; Continuous Deployment is fully automatic
C) Continuous Deployment does not require testing
D) They are the same thing
Answer
**B) Continuous Delivery requires manual approval before production deployment; Continuous Deployment is fully automatic** In Continuous Delivery, every commit that passes CI is *ready* to deploy but a human decides when to deploy. In Continuous Deployment, every passing commit is automatically deployed to production without human intervention.
Question 4
Which of the following is NOT one of the Four Golden Signals of monitoring?
A) Latency
B) Traffic
C) Code coverage
D) Saturation
Answer
**C) Code coverage** The Four Golden Signals are Latency, Traffic, Errors, and Saturation. Code coverage is a testing metric, not a production monitoring signal. These signals were defined in Google's Site Reliability Engineering book.
Question 5
What is a multi-stage Docker build?
A) A Dockerfile that builds multiple applications
B) A build process that uses multiple FROM statements to create intermediate stages, keeping build tools out of the final image
C) A Docker build that runs on multiple machines
D) A build that requires multiple Docker Compose files
Answer
**B) A build process that uses multiple `FROM` statements to create intermediate stages, keeping build tools out of the final image** Multi-stage builds use a first stage (with build tools, compilers, etc.) to compile or install dependencies, then copy only the necessary artifacts into a clean, minimal final stage. This reduces image size and attack surface.
Question 6
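For context, a minimal Terraform configuration (resource type and names are illustrative):

```hcl
# main.tf: declares the desired state of the infrastructure
resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs"   # illustrative bucket name
}
```

Running `terraform plan` against this file compares it to the recorded state and prints the actions Terraform would take; `terraform apply` actually performs them.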
In the context of Infrastructure as Code, what does `terraform plan` do?
A) Deploys the infrastructure immediately
B) Destroys all existing infrastructure
C) Shows a preview of changes that will be made without applying them
D) Initializes the Terraform project
Answer
**C) Shows a preview of changes that will be made without applying them** `terraform plan` compares the desired state (defined in `.tf` files) with the current state of the infrastructure and displays what actions Terraform would take (create, modify, destroy) without actually making any changes. This allows you to review changes before applying them.
Question 7
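For context, a minimal structured-logging helper in Python (field names are illustrative; real projects typically use a library such as structlog):

```python
import json
from datetime import datetime, timezone

def log_event(level, message, **fields):
    """Build one machine-parseable log record and emit it as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,
    }
    print(json.dumps(record))
    return record

# A consistent, filterable record instead of free-form text
log_event("error", "payment failed", order_id=1234, retries=3)
```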
What is structured logging?
A) Logging messages in alphabetical order
B) Writing log entries as machine-parseable records (typically JSON) with consistent fields
C) Logging only error messages
D) Using a specific file naming convention for log files
Answer
**B) Writing log entries as machine-parseable records (typically JSON) with consistent fields** Structured logging outputs log entries in a consistent, machine-readable format (usually JSON) with predefined fields (timestamp, level, message, etc.). This makes logs searchable, filterable, and analyzable by log aggregation tools, unlike free-form text logs.
Question 8
In a blue-green deployment, what happens during a rollback?
A) The database is restored from backup
B) The application code is reverted in Git
C) Traffic is switched back from the new (green) environment to the old (blue) environment
D) All containers are restarted
Answer
**C) Traffic is switched back from the new (green) environment to the old (blue) environment** In blue-green deployment, both environments remain running after a switch. Rollback is simply re-routing traffic back to the previous environment at the load balancer level, which is nearly instantaneous and does not require redeploying anything.
Question 9
Which of the following should NEVER be committed to a Git repository?
A) Dockerfile
B) docker-compose.yml
C) .env file containing production database credentials
D) GitHub Actions workflow file
Answer
**C) .env file containing production database credentials** Production credentials, API keys, and other secrets must never be committed to source control. They should be managed through secrets management tools (AWS Secrets Manager, GitHub Secrets, etc.) and injected as environment variables at runtime. The `.env` file should be listed in `.gitignore`.
Question 10
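For context, a framework-agnostic sketch of such an endpoint (the `check_database` dependency probe is hypothetical):

```python
def health_check(check_database=lambda: True):
    """Return an HTTP-style (status_code, body) pair for a /health endpoint."""
    try:
        db_ok = check_database()
    except Exception:
        db_ok = False
    if db_ok:
        return 200, {"status": "ok"}
    # A 503 tells the load balancer to stop sending traffic to this instance
    return 503, {"status": "unhealthy", "database": "unreachable"}
```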
What is the purpose of a health check endpoint in a production application?
A) To display the application version to users
B) To allow load balancers and orchestrators to determine if the application is functioning correctly
C) To run automated tests in production
D) To reset the application when it crashes
Answer
**B) To allow load balancers and orchestrators to determine if the application is functioning correctly** Health check endpoints are used by load balancers, container orchestrators (Kubernetes, Docker Swarm), and monitoring systems to determine whether an application instance can receive traffic. If a health check fails, the instance can be removed from rotation or restarted.
Question 11
What is the expand-contract pattern in database migrations?
A) A pattern where you expand the database to multiple servers and then contract back to one
B) A multi-step approach: add new schema elements, migrate data, then remove old elements across separate deployments
C) A pattern for compressing database storage
D) A method of scaling database connections up and down
Answer
**B) A multi-step approach: add new schema elements, migrate data, then remove old elements across separate deployments** The expand-contract pattern handles schema changes safely by: (1) expanding the schema (adding new columns/tables alongside old ones), (2) deploying code that writes to both old and new structures, (3) migrating existing data, (4) deploying code that uses only new structures, and (5) contracting (removing old columns/tables). This avoids the need for a single risky migration.
Question 12
In a canary deployment, what does "canary" refer to?
A) A yellow warning indicator
B) A small percentage of traffic routed to a new version to test it before full rollout
C) A deployment that happens at dawn
D) A backup deployment environment
Answer
**B) A small percentage of traffic routed to a new version to test it before full rollout** Named after the "canary in a coal mine" practice, a canary deployment routes a small fraction of traffic (e.g., 5%) to the new version while most traffic continues to the old version. If the canary shows problems (errors, latency), the rollout is stopped and traffic is routed entirely to the old version.
Question 13
Which tool is commonly used for container orchestration in production environments?
A) Docker Compose
B) Kubernetes
C) Git
D) Terraform
Answer
**B) Kubernetes** Kubernetes is the industry-standard container orchestration platform for production environments. It handles scheduling containers across a cluster, scaling, self-healing, load balancing, and rolling deployments. Docker Compose is used mainly for local development and simple deployments. Terraform manages infrastructure provisioning but not container orchestration. Git is for source code management.
Question 14
What is the Twelve-Factor App methodology's recommendation for configuration?
A) Store configuration in XML files within the application
B) Store configuration in environment variables, strictly separated from code
C) Hardcode configuration for each environment
D) Use a configuration database
Answer
**B) Store configuration in environment variables, strictly separated from code** The Twelve-Factor App methodology (Factor III: Config) requires strict separation of configuration from code. Configuration that varies between environments (database URLs, API keys, feature flags) should be stored in environment variables, not in code, config files committed to the repo, or build artifacts.
Question 15
What is alert fatigue?
A) When monitoring servers run out of memory
B) When too many non-actionable alerts cause operators to ignore or miss real incidents
C) When alert notifications are delivered too slowly
D) When alerts are not configured at all
Answer
**B) When too many non-actionable alerts cause operators to ignore or miss real incidents** Alert fatigue occurs when operators receive so many alerts — especially false positives and non-actionable notifications — that they become desensitized and start ignoring them. This can cause real incidents to be missed. The remedy is to ensure every alert is actionable and requires a human response.
Question 16
What does the `USER` instruction in a Dockerfile do?
A) Creates a new Linux user account
B) Sets the username for Docker Hub authentication
C) Specifies which user subsequent instructions and the container process should run as
D) Adds a user to the Docker group
Answer
**C) Specifies which user subsequent instructions and the container process should run as** The `USER` instruction sets the user (and optionally group) for subsequent `RUN`, `CMD`, and `ENTRYPOINT` instructions, as well as the runtime user when the container starts. Running as a non-root user is a security best practice that limits the damage if the container is compromised.
Question 17
Which PaaS platform allows you to deploy by simply pushing to a Git remote?
A) AWS EC2
B) Heroku
C) Terraform Cloud
D) Prometheus
Answer
**B) Heroku** Heroku popularized the Git-push deployment model. Running `git push heroku main` triggers Heroku to detect the application type, install dependencies, build the application, and deploy it automatically. Other PaaS platforms like Railway and Render have adopted similar Git-based deployment workflows.
Question 18
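For context, a typical `.dockerignore` file (entries are illustrative):

```
.git
.env
__pycache__/
node_modules/
*.pyc
```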
What is the purpose of a `.dockerignore` file?
A) To list Docker images that should not be pulled
B) To exclude files and directories from the Docker build context
C) To specify which containers should not be started
D) To prevent Docker from being installed
Answer
**B) To exclude files and directories from the Docker build context** The `.dockerignore` file tells Docker which files and directories to exclude when sending the build context to the Docker daemon. This reduces build time and image size, and prevents accidentally including sensitive files (`.git`, `.env`, `node_modules`, `__pycache__`).
Question 19
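For context, a minimal workflow sketch (job names and steps are illustrative):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt && pytest
  deploy:
    needs: [test]          # the keyword this question asks about
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh   # hypothetical deployment script
```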
In GitHub Actions, what does the `needs` keyword do?
A) Specifies required environment variables
B) Lists packages that need to be installed
C) Defines job dependencies, ensuring one job completes before another starts
D) Requests additional compute resources
Answer
**C) Defines job dependencies, ensuring one job completes before another starts** The `needs` keyword creates a dependency between jobs. A job with `needs: [test]` will only run after the `test` job completes successfully. This enables you to create pipelines where build only happens after tests pass, and deployment only happens after the build succeeds.
Question 20
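For context, how the two checks are declared in a Kubernetes container spec (paths and timings are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```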
What is the difference between a liveness check and a readiness check?
A) There is no difference
B) A liveness check determines if the application process is running; a readiness check determines if it can serve traffic
C) A liveness check runs once at startup; a readiness check runs continuously
D) A liveness check is for databases; a readiness check is for web servers
Answer
**B) A liveness check determines if the application process is running; a readiness check determines if it can serve traffic** A liveness check answers "is the process alive?" — if it fails, the container should be restarted. A readiness check answers "can this instance handle requests?" — if it fails, the instance should be removed from the load balancer until it recovers (perhaps it is still starting up or waiting for a database connection).
Question 21
What is the primary advantage of serverless deployment (e.g., AWS Lambda)?
A) Lower latency than containers
B) No servers to manage; automatic scaling to zero when not in use
C) Better security than containers
D) Supports all programming languages
Answer
**B) No servers to manage; automatic scaling to zero when not in use** Serverless platforms handle all infrastructure management. You pay only for the compute time your code actually uses. When there are no requests, the service scales to zero and costs nothing. The tradeoffs are cold-start latency, execution time limits, and less control over the runtime environment.
Question 22
What is the ELK stack?
A) A programming language framework
B) Elasticsearch, Logstash, and Kibana — a log aggregation and analysis platform
C) A container orchestration tool
D) A cloud provider's managed database service
Answer
**B) Elasticsearch, Logstash, and Kibana — a log aggregation and analysis platform** The ELK stack is a popular open-source solution for centralized logging. Elasticsearch stores and indexes log data. Logstash (or Fluentd in the "EFK" variant) collects and processes logs from multiple sources. Kibana provides a web interface for searching, visualizing, and analyzing logs.
Question 23
Why should you store Terraform state remotely?
A) To make builds faster
B) To enable team collaboration, prevent state corruption, and provide locking to prevent concurrent modifications
C) To reduce cloud costs
D) Because local state files are not supported
Answer
**B) To enable team collaboration, prevent state corruption, and provide locking to prevent concurrent modifications** Remote state storage (e.g., in S3 with DynamoDB locking) ensures all team members work with the same state, prevents the state file from being lost (if stored only on a developer's laptop), and provides locking so two people cannot run `terraform apply` simultaneously and corrupt the infrastructure.
Question 24
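For context, a sketch of the mechanism in Python (the header name and logger wiring are illustrative, not a specific library's API):

```python
import json
import uuid

def handle_request(headers, handler):
    """Assign (or propagate) one correlation ID and stamp it on every log line."""
    # Reuse an incoming ID from an upstream service, or mint a new one
    cid = headers.get("X-Request-ID") or str(uuid.uuid4())

    def log(message):
        # Every log entry for this request carries the same ID
        print(json.dumps({"correlation_id": cid, "message": message}))

    log("request received")
    result = handler()
    log("request completed")
    return cid, result
```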
What is a correlation ID in the context of logging?
A) A database primary key
B) A unique identifier assigned to each request that is included in all related log entries across services
C) A metric that correlates CPU usage with memory
D) An ID used to correlate alerts with incidents
Answer
**B) A unique identifier assigned to each request that is included in all related log entries across services** A correlation ID (also called a request ID or trace ID) is generated when a request enters the system and propagated through all services that handle that request. By filtering logs by this ID, you can see the complete journey of a single request across multiple services, making debugging significantly easier.
Question 25
An AI assistant generated a deployment script for your production application. Which of the following actions should you take FIRST?
A) Run it immediately in production
B) Review the script to understand what it does, verify it matches your infrastructure, and test it in a staging environment
C) Ask the AI to generate a different version
D) Commit it to Git without reviewing
Answer
**B) Review the script to understand what it does, verify it matches your infrastructure, and test it in a staging environment** AI-generated deployment scripts should be treated like any other unreviewed code: read every command, confirm that resource names, regions, and credentials actually match your infrastructure, and rehearse the run in a staging environment before it touches production. Only after that review should the script be committed and promoted through the normal pipeline.