Learning Objectives
- Describe the phases of the software development lifecycle (SDLC)
- Compare development methodologies: Waterfall, Agile, Scrum
- Write a project specification and break it into manageable tasks
- Apply code review practices to improve code quality
- Understand technical debt and how to manage it
In This Chapter
- Chapter Overview
- 26.1 From Idea to Software (the SDLC Overview)
- 26.2 Waterfall Model
- 26.3 Agile Development
- 26.4 Scrum Framework
- 26.5 Writing a Project Specification
- 26.6 Code Review
- 26.7 Technical Debt
- 26.8 Documentation
- 26.9 Introduction to CI/CD
- 26.10 Project Checkpoint: TaskFlow v2.5
- Chapter Summary
- What's Next
Chapter 26: Software Development Lifecycle
"Plans are worthless, but planning is everything." — Dwight D. Eisenhower
Chapter Overview
You know how to write Python. You can design classes, write tests, handle errors, organize code into modules, and track changes with Git. If this were a cooking analogy, you've mastered individual techniques — you can sear, braise, bake, and plate. But nobody walks into a restaurant kitchen and improvises a five-course dinner for 200 guests. There's a menu, a prep schedule, a station assignment, a service plan, and a post-dinner debrief. The process of running a kitchen is at least as important as any individual cooking skill.
Software development works the same way. The code you write is only one piece of a much larger process that starts long before anyone opens an editor and continues long after the first version ships. That process has a name: the software development lifecycle (SDLC). And understanding it is the difference between someone who can write code and someone who can build software.
This chapter is about the professional process — the practices, methodologies, and habits that teams use to turn ideas into working software without losing their minds along the way. Some of these concepts will feel obvious. Others will make you wonder why anyone would bother. By the time you're working on a real team, every one of them will have saved you at least once.
In this chapter, you will learn to:
- Explain the phases of the software development lifecycle
- Compare Waterfall and Agile methodologies and evaluate which fits a given project
- Write user stories and break a project into sprint-sized tasks
- Conduct and participate in effective code reviews
- Recognize, measure, and manage technical debt
- Write documentation that people actually read
- Understand the basics of continuous integration and deployment
🏃 Fast Track: If you're primarily focused on coding skills, sections 26.6 (Code Review) and 26.8 (Documentation) will have the most immediate practical value. Skim the methodology sections (26.2-26.4) for vocabulary and come back when you're working on a team.
🔬 Deep Dive: The case studies for this chapter walk through a real sprint and examine technical debt disasters. Both are worth reading if you're interested in software engineering as a career.
26.1 From Idea to Software (the SDLC Overview)
Every piece of software — from a weekend hack to a system that runs air traffic control — goes through the same basic phases, whether the team recognizes them or not:
- Requirements: What should the software do? Who is it for? What problem does it solve?
- Design: How should we build it? What components do we need? How will they fit together?
- Implementation: Write the code.
- Testing: Does it work? Does it handle edge cases? Does it do what the requirements said?
- Deployment: Get it in front of users.
- Maintenance: Fix bugs, add features, keep it running.
These phases aren't always sequential. They're not always formal. And they definitely aren't always separate — on a small project, "requirements" might be a sticky note and "deployment" might be emailing a script to your friend. But the activities happen regardless, whether you plan for them or ignore them.
💡 Intuition Builder: Think about TaskFlow. Over the past 25 chapters, you've been doing SDLC phases without naming them. Chapter 1 was requirements ("we need a task manager"). Chapter 14 was design (choosing OOP over procedural). Every chapter was implementation and testing. Chapter 25 was deployment preparation (Git, README). You've been living the lifecycle.
The key insight is that implementation — actually writing code — is usually the shortest phase. In professional software development, more time is spent on requirements, design, testing, and maintenance than on typing code into an editor. The industry joke is that writing the first version takes 10% of the effort; maintaining it takes the other 90%.
What Makes SDLC Methodologies Different
The six phases above are universal. What differs between methodologies is how you move through them:
- Do you finish all requirements before writing any code? Or do you figure out requirements as you go?
- Do you test at the end or test continuously?
- Do you plan everything up front or adapt as you learn?
These aren't trivial questions. Companies have risen and fallen based on how they answered them. Let's look at the two most influential approaches.
26.2 Waterfall Model
The Waterfall model is the oldest formal software development methodology, first described by Winston Royce in 1970. It works exactly like it sounds: progress flows in one direction, like water over a cliff, from one phase to the next.
Requirements → Design → Implementation → Testing → Deployment → Maintenance
Each phase must be completed before the next begins. Requirements are documented exhaustively. Design is specified in detail. Only then does coding start. Testing happens after all code is written. Deployment happens after all tests pass.
When Waterfall Works
Waterfall gets a bad reputation in modern software circles, but it's not inherently bad. It works well when:
- Requirements are stable and well-understood. Building a payroll system where every calculation is defined by tax law? Waterfall is reasonable — the requirements aren't going to change mid-project because the government published them.
- The cost of change is very high. Embedded software for a medical device or a satellite can't be updated after deployment. You need to get it right the first time, which means thorough up-front planning.
- Regulatory compliance requires documentation. Some industries (healthcare, aviation, finance) require evidence that you completed each phase. Waterfall's sequential nature makes this straightforward.
- The project is small and well-defined. A one-person script that converts CSV files to JSON doesn't need Agile ceremonies. Plan it, build it, test it, ship it.
When Waterfall Fails
Waterfall breaks down when any of these assumptions are wrong:
- Requirements change. If you spend three months gathering requirements and then the market shifts, your competitor launches a similar product, or your users realize they actually need something different — you've wasted three months. Waterfall has no good mechanism for absorbing change.
- You don't understand the problem well. If you're building something novel — a product no one has built before — you literally can't write complete requirements up front because you don't know what you don't know yet.
- The project is long. The longer a Waterfall project runs, the greater the risk that the world changes before you ship.
- Feedback comes too late. In Waterfall, users don't see the software until the deployment phase. If what you built isn't what they needed, you've spent the entire budget before discovering the problem.
⚠️ Common Pitfall: Don't dismiss Waterfall as "the old way that doesn't work." Many successful systems — including the software that runs the Space Shuttle, nuclear power plants, and banking infrastructure — were built using Waterfall or similar plan-driven processes. The question isn't whether Waterfall works; it's whether it works for your project.
26.3 Agile Development
In 2001, seventeen software developers met at a ski resort in Utah and wrote the Agile Manifesto, a one-page document that changed how most of the industry builds software. The core values:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Note the careful phrasing: "over" doesn't mean "instead of." The manifesto acknowledges that processes, documentation, contracts, and plans have value — but when forced to choose, the items on the left matter more.
The Agile Approach
Instead of completing all requirements before writing any code, Agile teams work in short iterations (usually 1-4 weeks). Each iteration produces a small, working piece of software that real users can see and react to. The team then adjusts based on feedback and starts the next iteration.
This means:
- Requirements are discovered gradually, not defined completely up front
- Design evolves as the team learns more about the problem
- Testing happens continuously, not at the end
- Users see working software early and often
- Plans are updated regularly based on reality
User Stories
Agile teams capture requirements as user stories — short, plain-language descriptions of what a user needs to accomplish. The standard format is:
As a [type of user], I want to [do something] so that [reason/benefit].
For example:
As a TaskFlow user, I want to set a due date on a task
so that I can see which tasks are overdue.
As a team lead, I want to view all tasks assigned to a team member
so that I can balance the workload.
User stories are deliberately small. If a story can't be completed in a single iteration, it's too big and needs to be split. This forces the team to break work into genuinely manageable pieces — the same decomposition skill you learned in Chapter 1, applied to project management.
🔗 Connection to Chapter 14: Remember the Single Responsibility Principle from OOP? Each class should have one reason to change. User stories apply the same thinking at the project level — each story describes one thing the user needs to do, and nothing else.
Sprints
A sprint is a fixed-length iteration, typically one or two weeks. At the start of each sprint, the team picks a set of user stories from the backlog (the prioritized list of all remaining stories) and commits to completing them during the sprint.
At the end of the sprint:
- Completed work is demonstrated to stakeholders
- The team reflects on what went well and what didn't
- The backlog is re-prioritized based on what they learned
- The next sprint begins
The MVP Concept
A central idea in Agile is the MVP — Minimum Viable Product. Instead of building the entire application before releasing anything, you build the smallest version that delivers value and release it. Then you improve it based on real user feedback.
Think about TaskFlow's evolution:
- v0.1 was an MVP — it just greeted the user. Not useful, but functional.
- v0.5 added core functionality — add, list, delete. That's a real MVP.
- Every version since then has been an iteration adding features based on what was missing.
You've been building in an Agile-like way this entire course without calling it that.
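The MVP idea maps directly onto code. Here is a minimal sketch of what a v0.5-style MVP might look like; the function names are illustrative, not the actual TaskFlow implementation from earlier chapters:

```python
# An illustrative MVP sketch: add, list, delete, and nothing more.
# (Hypothetical names, not the real TaskFlow code.)

tasks: list[str] = []

def add_task(title: str) -> None:
    """Add a task to the end of the list."""
    tasks.append(title)

def list_tasks() -> list[str]:
    """Return numbered task lines for display."""
    return [f"{i}. {title}" for i, title in enumerate(tasks, start=1)]

def delete_task(number: int) -> None:
    """Delete a task by its 1-based display number."""
    del tasks[number - 1]

add_task("Buy groceries")
add_task("Write report")
delete_task(1)
print(list_tasks())  # ['1. Write report']
```

Everything else — priorities, persistence, search, rich output — is an iteration layered on top of a core this small.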
Elena's Team Adopts Agile
Elena Vasquez's nonprofit has grown. She's no longer the only person working on the report automation tool — her manager hired two junior developers, and they need a way to coordinate.
Under the old approach (informal Waterfall), Elena would have written a detailed specification, assigned tasks to each developer, and hoped everything came together at the end. Instead, they adopt Agile:
- Week 1: The team writes 30 user stories on index cards. They prioritize them: the donor report generator is urgent, the volunteer scheduling feature can wait.
- Sprint 1 (2 weeks): Each developer picks 2-3 stories. They meet for 15 minutes every morning to report progress and flag blockers. At the end of the sprint, they demo working features to Elena's manager.
- Adjustment: The manager's feedback reveals that the donor report needs a feature nobody anticipated — filtering by donation amount range. They add a new story and prioritize it for Sprint 2.
- Sprint 2: The unexpected requirement gets built. If they'd been using Waterfall, this change would have required revising the specification, re-approving the design, and potentially delaying the entire project.
The result isn't perfect — Agile never is. But the team delivers working software every two weeks, adapts to changes quickly, and the manager feels involved in the process instead of waiting months for a big reveal.
26.4 Scrum Framework
Scrum is the most widely used implementation of Agile. It adds specific roles, ceremonies, and artifacts to the Agile philosophy.
Scrum Roles
| Role | Responsibility |
|---|---|
| Product Owner | Decides what to build. Manages the backlog, prioritizes stories, represents the user/customer. |
| Scrum Master | Facilitates the process. Removes obstacles, runs ceremonies, protects the team from distractions. Not a manager — more like a coach. |
| Development Team | The people who build the software. Self-organizing, cross-functional (ideally includes design, development, testing). Typically 3-9 people. |
Scrum Ceremonies
| Ceremony | When | Duration | Purpose |
|---|---|---|---|
| Sprint Planning | Start of sprint | 1-2 hours | Team selects stories from backlog, estimates effort, commits to sprint goals |
| Daily Standup | Every day | 15 minutes | Each person answers: What did I do yesterday? What will I do today? What's blocking me? |
| Sprint Review | End of sprint | 1 hour | Demo completed work to stakeholders; gather feedback |
| Sprint Retrospective | End of sprint | 1 hour | Team reflects: What went well? What didn't? What will we change? |
Scrum Artifacts
- Product Backlog: The master list of everything the product needs, prioritized by the Product Owner. Living document that changes constantly.
- Sprint Backlog: The subset of stories the team committed to for this sprint.
- Increment: The working software produced at the end of each sprint. Must be "done" — tested, documented, potentially shippable.
📊 Comparison Table: Waterfall vs. Agile
| Dimension | Waterfall | Agile / Scrum |
|---|---|---|
| Planning | Extensive up-front planning | Just enough planning for the next sprint |
| Requirements | Fixed at the start | Evolve continuously |
| Delivery | One big release at the end | Working software every 1-4 weeks |
| User feedback | After deployment | After every sprint |
| Change | Resisted (scope change = risk) | Welcomed (change = learning) |
| Testing | Phase at the end | Continuous throughout |
| Documentation | Comprehensive, formal | Lightweight, just enough |
| Team structure | Specialized roles (analyst, designer, coder, tester) | Cross-functional teams |
| Risk | Discovered late (big surprises) | Discovered early (small surprises) |
| Best for | Stable, well-understood, regulated domains | Uncertain, evolving, user-facing products |
| Cost of change | High (rework entire phase) | Low (adjust next sprint) |
| Project visibility | Low until late phases | High throughout |

💡 Intuition Builder: Neither methodology is universally superior. Choosing between them is itself an evaluate-level skill — you need to assess the project's constraints, the team's capabilities, and the level of uncertainty before deciding. Many real teams use a hybrid: Agile for feature development, Waterfall-like processes for compliance and documentation.
26.5 Writing a Project Specification
Whether you're using Waterfall or Agile, you need to capture what you're building before you build it. In Waterfall, that's a formal specification document. In Agile, it's a collection of user stories and acceptance criteria. Either way, the goal is the same: make sure everyone agrees on what "done" looks like.
What a Good Specification Includes
- Project overview: One paragraph describing what the software does and who it's for.
- Goals and non-goals: What the project will do and what it explicitly won't do. Non-goals are surprisingly important — they prevent scope creep.
- User stories or requirements: The specific things users need to accomplish.
- Acceptance criteria: How you'll know each requirement is met. These become your test cases.
- Technical constraints: Language, frameworks, performance requirements, compatibility.
- Timeline and milestones: When things need to be done.
- Open questions: Things you don't know yet. Admitting ignorance up front is a sign of maturity, not weakness.
User Story Format (Expanded)
A complete user story includes acceptance criteria:
User Story: Task Due Dates
As a TaskFlow user, I want to set a due date on a task
so that I can see which tasks are overdue.
Acceptance Criteria:
- [ ] User can optionally add a due date when creating a task
- [ ] Due date is stored in ISO format (YYYY-MM-DD)
- [ ] Tasks past their due date are displayed with an "OVERDUE" label
- [ ] User can filter to show only overdue tasks
- [ ] Tasks with no due date are never shown as overdue
Each acceptance criterion is testable. You can write a pytest test for every single one. That's the point — acceptance criteria bridge the gap between "what the user wants" and "what the code needs to do."
🔗 Connection to Chapter 13: Remember TDD? Acceptance criteria are essentially test cases written in plain English. In a well-run project, writing the acceptance criteria is the first step of writing the tests.
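To make that bridge concrete, here is a sketch of the last two acceptance criteria expressed as pytest tests. The `is_overdue` helper is hypothetical; your TaskFlow may structure due-date checks differently:

```python
from datetime import date

def is_overdue(task: dict, today: date) -> bool:
    """Hypothetical helper: a task is overdue only if it has a
    due date and that date is strictly before today."""
    due = task.get("due_date")  # ISO string (YYYY-MM-DD) or None
    if due is None:
        return False
    return date.fromisoformat(due) < today

def test_past_due_task_is_overdue():
    task = {"title": "File taxes", "due_date": "2025-03-01"}
    assert is_overdue(task, today=date(2025, 3, 15))

def test_task_with_no_due_date_is_never_overdue():
    task = {"title": "Someday project", "due_date": None}
    assert not is_overdue(task, today=date(2025, 3, 15))
```

Each criterion becomes one small, named test — which is exactly why small, testable acceptance criteria are worth the effort to write.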
26.6 Code Review
Code review is the practice of having another developer read your code before it's merged into the main codebase. It's one of the highest-value practices in professional software development, and it serves multiple purposes:
- Catching bugs. A second pair of eyes catches mistakes you're blind to.
- Knowledge sharing. Reviewers learn about parts of the codebase they haven't worked on.
- Maintaining quality. Reviews enforce consistent style, patterns, and practices.
- Mentoring. Junior developers learn from senior developers' feedback; senior developers learn from junior developers' fresh perspectives.
What to Look For in a Code Review
When reviewing someone else's code, focus on these areas:
Correctness:
- Does the code do what the user story/requirement asks for?
- Are there edge cases that aren't handled?
- Could any inputs cause crashes or unexpected behavior?

Design:
- Is the code in the right place? (Right module, right class, right function)
- Are functions and classes doing one thing (Single Responsibility Principle)?
- Is there unnecessary complexity?

Readability:
- Are names clear and descriptive?
- Would someone unfamiliar with this code understand it?
- Are there comments where the code isn't self-explanatory?

Testing:
- Are there tests for the new functionality?
- Do the tests cover edge cases?
- Do existing tests still pass?

Style:
- Does the code follow the project's conventions?
- Is formatting consistent?
The Text Adventure Team's Code Review
The Crypts of Pythonia team has been building features independently. Marcus wrote a new combat system, and Priya is reviewing his pull request. Here's what the review looks like:
Marcus's code (before review):
```python
def fight(p, m):
    while p.hp > 0 and m.hp > 0:
        dmg = p.atk - m.defense
        if dmg < 0:
            dmg = 0
        m.hp = m.hp - dmg
        if m.hp <= 0:
            break
        dmg2 = m.atk - p.defense
        if dmg2 < 0:
            dmg2 = 0
        p.hp = p.hp - dmg2
    if p.hp > 0:
        return True
    return False
```
Priya's review comments:
- Line 1: The function name `fight` is okay, but the parameter names `p` and `m` are unclear. Use `player` and `monster` so the code reads like prose.
- Lines 3-5 and 8-10: The damage calculation with floor-at-zero is repeated. Extract it into a helper function `calculate_damage(attacker, defender)` so the logic exists in one place. (DRY principle from Chapter 6.)
- Line 6: Using `m.hp = m.hp - dmg` modifies the monster object directly. Is that intentional? If we want to support "preview combat" or undo, we'd need immutable state. For now it's fine, but add a comment noting the mutation.
- Lines 12-13: `return p.hp > 0` is cleaner than the if/else.
- General: No docstring. What does this function return? What does `True` mean — player won? Add a docstring. Also, no tests. Can you add tests for: player wins, monster wins, player with zero attack, monster with defense higher than player's attack?
Marcus's code (after review):
```python
def calculate_damage(attacker, defender):
    """Calculate damage dealt, floored at zero."""
    raw_damage = attacker.attack - defender.defense
    return max(0, raw_damage)

def resolve_combat(player, monster):
    """Simulate turn-based combat between player and monster.

    Modifies player.hp and monster.hp in place.

    Returns:
        True if the player survives, False otherwise.
    """
    while player.hp > 0 and monster.hp > 0:
        # Player attacks first
        damage_to_monster = calculate_damage(player, monster)
        monster.hp -= damage_to_monster
        if monster.hp <= 0:
            break
        # Monster retaliates
        damage_to_player = calculate_damage(monster, player)
        player.hp -= damage_to_player
    return player.hp > 0
```
The improved version is longer, but dramatically more readable, testable, and maintainable. That's what code review buys you.
Giving and Receiving Feedback
Code review is a social skill as much as a technical one:
As a reviewer:
- Critique the code, not the person. "This function is hard to follow" not "You wrote confusing code."
- Ask questions instead of making demands. "Could this be simplified by...?" invites collaboration.
- Acknowledge what's done well. "Nice use of max() here — that's cleaner than the if/else" costs nothing and builds trust.
- Distinguish between "must fix" (bugs, missing tests) and "nice to have" (style preferences). Label them clearly.
As the author:
- Don't take feedback personally. The reviewer is improving the code, not judging you.
- Assume good intent. If a comment seems harsh, it's probably just tersely written — reviewers are busy.
- Push back when you disagree, but explain your reasoning. "I chose X because of Y" is productive; "I like it this way" is not.
🔗 Connection to Chapter 25: Pull requests (PRs) on GitHub are the standard vehicle for code review. You create a branch, push your changes, open a PR, and your teammates review the diff before it gets merged into `main`. The PR conversation thread is where reviews happen.
26.7 Technical Debt
Technical debt is a metaphor coined by Ward Cunningham (one of the creators of the Agile Manifesto) to describe the accumulated cost of shortcuts, hacks, and "good enough for now" decisions in a codebase.
The metaphor works like financial debt:
- Taking on debt: You write quick-and-dirty code to meet a deadline. The feature works, but the code is fragile, hard to test, or poorly structured.
- Interest payments: Every time someone needs to modify that code, it takes longer than it should because of the mess. That's the "interest" — ongoing time wasted.
- Paying down principal: Refactoring the messy code into clean code is "paying off the debt." It costs time now but saves time forever after.
Types of Technical Debt
| Type | Example | Cause |
|---|---|---|
| Deliberate, prudent | "We know this isn't ideal, but shipping now and refactoring next sprint is the right business decision." | Conscious trade-off |
| Deliberate, reckless | "We don't have time for tests." | Cutting corners knowingly |
| Inadvertent, prudent | "Now that we've built it, we realize a better design. Let's plan a refactor." | Learning from experience |
| Inadvertent, reckless | "What's a design pattern?" | Lack of knowledge |
💡 Intuition Builder: Technical debt in code is like dishes in the sink. Leaving one plate is fine — you'll wash it later. But if you leave every plate for a month, eventually the kitchen is unusable, and you spend an entire Saturday cleaning instead of cooking. The daily habit of washing as you go (refactoring as you code) is far more efficient than periodic deep cleans.
How Technical Debt Accumulates
Look at how it happens in a project like TaskFlow:
- Sprint 1: You store tasks in a simple list. It works fine for 10 tasks.
- Sprint 3: You need to search by keyword. You write a linear scan. Fine for 100 tasks.
- Sprint 5: You need to search by category, date, and keyword simultaneously. You add three separate functions that each scan the entire list. It's getting slow.
- Sprint 8: Someone reports that the app takes 5 seconds to load their 10,000-task file. The root cause? That decision in Sprint 1 to use a flat list instead of a more appropriate data structure.
The original decision wasn't wrong — a list was perfectly reasonable for the first version. But the debt accumulated as the project grew, and nobody paid it down along the way.
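As a sketch of what paying down that particular debt could look like, here is an illustrative before-and-after: a repeated linear scan replaced by an index built once. The names are invented for illustration, not taken from TaskFlow:

```python
from collections import defaultdict

# The Sprint 5 shape of the debt: every lookup scans the whole list.
def find_by_category(tasks, category):
    return [t for t in tasks if t["category"] == category]

# Paying it down: build the index once, then look up directly.
class TaskIndex:
    def __init__(self, tasks):
        self._by_category = defaultdict(list)
        for task in tasks:
            self._by_category[task["category"]].append(task)

    def by_category(self, category):
        # Use .get so lookups of unknown categories don't grow the index.
        return self._by_category.get(category, [])

tasks = [
    {"title": "Buy milk", "category": "errands"},
    {"title": "Ship v2.5", "category": "work"},
    {"title": "Call plumber", "category": "errands"},
]
index = TaskIndex(tasks)
print([t["title"] for t in index.by_category("errands")])
# ['Buy milk', 'Call plumber']
```

The scan was the right call at 10 tasks; the index is the right call at 10,000. The debt isn't the scan itself — it's keeping the scan after the project outgrew it.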
Managing Technical Debt
- Make it visible. Track technical debt items in your backlog alongside feature requests. If it's not tracked, it's invisible, and invisible debt just grows.
- Allocate time for it. Many teams reserve 10-20% of each sprint for paying down technical debt. It's not glamorous, but it prevents the codebase from becoming unmaintainable.
- Refactor incrementally. You don't need to rewrite everything at once. The Boy Scout Rule applies: "Leave the code cleaner than you found it." Every time you touch a file, improve one thing.
- Write tests first. Before refactoring messy code, write tests that verify its current behavior. Then refactor. If the tests still pass, you haven't broken anything. This is exactly the TDD approach from Chapter 13, applied to existing code.
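The "tests first" step can be small. A hedged sketch, with a function invented for illustration: pin the current behavior — quirks included — with characterization tests, then refactor against that safety net:

```python
# Messy-but-working code we want to refactor (invented example).
def prio(p):
    if p == "high":
        return 1
    elif p == "medium":
        return 2
    else:
        return 3

# Step 1: characterization tests that pin the CURRENT behavior,
# including the quirk that unknown strings sort last.
def test_known_priorities():
    assert prio("high") == 1
    assert prio("medium") == 2
    assert prio("low") == 3

def test_unknown_priority_sorts_last():
    assert prio("banana") == 3  # a quirk, but callers may rely on it

# Step 2: refactor. Same behavior, cleaner structure; the tests
# above must still pass against this version.
_PRIORITY_ORDER = {"high": 1, "medium": 2, "low": 3}

def prio_refactored(p):
    return _PRIORITY_ORDER.get(p, 3)
```

If a characterization test fails after the refactor, you changed behavior — which is exactly the thing refactoring is not allowed to do.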
🔗 Connection to Chapter 16: The refactoring techniques you learned in OOP Design — extracting methods, renaming for clarity, applying design patterns — are your primary tools for paying down technical debt. Refactoring isn't rewriting; it's improving structure without changing behavior.
26.8 Documentation
There's a saying in software: "Code tells you how; documentation tells you why." Good documentation is the difference between a project that other people (including future-you) can understand and one that nobody wants to touch.
The Four Levels of Documentation
Level 1: Code-Level (Docstrings and Inline Comments)
You learned about docstrings in Chapter 6. They belong on every public function, class, and module:
```python
def add_task(title: str, priority: str = "medium",
             due_date: str | None = None) -> dict:
    """Create a new task and add it to the task list.

    Args:
        title: The task description. Must not be empty.
        priority: One of "high", "medium", or "low".
            Defaults to "medium".
        due_date: Optional due date in ISO format (YYYY-MM-DD).

    Returns:
        A dictionary representing the created task, including
        an auto-generated ID and creation timestamp.

    Raises:
        ValueError: If title is empty or priority is invalid.

    Example:
        >>> add_task("Buy groceries", priority="high")
        {'id': 1, 'title': 'Buy groceries', 'priority': 'high', ...}
    """
```
Inline comments should explain why, not what. The code already says what:
```python
# Bad — restates the code
x = x + 1  # increment x by 1

# Good — explains the reasoning
x = x + 1  # account for zero-based indexing in the display
```
Level 2: Module-Level (Module Docstrings)
Each Python file should have a module docstring at the top explaining its purpose:
"""Task storage and persistence layer.
Handles reading and writing tasks to JSON files, including
backup creation and data migration between format versions.
This module is the only place that directly touches the filesystem
for task data — all other modules go through the public API here.
"""
Level 3: Project-Level (README)
The README is the front door of your project. When someone encounters your code — on GitHub, in a shared drive, wherever — the README is the first thing they read. A good README includes:
- Project name and one-line description
- Installation instructions (how to get it running)
- Usage examples (the most common operations)
- Configuration (environment variables, settings files)
- Contributing guidelines (for open-source projects)
- License
Here's a minimal README for TaskFlow:
````markdown
# TaskFlow

A command-line task manager for organizing your work.

## Installation

Requires Python 3.12+.

```
git clone https://github.com/yourname/taskflow.git
cd taskflow
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

## Usage

```
python -m taskflow
```

### Common Operations

```
add "Buy groceries" --priority high --due 2025-03-15
list --filter overdue
search "groceries"
done 3
```

## Running Tests

```
pytest tests/ -v
```

## License

MIT
````
Level 4: User Documentation
For larger projects, you'll need guides that go beyond the README: tutorials for getting started, how-to guides for common tasks, reference documentation for the API, and explanations of design decisions. This level is beyond our scope, but know that it exists.
When to Document
The best time to write documentation is while you're writing the code — not after. If you wait until the end, you'll forget the reasoning behind your decisions, and the documentation will describe what the code does (which anyone can see by reading it) instead of why it does it (which only you know right now).
⚠️ Common Pitfall: Outdated documentation is worse than no documentation. If your README says "run `python app.py`" but the entry point was renamed to `python -m taskflow` three months ago, every new developer will waste time on a wild goose chase. Treat documentation like code: when you change functionality, update the docs in the same commit.
26.9 Introduction to CI/CD
CI/CD stands for Continuous Integration and Continuous Deployment (or Continuous Delivery). It's the practice of automatically building, testing, and deploying software every time someone pushes code.
Continuous Integration (CI)
Continuous Integration means that every developer's changes are automatically integrated and tested as soon as they're pushed. Here's what a typical CI pipeline does:
- Developer pushes code to a branch on GitHub.
- A CI service (like GitHub Actions, GitLab CI, or Jenkins) automatically:
  - Checks out the code
  - Installs dependencies
  - Runs the full test suite
  - Runs a linter (code style checker)
  - Reports the results back on the pull request
If any step fails, the pull request is flagged and can't be merged until the issue is fixed.
Here's what a simple GitHub Actions workflow looks like for a Python project:
```yaml
# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest tests/ -v --tb=short
```
This is a YAML configuration file (you don't need to memorize the syntax). The important thing is what it does: every time someone pushes to main or opens a pull request, GitHub automatically runs pytest. If a test fails, the PR gets a red "X" and the team knows something is broken before the code is merged.
💡 Intuition Builder: CI is like having a robot teammate who runs all the tests every time anyone makes a change. Before CI existed, teams would often go weeks or months without running the full test suite, only to discover on "integration day" that ten different people's changes were all incompatible. CI catches those problems within minutes of the change being pushed.
Continuous Deployment (CD)
Continuous Deployment extends CI by automatically deploying the software after all tests pass:
- Developer merges a pull request into `main`.
- CI runs all tests.
- If tests pass, the CD pipeline automatically deploys the new version to production (the live server that users access).
Not every team does full continuous deployment — it requires confidence in your test suite and monitoring systems. A more cautious version is Continuous Delivery, where the deployment is automated but requires a human to press a "deploy" button.
CI/CD for TaskFlow
For TaskFlow, CI/CD might seem like overkill — it's a command-line tool, not a web application. But even for CLI tools, CI provides value:
- Every pull request is automatically tested across Python 3.12 and 3.13
- The linter catches style inconsistencies before code review
- New contributors can submit PRs knowing that the test suite will verify their changes
- You'll never accidentally merge code that breaks existing tests
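Testing across multiple Python versions is a small change to the workflow shown in Section 26.9: a matrix strategy runs the same `test` job once per version. This is a sketch assuming the same dependency and test layout as before; the version list is whatever your project chooses to support.

```yaml
# Sketch: run the test job once per Python version via a build matrix
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt
      - run: pytest tests/ -v --tb=short
```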
🔗 Connection to Chapter 13: CI is the natural culmination of the testing practices you learned. Writing tests is valuable on its own. CI makes those tests automatic — they run without anyone remembering to run them. That's the difference between "we should test" and "we always test."
26.10 Project Checkpoint: TaskFlow v2.5
TaskFlow has come a long way. Over 25 chapters, it's grown from a "Hello, TaskFlow!" greeting to a full-featured task manager with classes, persistence, error handling, tests, regex search, virtual environments, rich output, web integration, and Git version control.
For v2.5, we're going to step back from coding new features and focus on the professional practices from this chapter. This is what real teams do periodically — pause feature work to improve process, documentation, and planning.
Task 1: Write Comprehensive Documentation
Your TaskFlow project needs three levels of documentation:
1. Module docstrings. Every .py file in your project should have a module-level docstring. Check models.py, storage.py, display.py, and cli.py.
2. Function/class docstrings. Every public function and class needs a docstring with at minimum: a one-line summary, parameter descriptions, return value description, and any exceptions raised.
3. README.md. Write a complete README following the template from Section 26.8. Include installation instructions, usage examples, and a section on running tests.
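As a reference point for Task 1, here is a minimal sketch of what a documented function might look like. The `Task` class and `add_task` function below are hypothetical stand-ins, not your project's actual code; your real signatures in models.py will differ.

```python
class Task:
    """A single to-do item with a title and a priority level."""

    def __init__(self, title: str, priority: int = 1):
        if not title:
            raise ValueError("title must not be empty")
        if priority not in (1, 2, 3):
            raise ValueError("priority must be 1, 2, or 3")
        self.title = title
        self.priority = priority


def add_task(tasks: list, title: str, priority: int = 1) -> Task:
    """Create a Task and append it to the given list.

    Args:
        tasks: The list of existing tasks to append to.
        title: Short description of the task.
        priority: Urgency from 1 (low) to 3 (high). Defaults to 1.

    Returns:
        The newly created Task object.

    Raises:
        ValueError: If title is empty or priority is out of range.
    """
    task = Task(title, priority)
    tasks.append(task)
    return task
```

Note that the docstring covers all four required elements: a one-line summary, parameters, return value, and exceptions.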
Task 2: Create a Project Board (Conceptual)
If you have a GitHub account, create a GitHub Project board for TaskFlow. If not, create it on paper or in a document. The board should have four columns:
| Backlog | To Do (This Sprint) | In Progress | Done |
|---|---|---|---|
| Stories not yet scheduled | Stories committed for current sprint | Stories actively being worked on | Completed stories |
Populate the backlog with at least 10 user stories for features TaskFlow doesn't have yet. Prioritize them. Select 3-5 for a hypothetical "Sprint 1."
Task 3: Plan the v3.0 Feature Roadmap
Write a brief specification for TaskFlow v3.0. Include:
- Vision: One sentence describing what v3.0 adds.
- User stories: At least 5 user stories for new features.
- Non-goals: At least 3 things v3.0 will not do.
- Technical debt: List 3 areas of the current codebase that should be refactored.
- Open questions: At least 2 things you're unsure about.
See the code directory for a comprehensive example of TaskFlow v2.5 with full docstrings and documentation structure.
Spaced Review
🔄 From Chapter 14 (OOP Design): Look at your Task class hierarchy. Does it still follow the Single Responsibility Principle? Would your code review checklist flag any design issues? Write a brief code review of your own Task class as if you were reviewing a teammate's pull request.

🔄 From Chapter 25 (Git): Is your Git history clean? Do your commit messages follow the conventions from Chapter 25? If someone read only your commit log, would they understand how TaskFlow evolved? Consider whether your branch strategy would work for a team of three developers.
Chapter Summary
Building software is more than writing code. The software development lifecycle encompasses requirements, design, implementation, testing, deployment, and maintenance — and professional teams use structured methodologies to navigate these phases effectively.
Waterfall works well for stable, well-understood projects but struggles with change. Agile embraces change through iterative development, delivering working software in short sprints. Scrum adds structure to Agile with specific roles (Product Owner, Scrum Master, Development Team), ceremonies (planning, standup, review, retrospective), and artifacts (backlogs, increments).
User stories capture requirements in plain language. Code review catches bugs, shares knowledge, and maintains quality. Technical debt accumulates when shortcuts aren't paid back, and managing it requires deliberate allocation of time for refactoring. Documentation at four levels (code, module, project, user) ensures that your software is understandable and maintainable. CI/CD automates testing and deployment, ensuring that broken code never reaches production.
None of these practices are optional extras — they're the difference between software that works for a weekend and software that works for a career.
What's Next
In Chapter 27, we'll step back and look at the big picture: where does CS1 lead? You'll survey the landscape of computer science — from data structures and algorithms to databases, networking, AI, and web development — and build a personal roadmap for your next steps. We'll also give each running example its final update: where are Elena, Dr. Patel, the Grade Calculator student, and the Text Adventure team heading next?
But first: the exercises below will give you practice writing user stories, conducting code reviews, and evaluating technical decisions. And the two case studies — one following a real sprint from start to finish, the other examining technical debt disasters — will show you what these concepts look like at professional scale.