Learning Objectives
- Create and manage virtual environments with venv
- Install, update, and remove packages with pip
- Use requirements.txt and pyproject.toml for dependency management
- Explore key libraries for common domains: requests, pandas, matplotlib, flask, rich
- Evaluate third-party packages for quality, maintenance, and security
In This Chapter
- Chapter Overview
- 23.1 Why Virtual Environments?
- 23.2 Creating and Managing Virtual Environments
- 23.3 pip: Installing Packages
- 23.4 requirements.txt and pyproject.toml
- 23.5 Exploring Key Libraries
- 23.6 Evaluating Third-Party Packages
- 23.7 Common Pitfalls
- 23.8 Project Checkpoint: TaskFlow v2.2
- Chapter Summary
- Looking Ahead
Chapter 23: Libraries and Virtual Environments
"We stand on the shoulders of giants — and in Python, those giants have published their work on PyPI." — Adapted from Isaac Newton
Chapter Overview
Here's a scenario that will happen to you sooner than you think. You're working on two Python projects. Project A uses version 1.4 of a library called requests. Project B needs version 2.31 of the same library. You install the newer version for Project B — and Project A breaks. You downgrade for Project A — and Project B breaks. You can't have both versions installed at the same time, and now you're trapped.
This is dependency hell, and it's one of the most common frustrations in professional software development. The solution? Virtual environments — isolated Python installations where each project gets its own private set of packages. It's like giving each project its own apartment instead of making them share a studio.
In Chapter 12, you learned to organize your own code into modules and packages. This chapter takes the next step: tapping into the enormous ecosystem of other people's code. There are over 500,000 packages on the Python Package Index (PyPI), covering everything from web development to machine learning to generating ASCII art. Knowing how to find, install, evaluate, and manage these packages is a skill every Python developer needs.
By the end of this chapter, you'll be able to set up a clean, isolated environment for any project, install exactly the packages you need, lock your dependencies so your code works the same way on every machine, and make informed decisions about which third-party packages to trust.
In this chapter, you will learn to:
- Create and manage virtual environments with venv
- Install, update, and remove packages with pip
- Use requirements.txt and pyproject.toml for reproducible dependency management
- Survey key libraries across common Python domains
- Evaluate third-party packages for quality, security, and long-term viability
🏃 Fast Track: If you're comfortable with the concept of environments and just need the commands, jump to the Action Checklist in section 23.2.3, then skim section 23.4 for `requirements.txt` syntax.
🔬 Deep Dive: After this chapter, explore Appendix B for advanced environment configurations, including `pyenv` for managing multiple Python versions and `conda` for scientific computing stacks.
23.1 Why Virtual Environments?
Let's start with a question: when you type pip install requests in your terminal, where does that package go?
If you haven't created a virtual environment, it goes into your system Python — the Python installation that ships with your operating system (or the one you installed in Chapter 2). Every project on your computer shares that single installation, including every package you've ever installed.
This works fine when you have one project. It falls apart when you have two.
23.1.1 The Dependency Conflict Problem
Remember Elena's nonprofit report pipeline from Chapters 10 and 21? She installed pandas version 1.5 and built her entire data processing workflow around it. Six months later, she starts a new project — a grant tracking dashboard — that requires pandas 2.0 because it needs some new features.
She installs pandas 2.0. Her new project works beautifully. But the next time she runs her old report pipeline, it crashes. A method she was using was deprecated in pandas 1.x and removed entirely in 2.0, so calling it now raises an AttributeError. Another function changed its default behavior. Her tested, working code is broken — not because she changed it, but because she changed a dependency.
# Elena's old code that worked fine with pandas 1.5
import pandas as pd
df = pd.read_csv("donors.csv")
# DataFrame.append was deprecated in pandas 1.4 and removed in 2.0
result = df.append(new_row) # AttributeError under pandas 2.0!
This is the dependency conflict: two projects on the same machine need different versions of the same library, and the system Python can only have one version installed at a time.
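For reference, the pandas 2.x replacement for the removed method is `pd.concat`. A minimal sketch with invented donor data (the column names here are illustrative, not from Elena's actual files):

```python
import pandas as pd

# The pandas 2.x replacement for the removed DataFrame.append:
# build a new frame from the old rows plus the new one.
df = pd.DataFrame({"name": ["Ada"], "donation": [100]})
new_row = pd.DataFrame({"name": ["Grace"], "donation": [250]})
result = pd.concat([df, new_row], ignore_index=True)
print(result)
```

This is exactly the kind of migration work a major version bump can force on you, which is why isolating each project's versions matters.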
💡 Intuition: Think of your system Python as a shared kitchen in a dormitory. If one roommate rearranges all the pans and another can't find anything, both suffer. Virtual environments give each project its own private kitchen.
23.1.2 What a Virtual Environment Actually Is
A virtual environment is a self-contained directory that holds a private copy of the Python interpreter and its own independent set of installed packages. When you activate a virtual environment, any packages you install go into that environment only — they don't affect your system Python or any other environment.
Here's what happens under the hood:
- Python creates a new directory (typically called `venv` or `.venv`) inside your project folder.
- It copies (or symlinks) the Python interpreter into that directory.
- It creates a `lib/` subdirectory for packages and a `Scripts/` (Windows) or `bin/` (macOS/Linux) directory for executables.
- When you activate the environment, your shell's `PATH` variable is temporarily modified so that `python` and `pip` point to the copies inside the environment instead of the system ones.
The result: pip install pandas inside an activated virtual environment installs pandas into that environment's lib/ directory. Your system Python never knows it happened. Another environment for a different project can have a completely different version of pandas — or none at all.
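You can confirm this isolation from inside Python itself. The interpreter records both the active environment's root and the base installation it was created from; a quick standard-library check:

```python
import sys

# sys.prefix points at the active environment's root directory;
# sys.base_prefix points at the installation it was created from.
# Outside any virtual environment, the two are identical.
in_venv = sys.prefix != sys.base_prefix
print(f"prefix:      {sys.prefix}")
print(f"base_prefix: {sys.base_prefix}")
print("inside a virtual environment" if in_venv else "on the system Python")
```

Run this before and after activating an environment and watch `sys.prefix` change while `sys.base_prefix` stays put.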
my-project/
├── venv/ # The virtual environment (don't edit manually!)
│ ├── bin/ (or Scripts/) # Python and pip executables
│ ├── lib/ # Installed packages go here
│ └── pyvenv.cfg # Configuration file
├── src/ # Your project code
│ ├── main.py
│ └── utils.py
├── requirements.txt # List of dependencies
└── README.md
⚠️ Pitfall: Never manually edit files inside the `venv/` directory. Never commit it to version control either — it's generated, not authored. You'll learn to use `.gitignore` to exclude it in Chapter 25.
23.1.3 When You Need Virtual Environments
The short answer: always. Even for small projects, creating a virtual environment takes ten seconds and saves you from future headaches. Here's the decision framework:
| Situation | Without venv | With venv |
|---|---|---|
| Single project, few packages | Works (for now) | Still better — keeps system clean |
| Multiple projects, same packages | Version conflicts likely | Each project isolated |
| Sharing code with others | "Works on my machine" problems | requirements.txt ensures reproducibility |
| Deploying to a server | Unknown dependency state | Exact same packages in dev and prod |
| Upgrading Python | All packages might break | Each project can migrate independently |
Professional developers create a virtual environment for every project, period. It's like buckling your seatbelt — you don't debate whether this particular drive needs it.
🔗 Connection: In Chapter 12, you learned about namespaces — how Python keeps names from colliding by scoping them to modules. Virtual environments apply the same principle at a higher level: they namespace your packages so different projects don't collide.
23.2 Creating and Managing Virtual Environments
Time to get your hands dirty. This section is a step-by-step guide you can follow along with right now.
23.2.1 Creating a Virtual Environment
Python ships with the venv module in the standard library (since Python 3.3). No installation required.
Open your terminal, navigate to your project directory, and run:
# Create a virtual environment named ".venv"
python -m venv .venv
That's it. Python creates a .venv directory in your current folder containing a private Python installation. The name .venv is a convention (the leading dot hides it on macOS/Linux) — you could call it anything, but .venv is what most developers and tools expect.
📊 Convention note: You'll see both `venv` and `.venv` used in the wild. The `.venv` convention (with the leading dot) has become standard because it's hidden by default in Unix file listings, and many tools like VS Code auto-detect it. We'll use `.venv` throughout this book.
23.2.2 Activating and Deactivating
Creating the environment doesn't use it — you need to activate it. The activation command differs by operating system and shell:
Windows (Command Prompt):
.venv\Scripts\activate
Windows (PowerShell):
.venv\Scripts\Activate.ps1
macOS/Linux (bash/zsh):
source .venv/bin/activate
When activation succeeds, your terminal prompt changes to show the environment name:
(.venv) C:\Users\elena\grant-tracker>
That (.venv) prefix is your visual confirmation that you're working inside the virtual environment. Every python and pip command you run now uses the environment's private copies.
To leave the environment and return to your system Python:
deactivate
The (.venv) prefix disappears, and you're back to normal.
⚠️ Pitfall (Windows PowerShell): If you get an error about "execution of scripts is disabled," you need to adjust your PowerShell execution policy. Run PowerShell as Administrator and execute `Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser`. This is a one-time setup step. See Appendix B for details.
23.2.3 Action Checklist: Virtual Environment Setup
Here is the complete workflow, start to finish. Bookmark this section — you'll use it at the start of every new project.
✅ Action Checklist: Setting Up a Virtual Environment

Step 1: Create the project directory

```bash
mkdir my-project
cd my-project
```

Step 2: Create the virtual environment

```bash
python -m venv .venv
```

Step 3: Activate it

- Windows CMD: `.venv\Scripts\activate`
- Windows PowerShell: `.venv\Scripts\Activate.ps1`
- macOS/Linux: `source .venv/bin/activate`

Step 4: Verify you're in the environment

```bash
python --version
pip --version   # the path pip prints should point inside .venv
```

Step 5: Install your packages

```bash
pip install requests pandas rich
```

Step 6: Freeze your dependencies

```bash
pip freeze > requirements.txt
```

Step 7: When done working, deactivate

```bash
deactivate
```
23.2.4 VS Code Integration
VS Code detects virtual environments automatically. When you open a project folder that contains a .venv directory, VS Code will prompt you to select it as the Python interpreter. If it doesn't:
- Open the Command Palette (Ctrl+Shift+P or Cmd+Shift+P)
- Type "Python: Select Interpreter"
- Choose the Python executable inside
.venv
Once selected, VS Code's integrated terminal will automatically activate the virtual environment when you open a new terminal pane. The status bar at the bottom of the window shows which interpreter is active.
23.2.5 Recreating an Environment
Virtual environments are disposable. If something goes wrong — a bad install, a corrupt package, an experiment that went sideways — you can delete the entire .venv directory and recreate it from scratch:
# Delete the old environment
# Windows:
rmdir /s /q .venv
# macOS/Linux:
rm -rf .venv
# Recreate it
python -m venv .venv
# Activate and reinstall from requirements.txt
source .venv/bin/activate # or Windows equivalent
pip install -r requirements.txt
This is why requirements.txt matters so much — it's your recipe for rebuilding the environment from nothing. Without it, you'd have to remember every package you installed and every version you used.
23.3 pip: Installing Packages
pip is Python's package installer. It downloads packages from the Python Package Index (PyPI) — a repository of over 500,000 open-source Python packages — and installs them into your current environment.
23.3.1 Core pip Commands
Here are the commands you'll use daily:
Install a package:
pip install requests
Install a specific version:
pip install requests==2.31.0
Install with a version constraint:
pip install "requests>=2.28,<3.0"
Upgrade an installed package:
pip install --upgrade requests
Uninstall a package:
pip uninstall requests
List all installed packages:
pip list
Show details about an installed package:
pip show requests
Freeze current packages (for requirements.txt):
pip freeze
The pip freeze command outputs every installed package and its exact version in a format suitable for requirements.txt:
certifi==2023.11.17
charset-normalizer==3.3.2
idna==3.6
requests==2.31.0
urllib3==2.1.0
Notice something interesting: you installed one package (requests), but five packages appeared. That's because requests depends on certifi, charset-normalizer, idna, and urllib3. These are transitive dependencies — packages that your packages need. pip resolves and installs the entire dependency tree automatically.
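You can inspect these declared dependencies yourself with the standard library's `importlib.metadata`. A small sketch (the helper name is ours; it assumes you run it inside an environment where the package you query is installed):

```python
from importlib import metadata

def direct_dependencies(package: str) -> list[str]:
    """Return a package's declared direct dependencies, or [] if absent."""
    try:
        return metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return []

# Inside an environment with requests installed, this prints specifier
# strings such as 'certifi>=2017.4.17' (exact entries vary by version).
for requirement in direct_dependencies("requests"):
    print(requirement)
```

This is the same metadata pip consults when it resolves the dependency tree during installation.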
23.3.2 Understanding Semantic Versioning
Most Python packages follow semantic versioning (semver), a numbering system that communicates the nature of changes:
MAJOR.MINOR.PATCH
2 . 31 . 0
| Component | What changes | Example |
|---|---|---|
| MAJOR | Breaking changes (your code may need updates) | 1.x → 2.0 |
| MINOR | New features added, backward compatible | 2.30 → 2.31 |
| PATCH | Bug fixes only, backward compatible | 2.31.0 → 2.31.1 |
This matters when you're specifying version constraints:
requests==2.31.0 # Exact version (most restrictive)
requests>=2.28 # This version or newer
requests>=2.28,<3.0 # At least 2.28, but not 3.0 (safe range)
requests~=2.31.0 # Compatible release: >=2.31.0, <2.32.0
💡 Intuition: The `>=2.28,<3.0` constraint says: "I need the features introduced in 2.28, and I trust that minor updates won't break me, but a major version bump might." This is the most common pattern in production code.
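To build intuition for how such a range is checked, here is a toy version-comparison sketch. It handles only plain numeric versions and the `>=X,<Y` form; real tools use the `packaging` library's specifier support, which also understands pre-releases and operators like `~=`:

```python
def parse(version: str) -> tuple[int, ...]:
    """Turn '2.31.0' into (2, 31, 0) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, minimum: str, exclusive_max: str) -> bool:
    """Check a '>=minimum,<exclusive_max' style constraint."""
    return parse(minimum) <= parse(version) < parse(exclusive_max)

print(in_range("2.31.0", "2.28", "3.0"))  # True: inside the safe range
print(in_range("3.0.1", "2.28", "3.0"))   # False: major bump is excluded
```

Tuple comparison does the work here: `(2, 31, 0)` sorts after `(2, 28)` and before `(3, 0)`, mirroring how version numbers are ordered.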
23.3.3 Where Packages Come From: PyPI
The Python Package Index (PyPI, pronounced "pie-pee-eye") at pypi.org is the central repository for Python packages. When you run pip install something, pip contacts PyPI, downloads the package, and installs it.
Each package on PyPI has a project page with:
- A description and README
- Version history
- Download statistics
- Links to source code (usually on GitHub)
- A list of dependencies
- Supported Python versions
Packages are distributed as either source distributions (.tar.gz files containing Python source code) or wheels (.whl files, pre-built packages that install faster). Modern pip prefers wheels when available because they don't require compilation.
📊 Scale of PyPI: As of 2025, PyPI hosts over 500,000 projects with over 5 million releases. About 400,000 packages are downloaded every minute. When you `pip install` something, you're connecting to one of the largest software ecosystems in the world.
23.4 requirements.txt and pyproject.toml
You've created an environment. You've installed some packages. Now the critical question: how do you make sure your collaborator — or your future self — can recreate this exact setup?
23.4.1 requirements.txt: The Classic Approach
The requirements.txt file is a plain-text list of packages and their versions. It's the most common way to record dependencies in the Python ecosystem.
Generating it:
pip freeze > requirements.txt
Installing from it:
pip install -r requirements.txt
A typical requirements.txt looks like this:
# Core dependencies
requests==2.31.0
rich==13.7.0
# Data processing
pandas==2.1.4
numpy==1.26.3
# Transitive dependencies (installed automatically)
certifi==2023.11.17
charset-normalizer==3.3.2
idna==3.6
urllib3==2.1.0
markdown-it-py==3.0.0
mdurl==0.1.2
pygments==2.17.2
You can add comments with # and organize packages logically. Some teams maintain two files:
- `requirements.txt` — production dependencies only
- `requirements-dev.txt` — development tools (pytest, linters, etc.)
# requirements-dev.txt
-r requirements.txt # Include production deps
pytest==7.4.4
black==24.1.0
mypy==1.8.0
The -r requirements.txt line means "include everything from the production file, plus these additional packages."
23.4.2 pyproject.toml: The Modern Approach
While requirements.txt works well for applications (code that runs directly), the Python community has been moving toward pyproject.toml as a unified configuration file for Python projects. The file itself was introduced in PEP 518, its [project] metadata table was standardized in PEP 621, and together they have become the standard for new projects.
A basic pyproject.toml looks like this:
[project]
name = "taskflow"
version = "2.2.0"
description = "A command-line task manager"
requires-python = ">=3.12"
dependencies = [
"rich>=13.0,<14.0",
"requests>=2.28,<3.0",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0",
"black>=24.0",
]
[build-system]
requires = ["setuptools>=68.0"]
build-backend = "setuptools.build_meta"
Key differences from requirements.txt:
| Feature | `requirements.txt` | `pyproject.toml` |
|---|---|---|
| Format | Plain text, one package per line | TOML (structured configuration) |
| Version specs | Typically exact pins (`==`) | Typically ranges (`>=`, `<`) |
| Metadata | None (just packages) | Name, version, description, Python version |
| Use case | Applications, quick projects | Libraries, publishable packages |
| Dev dependencies | Separate file needed | Built-in (`[project.optional-dependencies]`) |
💡 Best practice: For your coursework and personal projects, `requirements.txt` is perfectly fine. When you start building packages you want to share or publish, use `pyproject.toml`. Many professional projects use both — `pyproject.toml` for package metadata and broad compatibility ranges, plus a pinned `requirements.txt` for exact reproducibility.
23.4.3 Version Pinning Strategy
How strict should your version constraints be? There are two philosophies:
Strict pinning (exact versions):
requests==2.31.0
pandas==2.1.4
- Pro: 100% reproducible; you get exactly what was tested
- Con: No automatic bug fixes or security patches; harder to update
Range constraints:
requests>=2.28,<3.0
pandas>=2.0,<3.0
- Pro: You get compatible updates, including security fixes
- Con: A new minor version could introduce an unexpected behavior change
The common professional pattern is to use ranges in pyproject.toml (what versions could work) and strict pins in requirements.txt (what versions did work when you last tested). The pip freeze output is always strict-pinned.
23.5 Exploring Key Libraries
One of Python's greatest strengths is its ecosystem. Whatever problem you're solving, someone has probably written a library for it. Here's a guided tour of libraries you'll encounter frequently.
23.5.1 requests — HTTP for Humans
You met requests in Chapter 21 when you worked with APIs. It's the most popular Python library for making HTTP requests, and it's a great example of what a well-designed library looks like.
import requests
response = requests.get("https://api.github.com/repos/python/cpython")
data = response.json()
print(f"CPython has {data['stargazers_count']:,} stars on GitHub")
Why it's loved: the standard library's urllib is functional but clunky. requests offers an API that reads like English. Compare json.loads(urllib.request.urlopen(url).read().decode()) with requests.get(url).json(). Same result; dramatically different developer experience.
23.5.2 pandas — Data Analysis
pandas is the workhorse of data science in Python. It provides DataFrame — a powerful table-like structure for loading, cleaning, analyzing, and transforming data.
import pandas as pd
# Read a CSV file into a DataFrame
df = pd.read_csv("donors.csv")
# Filter, group, and analyze in one chain
top_categories = (
df[df["amount"] > 100]
.groupby("category")["amount"]
.sum()
.sort_values(ascending=False)
.head(5)
)
print(top_categories)
Elena uses pandas for her nonprofit reports. What used to take her 4 hours of spreadsheet work now runs in 30 seconds. If you're going into data science, data analysis, or any field that involves working with tabular data, pandas will become one of your most-used tools.
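If you want to try the chain above without a donors.csv file on disk, here is the same pattern against a small in-memory DataFrame with invented values:

```python
import pandas as pd

# Same filter, group, sum, sort chain as above, but with inline data
# so there is no dependency on an external CSV file.
df = pd.DataFrame({
    "category": ["gala", "online", "gala", "mail", "online"],
    "amount":   [250,     80,      500,    120,    300],
})
top_categories = (
    df[df["amount"] > 100]         # keep donations over $100
    .groupby("category")["amount"] # group remaining rows by category
    .sum()                         # total per category
    .sort_values(ascending=False)  # largest first
)
print(top_categories)
```

The 80-dollar online donation is filtered out before grouping, so the totals come out as gala 750, online 300, mail 120.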
23.5.3 matplotlib — Data Visualization
matplotlib is Python's foundational plotting library. It can create line charts, bar charts, scatter plots, histograms, and virtually any other kind of visualization.
import matplotlib.pyplot as plt
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
donations = [4200, 3800, 5100, 4700, 6200, 5900]
plt.figure(figsize=(8, 5))
plt.bar(months, donations, color="steelblue")
plt.title("Monthly Donations — 2025")
plt.ylabel("Amount ($)")
plt.tight_layout()
plt.savefig("donations.png")
plt.show()
Elena creates charts like this for her monthly board reports. The combination of pandas for data processing and matplotlib for visualization is one of the most common workflows in Python.
23.5.4 Flask — Web Development
Flask is a lightweight web framework that lets you build web applications with minimal boilerplate. It's popular for small to medium projects and APIs.
from flask import Flask
app = Flask(__name__)
@app.route("/")
def home():
return "<h1>Welcome to TaskFlow Web!</h1>"
@app.route("/tasks")
def tasks():
return {"tasks": ["Write report", "Review PR", "Deploy update"]}
if __name__ == "__main__":
app.run(debug=True)
Running this script starts a web server on your machine. Visit http://localhost:5000 in your browser, and you'll see your web page. Flask is a great "next step" after this course if you're interested in web development.
23.5.5 rich — Beautiful Terminal Output
rich is a library for creating beautiful, formatted output in the terminal. It supports colors, tables, progress bars, syntax highlighting, markdown rendering, and more.
from rich.console import Console
from rich.table import Table
console = Console()
table = Table(title="TaskFlow — Active Tasks")
table.add_column("ID", style="cyan", justify="right")
table.add_column("Task", style="white")
table.add_column("Priority", style="bold")
table.add_column("Status", style="green")
table.add_row("1", "Write Chapter 23", "[red]High[/red]", "In Progress")
table.add_row("2", "Review pull request", "[yellow]Medium[/yellow]", "Pending")
table.add_row("3", "Update README", "[green]Low[/green]", "Done")
console.print(table)
This produces a professionally formatted table with colors, borders, and alignment — far beyond what print() can do. We'll add rich to TaskFlow in the project checkpoint.
23.5.6 Quick Reference: Libraries by Domain
| Domain | Library | What it does |
|---|---|---|
| HTTP/APIs | `requests` | Make HTTP requests |
| Data analysis | `pandas` | DataFrames and data manipulation |
| Visualization | `matplotlib`, `seaborn` | Charts and plots |
| Web frameworks | `flask`, `django`, `fastapi` | Build web applications |
| Terminal UI | `rich`, `click`, `typer` | Beautiful CLI applications |
| Testing | `pytest`, `hypothesis` | Test frameworks |
| Scientific | `numpy`, `scipy` | Numerical computing |
| Machine learning | `scikit-learn`, `pytorch` | ML models |
| Bioinformatics | `biopython` | Biological sequence analysis |
| Web scraping | `beautifulsoup4`, `scrapy` | Parse HTML/XML |
| Databases | `sqlalchemy`, `psycopg2` | Database access |
| Image processing | `Pillow` | Image manipulation |
Dr. Patel installed biopython to analyze DNA sequences. She set up a fresh virtual environment specifically for her bioinformatics pipeline, installed biopython and its dependencies, and kept them completely separate from her teaching project's environment. Two projects, two environments, zero conflicts.
🔗 Connection: In Chapter 21, you used `requests` to call APIs. In Chapter 24, you'll use `beautifulsoup4` to parse HTML. Both are third-party packages you'll install with pip into a virtual environment using exactly the workflow described in this chapter.
23.6 Evaluating Third-Party Packages
PyPI has over 500,000 packages. Some are maintained by dedicated teams, rigorously tested, and used by millions. Others were uploaded once by a college student and never touched again. How do you tell the difference?
23.6.1 The Evaluation Checklist
Before adding a dependency to your project, run through this checklist:
✅ Package Evaluation Checklist
Popularity: How many downloads does it get? (Check pypistats.org or the PyPI page.) High download counts don't guarantee quality, but very low counts are a warning sign.
Maintenance: When was the last release? When was the last commit to the source code? A package that hasn't been updated in three years may have unpatched security issues or Python compatibility problems.
GitHub stars and activity: Check the GitHub repository. Are issues being responded to? Are pull requests being reviewed? Active maintenance matters.
Documentation: Does the package have clear, up-to-date documentation? Poor documentation is a strong predictor of poor quality.
Test coverage: Does the project have tests? Does it use CI/CD? A well-tested package is less likely to have bugs.
License: Is the license compatible with your project? MIT and BSD are permissive. GPL has requirements you need to understand. No license means you technically can't use it legally.
Dependencies: How many dependencies does the package pull in? Fewer is generally better — each dependency is a potential point of failure.
Alternatives: Is this the best package for the job, or is there a more standard option? For example, `requests` is the standard for HTTP; a package called `super-http-magic` with 12 downloads is probably not what you want.
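One quick way to apply the dependency item on this checklist is to audit your current environment with the standard library's `importlib.metadata` (the variable names here are ours):

```python
from importlib import metadata

# Survey the current environment: how many direct dependencies does
# each installed distribution declare in its metadata? A long tail of
# heavy packages is worth knowing about before you add more.
deps_count = {
    dist.metadata["Name"]: len(dist.requires or [])
    for dist in metadata.distributions()
    if dist.metadata["Name"]
}
for name, count in sorted(deps_count.items()):
    print(f"{name}: {count} declared dependencies")
```

Counts here are direct dependencies only; each of those can pull in transitive dependencies of its own.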
23.6.2 Red Flags
Watch out for these warning signs:
- No source code repository. If you can't read the code, don't trust it.
- Typosquatting. Malicious packages sometimes use names similar to popular ones (`requets` instead of `requests`). Always double-check the package name.
- Excessive permissions. If a "color formatting" library asks for network access, something is wrong.
- No version history. A single version with no updates is risky for anything important.
- Dependency explosion. If installing one package pulls in 50 others, consider whether the functionality is worth the risk surface.
23.6.3 The Cost of Every Dependency
Every package you add to your project is a commitment. You're depending on someone else's code to keep working, stay maintained, remain secure, and stay compatible with future Python versions. Professional teams carefully evaluate each dependency because:
- A security vulnerability in a dependency is a vulnerability in your project.
- An abandoned dependency will eventually break when Python updates.
- A dependency with many transitive dependencies multiplies these risks.
The principle: add dependencies deliberately, not casually. If you can accomplish something with the standard library in 10 lines, that's often better than adding a third-party package that does it in 1 line but brings along 15 transitive dependencies.
⚖️ Trade-off: This isn't a rule to avoid all dependencies — that would be absurd. `requests` is better than writing your own HTTP client. `pandas` is better than parsing CSVs by hand. The goal is to be intentional: choose well-maintained, widely-used packages for things that are genuinely complex, and use the standard library for simple tasks.
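As a concrete case of preferring the standard library for simple tasks: basic colored terminal output needs no package at all, just ANSI escape codes. This is a sketch, and it assumes a terminal that understands ANSI colors (most modern terminals do):

```python
# Colored status output with zero dependencies: ANSI escape codes
# switch the terminal's foreground color on and off.
RED, GREEN, RESET = "\033[31m", "\033[32m", "\033[0m"

def status_line(task: str, done: bool) -> str:
    """Format a task with a colored done/pending marker."""
    mark = f"{GREEN}done{RESET}" if done else f"{RED}pending{RESET}"
    return f"{task}: {mark}"

print(status_line("Write report", True))
print(status_line("Review PR", False))
```

For anything fancier than this (tables, progress bars, markdown), a library like `rich` earns its place; for two colors, it doesn't have to.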
23.7 Common Pitfalls
Every developer hits these problems. Knowing about them in advance will save you hours of confusion.
23.7.1 Installing to the Wrong Python
This is the single most common beginner mistake. You have a virtual environment, but you forgot to activate it. You run pip install rich, and it installs into your system Python. Then you activate your environment, import rich, and get ModuleNotFoundError. The package is installed — just not where your environment is looking.
How to check: After activating your environment, run:
which python # macOS/Linux
where python # Windows
The path should point to your .venv directory, not your system Python. If it doesn't, you haven't activated the environment.
# What you want to see:
(.venv) $ which python
/home/elena/grant-tracker/.venv/bin/python
# What means trouble:
$ which python
/usr/bin/python3
⚠️ Pitfall: On some systems, `python` refers to Python 2 and `python3` refers to Python 3. If `python -m venv .venv` fails, try `python3 -m venv .venv`. Similarly, use `pip3` instead of `pip` if needed. Inside an activated virtual environment, `python` and `pip` always point to the correct version.
23.7.2 Dependency Hell
Dependency hell occurs when your project's dependencies have conflicting requirements. Package A requires numpy>=1.24, but Package B requires numpy<1.24. pip can't satisfy both, and you get an error or — worse — a silent compatibility issue.
ERROR: pip's dependency resolver does not currently take into
account all the packages that are installed.
Package A 1.0.0 requires numpy>=1.24, but you have numpy 1.23.5
which is incompatible.
Strategies for dealing with dependency hell:
1. Update everything. Often the conflict resolves when all packages are at their latest versions: pip install --upgrade package-a package-b.
2. Check compatibility. Read the release notes for both packages to find versions that are compatible with each other.
3. Use fewer dependencies. Sometimes the solution is to drop one of the conflicting packages and find an alternative that doesn't conflict.
4. Use a dependency resolver. Tools like pip-tools or poetry provide more sophisticated dependency resolution than basic pip.
23.7.3 The "Works on My Machine" Problem
You build a project, it works perfectly, and you send it to a classmate. They run it and get errors everywhere. Why? Because they don't have the same packages installed. Or they have different versions. Or they're on a different operating system.
This is why requirements.txt exists. Without it, your collaborator has to guess which packages you used and which versions you tested with. With it, they can recreate your exact environment in one command:
python -m venv .venv
source .venv/bin/activate # or Windows equivalent
pip install -r requirements.txt
🔴 Common error: Students often forget to update `requirements.txt` after installing a new package. Make it a habit: every time you `pip install` something, run `pip freeze > requirements.txt` immediately afterward.
23.7.4 Platform-Specific Packages
Some packages behave differently on Windows, macOS, and Linux. A few packages only work on certain platforms. Common gotchas:
- `readline` is part of the standard library on macOS/Linux but requires a separate package on Windows.
- Some packages require C compilation, which needs additional tools on Windows (Visual C++ Build Tools).
- File path separators differ (`/` vs. `\`), which can affect packages that work with the filesystem.
When sharing a project across platforms, test on each target platform or note platform requirements in your README.
23.7.5 Virtual Environment Anti-Patterns
| Anti-Pattern | Why It's Bad | What to Do Instead |
|---|---|---|
| Committing `.venv/` to git | Bloats repository; platform-specific | Add `.venv/` to `.gitignore`; commit `requirements.txt` |
| One global environment for all projects | Back to dependency conflicts | One environment per project |
| Installing with `sudo pip install` | Modifies system Python; can break OS tools | Always use a virtual environment |
| Ignoring `pip freeze` | No record of what's installed | Freeze after every install |
| Never upgrading packages | Accumulates security vulnerabilities | Review and upgrade periodically |
23.8 Project Checkpoint: TaskFlow v2.2
It's time to give TaskFlow a professional project structure with a virtual environment, a requirements.txt, and a visual upgrade using the rich library.
23.8.1 What We're Building
In Chapter 22 (v2.1), you added regex-powered search to TaskFlow. Now you'll:
- Create a virtual environment for the TaskFlow project
- Install the
richlibrary for beautiful terminal output - Generate a
requirements.txtthat records the dependency - Refactor the display layer to use
richfor colorful, formatted output - Add a
rich-powered welcome banner, task table, and status indicators
23.8.2 Setup Steps
Follow the action checklist from section 23.2.3 to set up TaskFlow's environment:
```bash
# Navigate to your TaskFlow project
cd taskflow

# Create the virtual environment
python -m venv .venv

# Activate it
# Windows CMD:        .venv\Scripts\activate
# Windows PowerShell: .venv\Scripts\Activate.ps1
# macOS/Linux:        source .venv/bin/activate

# Install rich
pip install rich

# Freeze dependencies
pip freeze > requirements.txt
```
Your requirements.txt should now contain:
```
markdown-it-py==3.0.0
mdurl==0.1.2
Pygments==2.17.2
rich==13.7.0
```
Notice that rich brought along three transitive dependencies. That's normal and expected.
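The `name==version` pin format is simple enough to parse yourself, which is handy for small tooling. A minimal sketch (deliberately simplified: real requirement lines can also carry extras, environment markers, and other version operators, which this ignores):

```python
# Sketch: read pip-frozen "name==version" lines into a dict.
def parse_pins(text: str) -> dict[str, str]:
    """Map package names to pinned versions, skipping comments and blanks."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins[name] = version
    return pins


frozen = """markdown-it-py==3.0.0
mdurl==0.1.2
Pygments==2.17.2
rich==13.7.0"""

print(parse_pins(frozen)["rich"])  # → 13.7.0
```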
23.8.3 The Updated Code
Here's TaskFlow v2.2 with rich integration (see code/project-checkpoint.py for the complete file):
```python
"""TaskFlow v2.2 — Virtual Environment + Rich Terminal Output."""
from datetime import datetime

from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.text import Text

console = Console()

# --- Task storage ---
tasks: list[dict] = []

PRIORITY_STYLES = {
    "high": "bold red",
    "medium": "bold yellow",
    "low": "bold green",
}


def show_welcome():
    """Display a colorful welcome banner using rich."""
    banner = Text("TaskFlow v2.2", style="bold white on blue")
    subtitle = Text("Now with colorful terminal output!", style="italic cyan")
    console.print(Panel(
        Text.assemble(banner, "\n", subtitle),
        title="[bold]Welcome[/bold]",
        border_style="blue",
    ))
    console.print()


def display_tasks():
    """Show all tasks in a rich-formatted table."""
    if not tasks:
        console.print("[dim]No tasks yet. Add one with 'add'.[/dim]")
        return

    table = Table(title="Your Tasks", show_lines=True)
    table.add_column("#", style="cyan", justify="right", width=4)
    table.add_column("Task", style="white", min_width=20)
    table.add_column("Priority", justify="center", width=10)
    table.add_column("Category", style="magenta", width=12)
    table.add_column("Created", style="dim", width=19)

    for i, task in enumerate(tasks, 1):
        priority = task["priority"]
        style = PRIORITY_STYLES.get(priority, "white")
        table.add_row(
            str(i),
            task["name"],
            f"[{style}]{priority.upper()}[/{style}]",
            task.get("category", "general"),
            task["created"],
        )
    console.print(table)


def add_task():
    """Add a new task with rich-styled prompts."""
    name = console.input("[bold cyan]Task name:[/bold cyan] ").strip()
    if not name:
        console.print("[red]Task name cannot be empty.[/red]")
        return

    priority = console.input(
        "[bold cyan]Priority[/bold cyan] (high/medium/low): "
    ).strip().lower()
    if priority not in PRIORITY_STYLES:
        priority = "medium"
        console.print("[yellow]Defaulting to medium priority.[/yellow]")

    category = console.input(
        "[bold cyan]Category[/bold cyan] (or Enter for 'general'): "
    ).strip() or "general"

    task = {
        "name": name,
        "priority": priority,
        "category": category,
        "created": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
    }
    tasks.append(task)
    # \\[ escapes the bracket so rich doesn't parse [medium] as a markup tag.
    console.print(f"[green]Added:[/green] {name} \\[{priority}]")


def main():
    """Main menu loop."""
    show_welcome()
    while True:
        console.print("\n[bold]Commands:[/bold] add | list | quit")
        command = console.input("[bold]> [/bold]").strip().lower()
        if command == "add":
            add_task()
        elif command == "list":
            display_tasks()
        elif command == "quit":
            console.print("[bold blue]Goodbye![/bold blue] 👋")
            break
        else:
            console.print(f"[red]Unknown command:[/red] {command}")


if __name__ == "__main__":
    main()
```
The key change: instead of print(), we use console.print() from rich, which understands markup like [bold red]text[/bold red]. The Table class produces professionally formatted tables with borders, alignment, and colors. The Panel class creates bordered boxes for headers and notices.
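Because rich treats anything in square brackets as markup, code that should run with or without the library can degrade gracefully. A hedged sketch (the `styled` helper is our own invention) that renders through rich's `Console.capture()` when the library is installed and strips the tags with a regex otherwise:

```python
# Sketch: render rich markup when rich is available, fall back to
# stripping [style]...[/style] tags when it isn't. Helper name is ours.
import re


def styled(text: str) -> str:
    """Return text rendered via rich if installed, else with markup removed."""
    try:
        from rich.console import Console
        console = Console(force_terminal=False)  # no ANSI codes in output
        with console.capture() as capture:
            console.print(text, end="")
        return capture.get()
    except ImportError:
        # Crude fallback: delete anything that looks like a markup tag.
        return re.sub(r"\[/?[^\]]*\]", "", text)


print(styled("[bold red]HIGH[/bold red] priority"))
```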
23.8.4 What Changed from v2.1 to v2.2
| Aspect | v2.1 (Chapter 22) | v2.2 (This chapter) |
|---|---|---|
| Environment | System Python | Virtual environment with `.venv` |
| Dependencies | None tracked | `requirements.txt` with `rich` |
| Display | Plain `print()` | `rich` Console, Table, Panel |
| Task list | Text-based numbered list | Formatted table with colors |
| Priority | Text labels | Color-coded (red/yellow/green) |
| Welcome | Simple print banner | Styled Panel with border |
🔗 Spaced review (Chapter 12): TaskFlow v1.1 split the project into modules: `models.py`, `storage.py`, `display.py`, `cli.py`. The `rich` upgrade here primarily affects the `display.py` module — the module boundary you created in Chapter 12 makes this kind of targeted upgrade clean and contained.

🔗 Spaced review (Chapter 21): If you added the weather API integration in v2.0, `requests` would be another dependency in your `requirements.txt`. Every third-party library you use should be tracked.
Chapter Summary
This chapter covered the professional workflow for managing Python packages and project environments. You learned to create virtual environments that keep projects isolated, use pip to install and manage packages from PyPI, record your dependencies in requirements.txt and pyproject.toml for reproducibility, and evaluate third-party packages before adding them to your projects.
The virtual environment workflow is one of those things that feels like overhead until the first time it saves you. Every professional Python developer uses virtual environments for every project. Every open-source project you'll contribute to will have a requirements.txt or pyproject.toml. This isn't optional knowledge — it's table stakes.
Here's the one thing to remember: every new project starts with python -m venv .venv. Make it a reflex, like saving your file before running it. Your future self — and every collaborator who touches your code — will thank you.
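That reflex can even be scripted: `python -m venv` runs the standard library's `venv` module under the hood, and you can call it from Python directly. A minimal sketch (we pass `with_pip=False` only to keep the demo fast; for a real project you want `with_pip=True` so pip is available inside the environment):

```python
# Sketch: create a throwaway virtual environment with the stdlib venv module.
# Every venv records its configuration in a pyvenv.cfg file at its root.
import tempfile
import venv
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    env_dir = Path(tmp) / ".venv"
    venv.create(env_dir, with_pip=False)
    print((env_dir / "pyvenv.cfg").exists())  # → True
```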
Looking Ahead
In Chapter 24, you'll use everything from this chapter in practice. You'll create a virtual environment, install beautifulsoup4 and requests, and build a web scraping and automation pipeline. The dependency management skills you just learned will let you set up that project's environment in seconds — and share it with anyone who wants to run your code.
"The best time to create a virtual environment was when you started the project. The second best time is now."