Chapter 8: Prompt Engineering Fundamentals

> "The quality of the code an AI produces is directly proportional to the quality of the prompt you give it."

Learning Objectives

By the end of this chapter, you will be able to:

  1. Remember the five pillars of effective code prompts: clarity, specificity, context, constraints, and output formatting. (Bloom's: Remember)
  2. Understand why prompt quality is the single most important factor in vibe coding success. (Bloom's: Understand)
  3. Apply the five-pillar framework to transform vague prompts into precise, effective instructions. (Bloom's: Apply)
  4. Analyze existing prompts to identify weaknesses and anti-patterns. (Bloom's: Analyze)
  5. Evaluate prompt effectiveness using concrete metrics and quality indicators. (Bloom's: Evaluate)
  6. Create reusable prompt templates for common coding tasks in your workflow. (Bloom's: Create)

Introduction

If vibe coding is a conversation between you and an AI assistant, then prompts are the words you choose to speak. And just like in human communication, the words you choose matter enormously. A vague request produces vague results. A precise, well-structured prompt produces code that matches your intent with remarkable accuracy.

This chapter is the foundation of Part II and, in many ways, the most important chapter in this entire book. In Part I, you learned what vibe coding is (Chapter 1), how AI coding assistants work under the hood (Chapter 2), surveyed the tool landscape (Chapter 3), set up your development environment (Chapter 4), reviewed essential Python concepts (Chapter 5), completed your first vibe coding session (Chapter 6), and learned to read and understand AI-generated code (Chapter 7). All of that groundwork leads here: to the skill of writing prompts that reliably produce the code you need.

Prompt engineering is not about memorizing magic phrases or discovering secret tricks. It is a systematic discipline built on clear principles. In this chapter, you will learn those principles, see them in action through dozens of before-and-after examples, identify common mistakes to avoid, and build a personal library of templates you can use immediately.

Let us begin.


8.1 The Anatomy of an Effective Code Prompt

Every effective code prompt, whether it is a one-liner or a detailed specification, contains some combination of five fundamental components. We call these the Five Pillars of Effective Prompts:

  1. Clarity — Is the prompt unambiguous? Can it be interpreted only one way?
  2. Specificity — Does it include the right level of detail?
  3. Context — Does it provide the background information the AI needs?
  4. Constraints — Does it define boundaries, requirements, and limitations?
  5. Output Formatting — Does it specify how the response should be structured?

Not every prompt needs all five pillars at maximum strength. A simple prompt like "Write a Python function that reverses a string" has reasonable clarity and specificity for a trivial task. But as tasks grow in complexity, neglecting any pillar leads to disappointing results.

Intuition Box: The Restaurant Analogy

Think of prompting an AI like ordering at a restaurant. Saying "bring me food" (no clarity, no specificity) will get you something, but probably not what you wanted. Saying "I'd like the grilled salmon, medium-rare, with the lemon butter sauce on the side, and steamed broccoli instead of the fries" gives the kitchen everything it needs to deliver exactly what you want. Prompt engineering is learning to order precisely.

The Five Pillars Visualized

Consider this mental model of how the five pillars work together:

┌─────────────────────────────────────────────┐
│           EFFECTIVE CODE PROMPT              │
│                                             │
│  ┌─────────┐ ┌───────────┐ ┌─────────┐    │
│  │ CLARITY │ │SPECIFICITY│ │ CONTEXT │    │
│  │         │ │           │ │         │    │
│  │ What do │ │ How much  │ │ What    │    │
│  │ you     │ │ detail?   │ │ does AI │    │
│  │ mean?   │ │           │ │ need to │    │
│  │         │ │           │ │ know?   │    │
│  └─────────┘ └───────────┘ └─────────┘    │
│                                             │
│  ┌────────────┐  ┌──────────────────┐      │
│  │CONSTRAINTS │  │ OUTPUT FORMAT    │      │
│  │            │  │                  │      │
│  │ What are   │  │ How should the   │      │
│  │ the rules? │  │ response look?   │      │
│  └────────────┘  └──────────────────┘      │
└─────────────────────────────────────────────┘

Let us look at a concrete example showing how these pillars transform a prompt:

Weak prompt (missing most pillars):

Make a login function.

Strong prompt (all five pillars present):

Write a Python function called `authenticate_user` that takes an email
(string) and password (string) as parameters. It should:

1. Validate that the email matches a standard email regex pattern.
2. Hash the password using bcrypt.
3. Query a SQLAlchemy User model to find a matching user.
4. Return a dictionary with keys "success" (bool), "user_id" (int or None),
   and "error" (string or None).

Constraints:
- Use Python 3.11+ type hints throughout.
- Raise ValueError for empty email or password.
- Do not store or log the plaintext password.
- Follow PEP 8 style conventions.

Include a docstring with parameters, return type, and example usage.

The weak prompt could produce a login function in any language, for any system, with any interface, using any authentication strategy. The strong prompt leaves very little room for misinterpretation.


8.2 Clarity: Saying Exactly What You Mean

Clarity is the first and most fundamental pillar. A clear prompt communicates your intent without ambiguity. When you write a clear prompt, every competent developer reading it would understand the same thing.

The Ambiguity Problem

Natural language is inherently ambiguous, and AI coding assistants interpret your words based on statistical patterns in their training data. When your prompt is ambiguous, the AI makes its best guess about what you meant, and that guess may not match your intent.

Example of an ambiguous prompt:

Handle the error in the process function.

What does "handle" mean here? Should it catch the exception and log it? Re-raise it? Return a default value? Show a user-friendly message? And which error? There may be multiple failure modes in the function.

Clarified version:

In the `process_payment` function, add a try-except block around the
Stripe API call. Catch `stripe.error.CardError` and return a dictionary
with {"success": False, "error": str(e)}. Catch `stripe.error.APIConnectionError`
and retry up to 3 times with exponential backoff before raising a
custom `PaymentServiceUnavailable` exception.
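The retry-with-backoff behavior this prompt describes can be sketched independently of any payment library. The exception names below (`TransientError`, `PaymentServiceUnavailable`) are hypothetical stand-ins for `stripe.error.APIConnectionError` and the custom exception named in the prompt:

```python
import time


class TransientError(Exception):
    """Stand-in for a recoverable error like stripe.error.APIConnectionError."""


class PaymentServiceUnavailable(Exception):
    """Raised after all retries are exhausted."""


def call_with_retry(func, retries: int = 3, base_delay: float = 0.01):
    """Call func(), retrying on TransientError with exponential backoff."""
    for attempt in range(retries):
        try:
            return func()
        except TransientError:
            if attempt == retries - 1:
                raise PaymentServiceUnavailable("gave up after retries")
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
```

Because the prompt named the exception types, the retry count, and the backoff strategy, there is exactly one reasonable way to implement each piece.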

Strategies for Achieving Clarity

1. Use precise verbs. Instead of "handle," "fix," or "deal with," use verbs that describe the exact action: "catch," "validate," "transform," "filter," "aggregate," "raise."

| Vague Verb | Precise Alternatives |
|------------|----------------------|
| Handle | Catch, validate, reject, retry, log, re-raise |
| Fix | Replace X with Y, add null check, correct the off-by-one in the loop range |
| Make | Create, generate, implement, define, instantiate |
| Change | Rename, refactor, extract, move, convert, swap |
| Improve | Reduce time complexity from O(n^2) to O(n log n), add caching, remove redundant queries |
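Precise alternatives like "add caching" also map directly onto code. A minimal standard-library sketch of what that instruction yields:

```python
from functools import lru_cache


@lru_cache(maxsize=None)  # "add caching": memoize repeated calls
def fibonacci(n: int) -> int:
    """Naive recursive Fibonacci, made fast by memoization."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```

An instruction this precise is also verifiable: with the cache, deep calls return instantly; without it, the same recursion takes exponential time.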

2. Name things explicitly. Do not say "the function" when you can say "calculate_shipping_cost." Do not say "the variable" when you can say "the total_price variable on line 42."

3. Avoid pronouns with unclear antecedents. In a multi-turn conversation, "it" and "that" can refer to many things. When in doubt, restate the noun.

Common Pitfall: The Curse of Knowledge

You know what you mean, so your prompt seems clear to you. But the AI does not share your mental context. A useful test: could a junior developer who just joined your team understand exactly what you want from this prompt alone? If not, add more detail.

4. One task per prompt (when starting out). Complex prompts that ask for multiple things at once can confuse the AI about priorities and relationships. As a beginner, keep prompts focused on a single, well-defined task. As you gain experience, you will learn when and how to combine multiple tasks effectively (covered in Chapters 12 and 13).

Before and After: Clarity

Before:

Make the data processing better.

After:

Refactor the `clean_data` function in data_pipeline.py to:
1. Replace the nested for-loops (lines 45-62) with pandas vectorized operations.
2. Add input validation that raises TypeError if the input is not a pandas DataFrame.
3. Add a `verbose` parameter (default False) that prints row counts before
   and after cleaning when set to True.

The "before" prompt is so vague that the AI has to guess what "better" means. The "after" prompt specifies exactly what changes to make, where to make them, and how they should behave.


8.3 Specificity: The Right Level of Detail

Specificity is about calibrating how much detail you include. Too little detail and the AI guesses; too much detail and you waste time writing a prompt that is longer than the code would be. The goal is to hit the sweet spot.

The Specificity Spectrum

Too Vague          Just Right          Too Detailed
|────────────────────|────────────────────|
"Sort a list"      "Sort a list of      "Create a function
                    dictionaries by      named sort_records
                    the 'created_at'     that takes parameter
                    key in descending    named records which
                    order, handling      is a list where each
                    None values by       element is a dict
                    placing them last"   that has a key called
                                        'created_at' whose
                                        value is a string
                                        in ISO 8601 format
                                        specifically YYYY-MM-
                                        DDTHH:MM:SS and..."

When to Be More Specific

You should increase specificity when:

  • The task has multiple valid interpretations. "Parse the date" could mean parsing from a string, extracting from a larger text, converting between formats, or handling time zones.
  • You have strong preferences about the implementation. If you want a particular algorithm, library, or pattern, say so.
  • The consequences of a wrong guess are high. Security-sensitive code, database migrations, or code that processes financial data all warrant extra specificity.
  • You are working with domain-specific terminology. AI assistants may not know that "spread" means something different in finance than in statistics.

When to Be Less Specific

You can afford less specificity when:

  • The task is genuinely straightforward. "Write a Python function that checks if a number is prime" is specific enough for most contexts.
  • You want to see the AI's approach first. Sometimes you do not have a strong opinion about implementation and want to see what the AI suggests, then iterate.
  • The conventions are well-established. "Write a Django model for a blog post" carries enough implied specificity because Django model conventions are well-known.

Best Practice: The Goldilocks Rule of Specificity

Include enough detail that the AI cannot reasonably produce the wrong thing, but not so much that you are effectively writing pseudocode. If your prompt is more detailed than the code it would produce, you have gone too far.

Before and After: Specificity

Before (too vague):

Write a function to validate user input.

After (appropriately specific):

Write a Python function `validate_registration_form` that takes a dictionary
with keys "username", "email", and "password". Validate:
- username: 3-20 characters, alphanumeric and underscores only
- email: valid email format (use a regex or the `email-validator` library)
- password: minimum 8 characters, at least one uppercase, one lowercase, one digit

Return a dictionary mapping field names to lists of error message strings.
Return an empty dict if all validations pass.

The second prompt provides the right amount of detail: it specifies the input format, validation rules, and return format without dictating every line of the implementation.
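A plausible implementation of what the specific prompt asks for, sketched with simple regexes (the prompt also permits the `email-validator` library instead):

```python
import re


def validate_registration_form(form: dict) -> dict:
    """Validate username, email, and password fields.

    Returns a dict mapping field names to lists of error message strings;
    an empty dict means all validations passed.
    """
    errors: dict = {}

    username = form.get("username", "")
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        errors.setdefault("username", []).append(
            "must be 3-20 characters, alphanumeric and underscores only"
        )

    email = form.get("email", "")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.setdefault("email", []).append("invalid email format")

    password = form.get("password", "")
    if (len(password) < 8
            or not re.search(r"[A-Z]", password)
            or not re.search(r"[a-z]", password)
            or not re.search(r"\d", password)):
        errors.setdefault("password", []).append(
            "minimum 8 characters with at least one uppercase, "
            "one lowercase, and one digit"
        )

    return errors
```

Every rule in the function corresponds to a bullet in the prompt, so reviewing the output against the prompt is a line-by-line check.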


8.4 Context: Providing Background Information

Context is the background information the AI needs to produce code that fits into your specific situation. Without context, the AI writes generic code. With context, it writes code that integrates smoothly with your project.

Types of Context

1. Technology Stack Context

I'm using Flask 3.0 with SQLAlchemy 2.0, Python 3.12, and PostgreSQL 16.
The project follows the application factory pattern.

This tells the AI which libraries, versions, and architectural patterns to target, preventing it from generating code for the wrong framework or using deprecated APIs.

2. Codebase Context

Here is my existing User model:

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120), unique=True, nullable=False)
    password_hash = db.Column(db.String(256), nullable=False)
    created_at = db.Column(db.DateTime, default=datetime.utcnow)

Write a function that creates a new user, fitting this existing model.

Showing the AI your existing code ensures it generates compatible code rather than inventing its own data structures.

3. Domain Context

This is for an e-commerce platform where products can have multiple
variants (size, color). A single SKU maps to exactly one variant
combination. Prices are stored in cents (integers) to avoid
floating-point issues.

Domain context helps the AI make correct assumptions about business logic, naming conventions, and data representations.
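The cents-as-integers convention in that example exists because binary floating point cannot represent most decimal fractions exactly; integers keep money arithmetic exact:

```python
# Floating-point dollars accumulate representation error:
assert 0.1 + 0.2 != 0.3  # actually 0.30000000000000004

# Integer cents stay exact; format as dollars only at display time:
subtotal_cents = 10 + 20
assert subtotal_cents == 30
print(f"${subtotal_cents / 100:.2f}")  # "$0.30"
```

Stating this convention in the prompt prevents the AI from defaulting to float prices, a guess that would be subtly wrong.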

4. Problem Context

Users are reporting that the search function returns duplicate results
when a product belongs to multiple categories. The current implementation
uses a SQL JOIN across the products and categories tables without DISTINCT.

Explaining the problem you are solving helps the AI target the right fix rather than guessing what is wrong.

Real-World Application: The Context Window Budget

As you learned in Chapter 2, AI models have a finite context window. Every token of context you provide uses part of that budget. For large projects, you cannot paste your entire codebase. Instead, provide the relevant context: the specific files, functions, and data structures the AI needs to see. Chapter 9 covers context management in depth.

How Much Context Is Enough?

A useful heuristic: provide enough context that the AI could write code passing a code review without needing follow-up questions. If a human reviewer would ask "What database are you using?" or "What does the User model look like?" then the AI needs that same information.

Before and After: Context

Before (no context):

Write an endpoint to get user orders.

After (rich context):

I'm building a REST API with FastAPI and SQLAlchemy (async). Here are
my existing models:

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(120), unique=True)

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    total_cents: Mapped[int]
    status: Mapped[str] = mapped_column(String(20))
    created_at: Mapped[datetime]

Write a GET endpoint at /users/{user_id}/orders that:
- Returns paginated orders (page/page_size query params, default 1/20)
- Filters by optional status query parameter
- Returns 404 if user doesn't exist
- Uses the async session pattern
- Returns orders sorted by created_at descending

The context-rich prompt gives the AI everything it needs to write an endpoint that fits seamlessly into the existing codebase.


8.5 Constraints: Defining Boundaries and Requirements

Constraints tell the AI what it must do, what it must not do, and what limits apply. Without constraints, the AI optimizes for what it considers the "most likely" good solution, which may not match your requirements.

Types of Constraints

Functional Constraints define what the code must accomplish:

- Must handle up to 10,000 records without exceeding 512MB of memory
- Must complete within 2 seconds for a typical input of 1,000 items
- Must return results in the same order as the input

Technical Constraints define implementation requirements:

- Use only the Python standard library (no third-party packages)
- Compatible with Python 3.9+
- Must be thread-safe
- Do not use recursion (stack depth is limited in production)

Style Constraints define code quality expectations:

- Follow PEP 8 with a maximum line length of 88 characters (Black formatter)
- Include type hints for all function parameters and return values
- Add docstrings in Google style format
- Use meaningful variable names (no single-letter names except loop counters)

Security Constraints define safety requirements:

- Sanitize all user inputs before passing to SQL queries
- Do not log or print any personally identifiable information (PII)
- Use parameterized queries exclusively (no string concatenation for SQL)
- Validate file uploads: max 5MB, allowed extensions: .jpg, .png, .pdf

Negative Constraints (what NOT to do) are particularly powerful:

- Do NOT use global variables
- Do NOT modify the input data structure in place
- Do NOT catch broad Exception—catch specific exception types only
- Do NOT use deprecated datetime.utcnow(); use datetime.now(timezone.utc)
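The last negative constraint reflects a real deprecation: `datetime.utcnow()` returns a naive datetime (no timezone attached) and is deprecated as of Python 3.12. A quick illustration of the difference:

```python
from datetime import datetime, timezone

# Deprecated pattern: naive datetime, easy to misinterpret downstream.
naive = datetime.utcnow()
assert naive.tzinfo is None

# Preferred pattern: timezone-aware UTC timestamp.
aware = datetime.now(timezone.utc)
assert aware.tzinfo is not None
```

Encoding such constraints in your prompt matters because models trained on older code will happily reproduce the deprecated pattern.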

Advanced Box: Constraint Prioritization

When you have multiple constraints, consider stating their priority. For example: "Security constraints take precedence over performance. If there is a conflict between readability and clever optimization, choose readability." This helps the AI make trade-off decisions that align with your values.

Before and After: Constraints

Before (no constraints):

Write a function to resize images.

After (well-constrained):

Write a Python function `resize_image` using the Pillow library that:
- Takes a file path (str) and target dimensions (tuple of width, height)
- Preserves aspect ratio (fit within target dimensions, don't stretch)
- Supports JPEG, PNG, and WebP formats
- Raises ValueError for unsupported formats
- Does NOT modify the original file; saves to a new path with "_resized" suffix
- Maintains EXIF data from the original image
- Uses Lanczos resampling for quality
- Handles images up to 50MP without loading the entire image into memory
  (use Pillow's thumbnail method for large images)

The constrained version eliminates an enormous range of possible implementations that would not meet your needs, guiding the AI toward exactly the solution you want.


8.6 Output Formatting: Controlling the Response Shape

The fifth pillar controls how the AI structures its response. Without formatting guidance, you get whatever format the AI considers default, which may not be what you need.

Common Format Instructions

Requesting structured output:

Return the results as a list of dictionaries, where each dictionary has:
- "name": string
- "score": float between 0.0 and 1.0
- "passed": boolean

Example output:
[
    {"name": "test_login", "score": 0.95, "passed": True},
    {"name": "test_signup", "score": 0.72, "passed": False}
]

Requesting specific code structure:

Structure the code as:
1. A dataclass for the configuration
2. A main processing function
3. A helper function for validation
4. An if __name__ == "__main__" block with example usage

Do not include import statements for standard library modules at the top—
instead, add a comment listing required imports.
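A response honoring that four-part structure request might be laid out like this hypothetical sketch (the task, cleaning a list of strings, is invented for illustration, and normal imports are kept so the snippet runs standalone):

```python
from dataclasses import dataclass


@dataclass
class CleanerConfig:
    """1. A dataclass for the configuration."""
    strip_whitespace: bool = True
    drop_empty: bool = True


def _is_valid(item: str, config: CleanerConfig) -> bool:
    """3. A helper function for validation."""
    return bool(item) or not config.drop_empty


def clean_items(items: list, config: CleanerConfig) -> list:
    """2. The main processing function."""
    cleaned = [s.strip() if config.strip_whitespace else s for s in items]
    return [s for s in cleaned if _is_valid(s, config)]


if __name__ == "__main__":
    # 4. Example usage.
    print(clean_items(["  a  ", "", "b"], CleanerConfig()))
```

The point is not this particular code but that the response's shape was dictated by the prompt rather than left to the model's default habits.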

Requesting explanation alongside code:

For each function you write, include:
1. The code itself
2. A brief comment (2-3 sentences) explaining the design decision
3. An example of calling the function with realistic test data

Requesting specific documentation format:

Use Google-style docstrings with the following sections:
- Summary line
- Args (with types)
- Returns (with type)
- Raises (list specific exceptions)
- Example (runnable doctest)
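A function documented to that specification might look like this (the function itself is a hypothetical example):

```python
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit.

    Args:
        celsius (float): Temperature in degrees Celsius.

    Returns:
        float: Temperature in degrees Fahrenheit.

    Raises:
        TypeError: If celsius is not a number.

    Example:
        >>> celsius_to_fahrenheit(100.0)
        212.0
    """
    if not isinstance(celsius, (int, float)):
        raise TypeError("celsius must be a number")
    return celsius * 9 / 5 + 32
```

Because the prompt demanded a runnable doctest, the Example section doubles as a test you can execute with `python -m doctest`.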

Best Practice: Show, Don't Just Tell

When specifying output format, include a concrete example of what you want. An example is worth a hundred words of description. Instead of "return it as JSON," show a sample JSON structure with realistic placeholder data.

Before and After: Output Formatting

Before (no format guidance):

Analyze this code for potential bugs.

After (explicit format):

Analyze the following Python function for potential bugs. For each bug found,
provide:

1. **Line number**: The specific line where the issue occurs
2. **Severity**: Critical / Warning / Info
3. **Issue**: One-sentence description of the bug
4. **Fix**: The corrected code snippet
5. **Explanation**: Why this is a bug and how the fix resolves it

Format the response as a numbered list. If no bugs are found, explicitly
state "No bugs found" rather than leaving the response empty.

The formatted version ensures you get actionable, consistently structured feedback rather than a free-form paragraph that might miss details.


8.7 The Prompt Quality Spectrum

Not all prompts are equal, and understanding where a prompt falls on the quality spectrum helps you calibrate your effort. Here is a framework for thinking about prompt quality across five levels:

Level 1: The Wish (Poor)

Make me a website.

This prompt has no clarity about what kind of website, no specificity about features, no context about technology, no constraints, and no format guidance. The AI will produce something, but it is almost certainly not what you need.

Level 2: The Topic (Below Average)

Create a login page with HTML and CSS.

This is better. It has a topic (login page) and some technology context (HTML and CSS). But it lacks details about layout, styling, validation, responsive design, and integration with a backend. The AI will produce a generic login page that you will spend significant time modifying.

Level 3: The Specification (Good)

Create a responsive login page with:
- HTML5 semantic elements
- CSS using Flexbox for centering
- Fields: email and password
- A "Remember Me" checkbox
- A "Forgot Password?" link
- Client-side validation (required fields, email format)
- Accessible: proper labels, ARIA attributes, keyboard navigation
- Color scheme: dark blue (#1a365d) primary, white background

This prompt hits most of the five pillars. It has clear intent, specific features, some technical context, accessibility constraints, and implied output format (HTML + CSS code).

Level 4: The Blueprint (Very Good)

Create a responsive login page for our SaaS application. Here is what I need:

Technology: HTML5, CSS3 (using CSS custom properties), vanilla JavaScript
Design System: Our brand uses Inter font, 8px spacing grid, border-radius: 6px

Layout:
- Centered card (max-width: 400px) on a gradient background (#1a365d to #2d3748)
- Company logo placeholder at top (48px height)
- "Welcome back" heading (h1, 24px)
- Subtext: "Sign in to your account" (p, 14px, gray-500)

Form fields:
- Email: type="email", required, with envelope icon
- Password: type="password", required, with eye toggle for visibility
- "Remember me" checkbox (left-aligned)
- "Forgot password?" link (right-aligned, same line as checkbox)
- Submit button: full-width, blue (#3182ce), white text, hover darkens 10%

Validation:
- Client-side: required fields, email format regex
- Show inline error messages below each field in red (#e53e3e)
- Disable submit button until both fields have content

Accessibility:
- All inputs have associated labels (visually hidden if using placeholders)
- ARIA live region for error announcements
- Full keyboard navigation with visible focus indicators
- Meets WCAG 2.1 AA contrast requirements

Responsive:
- Mobile: card is full-width with 16px padding
- Tablet+: card is centered with shadow

Do NOT include: social login buttons, registration link, or any JavaScript
framework. This should be plain HTML/CSS/JS in a single file.

This is a comprehensive blueprint. It covers all five pillars thoroughly and would produce a login page very close to what the developer envisions.

Level 5: The Contract (Expert)

Level 5 prompts read like a formal specification or API contract. They are most appropriate for complex tasks where precision is paramount, such as writing a payment processing module or a data migration script. We will explore this level in Chapter 10 (Specification-Driven Prompting).

Intuition Box: Matching Prompt Level to Task Complexity

You do not need a Level 4 prompt for a simple utility function. A Level 2 or 3 prompt is fine for "write a function that reverses a string." The goal is to match your prompt's detail level to the task's complexity and the cost of getting it wrong. Simple, low-risk tasks need simple prompts. Complex, high-risk tasks need detailed prompts.


8.8 Common Prompt Anti-Patterns

Knowing what to avoid is just as important as knowing what to do. Here are the most common prompt anti-patterns that undermine vibe coding effectiveness.

Anti-Pattern 1: The "Just Do It" Prompt

Fix the bug.

Why it fails: The AI does not know which bug, in which file, what the expected behavior is, or what the current behavior is. It may not even know what language you are working in if no code is visible in the conversation.

Better approach:

In the `calculate_tax` function (tax_utils.py, line 34), the tax rate is
applied before the discount is subtracted, resulting in overcharging.
The correct logic: apply discount first, then calculate tax on the
discounted amount. The discount is stored in `order.discount_percent`
as a whole number (e.g., 15 for 15%).

Anti-Pattern 2: The Wall of Text

I need a function that takes data from our database which is PostgreSQL
running on AWS RDS and the data includes user information like names and
emails and also their order history and I need to combine all of this
into a report that shows the total spending per user and also which
products they bought most frequently and the function should also handle
the case where a user has no orders and it should be efficient because
we have millions of users and also it needs to work with our existing
ORM which is SQLAlchemy and output a CSV file...

Why it fails: It is a single run-on sentence with no structure, making it hard for both humans and AI to parse. Important requirements get buried in the stream of consciousness.

Better approach: Break it into structured sections:

Goal: Generate a user spending report as a CSV file.

Input: PostgreSQL database (AWS RDS) accessed via SQLAlchemy ORM.

Tables involved:
- users (id, name, email)
- orders (id, user_id, total_cents, created_at)
- order_items (id, order_id, product_id, quantity)
- products (id, name, price_cents)

Report columns:
1. user_id
2. user_name
3. user_email
4. total_spending (sum of order totals, formatted as dollars)
5. order_count
6. most_purchased_product (product with highest total quantity)

Requirements:
- Include users with zero orders (show 0 spending, 0 orders, "N/A" for product)
- Must handle millions of users efficiently (use batch processing, not load all)
- Output to a CSV file with a header row

Use SQLAlchemy 2.0 query syntax.

Anti-Pattern 3: The Contradictory Prompt

Write a simple function with comprehensive error handling that is concise
but thoroughly documented. Keep it minimal but cover all edge cases.
Make it fast but prioritize readability.

Why it fails: Every other sentence contradicts the previous one. "Simple" and "comprehensive" are in tension. "Concise" and "thoroughly documented" pull in opposite directions. The AI has to make trade-off decisions without knowing your priorities.

Better approach: State your priorities clearly:

Write a function with these priorities (in order):
1. Readability (clear variable names, simple logic flow)
2. Correctness (handle None inputs and empty lists gracefully)
3. Documentation (Google-style docstring with args, returns, raises)

Performance is not a concern for this function—it processes small datasets.
Keep it under 30 lines of code excluding the docstring.
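A function written to those stated priorities might come back looking like this sketch (the task itself, averaging scores, is hypothetical):

```python
def average_score(scores) -> float:
    """Compute the average of a list of scores.

    Args:
        scores: A list of numeric scores, possibly empty or None.

    Returns:
        The arithmetic mean, or 0.0 for None or empty input.
    """
    # Correctness priority: handle None and empty lists gracefully.
    if not scores:
        return 0.0
    # Readability priority: one obvious expression, no cleverness.
    return sum(scores) / len(scores)
```

With priorities stated and contradictions removed, there is no tension left for the AI to resolve by guessing.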

Anti-Pattern 4: The Assumption Dump

Write the API endpoint.

Why it fails: This assumes the AI knows which endpoint, which API, which framework, which HTTP method, what the request and response formats are, and what business logic to implement. It relies on context that may exist in the developer's head but was never communicated.

Anti-Pattern 5: The Kitchen Sink Prompt

Write a complete user management system with registration, login,
password reset, email verification, two-factor authentication,
role-based access control, user profiles, avatar uploads, activity
logging, admin dashboard, API endpoints for all operations, unit tests,
integration tests, database migrations, and deployment configuration.

Why it fails: This is not a prompt; it is a project specification that would take a team weeks to implement. AI coding assistants work best on focused, well-defined tasks. Attempting to generate an entire system in one prompt produces superficial, incomplete code for each component.

Better approach: Break it into individual tasks and build incrementally:

Let's build a user management system step by step.

Step 1: Write a SQLAlchemy User model with fields for id, email,
password_hash, is_active, is_verified, and created_at. Include
a method to set and check passwords using bcrypt.

Then in subsequent prompts, add registration, login, and so on, each building on the previous code.

Anti-Pattern 6: The Implicit Standard

Write good code for this.

Why it fails: "Good" is subjective and means different things in different contexts. Good for performance? Good for readability? Good for maintainability? Good according to which style guide?

Common Pitfall: Prompt Length versus Prompt Quality

Longer prompts are not automatically better prompts. A concise, well-structured prompt with clear sections and bullet points often outperforms a long prose paragraph. Quality comes from structure and precision, not word count.


8.9 Prompt Templates for Common Tasks

Templates give you a reusable starting structure for common coding tasks. Customize them for your specific needs rather than starting from scratch every time.

Template 1: Function Generation

Write a [language] function called `[function_name]` that [primary purpose].

Parameters:
- [param_name] ([type]): [description]
- [param_name] ([type]): [description]

Returns:
- [type]: [description of return value]

Behavior:
1. [Step 1 of the logic]
2. [Step 2 of the logic]
3. [Step 3 of the logic]

Error handling:
- Raise [ExceptionType] if [condition]
- Return [default] if [condition]

Constraints:
- [constraint 1]
- [constraint 2]

Include a docstring and type hints.

Example usage of this template:

Write a Python function called `calculate_compound_interest` that computes
the future value of an investment with compound interest.

Parameters:
- principal (float): The initial investment amount in dollars
- annual_rate (float): The annual interest rate as a decimal (e.g., 0.05 for 5%)
- years (int): The number of years to compound
- compounds_per_year (int): How many times interest compounds per year (default: 12)

Returns:
- float: The future value rounded to 2 decimal places

Behavior:
1. Apply the compound interest formula: A = P(1 + r/n)^(nt)
2. Round the result to 2 decimal places
3. Return the future value

Error handling:
- Raise ValueError if principal is negative
- Raise ValueError if annual_rate is negative
- Raise ValueError if years is less than 1

Constraints:
- Use only the Python standard library (math module is fine)
- Must handle very large values without floating-point overflow

Include a docstring and type hints.
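To see what a fully specified prompt like this buys you, here is the kind of implementation it should produce. This is a hand-written sketch, not guaranteed AI output, and it glosses over the large-value overflow constraint for brevity:

```python
def calculate_compound_interest(
    principal: float,
    annual_rate: float,
    years: int,
    compounds_per_year: int = 12,
) -> float:
    """Compute the future value of an investment with compound interest.

    Args:
        principal: Initial investment amount in dollars.
        annual_rate: Annual interest rate as a decimal (e.g., 0.05 for 5%).
        years: Number of years to compound.
        compounds_per_year: Compounding periods per year (default 12).

    Returns:
        The future value rounded to 2 decimal places.

    Raises:
        ValueError: If principal or annual_rate is negative, or years < 1.
    """
    if principal < 0:
        raise ValueError("principal must be non-negative")
    if annual_rate < 0:
        raise ValueError("annual_rate must be non-negative")
    if years < 1:
        raise ValueError("years must be at least 1")
    # Compound interest formula: A = P * (1 + r/n) ** (n * t)
    amount = principal * (1 + annual_rate / compounds_per_year) ** (
        compounds_per_year * years
    )
    return round(amount, 2)


print(calculate_compound_interest(1000.0, 0.05, 10))  # 1647.01
```

Notice how every branch in this code traces back to a line in the prompt: the validation checks come from the error-handling section, the rounding from the behavior steps, and the default parameter from the parameter list.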

Template 2: Bug Fix

Bug description: [What is happening vs. what should happen]

File: [filename]
Function/method: [name]
Line(s): [approximate line numbers if known]

Current behavior: [What the code does now]
Expected behavior: [What it should do]

Relevant code:
```[language]
[paste the relevant code]
```

Additional context:
- [How to reproduce]
- [Error message if any]
- [What you have already tried]

Please fix the bug and explain what was wrong.


Template 3: Code Refactoring

Refactor the following [language] code to improve [specific goal: readability / performance / maintainability / testability].

Current code:
```[language]
[paste code]
```

Specific improvements needed:
1. [improvement 1]
2. [improvement 2]

Constraints:
- Maintain the same public interface (same function signatures)
- Do not change the behavior (all existing tests should still pass)
- [additional constraints]

Explain each change you make and why.

Template 4: Test Generation

Write [test framework] tests for the following [language] function:

```[language]
[paste the function to test]
```

Test categories to cover:
1. Happy path: [describe normal usage scenarios]
2. Edge cases: [describe boundary conditions]
3. Error cases: [describe expected failures]

Testing conventions:
- Use descriptive test names following the pattern: test_[what]_[condition]_[expected]
- Each test should test exactly one behavior
- Use [assertion style / matchers]
- [additional conventions]

Aim for [number] tests total.


Template 5: Code Review

Review the following [language] code for:
1. Bugs or logical errors
2. Security vulnerabilities
3. Performance issues
4. Style and readability concerns
5. Missing error handling

```[language]
[paste code]
```


For each issue found, provide:
- Severity (Critical / Warning / Suggestion)
- Location (line number or section)
- Description of the issue
- Recommended fix with code

If the code is well-written in any aspect, mention that too.

Best Practice: Build Your Own Template Library

Start with these templates and customize them for your projects and coding style. Over time, you will develop templates specific to your technology stack, team conventions, and the types of tasks you perform most frequently. Case Study 2 later in this chapter walks through building a team template library in detail. The accompanying code examples include a Python-based template system you can use as a starting point.
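As a starting point for such a system, a minimal template store can be built on the standard library's `string.Template`. The template below and its field names are illustrative, not the exact system shipped with the book's code examples:

```python
from string import Template

# Illustrative prompt template; customize placeholders for your own tasks.
BUG_FIX_TEMPLATE = Template(
    "Bug description: $description\n"
    "File: $filename\n"
    "Current behavior: $current\n"
    "Expected behavior: $expected\n"
    "Please fix the bug and explain what was wrong."
)


def render_prompt(template: Template, **fields: str) -> str:
    """Fill in a prompt template, raising KeyError if any placeholder is missing."""
    return template.substitute(**fields)


prompt = render_prompt(
    BUG_FIX_TEMPLATE,
    description="login fails for emails containing a + sign",
    filename="auth/validators.py",
    current="raises ValueError on 'a+b@example.com'",
    expected="accepts RFC-compliant addresses",
)
print(prompt)
```

Using `substitute` rather than `safe_substitute` is deliberate: a missing field fails loudly instead of silently leaving a `$placeholder` in the prompt you send to the AI.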


8.10 Measuring Prompt Effectiveness

How do you know if a prompt is effective? You need concrete metrics and evaluation strategies, not just a vague feeling that "it worked."

The Four Measures of Prompt Effectiveness

1. First-Attempt Success Rate

The most direct measure: did the AI produce usable code on the first try? A well-crafted prompt should produce code that is correct (or very close to correct) without requiring multiple follow-up corrections.

Track this informally by noting how many prompts need follow-up:
- Excellent: Code works correctly with no modifications needed.
- Good: Code works with minor modifications (variable names, small logic tweaks).
- Acceptable: Code has the right structure but needs moderate fixes.
- Poor: Code is fundamentally wrong or addresses the wrong problem.

2. Iteration Count

How many follow-up prompts does it take to get from the initial response to working code? Effective prompts minimize this number. If you regularly need 5+ follow-up prompts, your initial prompts need work.

3. Code Quality Score

Even if the code works, is it good code? Evaluate on:
- Does it follow the style conventions you specified?
- Is it well-documented?
- Does it handle edge cases?
- Is it efficiently implemented?
- Would it pass a code review?

4. Reusability

Can you reuse the prompt (with minor modifications) for similar tasks in the future? Effective prompts are often templates waiting to happen. If you find yourself writing similar prompts repeatedly, extract the common structure into a template.

The Prompt-Response Feedback Loop

Effective vibe coding is an iterative process:

┌──────────────┐     ┌───────────────┐     ┌──────────────┐
│ Write prompt │────>│ Evaluate      │────>│ Refine prompt│
│              │     │ response      │     │ or accept    │
└──────────────┘     └───────────────┘     └──────┬───────┘
       ^                                          │
       └──────────────────────────────────────────┘

After each response, ask yourself:
1. Did the AI understand what I wanted? (Clarity issue if not)
2. Did it include the right details? (Specificity issue if not)
3. Did the code fit my project? (Context issue if not)
4. Did it follow my requirements? (Constraints issue if not)
5. Is the response in a usable format? (Output format issue if not)

The answer to "which pillar failed?" tells you exactly how to improve your next prompt.

Real-World Application: Prompt Journaling

Keep a log of your best prompts and the results they produced. Over time, this becomes an invaluable personal reference. Many experienced vibe coders maintain a document or repository of their most effective prompts organized by task type. This practice accelerates learning because you can review what worked, identify patterns, and reuse successful approaches. The code examples accompanying this chapter include a prompt analyzer tool that can help you evaluate your prompts against the five pillars automatically.
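A prompt journal does not need special tooling; an append-only JSON Lines file is enough to start. The sketch below uses hypothetical field names (`outcome`, `iterations`) mirroring the metrics above, not a format prescribed by the book's tools:

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def log_prompt(journal: Path, prompt: str, outcome: str, iterations: int) -> None:
    """Append one journal entry as a JSON line (field names are illustrative)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "outcome": outcome,        # e.g. "excellent", "good", "acceptable", "poor"
        "iterations": iterations,  # follow-up prompts needed to reach working code
    }
    with journal.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


journal = Path("prompt_journal.jsonl")
log_prompt(journal, "Write a Python function called `slugify` that ...", "good", 1)
```

Because each entry is one self-contained JSON line, you can later filter the journal by outcome or iteration count with a few lines of code and spot which kinds of prompts need the most rework.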

Quantitative Scoring

You can score a prompt on each of the five pillars using a simple 1-5 rubric:

| Score | Clarity | Specificity | Context | Constraints | Format |
|-------|---------|-------------|---------|-------------|--------|
| 1 | Ambiguous, multiple interpretations | No details | No background info | No requirements stated | No format guidance |
| 2 | Main intent clear, details fuzzy | Some details missing | Minimal context | Few constraints | Vague format hints |
| 3 | Clear intent, minor ambiguities | Adequate detail for task | Key context provided | Important constraints stated | General format specified |
| 4 | Unambiguous, well-structured | Precise detail level | Comprehensive context | Thorough constraints | Detailed format with example |
| 5 | Crystal clear, single interpretation | Perfect detail calibration | Complete relevant context | Prioritized constraints | Full format specification with sample |

A prompt scoring 3+ on all pillars will generally produce good results. A prompt scoring 4+ on all pillars will typically produce excellent results on the first attempt.
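This rubric is mechanical enough to automate. A sketch of a scorer that applies the "3+ is good, 4+ is excellent" rule (the function name and verdict strings are illustrative, not the book's analyzer tool):

```python
PILLARS = ("clarity", "specificity", "context", "constraints", "format")


def rate_prompt(scores: dict[str, int]) -> str:
    """Map per-pillar rubric scores (1-5) to an expected result quality."""
    for pillar in PILLARS:
        score = scores.get(pillar)
        if score is None or not 1 <= score <= 5:
            raise ValueError(f"need a 1-5 score for {pillar!r}")
    if all(scores[p] >= 4 for p in PILLARS):
        return "excellent: likely strong results on the first attempt"
    if all(scores[p] >= 3 for p in PILLARS):
        return "good: generally produces usable results"
    return "needs work: strengthen the lowest-scoring pillar first"


print(rate_prompt({"clarity": 4, "specificity": 4, "context": 5,
                   "constraints": 4, "format": 4}))
```

Note that the verdict is gated on the *minimum* pillar score, not the average: one weak pillar (say, missing context) can sink an otherwise strong prompt.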


Connecting the Pillars: A Complete Example

Let us tie everything together with a complete example that demonstrates all five pillars working in harmony.

The task: You need a function that processes CSV files containing sales data and generates a summary report.

A prompt using all five pillars:

[CONTEXT]
I'm building a sales analytics tool in Python 3.11. The project uses
pandas 2.1 for data manipulation and pathlib for file handling. The CSV
files come from our POS system and have these columns: transaction_id,
date (YYYY-MM-DD), product_name, category, quantity (int), unit_price
(float), store_id.

[CLARITY — What I Need]
Write a function called `generate_sales_summary` that reads a CSV file
and produces a summary dictionary with sales metrics.

[SPECIFICITY — The Details]
The summary should include:
1. Total revenue (quantity * unit_price, summed across all rows)
2. Number of unique products sold
3. Top 5 products by revenue (list of dicts with product_name and revenue)
4. Revenue breakdown by category (dict mapping category to total revenue)
5. Daily revenue trend (dict mapping date string to total revenue)
6. Store with highest revenue (dict with store_id and revenue)

[CONSTRAINTS]
- Handle missing values: skip rows where quantity or unit_price is NaN
- Handle empty files gracefully: return a summary with zero values
- Raise FileNotFoundError if the CSV file doesn't exist
- Raise ValueError if required columns are missing
- All monetary values should be rounded to 2 decimal places
- The function should handle files up to 1GB efficiently
- Do not modify the original CSV file

[OUTPUT FORMAT]
- Include type hints for the function signature and the return dict
- Use a TypedDict or dataclass for the return type definition
- Include a Google-style docstring with Args, Returns, Raises, and Example
- Add inline comments for any non-obvious logic

This prompt provides complete information across all five pillars. An AI assistant receiving this prompt would produce a function that closely matches the developer's intent with minimal need for iteration.
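For instance, the [OUTPUT FORMAT] section's request for a TypedDict return type might be satisfied with definitions like these. This is a sketch of one plausible shape; the AI's actual type names and field names could differ:

```python
from typing import TypedDict


class ProductRevenue(TypedDict):
    product_name: str
    revenue: float


class StoreRevenue(TypedDict):
    store_id: str  # could also be int, depending on the POS export
    revenue: float


class SalesSummary(TypedDict):
    total_revenue: float
    unique_products: int
    top_products: list[ProductRevenue]   # top 5 by revenue
    revenue_by_category: dict[str, float]
    daily_revenue: dict[str, float]      # "YYYY-MM-DD" -> revenue
    top_store: StoreRevenue


# The "handle empty files gracefully" constraint implies a zero-valued summary:
empty_summary: SalesSummary = {
    "total_revenue": 0.0,
    "unique_products": 0,
    "top_products": [],
    "revenue_by_category": {},
    "daily_revenue": {},
    "top_store": {"store_id": "", "revenue": 0.0},
}
```

Defining the return type explicitly like this is exactly what the [OUTPUT FORMAT] pillar buys you: the summary's shape is checkable by tools and readable at a glance, instead of being an undocumented dict.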


Chapter Summary

Prompt engineering is the foundational skill of vibe coding. In this chapter, you learned:

  1. The Five Pillars Framework: Every effective prompt is built on clarity, specificity, context, constraints, and output formatting. These pillars work together to minimize ambiguity and maximize the chance of getting the code you need on the first try.

  2. Clarity means removing ambiguity so your prompt can only be interpreted one way. Use precise verbs, name things explicitly, and avoid vague language.

  3. Specificity means calibrating the detail level to the task. Include enough detail to prevent wrong guesses, but not so much that you are writing pseudocode.

  4. Context means providing the background information the AI needs: your technology stack, existing code, domain knowledge, and the problem you are solving.

  5. Constraints mean defining what the code must do, must not do, and the limits that apply. Include functional, technical, style, and security constraints as appropriate.

  6. Output formatting means controlling how the AI structures its response, including code organization, documentation style, and response format.

  7. The Quality Spectrum ranges from vague wishes (Level 1) to detailed contracts (Level 5). Match your prompt's level to the task's complexity and risk.

  8. Anti-patterns to avoid include the "just do it" prompt, the wall of text, contradictory requirements, assumption dumps, kitchen sink prompts, and implicit standards.

  9. Templates provide reusable starting structures for common tasks: function generation, bug fixes, refactoring, testing, and code review.

  10. Measuring effectiveness using first-attempt success rate, iteration count, code quality, and reusability helps you improve systematically over time.

As you continue through Part II, you will build on these fundamentals. Chapter 9 covers context management in depth, Chapter 10 introduces specification-driven prompting, Chapter 11 teaches iterative refinement techniques, and Chapter 12 explores advanced prompting strategies. The five pillars you learned here form the foundation for everything that follows.

The best way to internalize these concepts is practice. Work through the exercises, experiment with the prompt templates, and use the prompt analyzer tool in the code examples to evaluate your prompts. Like any skill, prompt engineering improves with deliberate practice and honest self-assessment.


Key Vocabulary

| Term | Definition |
|------|------------|
| Prompt | The text instruction given to an AI coding assistant to produce a desired output |
| Five Pillars | The framework of clarity, specificity, context, constraints, and output formatting |
| Clarity | The quality of being unambiguous and having only one reasonable interpretation |
| Specificity | The calibrated level of detail included in a prompt |
| Context | Background information that helps the AI understand the project environment |
| Constraints | Boundaries, requirements, and limitations that the generated code must respect |
| Output formatting | Instructions that control the structure and presentation of the AI's response |
| Anti-pattern | A common but counterproductive approach to prompt writing |
| Prompt template | A reusable prompt structure with placeholders for task-specific details |
| First-attempt success rate | The percentage of prompts that produce usable code without follow-up |
| Iteration count | The number of follow-up prompts needed to reach working code |
| Context window | The maximum amount of text an AI model can process in a single interaction |
| Negative constraint | A constraint specifying what the code should NOT do |
| Prompt quality spectrum | A five-level scale from vague wishes to detailed contracts |

Next chapter: Chapter 9 — Context Management: Giving AI the Information It Needs