Appendix F: Prompt Notation and Conventions
This appendix defines the notation, formatting, and typographic conventions used throughout this book when presenting prompts, conversations, code examples, and instructional elements. Understanding these conventions will help you read the book efficiently and apply the techniques accurately in your own practice.
F.1 How Prompts Are Formatted in Text
Throughout the book, prompts are presented in several formats depending on the context.
Inline Prompts
Short prompts that appear within a paragraph are formatted in double quotation marks and monospace font:
Tell the assistant:
"Write a Python function that validates email addresses using regex."
When a prompt is only a few words and serves as an example of phrasing rather than a literal instruction, it appears in regular double quotes without monospace:
You might begin with something like "explain this function" or "add error handling."
Block Prompts
Prompts that span multiple lines or represent a complete interaction are displayed in fenced code blocks with the language identifier `text`. This signals to the reader that the enclosed text is meant to be typed or pasted into an AI assistant, not executed as code. For example:
```text
Write a Python function called validate_email that:
- Accepts a single string parameter
- Returns True if the string is a valid email address
- Returns False otherwise
- Uses a regex pattern for validation
- Includes a docstring with examples
```
When a prompt includes code that the user is pasting to the AI (as opposed to code generated by the AI), the code appears indented within the prompt block:
```text
Refactor the following function to use list comprehension:
    def get_even_numbers(numbers):
        result = []
        for n in numbers:
            if n % 2 == 0:
                result.append(n)
        return result
```
Prompt-in-Context Blocks
When presenting a prompt alongside the system context or preceding conversation turns, we use labeled sections. Each role label (System, User, Assistant) is bold and followed by a colon:
**System:** You are a senior Python developer. Follow PEP 8 conventions. Use type hints.

**User:** Create a data class for representing a blog post with title, author, content, publication date, and a list of tags.

**Assistant:** Here is a data class for a blog post...
This format mirrors the underlying message structure of LLM API calls, where each message has a role and content.
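As a concrete illustration of that underlying structure, here is a minimal sketch of the message list these labeled blocks mirror. The `role`/`content` field names follow common chat-completion APIs; individual providers may differ.

```python
# A minimal sketch (illustrative, not a specific provider's API) of the
# message list that the labeled System/User/Assistant blocks correspond to.
messages = [
    {"role": "system", "content": "You are a senior Python developer. "
                                  "Follow PEP 8 conventions. Use type hints."},
    {"role": "user", "content": "Create a data class for representing a blog "
                                "post with title, author, content, publication "
                                "date, and a list of tags."},
    {"role": "assistant", "content": "Here is a data class for a blog post..."},
]
```

Each labeled paragraph in a conversation block maps to one element of such a list, in order.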
F.2 Notation for Prompt Components
System Prompts
System prompts set the overall behavior and persona for the AI assistant. In this book, system prompts are always labeled explicitly and appear in a distinct block:
System Prompt: You are an expert Python developer specializing in REST APIs. You write clean, well-tested code following PEP 8 conventions. You always include type hints and Google-style docstrings.
When discussing system prompts in prose, they are referred to as "the system prompt" or "system-level instructions." The term is italicized on first introduction in each chapter.
User Messages
User messages represent what you type to the AI assistant. They are labeled User: in conversation blocks. In running text, user messages are described as "your prompt," "the user's input," or "the instruction."
Assistant Responses
Assistant responses represent the AI's output. They are labeled Assistant: in conversation blocks. When only a fragment of the response is relevant, an ellipsis (...) indicates omitted content:
Assistant: Here is the implementation:
    def validate_email(address: str) -> bool: ...

The function uses the `re` module to...
Tool Use and Function Calls
When demonstrating agent tool use (Chapters 36--40), tool calls and their results are formatted with distinct labels:
Tool Call: `read_file(path="src/main.py")`

Tool Result: `[File: src/main.py | 42 lines] import flask...`
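For readers who want to see the shape behind these labels, here is a hypothetical sketch of one way to represent a tool call and its result as data. The field names (`tool`, `arguments`, `output`) are illustrative, not any specific agent framework's API.

```python
# Hypothetical representation of the Tool Call / Tool Result pair above.
# Field names are our own for illustration; real agent APIs vary.
tool_call = {
    "tool": "read_file",
    "arguments": {"path": "src/main.py"},
}
tool_result = {
    "tool": "read_file",
    "output": "[File: src/main.py | 42 lines] import flask...",
}
```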
F.3 Conversation Turn Conventions
Multi-turn conversations are presented as sequences of labeled messages. Each turn is separated by a blank line. The turn order always reflects the actual conversation flow:
User: Create a function that sorts a list of dictionaries by a given key.
Assistant: Here is a function that does that...
User: Good, but add error handling for when the key does not exist.
Assistant: Here is the updated version with error handling...
Conversation Numbering
When referencing specific turns in a long conversation, we use Turn N notation:
In Turn 3, the user asked the assistant to add error handling. The assistant's response in Turn 4 correctly added a `KeyError` handler.
Ellipsis in Conversations
When only certain turns of a long conversation are relevant, we use labeled ellipsis blocks:
User: [initial prompt]
Assistant: [initial response]
...3 turns of iterative refinement...
User: Now add unit tests for this function.
Branching Conversations
When illustrating how different prompts lead to different results, we present alternatives side by side or in labeled blocks:
Approach A (vague prompt): User: Write some tests.
Approach B (specific prompt): User: Write pytest test cases for the validate_email function covering valid emails, invalid formats, empty strings, and None input.
F.4 Code Blocks Within Prompts
Code in Prompt Input
When a prompt includes source code that the user is providing as context (e.g., "explain this code" or "refactor this"), the code appears within the prompt block, indented with four spaces to distinguish it from the prompt's natural language:
User: Explain what this function does and suggest improvements:
    def proc(d, k):
        r = []
        for i in d:
            if k in i:
                r.append(i[k])
        return r
Code in AI Responses
When showing code that the AI generated, we use standard fenced code blocks with the appropriate language identifier (typically python):
Assistant: Here is the improved version:
    def extract_values(records: list[dict], key: str) -> list:
        """Extract values for a given key from a list of dictionaries."""
        return [record[key] for record in records if key in record]
Annotations in Code
Within code examples (both in prompt blocks and in standalone code files), annotations are provided as inline comments. Comments that explain something to the reader (pedagogical comments) are preceded by a # and written as complete sentences:
    # This comprehension filters out records missing the key.
    return [record[key] for record in records if key in record]
Comments that are part of the code itself (functional comments) follow standard Python conventions.
F.5 Template Variable Notation
Throughout the book, when presenting reusable prompt templates, variable placeholders are enclosed in curly braces:
Analyze the {language} code in {file_path} and identify any
{issue_type} issues. Focus on {specific_area}.
In running text, template variables are formatted as `{variable_name}` in monospace. When a template is introduced, each variable is defined in a table or list immediately following the template:
| Variable | Description | Example |
|---|---|---|
| `{language}` | The programming language of the code | Python, JavaScript |
| `{file_path}` | Path to the file being analyzed | `src/models/user.py` |
| `{issue_type}` | Category of issues to look for | security, performance |
| `{specific_area}` | Focus area within the code | input validation, error handling |
When a template variable is optional (may be omitted from the prompt), it is marked with a ? suffix in the definition: {context?}.
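Because the placeholders use standard curly-brace syntax, a template in this notation can be filled mechanically. Here is a minimal sketch using Python's built-in `str.format`; the `render` helper is our own, not from the book's code.

```python
# Minimal sketch: filling a prompt template written in the book's
# {variable} notation. The render() helper is illustrative only.
def render(template: str, **values: str) -> str:
    """Fill every {variable} placeholder with the supplied value."""
    return template.format(**values)

prompt = render(
    "Analyze the {language} code in {file_path} and identify any "
    "{issue_type} issues.",
    language="Python",
    file_path="src/models/user.py",
    issue_type="security",
)
# → "Analyze the Python code in src/models/user.py and identify any security issues."
```

Note that `str.format` raises `KeyError` if a placeholder is left unfilled, so optional `{context?}` variables would need to be stripped from the template before rendering.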
Nested Templates
Some advanced prompts use nested templates where one variable contains another template. These are shown with distinct bracket styles to avoid confusion:
Given this requirement: [{requirement}]
And this existing code: [{existing_code}]
Generate an implementation that satisfies the requirement.
Square brackets [] denote context boundaries, while curly braces {} denote individual variables.
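Nested templates can be filled the same way, since the square brackets are literal text and only the curly braces are substituted. A hypothetical sketch, with made-up values for `{requirement}` and `{existing_code}`:

```python
# Hypothetical sketch: filling a nested template. Square brackets stay
# literal (context boundaries); curly braces are substituted.
outer = (
    "Given this requirement: [{requirement}]\n"
    "And this existing code: [{existing_code}]\n"
    "Generate an implementation that satisfies the requirement."
)
prompt = outer.format(
    requirement="handle empty input lists",
    existing_code="def get_even_numbers(numbers): ...",
)
```

After substitution, each bracketed span delimits exactly one piece of injected context, which makes long prompts easier for both humans and models to parse.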
F.6 Callout Box Meanings
Every chapter uses callout boxes to highlight specific types of information. Each callout type has a consistent meaning across the entire book:
Intuition
Intuition callouts build gut-level understanding of why something works the way it does. They use analogies, mental models, and simplified explanations to make abstract concepts concrete. These callouts are particularly useful for readers encountering a concept for the first time.
Example usage: Explaining why temperature affects LLM output by comparing it to a cautious writer versus a creative poet.
When you see this: Pause and make sure you can explain the concept in your own words. If the analogy resonates, it will serve as a mental anchor for the more technical details that follow.
Real-World Application
These callouts connect textbook concepts to production software development. They describe how a concept is actually used in industry, what scale it operates at, and what practical constraints apply.
Example usage: Describing how a specific company uses AI-assisted code review in their CI/CD pipeline to catch issues before human reviewers see them.
When you see this: Consider how the concept might apply to your own projects or workplace.
Common Pitfall
Common Pitfall callouts warn about mistakes that are frequently made by developers learning vibe coding. These are drawn from real experience and are designed to help you avoid hours of debugging or frustration.
Example usage: Warning that asking an AI to "make the code better" is too vague and will produce unpredictable results. Instead, specify what "better" means: faster, more readable, more secure.
When you see this: Pay close attention. These pitfalls are common precisely because they are not obvious. Bookmark the page if the pitfall is relevant to your current work.
Advanced
Advanced callouts provide deeper technical detail for readers who want to go beyond the chapter's core content. These sections may reference research papers, discuss edge cases, or explore theoretical foundations. They can be safely skipped by readers focused on practical application.
Example usage: Explaining the mathematical relationship between attention heads and context window size in transformer architectures.
When you see this: Read if you are curious or if the topic is relevant to your work. Skip without guilt if you are focused on building practical skills.
Best Practice
Best Practice callouts present established conventions and recommendations that represent the professional consensus on how to do something well. These are the "right way" to do things according to experienced practitioners.
Example usage: Always include type hints and docstrings when asking an AI to generate Python code, because this constrains the output toward higher-quality results.
When you see this: Adopt these practices in your own workflow. They represent lessons learned from many developers over significant time.
Note
Note callouts provide additional context, clarifications, or tangential information that enriches understanding but is not essential to the chapter's core content. Notes may point to related chapters, clarify terminology, or provide historical context.
Example usage: Noting that the term "vibe coding" was coined by Andrej Karpathy in early 2025 and quickly entered mainstream developer vocabulary.
When you see this: Read for enrichment. Notes often connect the current topic to broader themes in the book.
F.7 Exercise Tier Definitions
Every chapter contains exercises organized into five tiers of increasing cognitive complexity, based on Bloom's taxonomy of educational objectives. Each tier targets a different level of learning.
Tier 1: Recall
Bloom's Level: Remember and Understand
What it tests: Whether you can recall key concepts, definitions, and facts from the chapter. These exercises verify basic comprehension of the material.
Typical format: Multiple choice, fill-in-the-blank, matching, short-answer factual questions, and labeling diagrams or pseudocode.
Example: "List five characteristics that distinguish an AI coding agent from a conversational AI assistant."
How to use: Complete all Tier 1 exercises before moving to higher tiers. If you struggle with these, reread the relevant chapter section. You should be able to answer Tier 1 questions from memory after a single careful reading.
Tier 2: Apply
Bloom's Level: Apply
What it tests: Whether you can use the concepts from the chapter in straightforward, well-defined scenarios. These exercises require you to write code, design solutions, or apply techniques to given problems.
Typical format: Implement a class, write a function, design a configuration, build a small tool.
Example: "Implement a PermissionChecker class that accepts a configuration of allowed directories, blocked commands, and allowed file extensions."
How to use: Do at least three to five Apply exercises per chapter. These build practical skill and muscle memory.
Tier 3: Analyze
Bloom's Level: Analyze
What it tests: Whether you can break down complex situations, identify patterns, compare approaches, and evaluate tradeoffs. These exercises require deeper thinking and often have multiple valid answers.
Typical format: Analyze a scenario, compare strategies, identify failure modes, critique an implementation, decompose a task.
Example: "Given the following agent trace, identify where the agent made a suboptimal decision, where it could have been more efficient, and what guardrails should have been in place."
How to use: Attempt one to two Analyze exercises per chapter. Write out your analysis in full sentences, as the process of articulating your reasoning deepens understanding.
Tier 4: Create
Bloom's Level: Create
What it tests: Whether you can combine concepts from the chapter (and potentially earlier chapters) to design and build something new. These are substantial exercises that may take one to several hours.
Typical format: Design a system, build a working tool, create a specification, implement a complete feature.
Example: "Design a complete code review agent that reads a pull request diff, analyzes each change for bugs and security issues, and generates inline comments."
How to use: Complete at least one Create exercise per chapter. These exercises produce artifacts (code, designs, specifications) that demonstrate your competence and can serve as portfolio pieces.
Tier 5: Challenge
Bloom's Level: Evaluation, Creation, and Cross-Chapter Integration
What it tests: Whether you can integrate material from multiple chapters, evaluate competing approaches, conduct research, and produce professional-quality work. Challenge exercises are intentionally open-ended and may require external research.
Typical format: Research projects, multi-component system design, comparative analysis, production-quality implementations with full test coverage.
Example: "Design and implement an adaptive guardrail system that starts with strict permissions, tracks the agent's behavior over time, gradually relaxes permissions for safe actions, and tightens permissions if guardrails are triggered."
How to use: Tackle Challenge exercises only after you are confident with the chapter material. These exercises are designed to stretch your abilities and may take a full day or more. They are excellent preparation for real-world projects.
F.8 Difficulty Ratings
Some exercises and code examples include a difficulty rating to help you calibrate your expectations:
| Rating | Label | Description |
|---|---|---|
| 1 | Beginner | Suitable for readers new to programming or AI-assisted development. Requires only concepts from Parts I--II. |
| 2 | Intermediate | Requires familiarity with core programming concepts and basic prompting techniques. Typical of Parts II--III. |
| 3 | Advanced | Requires solid programming experience and understanding of software architecture. Typical of Parts IV--V. |
| 4 | Expert | Requires deep understanding of multiple topics and significant implementation experience. Typical of Part VI. |
| 5 | Research | Open-ended problems at the frontier of current practice. May require reading external papers or documentation. |
When a difficulty rating appears, it is shown in parentheses after the exercise title:
Exercise 14: Autonomy Level Assessment (Difficulty: 3)
F.9 Other Typographic Conventions
The following table summarizes all typographic conventions used in the book. Consult this reference whenever you encounter unfamiliar formatting.
| Convention | Meaning | Example |
|---|---|---|
| `monospace` | Code, commands, file names, function names, and technical identifiers | `validate_email()`, `src/main.py` |
| **Bold** | Key terms on first introduction, emphasis, callout labels | **agent loop**, **guardrail** |
| *Italic* | Book and paper titles, emphasis, new concepts being introduced | *Attention Is All You Need*, *vibe coding* |
| `>>>` | Python REPL (interactive interpreter) prompt | `>>> print("hello")` |
| `$` | Shell/terminal command prompt | `$ python -m pytest tests/` |
| `# comment` | Explanatory comments in code examples | `# This validates the input format.` |
| `{variable}` | Template variable placeholder in prompts | `{file_path}`, `{language}` |
| `[text]` | Context boundary in nested templates | `[{requirement}]` |
| `...` | Omitted content (in code or conversation) | `def process(...):` |
| Turn N | Reference to a specific conversation turn | Turn 3 |
| (Ch. N) | Cross-reference to another chapter | (Ch. 12) |
| (Appendix X) | Cross-reference to an appendix | (Appendix A) |
| > blockquote | Callout box, example prompt, or quoted text | See callout box definitions above |
F.10 File and Directory Naming Conventions
All code examples in this book follow a consistent naming scheme:
- Chapter code directories: `code/` within each chapter directory
- Example files: `example-NN-descriptive-name.py` (e.g., `example-01-simple-agent.py`)
- Exercise solution files: `exercise-solutions.py`
- Case study code: `case-study-code.py`
- Test files: `test_*.py` (following pytest conventions)
- Data files: `data/` subdirectory within `code/`
- Configuration files: named descriptively (e.g., `guardrail-config.yaml`)
All Python files include a module-level docstring explaining their purpose, usage instructions, and which chapter section they relate to.
F.11 Cross-Reference System
The book uses a layered cross-reference system to help you navigate between related topics:
In-text references use parenthetical chapter numbers: "The ReAct pattern (Ch. 36) extends the chain-of-thought technique (Ch. 12) by interleaving reasoning with action."
Callout cross-references appear at the end of callout boxes when the topic is covered in more depth elsewhere: "For a deeper treatment of this concept, see Chapter 28."
Further Reading sections at the end of each chapter list external resources organized by category (research papers, books, documentation, blog posts).
See also references in the Glossary (Appendix E) link related terms: "See also agent loop, ReAct."
Exercise cross-references appear when an exercise builds on material from another chapter: "This exercise requires concepts from Chapter 25 (Design Patterns) and Chapter 36 (Agents)."
This appendix provides a complete reference for the notation system used throughout the book. When in doubt about what a formatting convention means, return to this appendix for clarification.