26 min read

Every practitioner who uses AI tools daily has experienced the friction of the blank context. You open a new chat, and before you can do anything useful, you must re-establish who you are, what context you are working in, what your standards are...

Chapter 37: Custom GPTs, Assistants, and Configured AI Systems

The Problem with Starting Fresh

Every practitioner who uses AI tools daily has experienced the friction of the blank context. You open a new chat, and before you can do anything useful, you must re-establish who you are, what context you are working in, what your standards are, and what kind of help you need. You paste your brand guidelines, re-explain your audience, re-specify the output format, and remind the AI of the conventions you care about — and then, if you are lucky, you get something worth using.

This friction compounds at scale. If you do the same type of AI-assisted work every day — content creation, code review, research synthesis, client communication — you are rebuilding context from scratch on every session. And if your team is using the same AI tools, they are each rebuilding that context independently, with inconsistent results.

Configured AI systems solve this problem. A custom GPT, a Claude Project, or an API-based assistant is an AI interaction environment that comes pre-loaded with the context, instructions, and knowledge base it needs for a specific task. When you open it, it already knows your brand voice, your audience, your conventions, your quality standards, and its own role. You start doing the actual work immediately.

This shift — from ad hoc prompting to configured systems — is one of the highest-leverage changes available in advanced AI practice. The effort to configure a system once pays back every time it is used, across every user who accesses it.

What Configured AI Systems Are

A configured AI system has three defining characteristics:

Persistent instructions that define the AI's role, behavior, and constraints, loaded automatically with every interaction. You do not need to provide them; they are already there.

A knowledge base that the AI draws on for relevant domain information — your company's products, your team's standards, your project's context, your field's conventions. This knowledge is available in every interaction without being pasted in.

A defined identity — the configured system is not a general-purpose AI assistant; it is a specific tool with a specific purpose. Users know what it is for, what it does well, and where to go when it cannot help.

These three characteristics distinguish configured systems from ad hoc prompting. Ad hoc prompting is flexible and powerful; configured systems are reliable and scalable. Both have their place, and the advanced practitioner knows which to use when.

💡 Intuition: A configured AI system is to ad hoc prompting what a specialized professional tool is to a Swiss Army knife. The Swiss Army knife is more flexible; the specialized tool is better at the one job it was designed for.

Custom GPTs: ChatGPT's GPT Builder

OpenAI's GPT Builder allows ChatGPT Plus and Enterprise users to create Custom GPTs — configured AI systems built on ChatGPT and shared with others. As of early 2025, there are over 3 million Custom GPTs in the GPT store.

What Custom GPTs Can Do

Persistent instructions: A Custom GPT has a system prompt that runs automatically for every conversation. Users interact with the GPT without seeing or needing to provide this prompt.

Knowledge files: You can upload documents, PDFs, spreadsheets, and other files that the GPT can retrieve information from. When a user asks something the knowledge file addresses, the GPT retrieves relevant content and incorporates it into the response.

Capabilities: Custom GPTs can be configured to use web browsing, image generation (DALL-E), and code execution, or these capabilities can be disabled if they are not relevant to the GPT's purpose.

Actions: Custom GPTs can be connected to external APIs via OpenAPI specifications, allowing them to fetch real-time data, submit forms, or interact with external systems. Actions are the most technically complex aspect of Custom GPT development and are covered briefly here.

Sharing and publishing: Custom GPTs can be kept private (only accessible to you), shared via link (accessible to anyone with the link), or published to the GPT store (accessible to all ChatGPT users).

The GPT Builder Interface

GPT Builder is accessible from ChatGPT's sidebar under "My GPTs > Create a GPT." It has two panels:

The Configure tab is where you set everything up: - Name and description: What this GPT is called and what it does (displayed to users) - Instructions: The system prompt — the most important element - Conversation starters: Suggested prompts shown to users when they open the GPT - Knowledge: File uploads for the knowledge base - Capabilities: Web browsing, DALL-E, and code interpreter toggles - Actions: External API connections

The Preview panel lets you test the GPT while configuring it. Every change you make to the instructions immediately affects the preview conversation — you can see the impact of each change in real time.

Writing Effective Instructions for a Custom GPT

The instruction field is a system prompt. Everything from Chapters 8 and 9 on system prompt design applies here. But configured system prompts have specific requirements beyond one-off system prompts.

A configured system prompt must be complete. In a one-off interaction, if you forget to specify something, you can add it in the next message. In a configured system, the instructions are the only guidance the AI will receive before the user starts interacting. If the instructions do not cover a situation, the AI will fall back to its default behavior — which may not be what you want.

A configured system prompt must be durable. Users will interact with this GPT in ways you cannot fully anticipate. Write instructions that handle edge cases, not just the common case.

A configured system prompt must include boundary conditions. What should the GPT do when asked something outside its scope? "If asked about topics unrelated to [domain], politely explain that you are specialized for [domain] and suggest where else the user might find help" is more useful than leaving this to chance.

🗣️ Configured System Prompt Template:

# Role and Identity
You are [name], a [brief role description] for [company/team/purpose].

Your primary job is to [main function in one sentence].

# What You Do
[List 3-5 specific things users can ask you to do, in bullet form]

# How You Work
[2-3 sentences on your approach, style, or methodology]

# Knowledge and Expertise
[What knowledge or context you have access to — especially what is in the knowledge files]

# Output Standards
[Format, length, tone, structure requirements for responses]

# Behavioral Guidelines
- [Specific do's and don'ts]
- [How to handle ambiguity]
- [How to handle requests outside your scope]
- [What to do when you don't know something]

# Escalation
If [specific condition], [specific action — e.g., "tell the user to contact [person/team] instead"].

# About This Tool
Version: [version number or date]
Maintained by: [person or team]

Best Practice: Write the Role and Identity section first. Before you specify behaviors, clarify purpose. A GPT that has a clear, specific purpose will produce more consistent results than one with a vague purpose and many behavioral rules, because the purpose guides behavior in situations the rules do not explicitly cover.

Knowledge Files: What to Include

Knowledge files are documents the GPT can retrieve from when relevant context is requested. Effective knowledge files:

Are specific, not generic. The GPT already knows general knowledge about most topics. Upload your company-specific information, your project-specific context, your team-specific standards — things the AI would not know without your providing them.

Have clear structure. Well-organized documents with headers, bullet points, and clear sections are retrieved more reliably than dense prose. The retrieval system finds relevant sections; clean structure makes sections identifiable.

Are appropriately scoped. Uploading your 300-page company handbook will not make the GPT knowledgeable about your company — it will make retrieval unreliable because relevant content is buried in irrelevant content. Upload focused, relevant sections rather than entire documents.

Recommended knowledge file types for common GPTs: - Brand voice GPT: style guide, example posts, vocabulary lists, tone descriptions - Code assistant GPT: internal coding standards, architecture documentation, common patterns - Research assistant GPT: topic overviews, key sources, methodology guidelines - Client communication GPT: client profiles, project background, key personnel

Knowledge file limitations: - Total storage per GPT: 20 files, 512 MB total - Retrieval is not guaranteed: the GPT retrieves relevant sections but cannot access all knowledge simultaneously in one response - File content is not kept confidential from users who probe for it — if you upload sensitive documents, assume users can retrieve them by asking carefully

⚠️ Common Pitfall: Uploading proprietary or confidential information to a GPT that will be shared publicly. GPT knowledge files are not secure storage. Users who probe the GPT can extract significant portions of uploaded documents. For confidential information, use private GPTs or consider Claude Projects (which has different sharing controls) or an API-based approach.

Actions: Connecting to External APIs

Actions allow a Custom GPT to call external APIs during a conversation. When a user asks something that requires real-time data or an external operation, the GPT can make an API call and incorporate the response.

Common action use cases: - Fetching real-time data (weather, stock prices, calendar events) - Querying internal databases or CRMs - Submitting forms or creating records - Retrieving documents from a content management system

Actions are configured using OpenAPI specifications — JSON or YAML documents that describe the API's endpoints, parameters, and authentication. Building actions requires either access to an API with documentation or ability to write the spec yourself.

Actions are the most powerful but also the most technically demanding aspect of Custom GPT development. For most practitioners, GPTs without actions deliver the majority of the configured system value.

Sharing and Publishing Custom GPTs

Private: Default. Only accessible by you. Appropriate for personal productivity tools.

Link-shared: Anyone with the URL can access the GPT but it does not appear in the store. Appropriate for team tools that should not be public.

Published to the store: Visible in the GPT store, searchable by all ChatGPT users. Appropriate only for GPTs that are genuinely useful to people outside your organization.

For most professional use cases — team tools, client-facing assistants, domain-specific research helpers — link-sharing is the appropriate choice.

Evaluating GPTs in the Marketplace

The GPT store contains millions of GPTs, ranging from excellent to useless. When evaluating a third-party GPT:

  • Check the creator: is it from a recognizable organization or an anonymous account?
  • Look at the usage count and rating
  • Test with specific questions relevant to your use case — generic praise is not useful evidence
  • Be cautious about what data you share with third-party GPTs
  • If a GPT asks for credentials, login information, or sensitive data, do not provide it

Best Practice: For professional or sensitive use cases, build your own configured system rather than relying on a third-party GPT you cannot audit. The customization investment is modest; the control it provides is significant.

Claude Projects: Persistent Context in Claude

Claude Projects (available in Claude Pro and Team plans) offer a different approach to configured AI systems. Instead of a standalone GPT-like experience, Projects provide a persistent context layer within Claude — instructions and documents that persist across all conversations within the project.

What Claude Projects Are

A Project is a named workspace in Claude that contains: - Project instructions: A system prompt that applies to every conversation in the project - Project documents: Files you upload that Claude can reference in any conversation - Conversation history: All conversations within the project are accessible

Unlike Custom GPTs, which present as standalone tools (often shared with others), Claude Projects are primarily for individual use — a way to maintain consistent context across an ongoing body of work with a particular client, project, or domain.

Setting Up a Claude Project

Creating a project: In Claude's interface, select "Projects" and create a new project. Give it a descriptive name and set the project instructions.

Project instructions follow the same principles as Custom GPT instructions but are typically less elaborate — they focus on the specific context and working style for this project rather than building a full persona and behavioral framework. A good project instruction covers: - What this project is (the client, the engagement, the domain) - Your role and what you are trying to accomplish - Key context that should inform every conversation (client preferences, project constraints, terminology conventions) - Output standards for this project specifically

Project documents are the files you upload. Unlike GPT knowledge files, project documents in Claude are treated more like working materials — Claude can read them fully and reference them explicitly. You can update them as the project evolves.

Use Cases for Claude Projects

Ongoing client engagements: Load client background documents, previous deliverables, communication style notes, and project objectives. Every conversation in the project benefits from this context without re-establishing it.

Role-specific assistants: Create a project for each major role you play — "Marketing Strategist," "Technical Reviewer," "Research Analyst" — each with instructions and materials appropriate to that role.

Document analysis projects: Load a collection of documents (reports, transcripts, source material) and use the project to conduct multi-session analysis. Claude can reference specific documents across multiple conversations.

Writing projects: Load style guides, reference materials, previous drafts, and structural guidelines. Maintain a consistent project-level context across a multi-week writing engagement.

💡 Intuition: Think of a Claude Project as a pre-briefed contractor who already knows your project background before every meeting. You do not need to re-explain context; you pick up where you left off.

Claude Projects vs. Custom GPTs: When to Use Each

Use Claude Projects when: - The configured context is for your own ongoing use, not for sharing with others - You are working on a body of work that evolves over weeks or months - You want to maintain conversation history across sessions - Your context involves documents you want to update and maintain

Use Custom GPTs when: - You want to share the configured tool with others - The tool should present as a standalone, clearly named product - You need external API integrations (Actions) - The use case is recurring and benefits from a polished user experience

Many practitioners maintain both: Claude Projects for their own ongoing work (where the evolving document context is most valuable) and Custom GPTs for team tools they share with others.

API-Based Custom Assistants: The OpenAI Assistants API

The OpenAI Assistants API provides a programmatic way to create persistent, configured AI systems with tool use, file search, and thread management. Unlike Custom GPTs (a no-code configuration interface) or Claude Projects (a platform feature), the Assistants API requires code to set up and use.

Core Concepts

Assistants are configured AI systems with instructions, a model, and optional tools. Once created, an assistant persists and can be used across many conversations.

Threads are conversations. Each thread maintains its own message history. A single assistant can manage many threads simultaneously.

Runs are the actual execution of the assistant on a thread — the assistant processes the thread's messages and generates a response.

Tools extend the assistant's capabilities: - File search: The assistant can search through uploaded files to answer questions - Code interpreter: The assistant can write and execute Python code - Function calling: The assistant can call functions you define, enabling integration with external systems

Building an Assistant with Python

🐍 Code Block: Creating and Using an Assistants API Assistant

import anthropic
from openai import OpenAI
import json
import time
from pathlib import Path

openai_client = OpenAI()

def create_research_assistant(knowledge_file_path: str = None) -> str:
    """
    Create a research assistant using the OpenAI Assistants API.
    Optionally load a knowledge file for document search.
    Returns the assistant ID.
    """
    # Upload knowledge file if provided
    file_id = None
    if knowledge_file_path and Path(knowledge_file_path).exists():
        with open(knowledge_file_path, "rb") as f:
            file_obj = openai_client.files.create(
                file=f,
                purpose="assistants"
            )
        file_id = file_obj.id
        print(f"Uploaded file: {file_id}")

    # Define assistant tools
    tools = [{"type": "file_search"}] if file_id else []

    # Create the assistant
    assistant = openai_client.beta.assistants.create(
        name="Research Analyst",
        instructions="""You are a research analyst specializing in synthesizing information from documents and generating structured analyses.

Your capabilities:
- Analyze uploaded documents and extract key information
- Identify patterns, themes, and insights across multiple sources
- Generate structured summaries, comparisons, and reports
- Answer specific questions about document content

Your output standards:
- Always cite specific sections or pages when referencing document content
- Clearly distinguish between what documents state and your own analysis
- Flag when information is absent from the provided documents
- Structure responses with clear headers and organized sections

If asked about topics not covered in the provided documents, say so explicitly rather than drawing on general knowledge without flagging it.""",
        model="gpt-4o",
        tools=tools,
        tool_resources=(
            {"file_search": {"vector_store_ids": []}} if not file_id
            else None  # File handling requires vector store setup
        )
    )

    print(f"Assistant created: {assistant.id}")
    return assistant.id


def run_assistant_conversation(
    assistant_id: str,
    user_message: str,
    thread_id: str = None
) -> tuple[str, str]:
    """
    Run an assistant on a message.
    Creates a new thread if thread_id is None.
    Returns (response_text, thread_id).
    """
    # Create or continue thread
    if thread_id is None:
        thread = openai_client.beta.threads.create()
        thread_id = thread.id
        print(f"New thread: {thread_id}")

    # Add user message to thread
    openai_client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=user_message
    )

    # Run the assistant
    run = openai_client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id
    )

    # Poll until complete
    max_polls = 60  # Maximum 60 seconds
    polls = 0
    while run.status in ["queued", "in_progress", "cancelling"]:
        time.sleep(1)
        run = openai_client.beta.threads.runs.retrieve(
            thread_id=thread_id,
            run_id=run.id
        )
        polls += 1
        if polls >= max_polls:
            raise TimeoutError(f"Assistant run timed out after {max_polls} seconds")

    if run.status == "failed":
        raise RuntimeError(f"Assistant run failed: {run.last_error}")

    if run.status != "completed":
        raise RuntimeError(f"Unexpected run status: {run.status}")

    # Get the response
    messages = openai_client.beta.threads.messages.list(thread_id=thread_id)
    # The latest assistant message is first in the list
    for msg in messages.data:
        if msg.role == "assistant":
            # Extract text content
            text_content = " ".join(
                block.text.value
                for block in msg.content
                if block.type == "text"
            )
            return text_content, thread_id

    return "No response generated", thread_id


def create_multi_turn_assistant_session(assistant_id: str):
    """Run an interactive multi-turn session with an assistant."""
    thread_id = None
    print(f"\nAssistant session started. Type 'quit' to exit.\n")

    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in ["quit", "exit"]:
            print("Session ended.")
            break
        if not user_input:
            continue

        try:
            response, thread_id = run_assistant_conversation(
                assistant_id=assistant_id,
                user_message=user_input,
                thread_id=thread_id
            )
            print(f"\nAssistant: {response}\n")
        except Exception as e:
            print(f"Error: {e}")
            break

    if thread_id:
        print(f"Thread ID: {thread_id} (use to resume this conversation)")

    return thread_id


# Example usage
# assistant_id = create_research_assistant()
# thread_id = create_multi_turn_assistant_session(assistant_id)

When to Use the Assistants API vs. GPT Builder

Use GPT Builder (no-code) when: - Non-technical team members will create or maintain the configured system - You want a fast setup with the visual interface - The assistant does not require complex integration with external systems - You need to share the assistant publicly or via the GPT store

Use the Assistants API (code) when: - The assistant will be embedded in your own application - You need programmatic control over thread creation and management - You need to integrate the assistant with your own backend systems - You are building a product or service that uses AI assistants as infrastructure

Designing Effective System Prompts for Configured Systems

Configured system prompts differ from one-off system prompts in important ways. They must handle a wider range of interactions, they cannot rely on immediate human correction, and they persist through use patterns you cannot fully anticipate.

Role and Persona Definition

Start with a clear, specific role. Not "You are a helpful assistant" but "You are TechGuide, a technical support assistant for Acme Software's developer API. You help developers troubleshoot integration issues, understand API capabilities, and find relevant documentation."

The persona should be specific enough that it excludes irrelevant behaviors. A persona that is too broad ("you are a helpful assistant for our company") will behave inconsistently because the behavioral guidance is too vague to discriminate between different situations.

Behavioral Guidelines

Behavioral guidelines are the most challenging part of configured system prompt design. They must: - Cover the most common interaction patterns in detail - Provide principles for situations not explicitly covered - Set clear limits and escalation paths - Be internally consistent

A useful framework for behavioral guidelines: 1. Default behaviors: What does the assistant do in the typical case? 2. Exceptional behaviors: What does it do differently in specific named situations? 3. Prohibited behaviors: What does it absolutely not do? 4. Escalation: When does it refer the user elsewhere, and where?

🗣️ Behavioral Guidelines Template:

## Default Behaviors
- Respond in [language/tone]
- Format responses as [format]
- When answering questions, [approach]
- When asked to create content, [approach]

## Specific Situations
- When the user seems frustrated: [approach]
- When the user asks about [specific topic]: [approach]
- When the user provides incomplete information: [approach]
- When the user asks for something sensitive: [approach]

## What I Don't Do
- I don't [specific thing]
- I don't have access to [specific information]
- I can't [specific capability]

## Getting Help Beyond Me
- For [category of need], contact [resource]
- For [category of need], see [documentation]
- For urgent issues, [escalation path]

Knowledge Scope

Define explicitly what the assistant knows and what it does not know. "I have access to [company's] product documentation as of [date]. I don't have information about events after that date or about [excluded topics]. If you need current information about [topic], check [resource]."

This prevents the assistant from confidently making up information in areas where its knowledge is actually limited — a common failure mode in configured systems.

Output Format Defaults

Specify default output formats explicitly. "Respond in plain text unless the user asks for formatted output" or "Use markdown formatting with clear headers and bullet points" or "Keep responses under 150 words unless the question requires more." Default format instructions ensure consistency across all users' experiences.

The Escalation Instruction

Every configured system needs an explicit escalation path: what should the AI do when it encounters something it cannot handle? Common escalation triggers: - Topics outside the AI's knowledge scope - User distress or safety concerns - Complex situations requiring human judgment - Requests for information the AI does not have

Escalation instructions should include a specific action ("say 'I'm not able to help with this, but [contact/resource] can'") rather than a vague direction ("use your judgment about when to refer elsewhere").

⚠️ Common Pitfall: Writing a system prompt that defines what the assistant should do in ideal conditions but not what it should do at the boundaries. A configured assistant that handles 90% of interactions well but fails unexpectedly at the 10% creates a worse user experience than one that handles 90% well and gracefully declines the remaining 10%.

Knowledge Base Design

The knowledge base is what makes a configured system knowledgeable rather than merely well-instructed. It is the difference between a GPT that says "please see your brand guidelines" and one that actually knows your brand guidelines.

What to Include

Include: Company-specific or project-specific information the AI would not have from training. Brand guidelines, product documentation, company policies, team processes, project context, client background.

Include: Definitions and vocabulary. Technical terms, internal jargon, abbreviations, product names that are ambiguous or proprietary.

Include: Standards and quality criteria. What "good" looks like in your context — example outputs, rubrics, checklists.

Include: Frequently asked questions. Common questions and authoritative answers.

Exclude: General knowledge that the AI already has from training. Writing a document explaining what a "stakeholder" is wastes space that could be used for information the AI actually needs.

Exclude: Confidential information if the configured system will be accessible to others.

How to Structure for Retrieval

Knowledge files are not read linearly — they are searched. Clear structure makes search effective.

Use descriptive headers. A section titled "Response Time SLA — Enterprise Customers" will be retrieved when a user asks about enterprise response times; a section titled "Section 4.2.1" will not.

State key facts directly. Begin sections with the most important statement, not with context-setting. "Enterprise customers receive 2-hour response times during business hours" is more retrievable than "At our company, we have historically valued our enterprise customer relationships, and as a result..."

Use consistent terminology. If your brand guide uses "tone" not "voice," use "tone" throughout the knowledge base. Inconsistent terminology splits retrieval across synonyms.

Keep documents focused. A document about onboarding should be about onboarding. A document about billing should be about billing. Mixed-topic documents make retrieval unreliable.

Testing and Iterating Configured Systems

A configured system that has not been tested is not ready for use. Testing should be systematic, not just exploratory.

The Testing Protocol

Step 1: Happy path testing. Does the assistant handle the typical case well? Test five to ten representative examples of the most common use pattern. If these fail, fix the instructions before proceeding.

Step 2: Edge case testing. What happens at the boundaries? Test ten to fifteen unusual or challenging interactions: incomplete inputs, off-topic requests, ambiguous questions, and requests that push against the instructions' limits.

Step 3: Adversarial testing. What happens when users try to use the assistant outside its intended purpose? Ask it to ignore its instructions, to behave as a different AI, to produce output it should refuse. Confirm that boundary cases are handled gracefully.

Step 4: Knowledge retrieval testing. For each major section of your knowledge base, ask a question whose answer is in that section. Verify that the assistant retrieves and uses that information correctly.

Step 5: User simulation testing. Ask a colleague who was not involved in building the system to use it for a real task. Watch where they get confused, where the assistant fails, and what they expected that they did not get.

Iteration Practices

Change one thing at a time. If you modify both the instructions and the knowledge base simultaneously and the output changes, you will not know which change caused the improvement or regression.

Document versions. Keep a record of what changed between versions and why. Configured systems tend to accumulate small changes over time; without documentation, it becomes difficult to trace why behavior changed.

Maintain a test suite. Keep your Step 1-4 test cases in a document. After each change to the system, re-run the tests. This is the equivalent of regression testing in software development.

The Assistant Brief: Documenting What Your Configured AI Does

A configured AI system without documentation is an organizational liability. The assistant brief is the document that makes configured AI systems maintainable, transferable, and trustworthy.

🗣️ Assistant Brief Template:

# [Assistant Name] — Assistant Brief

## Overview
**Purpose:** [One sentence: what problem this assistant solves]
**Primary users:** [Who is this assistant for?]
**Created by:** [Person or team]
**Created:** [Date]
**Last updated:** [Date]
**Version:** [Version number]

## What This Assistant Does
[3-5 bullet points: specific things this assistant helps users accomplish]

## What This Assistant Does Not Do
[3-5 bullet points: explicit limits and scope boundaries]
[Where users should go instead for out-of-scope needs]

## How to Use This Assistant
[2-3 paragraph plain-language description of typical use patterns]
[Example prompts that work well]

## Knowledge Base
**Documents included:**
- [Filename]: [What it contains, last updated date]
- [Filename]: [What it contains, last updated date]

**Knowledge currency:**
[When was the knowledge base last reviewed? What is the update cadence?]

## Known Limitations
[Specific things this assistant does or says that users should be aware of]
[Edge cases that produce suboptimal results]

## Feedback and Maintenance
**Report issues to:** [Contact]
**Maintenance schedule:** [How often is this assistant reviewed and updated?]
**Next planned update:** [Date]

## Technical Details
**Platform:** [Custom GPT / Claude Project / API-based]
**Model:** [Model name]
**System prompt version:** [Version or date]
**Knowledge base files:** [List]

The assistant brief serves multiple purposes: it onboards new users, guides maintenance, and provides accountability for what the system claims to do.

Scenario Walkthrough: Alex Builds a Brand Voice GPT

🎭 Alex Chen — Digital Marketing Manager

Alex's team has spent three years building a distinctive brand voice — direct, confident, slightly irreverent, expertise-forward. But three different writers produce content that ranges from close to the mark to clearly off. Brief conversations always help, but Alex cannot be in every content session.

She builds a Brand Voice GPT that embodies the brand guidelines well enough to replace the brief conversation.

System prompt design:

Alex writes the instructions in three passes: 1. Role and voice: who is this GPT and what does it do? 2. Brand voice: what specifically is the voice — with examples of on-brand and off-brand language 3. Content-type rules: how does the voice adapt across blog posts, emails, social, and ads?

The instructions document runs to 1,200 words. Alex spends two hours drafting it and three more hours testing it against past content — asking the GPT to evaluate content samples and getting consistently useful feedback.

Knowledge files:

She uploads four documents: - brand-voice-guide.pdf: The full brand guidelines document - vocabulary-list.md: Words and phrases the brand uses and avoids - example-posts-on-brand.md: Ten blog post excerpts rated exemplary - example-posts-off-brand.md: Ten blog post excerpts with annotations on what went wrong

Testing:

Alex tests with three scenarios: 1. "Review this headline for brand voice: [ten different headlines]" 2. "Rewrite this paragraph in our brand voice: [ten paragraphs varying widely in current style]" 3. "Write a 100-word intro paragraph for a post about data security"

After fifteen rounds of testing and five iterations of the instructions and vocabulary list, the GPT's brand voice assessments match Alex's own 89% of the time (she sampled 100 assessments and compared to her own judgment).

Deployment:

Alex shares the GPT with her team via link. She writes a one-page assistant brief explaining what it does, how to use it, and when to come to her instead of the GPT. She schedules a quarterly review of the instructions to keep them current with brand evolution.

Results six months later:

The team's brand voice consistency — assessed by an external brand audit — improved from 71% to 88% compliance. Alex's personal time spent on brand voice coaching dropped by about 60%. Two important caveats she is clear about: the GPT works because of the specific, detailed instructions and knowledge files she built, not because of the GPT platform itself. And the GPT supports but does not replace human creative judgment — a writer who produces consistent on-brand work is still more valuable than one who produces great off-brand work and then asks the GPT to fix it.

Scenario Walkthrough: Elena's Client Research System

🎭 Elena Rodriguez — Independent Management Consultant

Elena starts a new six-month engagement with a regional professional services firm. The engagement involves a full organizational assessment — interviews with 35 people, review of five years of performance data, and a competitive landscape analysis.

She creates a Claude Project to manage the engagement's knowledge base.

Project setup:

Project name: "[Client Name] Org Assessment — [Year]"

Project instructions (excerpt):

This project is my working environment for the [Client Name] organizational assessment engagement.

Context: [Client Name] is a 200-person accounting and advisory firm in the Pacific Northwest. They are navigating significant partner succession challenges and need a clear organizational strategy for the next five years. The CEO, [Name], has engaged me to produce a strategic assessment and implementation roadmap.

My role in this project: I am the primary strategist and deliverable author. Use this project to help me organize research, test synthesis, and draft deliverable components.

Key stakeholder context:
- CEO [Name]: prefers data-driven recommendations, skeptical of generic frameworks
- CFO [Name]: focused on cost implications of any recommendation
- Senior Partner [Name]: informal opinion leader, skeptical of outside consultants
...

Output standards for this project:
- All drafts should be written for a CEO audience unless specified otherwise
- Use formal but not stuffy language — avoid jargon
- Always flag when a conclusion is my interpretation vs. what data explicitly shows

Documents uploaded:

Week 1: engagement overview and scope document Week 2: interview guide and first batch of interview notes Week 3-4: additional interview notes added as they are completed Week 5: performance data analysis (summarized, not raw data) Week 6+: draft deliverable sections as they are completed

How Elena uses it:

Daily research synthesis: "Here are three more interview transcripts. Update my running themes document with new patterns from these interviews."

Draft testing: "Here is my draft of the strategic options section. Does it reflect the data we've gathered? Are there interview insights I'm not drawing on?"

Preparation: "I have a check-in call with the CEO tomorrow. Based on our research so far, what are the three things most likely to surprise her?"

Outcome:

Elena finds that the project cuts her research organization time by roughly 40% compared to her previous approach of maintaining separate files and re-establishing context each session. The most valuable aspect: Claude can retrieve connections across all the interview notes that she would struggle to hold in working memory simultaneously.

She runs this project structure for every engagement now, and notes that the documents she builds as project resources — interview guides, synthesis templates, deliverable frameworks — have become a library she reuses across engagements.

Scenario Walkthrough: Raj's Code Review Assistant

🎭 Raj Patel — Senior Software Engineer

Raj builds a custom code review assistant for his team's specific codebase standards — not as a replacement for human review, but as a first-pass tool that ensures every PR meets basic quality bars before human review begins.

The challenge: His team's coding standards document is 80 pages. Raj and two other senior engineers are the only people who have read it fully. Junior engineers consistently miss specific patterns the team has adopted, not because they are poor engineers but because the standards are comprehensive and hard to internalize.

System prompt design:

Raj structures the instructions in four sections: 1. Role: "You are CodeReview, a code quality assistant specialized for [team]'s Python and TypeScript codebase." 2. Review scope: "Review for correctness, style compliance, security, performance, and test coverage. Do NOT comment on architectural decisions — flag those for human review instead." 3. Output format: "Produce a structured review with: Overview (one paragraph summary), Issues (critical/high/medium/low), Suggestions (optional improvements), and a Test Coverage Assessment." 4. Standards enforcement: "Apply [team]'s specific standards. When a standard is violated, cite the specific rule number and page from the standards document."

Knowledge files:

He uploads the team's coding standards document (reformatted for retrievability — sections clearly titled, key rules bolded), a list of common anti-patterns they have encountered, and a security checklist adapted from OWASP for their specific stack.

Testing:

Raj tests with 20 real PRs from the past year — 10 that had significant issues caught in human review and 10 that passed cleanly. He checks whether the assistant: - Correctly identifies the issues found in human review - Does not flag false positives (things that were not actually issues) - Correctly formats the review output - Appropriately cites specific standards

After three iterations of the instructions and knowledge files, the assistant catches 83% of the issues that human review caught in the test set, with a false positive rate under 10%.

Deployment:

Raj makes the assistant available to the team via a shared link. He adds it to the team's PR checklist: "Run CodeReview assistant before requesting human review." He makes clear this is a tool for the author, not a replacement for reviewer judgment.

Ongoing maintenance:

Every quarter, Raj reviews the assistant's output on a sample of PRs. When he finds systematic misses, he updates the standards document or the instructions. The assistant has become the most frequently updated part of the team's quality process — which Raj considers a sign it is working.

Research Breakdown: Configured vs. Ad Hoc AI Interactions

Research on configured versus ad hoc AI interactions consistently shows three advantages for configured systems.

Consistency. Configured systems produce significantly more consistent output than ad hoc prompting for the same task, measured by variance in quality ratings across evaluators. The system prompt reduces the AI's decision space, resulting in more predictable behavior.

Quality floor. Configured systems tend to have a higher quality floor — the minimum quality of outputs — than ad hoc prompting, even when the ceiling quality is similar. Ad hoc prompting can produce excellent outputs when the prompt is excellent, but the minimum quality depends entirely on the user's prompting skill. Configured systems provide a baseline that does not depend on user prompting skill.

Accessibility. Configured systems make AI capabilities accessible to users who have not developed prompting skills. A team member who would struggle to get useful output from a general AI assistant can often use a well-configured purpose-specific assistant effectively. This democratizes AI capability within teams.

The limitation: configured systems require upfront design investment and ongoing maintenance. They are worth building when the use case is recurring and the user population is larger than one. For genuinely one-off or highly exploratory tasks, ad hoc prompting remains more appropriate.


Continue to Chapter 38 to learn how to deploy AI tools for teams — governance, access control, training, and change management.