Chapter 28 Exercises: Customer-Facing Work: Sales, Support, and Outreach
These exercises build practical AI-assisted customer-facing communication skills. They are most valuable when completed with real scenarios — actual prospects, real support tickets, genuine account contexts.
Foundation Exercises (Exercises 1-5)
Exercise 1: The Research + Write Workflow
Objective: Experience the difference between generic "personalized" outreach and genuinely research-backed outreach.
Task:
1. Choose 5 real prospects or target companies you'd like to reach out to.
2. For each, run the research prompt from Section 28.2.
3. Review the research summaries AI generates:
   - Are the facts accurate? Verify at least one specific fact per summary.
   - Is the "hook suggestion" specific enough to be genuine?
   - Are there any claims that seem generated rather than researched?
4. For each company, generate a personalized outreach email using the Step 2 prompt.
5. Review each email: does the opening "specific observation" feel genuine, or does it read like a template?
6. For any opening that feels generic, rewrite it with a specific detail from your own research.
Reflection prompt: How many of the AI-generated hooks required significant modification? What does this reveal about the quality gap between AI "research" and actual research?
Exercise 2: The Authenticity Test
Objective: Develop sensitivity to the difference between AI-sounding and authentic customer communication.
Task:
1. Find 5 cold emails you've received, including both ones that felt genuine and ones that felt automated.
2. For each email, mark:
   - Specific observations that could only apply to you (genuine)
   - Generic phrases that could apply to any similar company (AI tell)
   - Value propositions that connect to your specific situation (genuine)
   - Value propositions that apply to everyone in your category (AI tell)
3. Now write a cold email to yourself: what would genuinely get your attention? What would you need to believe about the sender to respond?
4. Use AI to draft a cold email to yourself. Compare your "ideal email" to AI's draft.
5. Create a personal checklist: 5-7 specific criteria you'll use to evaluate customer-facing AI output before sending.
Deliverable: Your personal authenticity checklist for customer-facing AI output.
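The manual checklist can be backed by an automated first pass. A minimal sketch, assuming you keep a list of generic "AI tell" phrases collected during your own email audit; the phrases below are purely illustrative, and the function flags drafts for human review rather than replacing the checklist:

```python
# Hypothetical phrase list -- replace with "AI tell" phrases from your own
# audit in step 2. Stored lowercase for case-insensitive matching.
GENERIC_PHRASES = [
    "i hope this email finds you well",
    "i came across your company",
    "in today's fast-paced world",
]

def authenticity_flags(draft: str) -> list[str]:
    """Return every generic phrase found in a draft (case-insensitive)."""
    lowered = draft.lower()
    return [phrase for phrase in GENERIC_PHRASES if phrase in lowered]

draft = "Hi Sam, I hope this email finds you well. I came across your company on LinkedIn."
print(authenticity_flags(draft))  # both opener phrases are flagged
```

A hit list is only a screen: an email can pass it and still read as a template, which is why the human criteria in step 5 remain the real test.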
Exercise 3: Support Ticket Response Practice
Objective: Build AI-assisted support response skills with quality review.
Task:
1. Collect or create 5 realistic customer support tickets covering different types: technical issue, billing question, feature request, complaint, and praise.
2. For each ticket, run the draft response generation prompt from Section 28.4.
3. For each AI-generated response:
   - Is the specific issue genuinely addressed, or is this a generic helpful response?
   - Is the tone appropriately empathetic vs. appropriately efficient?
   - Is there anything in the ticket that AI's response missed?
   - Would you send this as-is, or does it need editing?
4. Edit each response as needed.
5. Run the escalation detection prompt on each ticket. Do you agree with AI's escalation assessment?
Reflection prompt: Which ticket type required the most editing? What does this tell you about where AI-generated support responses are weakest?
Exercise 4: Objection Handling Preparation
Objective: Build a comprehensive objection handling guide for your sales context.
Task:
1. List the 8-10 objections you most commonly encounter in sales conversations (be specific and use the language customers actually use).
2. Run the objection handling preparation prompt from Section 28.3 with your list.
3. For each AI-suggested response:
   - Is the clarifying question useful and natural?
   - Does the response address the underlying concern, not just the surface objection?
   - Is the suggested proof point one you actually have?
4. Identify the 2-3 objections that most often indicate a genuine mismatch. Do you agree with AI's assessment of which ones are mismatch signals?
5. For each objection, refine the response until it sounds like you, not a sales trainer's script.
Deliverable: A finalized objection handling guide for your 8-10 most common objections.
Exercise 5: Discovery Question Framework
Objective: Build and test a discovery question set using AI.
Task:
1. Run the discovery question framework prompt from Section 28.3 for your specific offering.
2. Review the 12-15 questions generated:
   - Which questions would you actually ask? (vs. questions that seem theoretical)
   - Which questions have you already been asking? (AI may validate your current approach)
   - Which questions are genuinely new to you?
   - Are there questions missing that your experience tells you are critical?
3. Test the question framework on a real or role-played discovery call.
4. After the call, evaluate: which questions produced the most useful information? Which fell flat?
5. Revise the framework based on the test.
Deliverable: A tested, personalized discovery question framework.
Intermediate Exercises (Exercises 6-10)
Exercise 6: The Cold Email Sequence
Objective: Build a tested multi-touch outreach sequence.
Task:
1. Run the cold email sequence prompt from Section 28.2 for a specific prospect persona.
2. Review the 4-email sequence:
   - Does each email make a different case, or are they variations of the same message?
   - Does the tone progression make sense (Email 1 ≠ Email 4 in approach)?
   - Is the "break-up" email genuinely respectful while leaving the door open?
3. Test the sequence on a real mini-campaign (5-10 prospects). Track: which emails get opened? Which get replies? Which get unsubscribes?
4. After 3 weeks, iterate: what would you change based on the results?
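The tracking in step 3 can be kept as a simple event log and aggregated per email. A minimal sketch, assuming you record one (email_number, event) pair per tracked action; the event names are illustrative, not tied to any particular email tool:

```python
from collections import Counter

def sequence_metrics(events):
    """Compute open/reply/unsubscribe rates per email in the sequence.

    events: iterable of (email_number, event) pairs, where event is one of
    "sent", "opened", "replied", "unsubscribed".
    """
    counts = Counter(events)
    numbers = sorted({n for n, _ in counts})
    metrics = {}
    for n in numbers:
        sent = counts[(n, "sent")]
        if sent == 0:
            continue  # nothing sent at this step; no rates to report
        metrics[n] = {
            "open_rate": counts[(n, "opened")] / sent,
            "reply_rate": counts[(n, "replied")] / sent,
            "unsub_rate": counts[(n, "unsubscribed")] / sent,
        }
    return metrics
```

Comparing these per-email rates across the 4 touches is what shows whether later emails are earning replies or only unsubscribes.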
Reflection prompt: What response rate justifies the investment in a multi-touch sequence? At what point does sequence follow-up cross the line from persistent to annoying?
Exercise 7: Account Review Preparation System
Objective: Build a reusable account review preparation workflow.
Task:
1. Choose 3 accounts at different stages: healthy/growing, neutral/stable, at-risk.
2. Run the account review preparation prompt from Section 28.5 for each.
3. Compare what AI suggests covering for each account type. What's different?
4. For the at-risk account specifically:
   - Does AI's suggested approach feel right for the relationship complexity?
   - What does AI miss that you know from the relationship history?
   - What would you add to make the renewal conversation strategy more specific?
5. Build a standard account review preparation template that works for your most common account type.
Exercise 8: FAQ and Knowledge Base Construction
Objective: Convert unstructured support knowledge into a structured knowledge base.
Task:
1. Collect 20-30 questions from your support tickets, FAQs, or common customer conversations over the past 3 months.
2. Run the FAQ and knowledge base creation prompt from Section 28.4.
3. Review the categorization and draft answers:
   - Are the categories logical for how customers actually think about problems?
   - Are any answers missing edge cases that you know from experience?
   - Are answers that require escalation marked as such?
4. Identify the gaps AI flagged: what questions are you getting that you don't have good answers for?
5. Complete the knowledge base with your own answers for the gap questions.
Deliverable: A complete FAQ document (minimum 20 Q&As) with escalation flags and gap identification.
Exercise 9: Escalation Criteria Development
Objective: Define explicit escalation criteria for AI-assisted support in your context.
Task:
1. Collect or recall 10 customer support situations from your experience, including some that required escalation and some that didn't.
2. For each situation, run the escalation detection prompt.
3. Compare AI's escalation assessment to what actually happened:
   - Where did AI correctly identify escalation risk?
   - Where did AI under-flag (missed a real escalation need)?
   - Where did AI over-flag (suggested escalation for routine situations)?
4. Based on the comparison, write explicit escalation criteria for your context: "Escalate to a human agent whenever..."
5. Test your criteria against 5 new support ticket examples.
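Criteria written in step 4 are easiest to test in step 5 when they are encoded as explicit, checkable rules. A minimal sketch, assuming keyword-style triggers; the rule names and phrases are hypothetical, and real criteria should also weigh account tier, sentiment, and anything with legal or security implications:

```python
# Hypothetical triggers -- replace with the criteria from your own document.
# Stored lowercase for case-insensitive matching.
ESCALATION_RULES = {
    "legal_threat": ("lawyer", "legal action", "sue"),
    "churn_signal": ("cancel my account", "switching to", "refund"),
    "security_issue": ("breach", "unauthorized access", "leaked"),
}

def escalation_reasons(ticket_text: str) -> list[str]:
    """Return the name of every rule the ticket triggers (empty = routine)."""
    text = ticket_text.lower()
    return [name for name, triggers in ESCALATION_RULES.items()
            if any(trigger in text for trigger in triggers)]
```

Running the comparison in step 3 against this kind of rule set makes under-flagging and over-flagging concrete: each miss is a rule to add, and each false alarm is a trigger to tighten.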
Deliverable: A written escalation criteria document for your support context.
Exercise 10: LinkedIn Outreach System
Objective: Build an AI-assisted LinkedIn outreach system that maintains authenticity.
Task:
1. Identify 10 LinkedIn profiles you'd genuinely like to connect with for professional reasons.
2. For each person:
   - Identify one genuine reason to connect (recent content, shared experience, specific relevant work).
   - Run the LinkedIn outreach prompt with this specific hook.
3. Review each note for authenticity:
   - Does it sound like you?
   - Would the recipient find the connection reason credible and relevant?
   - Is it specific enough to feel genuine?
4. Edit the notes that need work.
5. Send the connection requests. Track: what acceptance rate do you get, and do any connections convert to conversations?
Reflection prompt: What's the minimum specificity threshold for LinkedIn outreach to feel genuine rather than automated? How does this change for warm vs. cold connections?
Advanced Exercises (Exercises 11-15)
Exercise 11: The Personalization at Scale Workflow
Objective: Build and test a scalable personalized outreach system.
Task:
1. Identify a target list of 20-30 companies for outreach.
2. Build a research spreadsheet: company name, website, recent news (manually researched for 5 priority companies; AI-researched for the rest).
3. Run the scalable personalization workflow prompt from Section 28.2.
4. Generate personalized hooks for all 20-30 companies using the AI research prompt.
5. Verify the top 10 priority company hooks against your own knowledge or quick fact-checking.
6. Identify and flag hooks that need modification or replacement.
7. Generate the full email for each company.
8. Before sending, apply your authenticity checklist from Exercise 2.
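The quality log from steps 5 and 6 can be derived directly from the research spreadsheet. A minimal sketch, assuming each row records whether its hook was human-verified; the field names are illustrative:

```python
def quality_log(rows):
    """Split outreach rows into send-ready and needs-review groups.

    rows: dicts with "company", "hook", and "verified" (True once
    fact-checked; False or None for AI-researched hooks not yet verified).
    """
    log = {"ready": [], "needs_review": []}
    for row in rows:
        # Only an explicit True counts as verified -- unreviewed AI research
        # stays in the review queue rather than defaulting to send-ready.
        bucket = "ready" if row.get("verified") is True else "needs_review"
        log[bucket].append(row["company"])
    return log
```

Treating "not yet checked" the same as "failed the check" is the point of the design: at scale, the default must be review, not send.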
Deliverable: A complete outreach campaign with 20-30 personalized emails, ready to send, with a quality log noting what was verified and what was modified.
Exercise 12: Support Response Quality Calibration
Objective: Calibrate AI support responses against your quality standards.
Task:
1. Collect 10 real support responses you or your team has sent that you're proud of: responses that resolved issues and maintained relationships.
2. Analyze these responses for patterns:
   - How do they open (vs. standard AI openers)?
   - How do they handle acknowledging responsibility vs. not?
   - How do they close and set next steps?
3. Create a "quality standards brief": a document you can add to support response prompts that captures your standards.
4. Test the quality standards brief on 5 new support tickets.
5. Compare output with the standards brief vs. without. Does the brief improve quality?
Deliverable: A support response quality standards brief, tested and refined.
Exercise 13: Competitive Differentiation Workshop
Objective: Build a complete competitive differentiation framework for your 3 main competitors.
Task:
1. Identify your 3 main competitors.
2. Run the competitive differentiation prompt from Section 28.3 for each.
3. For each competitor analysis:
   - Add your own knowledge of where they're stronger than AI acknowledges.
   - Correct any inaccuracies in AI's description of their offering.
   - Add specific win/loss data you have from real competitive situations.
4. Build a one-page competitive card for each competitor: your honest assessment of where you win, where they win, and how to position for each.
5. Test each competitive card in a role-played sales conversation.
Deliverable: Three competitive cards, tested and refined.
Exercise 14: The Transparency Protocol
Objective: Develop a personal or organizational AI transparency protocol for customer communication.
Task:
1. Map your customer-facing AI use cases into three categories:
   - Clearly appropriate to disclose (real-time automated interactions, AI chatbots)
   - Judgment call (AI-assisted, human-reviewed written communication)
   - Clearly not disclosure-worthy (grammar check, spell-check level assistance)
2. For the "judgment call" category, identify the factors that tip toward disclosure vs. not:
   - Customer relationship type
   - Communication sensitivity
   - Industry norms
   - Your organization's values
3. Write a personal transparency protocol: when you will disclose AI use, when you won't, and how you'll handle the situation if a customer directly asks.
4. Stress-test your protocol: what's the hardest case for your protocol? Where does it break down?
Deliverable: A written AI transparency protocol for your customer communication context.
Exercise 15: The Customer Health Monitoring System
Objective: Use AI to build a proactive account health monitoring approach.
Task:
1. Identify your 10 most important accounts.
2. For each account, collect: usage data, recent support tickets, last conversation notes, contract terms, and any relationship signals (positive or negative).
3. Run the upsell identification prompt from Section 28.5 across all 10 accounts.
4. Run a modified version: "Based on these account summaries, identify which accounts show churn risk signals."
5. Compare the opportunity and risk assessments:
   - Do you agree with the priority assessments?
   - Are there accounts where AI misses something you know from the relationship?
   - What follow-up conversations do these assessments suggest?
6. Create a monthly account health review process using AI to synthesize available data before each review meeting.
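One way to turn the signals collected in step 2 into a triage order for the monthly review is a simple weighted score. A minimal sketch; the signal names, weights, and the -1.0 to 1.0 scale are illustrative assumptions, not a standard scoring model, and the score should prompt conversations rather than replace relationship judgment:

```python
# Illustrative weights -- tune against accounts whose health you already know.
WEIGHTS = {"usage_trend": 0.4, "support_sentiment": 0.3, "relationship": 0.3}

def health_score(signals: dict) -> float:
    """Weighted sum of signals, each normalized to the range [-1.0, 1.0]."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in WEIGHTS.items())

def triage(accounts: dict) -> list:
    """Sort accounts lowest score first, so at-risk accounts surface on top."""
    return sorted(((name, health_score(s)) for name, s in accounts.items()),
                  key=lambda pair: pair[1])
```

Step 5's comparison is the calibration loop: when the score disagrees with what you know from the relationship, adjust the weights or add the missing signal.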
Deliverable: A 10-account health assessment with follow-up action items.
Capstone Exercise
Exercise 16: The Full Outreach Campaign
Objective: Plan and execute a complete AI-assisted outreach campaign for a real business goal.
Goal: Generate 10+ qualified conversations over a 30-day campaign.
Requirements:
- Target list: 50 prospects minimum
- Personalization: every email must have a specific hook (verified, not just AI-generated)
- Sequence: minimum 3-touch for non-responders
- Quality review: every email reviewed against your authenticity checklist before sending
- Tracking: open rate, reply rate, conversation rate, unsubscribe rate
Workflow:
1. Research (AI + manual verification for top 20%)
2. Personalization hook generation (AI-generated, human-verified)
3. Email drafting (AI draft + human edit)
4. Quality review (authenticity checklist)
5. Sending and tracking
6. Sequence management (follow-ups on Days 5, 12, and 21)
7. Results analysis (what worked, what didn't)
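The tracking metrics the requirements call for reduce to simple funnel arithmetic. A minimal sketch, assuming the raw counts come from your own email or CRM tooling:

```python
def funnel_rates(sent: int, opened: int, replied: int,
                 conversations: int, unsubscribed: int) -> dict:
    """Return campaign funnel rates as fractions of emails sent (sent > 0)."""
    return {
        "open_rate": opened / sent,
        "reply_rate": replied / sent,
        "conversation_rate": conversations / sent,
        "unsub_rate": unsubscribed / sent,
    }

# Example with the campaign's minimum list size of 50; the other counts
# are made-up numbers for illustration, not benchmarks.
print(funnel_rates(sent=50, opened=25, replied=6, conversations=2, unsubscribed=1))
```

The conversation rate against the 50-prospect minimum is what tells you whether the 10-conversation goal needs a larger list or a better hook.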
Deliverable: A post-campaign analysis: what results you achieved, what you'd do differently, and specific improvements to your AI outreach workflow based on the results.
Reflection prompt: How did the AI-assisted outreach compare to your typical outreach results (if you have baseline data)? What was the single most important quality control step in maintaining authenticity at scale?