Chapter 28 Quiz: Customer-Facing Work: Sales, Support, and Outreach


Question 1: What is the "research + write" workflow in AI-assisted sales outreach?

A) Using AI to write the email first, then researching the company to verify the content
B) Researching the prospect or company first, then using that specific research as context for generating a personalized email
C) Having a research team prepare company profiles, then asking AI to write emails from templates
D) Using AI for all research and writing steps without human verification

Answer **B** — Researching the prospect or company first, then using that specific research as context for generating a personalized email. The research-then-write sequence is the key to genuine personalization at scale. Without specific research, "personalized" AI outreach is actually generic content with the company name inserted. With research — recent news, business model specifics, relevant challenges — AI can generate hooks that are genuinely relevant to the specific prospect. The chapter also emphasizes verifying at least one specific fact per research summary before using it, since AI can generate plausible but inaccurate company-specific information.
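The research-then-write sequence can be sketched as a two-step pipeline: verified research facts are folded into the generation prompt rather than asking for a "personalized" email from nothing. This is a minimal illustration; the `llm_complete` call, the field names, and the prompt wording are hypothetical stand-ins, not any specific vendor's API.

```python
# Sketch of the research-then-write workflow: gather and verify research
# first, then use it as explicit context for generation.

def build_outreach_prompt(research: dict) -> str:
    """Fold verified research facts into the email-generation prompt."""
    return (
        f"Write a short outreach email to {research['contact']} at "
        f"{research['company']}.\n"
        f"Open with a specific observation based on: {research['recent_news']}\n"
        f"Relevant challenge to address: {research['challenge']}\n"
        "Avoid generic praise; keep it under 120 words."
    )

# Hypothetical research summary -- each fact should be human-verified
# before it reaches a prospect (the chapter's "verify at least one
# specific fact" rule).
research = {
    "company": "Acme Logistics",
    "contact": "Dana Reyes",
    "recent_news": "opened a third distribution hub in Ohio (verified)",
    "challenge": "routing efficiency across multiple hubs",
}

prompt = build_outreach_prompt(research)
# draft = llm_complete(prompt)   # hypothetical generation call
# ...followed by the personal review step before anything is sent.
```

The point of the structure is that the model never writes without prospect-specific context; skipping the research step reduces "personalization" to name insertion.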

Question 2: According to the chapter, what is the single most important quality control step in AI-assisted customer-facing communication?

A) Running all communications through a grammar and tone checker
B) Having a manager review every AI-generated communication before sending
C) A personal review step where the sender reads the communication and takes accountability for it
D) Using only AI tools that are specifically trained on your industry's communication patterns

Answer **C** — A personal review step where the sender reads the communication and takes accountability for it. The personal review step is described as non-optional in the chapter. It's not just reading for errors — it's a genuine assessment of whether the communication sounds like you, whether it's specifically relevant to this person or company, and whether you'd be comfortable if they knew how it was created. The person who sends the communication takes responsibility for it, which requires actually reading and approving it.

Question 3: Which of the following is described as an "AI tell" — a sign that customer communication was generated by AI?

A) Emails that are shorter than 150 words
B) Communications that mention recent news about the recipient's company
C) Generic praise that applies to any company ("I've been impressed by [Company]'s approach to...")
D) Emails that ask clear, direct questions

Answer **C** — Generic praise that applies to any company ("I've been impressed by [Company]'s approach to..."). This is one of the patterns the chapter identifies as recognizable AI-generated content: praise that is specific-sounding but generic in substance. "I've been impressed by your approach to innovation" could be inserted into any company's outreach and would apply to none specifically. Recipients — especially experienced sales professionals who also receive AI-generated outreach — recognize this pattern quickly. The antidote is genuine specificity: a reference to something actually notable and specific about this company.

Question 4: When should a customer support ticket ALWAYS be escalated to a human agent rather than handled with an AI-generated response?

A) When the ticket has been waiting for more than 24 hours
B) When the customer mentions a competitor's product
C) When the customer expresses strong emotion, references legal action, or is contacting about the same unresolved issue for the second or third time
D) When the ticket requires technical knowledge beyond basic troubleshooting

Answer **C** — When the customer expresses strong emotion, references legal action, or is contacting about the same unresolved issue for the second or third time. The escalation indicators the chapter describes include: expressed strong emotion (anger, distress), mentions of legal action or regulatory complaints, references to customer tenure or relationships ("I've been a customer for 10 years"), second or third contact on the same unresolved issue, account-level risk, and ambiguous situations where the answer isn't clear. These situations require human judgment, empathy, and accountability — the elements AI cannot provide.
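The escalation indicators above are simple enough to express as an explicit routing check, which is roughly how triage rules in a helpdesk pipeline would encode them. A minimal sketch, assuming hypothetical ticket field names (`sentiment`, `mentions_legal`, and so on) rather than any real helpdesk API:

```python
# Hedged sketch of the chapter's escalation indicators as a triage rule.
# Field names on `ticket` are illustrative assumptions.

def should_escalate(ticket: dict) -> list[str]:
    """Return the reasons a ticket must go to a human agent (empty = none)."""
    reasons = []
    if ticket.get("sentiment") in ("angry", "distressed"):
        reasons.append("strong emotion expressed")
    if ticket.get("mentions_legal"):
        reasons.append("legal or regulatory mention")
    if ticket.get("references_tenure"):
        reasons.append("customer tenure referenced")
    if ticket.get("contact_count", 1) >= 2:
        reasons.append("repeat contact on unresolved issue")
    if ticket.get("account_at_risk"):
        reasons.append("account-level risk")
    return reasons

ticket = {"sentiment": "angry", "contact_count": 3}
print(should_escalate(ticket))
# → ['strong emotion expressed', 'repeat contact on unresolved issue']
```

Note that the function returns reasons rather than a bare boolean, so the human agent receiving the escalation can see why it was routed to them.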

Question 5: What is the primary advantage of Salesforce Einstein over a standalone AI writing tool for sales outreach?

A) Einstein produces higher-quality writing than general-purpose AI tools
B) Einstein's insights are grounded in real customer data from your CRM, not generic prompts
C) Einstein is less expensive than standalone AI tools for enterprise teams
D) Einstein doesn't require human review of its outputs

Answer **B** — Einstein's insights are grounded in real customer data from your CRM, not generic prompts. The chapter's key point about integrated CRM AI tools (Einstein, HubSpot AI) is that their quality advantage comes from context — they're working with real customer history, actual account data, and your specific sales data. Lead scoring that knows your actual conversion history is more useful than generic scoring. Suggested responses that can see past interactions with the customer are more relevant than responses generated from a fresh prompt. The integration is the advantage; the writing quality difference is secondary.

Question 6: What does "personalization at scale" mean in the context of AI-assisted sales outreach?

A) Sending the same email to thousands of people but with their name and company inserted
B) Using AI to generate genuinely individualized messages to large prospect lists while maintaining quality through research and verification
C) Automating the entire outreach process so salespeople can focus on closing rather than prospecting
D) Using AI to translate outreach messages into different languages for international campaigns

Answer **B** — Using AI to generate genuinely individualized messages to large prospect lists while maintaining quality through research and verification. The chapter is clear that "personalization" that consists only of name and company insertion is not genuine personalization. Real personalization at scale requires a research layer — gathering specific, accurate information about each prospect that informs genuinely relevant communication. AI assists in synthesizing research and generating language; humans verify research accuracy and review for authenticity before sending.

Question 7: What is the "uncanny valley" effect as it applies to AI customer communication?

A) AI that responds too quickly, making customers suspicious of how fast a human could reply
B) AI communication that is almost but not quite human, producing lower trust than either clearly automated or clearly human communication
C) The gap between what AI claims to know about a customer and what it actually knows
D) The performance decline that occurs when AI is used for high-stakes customer interactions

Answer **B** — AI communication that is almost but not quite human, producing lower trust than either clearly automated or clearly human communication. The research finding cited in Section 28.10 is that partially-personalized, slightly-off AI communication may produce worse outcomes than either clearly automated (customers expect less and aren't disappointed) or clearly human communication. This is an argument for thorough editing: AI output that has been thoroughly reviewed and personalized is more trustworthy than AI output that has been lightly reviewed and remains partly AI-sounding.

Question 8: According to the chapter, what is the key distinction between Intercom Fin and Zendesk AI as customer support tools?

A) Intercom Fin is for B2B companies while Zendesk AI is for B2C
B) Fin is designed for automation (AI handles interactions end-to-end) while Zendesk AI is designed for augmentation (AI helps the human agent)
C) Intercom Fin is better for technical support while Zendesk AI is better for general inquiries
D) Fin requires more training data while Zendesk AI works out of the box

Answer **B** — Fin is designed for automation (AI handles interactions end-to-end) while Zendesk AI is designed for augmentation (AI helps the human agent). The chapter explicitly frames this as a philosophy distinction: automation vs. augmentation. This aligns with the book's broader human-in-the-loop theme — organizations need to decide whether their customer support AI is designed to replace human judgment (with robust escalation logic when it fails) or support it (AI drafts, humans review and send). For most customer-facing communication, the chapter advocates the augmentation model.

Question 9: What is the most important element of a cold email's opening sentence, according to the chapter's outreach workflow?

A) The sender's name and company
B) A clear value proposition that explains why the prospect should respond
C) A specific, genuine observation about the prospect's company that makes the outreach relevant
D) A question that encourages the recipient to reply

Answer **C** — A specific, genuine observation about the prospect's company that makes the outreach relevant. The chapter's outreach prompt explicitly instructs: "Open with a specific, genuine observation about their company (not generic praise)." The opening observation is the hook — it signals that the sender has actually looked at this company, not just mass-sent a template. The observation should be specific enough that it couldn't apply to any company in the prospect's industry, and accurate enough that if the prospect checks it, it holds up.

Question 10: According to the chapter, when is disclosure of AI use in customer communication clearly required?

A) When the communication was generated entirely by AI without human editing
B) When the communication mentions specific company data or personal information
C) When customers are having real-time interactions without knowing they're talking to an AI
D) When the communication is sent to more than 10 recipients

Answer **C** — When customers are having real-time interactions without knowing they're talking to an AI. The chapter's transparency framework is clearest at this point: "If a customer is having a conversation with an AI chatbot without knowing it, that's deceptive." Customers should know when they're not talking to a human in real-time conversational interactions. The disclosure question is less clear for AI-assisted written communication that has been reviewed and sent by a human — the chapter presents this as a judgment call depending on relationship, industry norms, and organizational policy.

Question 11: What does the "break-up" email in a cold outreach sequence accomplish?

A) It ends the business relationship with a prospect who has become a customer but then churned
B) It is the final outreach message that explicitly states you won't follow up again, leaving the door open without pressure
C) It formally withdraws a proposal or offer that has not been responded to
D) It terminates the relationship with a prospect who has been unresponsive in a way that burns bridges

Answer **B** — It is the final outreach message that explicitly states you won't follow up again, leaving the door open without pressure. The break-up email (Email 4 in the chapter's sequence) serves two purposes: it closes the outreach cycle cleanly for the sender, and it sometimes generates responses from prospects who weren't ready to engage earlier. The "I won't follow up again" framing removes pressure, which paradoxically can trigger responses from people who were interested but hadn't prioritized responding. The tone should be respectful and genuinely leave the door open for future contact.

Question 12: What is the key failure in AI support responses that most commonly damages customer relationships?

A) Responses that are too long and explain too much
B) Generic acknowledgment that doesn't address the specific issue raised
C) Responses that offer discounts or compensation unnecessarily
D) Responses that end without a clear next step

Answer **B** — Generic acknowledgment that doesn't address the specific issue raised. The draft response generation prompt in Section 28.4 explicitly addresses this: "Acknowledges the specific issue they raised (not a generic 'I'm sorry to hear this')." Generic acknowledgment — a response that could be sent to any customer about any problem — signals that the company hasn't actually read the ticket. This is particularly damaging because it undermines the fundamental promise of support: that someone is actually paying attention to your problem.

Question 13: In the context of account management, what does "account health" typically refer to?

A) The number of support tickets an account generates per month
B) The account's financial health based on their payment history
C) Indicators of customer satisfaction, engagement, and renewal risk tracked through CRM data
D) The alignment between the account's current usage and their contracted capacity

Answer **C** — Indicators of customer satisfaction, engagement, and renewal risk tracked through CRM data. Account health is a composite signal. It typically includes engagement indicators (product usage, feature adoption, login frequency), satisfaction indicators (NPS scores, support ticket sentiment, direct feedback), relationship indicators (responsiveness to calls, attendance at QBRs, relationship with the account manager), and risk indicators (complaints, usage decline, competitive conversations, budget conversations). AI can help synthesize these signals into an account health assessment that informs proactive outreach.
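Because account health is a composite of those signal categories, it is often operationalized as a weighted score. A minimal sketch under stated assumptions: the weights, the 0-to-1 signal scale, and the idea of treating risk indicators as a penalty are all illustrative choices, not a standard formula from the chapter.

```python
# Illustrative composite account-health score. Weights and field names
# are assumptions for the sketch; real scoring would be tuned to the
# organization's own retention data.

def account_health(signals: dict) -> float:
    """Combine 0-1 signals into a single health score (higher = healthier)."""
    positives = {
        "engagement": 0.40,    # product usage, feature adoption, logins
        "satisfaction": 0.35,  # NPS, ticket sentiment, direct feedback
        "relationship": 0.25,  # responsiveness, QBR attendance
    }
    # Missing signals default to a neutral 0.5 rather than zero.
    base = sum(w * signals.get(k, 0.5) for k, w in positives.items())
    # Risk indicators (complaints, usage decline, competitive talk)
    # subtract from the base score.
    return round(max(0.0, base - 0.5 * signals.get("risk", 0.0)), 2)

score = account_health(
    {"engagement": 0.8, "satisfaction": 0.6, "relationship": 0.7, "risk": 0.3}
)
```

The output of a score like this is a prioritization signal for proactive outreach, not a replacement for the account manager's judgment about why the numbers moved.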

Question 14: What does the research cited in Section 28.10 find about AI-assisted support responses reviewed by humans before sending?

A) Human review slows down response times enough to negate the quality benefits
B) Human-reviewed AI responses maintain customer satisfaction scores similar to fully human responses
C) Human review introduces inconsistency because different agents edit AI responses differently
D) Human-reviewed AI responses perform better than fully human responses on satisfaction scores

Answer **B** — Human-reviewed AI responses maintain customer satisfaction scores similar to fully human responses. The research finding is that the human review step is what preserves quality. Fully automated AI responses (without human review) show decreased satisfaction, particularly for complex or emotional issues. Human-reviewed AI responses maintain quality parity with fully human responses, demonstrating that the review step is not just ethically appropriate but measurably effective. This is the empirical evidence base for the human-in-the-loop imperative the chapter advocates.

Question 15: What is the primary risk of using AI research summaries to personalize outreach without verification?

A) The research summaries may be biased toward certain types of companies
B) AI may generate plausible but inaccurate company-specific facts that damage credibility when used
C) Research-based outreach takes longer to send than template-based outreach
D) AI research summaries often include confidential information that shouldn't be referenced

Answer **B** — AI may generate plausible but inaccurate company-specific facts that damage credibility when used. The chapter explicitly warns about this: "AI can generate research summaries that sound specific but aren't genuinely researched. It may synthesize general information about an industry or company type and present it in a personalized frame." If you reference a product launch, funding round, or executive hire that didn't actually happen, the recipient knows immediately — and your credibility drops sharply. The mitigation is verification: check at least one specific fact per research summary before using it in customer-facing outreach.
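The "verify at least one fact" rule can be enforced as a simple gate in an outreach pipeline: a research summary is unusable until a human has marked at least one specific claim as verified. The data shape and field names here are hypothetical, purely to show where the gate sits.

```python
# Hypothetical gate enforcing the verification rule: no research summary
# reaches customer-facing outreach until >= 1 specific fact is
# human-verified. Structure is illustrative, not from a real tool.

def ready_to_use(research_summary: dict) -> bool:
    """True only if at least one specific fact has been human-verified."""
    return any(f.get("verified") for f in research_summary.get("facts", []))

summary = {
    "company": "Acme Logistics",
    "facts": [
        {"claim": "raised a Series B in March", "verified": False},
        {"claim": "opened a third distribution hub", "verified": True},
    ],
}

print(ready_to_use(summary))  # → True
```

A gate like this doesn't catch every hallucinated detail, but it guarantees the summary has been looked at by a human at least once before anything in it is quoted to a prospect.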