Case Study 1: Elena's Condolence Problem

The Email She Had to Write Herself

Persona: Elena (Management Consultant)
Domain: Professional relationships, authentic communication
Context: Death of a client's parent
Decision: Not to use AI for this communication
Outcome: The moral distinction between what AI CAN do and what it SHOULD do


The Situation

Elena had been working with Marcus Chen — the CFO of a mid-size manufacturing company — for eighteen months. Their relationship had grown beyond the typical consultant-client dynamic. Marcus was thoughtful and direct, and Elena had developed genuine professional respect for him. He had referred two other clients to her firm. His organization trusted her.

On a Tuesday afternoon, Elena received an email from Marcus's executive assistant: "Marcus is out of the office this week due to a family bereavement. He will respond to messages when he returns."

She didn't need more information to respond. She knew what to do: she wrote him a condolence message.

Or rather, she almost used AI to write it.


The Moment of Temptation

She had been using AI tools extensively that week. She had three deliverables due, a pitch for a new client, and two ongoing client communications in progress. She was efficient with AI. She was fast.

She opened her AI tool and typed: "Help me write a professional condolence message to a client whose—"

Then she stopped.

She closed the browser tab.


What Stopped Her

The pause was not about capability. She knew AI could produce an excellent condolence message — eloquent, appropriate in tone, warm without being excessive, professional without being cold. She had seen AI produce this kind of content. She knew it would be fine.

She stopped because of a question that surfaced clearly: Would this message be diminished if Marcus knew AI wrote it?

The answer was immediately obvious: yes. Not because Marcus would think less of AI, or because the message would be factually wrong, but because the message would carry meaning in proportion to the care the writer had taken with it. The meaning of condolence is partly communicative — expressing sympathy — and partly relational: demonstrating that you thought of him, that you took time for him specifically, that his loss registered with you as a person, not as a task to be handled.

An AI-generated condolence, however well-composed, would not carry that. It would carry the artifact of someone who handled the task efficiently. That is not the same thing.


What She Wrote

She wrote the message herself. It took about fifteen minutes — longer than an AI generation would have taken.

She did not ask AI for help. She sat with it.

She mentioned a specific thing she had observed about Marcus that felt relevant: he was someone who clearly cared about his family; she had noticed this in small references during their meetings. She didn't know which parent had died, so she wrote a message that didn't assume, one that acknowledged what she knew and didn't overreach into what she didn't.

She wrote it, read it again, changed a sentence that felt slightly off, and sent it.


The Distinction She Articulated to Herself

Elena is analytically minded. After sending the message, she spent a few minutes thinking about the distinction she had just made, because she wanted to be able to articulate it.

She identified two questions at work:

Question 1: Can AI do this? Yes. Clearly. AI produces perfectly competent condolence messages.

Question 2: Should AI do this? No. Because the communication's value is not separable from its origin. A condolence message communicates: I, a person who knows you and was thinking of you specifically, took a moment to acknowledge your loss. That message cannot be outsourced without losing the part that matters.

This is the distinction between task completion and authentic communication. Task completion is about the output. Authentic communication is about the relationship between the writer and the reader that the output represents. The output of an AI-generated condolence note is technically similar to a human-written one. The meaning is different.


The Broader Pattern She Recognized

Elena spent some additional time thinking about whether this was unique to condolences or reflected a broader category.

She identified several other types of communication that fell into the same category: the apology she owed a colleague she had let down, a reference letter she had been asked to write for a mentee, a difficult conversation she needed to have with a client about a project scope problem.

In each case, the communication carried meaning proportionate to her genuine engagement with it. In each case, AI could produce the output. In each case, AI production would strip out the meaning that mattered.

She was not anti-AI. She used AI extensively and valued it. But she recognized a category of professional communication that lived in a different register — where the relationship was the point, and the communication was an act of the relationship, not a task to be processed.


The Response She Received

Marcus wrote back a week later:

"Elena, thank you for your note. It meant a great deal. My father passed last week — he had been ill, so it wasn't a surprise, but these things are never easy. I appreciate that you took the time."

Four sentences of acknowledgment. She read it several times.

She did not think: "I should have just used AI and saved fifteen minutes." She thought: "He noticed. Of course he noticed. Care is perceptible."


What This Case Study Is Not Saying

This case study is not saying that AI should never assist with professional communication. Elena uses AI for the overwhelming majority of her professional writing.

It is not saying that using AI for routine sympathy notes in lower-stakes professional contexts is wrong. Some communications are transactional even when the topic is emotional.

It is not saying that AI-assisted communication is always inadequate. AI helping you think about what to say, organizing your thoughts, or improving the language of something you've written is different from AI generating the message.

It is saying that there exists a category of communication where the meaning is inseparable from the authenticity of the origin — where "a person who knows you cared enough to write this" is part of what the communication communicates. In those cases, the capacity to produce the output is the wrong question. The question is what the output is for.


Lessons

1. The test is not capability but meaning. AI can do many things it should not do in specific contexts. The question is not "can AI produce this?" but "does AI production serve the actual purpose of this communication?"

2. Relationship capital is built through authentic engagement. Professional relationships — particularly the ones that drive referrals, repeat work, and genuine trust — are sustained by demonstrated authenticity. The fifteen minutes Elena spent on a genuine condolence note is part of the account from which she draws when a relationship faces stress.

3. The moral distinction is worth being explicit about. Elena's insight — the distinction between what AI CAN do and what it SHOULD do in human moments — is worth articulating precisely because the temptation to use AI is present and the capability is real.

4. "Authenticity" is not a vague concept here. It has a specific meaning: the reader's experience of receiving communication from a specific person who genuinely thought of them. That experience is altered by AI generation in ways that matter.

5. Noticing matters more than you might expect. The response she received was brief, but it registered that she had taken the time. People in professional relationships notice when someone makes an effort that was not required — particularly under circumstances when efficiency would have been forgiven.


Related: Chapter 32, Section 2 (Relationship-critical communication), Section 7 (The "just because you can" problem)

Continue to Case Study 2: Raj's Learning Trap — When Copilot Was Making Him Worse