Case Study 1: Alex's Disclosure Dilemma
The Client Who Asked
Persona: Alex (Independent Content Creator and Digital Marketer)
Domain: Freelance consulting, content strategy
Situation: Direct client question about AI use in a major deliverable
Decision: Transparent, contextual disclosure
Outcome: Strengthened client relationship; updated engagement terms
Background
Alex had been working with a regional healthcare marketing agency — a firm that produced patient education content, physician recruitment campaigns, and internal communications for hospital systems. The engagement was her largest ongoing client relationship, representing about 35% of her monthly revenue.
She had used AI tools throughout this engagement. Her AI use was meaningful: she used it for research synthesis, for first-draft generation on many content pieces, for ideating campaign angles, and for reviewing and improving the clarity of technical content. Her AI use made her significantly more productive — she estimated she produced about 40% more output per hour than she did before adopting AI-assisted workflows.
She had not disclosed this to the client. She hadn't been asked, and the deliverables met or exceeded expectations. In her mental framing, her AI use was a tool efficiency, like having better software or a faster computer.
The question surfaced at an annual review meeting.
The Meeting
The client's marketing director, Sandra, was reviewing the year's work. The meeting was positive overall — deliverables had been strong, deadlines had been met, the client was happy. Near the end of the meeting, Sandra said:
"I need to ask you something directly. We've been evaluating our vendor relationships as part of an organizational AI policy we're developing. Do you use AI tools in work you produce for us?"
Alex's first instinct was to assess the question: Is this hostile? Is there a right answer she's looking for? Is this about the healthcare regulatory context (where HIPAA considerations around patient data are live)?
She decided her instinct toward transparency was correct. She answered:
"Yes, I do. Let me give you the full picture."
Her Full Answer
"I use AI tools throughout my workflow — primarily for research synthesis, first-draft generation on some content pieces, and as a thinking tool for campaign ideation. The strategic analysis, the final content judgment, and everything that involves actual patient or clinical content decisions is mine. I don't put patient data or client confidential information into consumer AI tools — anything that touches PHI or your organization's confidential information goes through your systems or stays in documents I control.
"My AI use has made me more productive, which is part of why our volume has been what it is. I haven't been billing you for the efficiency gain — my rates haven't changed — but the work output has been higher than I could have produced otherwise.
"I should have raised this proactively, especially given that you work in a regulated industry where these questions are particularly relevant. I'm sorry I didn't. If there are specific types of content or workflows where you want me to change my approach, I'd like to understand your requirements."
Sandra's Response
Sandra's response surprised Alex. She had expected concern, possibly frustration at not having been told earlier. What she got was something more nuanced:
"Thank you for being direct with me. Honestly, I assumed most vendors were using AI and wasn't sure how to ask. What you've described — research synthesis, first drafts you then substantially own — is consistent with how we're thinking about acceptable use for our own team. What would have concerned me is if you were putting patient information into consumer tools, or if the clinical accuracy work was being delegated to AI judgment rather than yours."
Sandra then described the organization's developing AI policy, which had two key concerns: patient data protection (HIPAA-related) and clinical accuracy accountability (who is responsible for the clinical content). Alex's description had addressed both.
The conversation that followed was productive: they discussed what categories of content would require flagging if AI was involved, what the organization's developing policy required for vendor disclosure, and how they might document AI use in future engagement terms.
What Changed After the Meeting
Two things changed as a result of the meeting.
Engagement terms update: Alex proposed, and Sandra agreed, to add a brief AI use disclosure clause to their engagement agreement going forward. The clause stated that Alex uses AI tools for research synthesis and drafting assistance; that she does not process client confidential information through consumer AI tools; that clinical and patient-facing content remains subject to her professional judgment and editorial oversight; and that she will flag any significant change in her AI use practices. Sandra added one provision: the firm reserves the right to require AI-free production for specific content categories if regulatory requirements dictate.
The clause was brief (a few sentences) and created clarity in both directions.
Proactive disclosure in new engagements: Alex updated her standard new client intake to include a brief description of her workflow, including AI tool use. Not an elaborate disclosure — just a factual description of how she works, offered upfront. She found that raising it proactively, framed matter-of-factly ("here's how I work, including AI tools I use for X purposes"), produced much better conversations than waiting to be asked.
Several new clients asked follow-up questions; none had concerns once she described the specifics. Two clients in regulated industries (one healthcare, one financial services) had specific requirements (no PHI in consumer tools, no proprietary financial data in consumer tools) that she could now confirm compliance with proactively.
What Could Have Gone Differently
Alex reflected on how the conversation could have gone differently.
If Sandra's question had been accusatory — if she had said "I've heard you use AI and we didn't agree to that" — the conversation would have been harder. Alex's position would have been defensible (she hadn't been asked, there was no agreement against it, the deliverables were high quality) but the framing would have been reactive rather than transparent.
If Alex had been evasive or had downplayed her AI use ("I use it occasionally for grammar checking"), the answer would have been technically accurate about some uses and misleading about the overall picture. A client who subsequently discovered the fuller picture would have felt deceived, and that kind of discovery damages relationships more than honest disclosure does.
If her AI use had included processing confidential client information through consumer tools, the conversation would have been genuinely problematic: not a question of disclosure norms, but a concrete data-handling failure.
The outcome was good because the actual practice was defensible and the answer was honest. The disclosure, when it came, created clarity rather than crisis.
The Norms She Developed
After this experience, Alex developed her personal AI disclosure practice for freelance consulting:
Standard engagement disclosure: Brief description of AI tool use in workflow, included in engagement terms or initial project communication.
Proactive flagging triggers: If how she uses AI on a specific project changes significantly, or if the client is in a regulated industry with specific data-handling requirements, she raises it proactively.
Response to direct questions: Always honest, complete, and context-setting — describing what AI does in her workflow, where her professional contribution is, and what she doesn't delegate to AI.
No concealment: If the answer to "do you use AI?" would embarrass her if given accurately and completely, that is a signal that her practice needs to change, not that she needs to give a less complete answer.
Lessons
1. Direct, honest answers to disclosure questions typically land better than expected. The anticipatory anxiety about being asked is often worse than the actual conversation. Clients generally want to know the framework — not to catch you, but to understand what they're getting.
2. Proactive disclosure preempts the harder version of the question. Being asked "do you use AI?" in a situation where you haven't disclosed is more awkward than having disclosed upfront. Front-loading the conversation gives you control of the framing.
3. The quality of your actual practice determines how disclosure goes. If your AI use is defensible (appropriate boundaries, client data protected, professional judgment maintained), disclosure is not risky — it is clarifying. If your practice is not defensible, disclosure reveals a problem. The lesson is to have defensible practices, not to avoid disclosure.
4. Engagement terms that address AI use benefit both parties. Clarity about what is and isn't done with AI, upfront, prevents misunderstandings and creates a shared framework for any questions that arise later.
5. Regulated industries have specific requirements that matter regardless of your general disclosure norms. Healthcare, financial services, legal — these industries have specific data handling rules (HIPAA, SEC regulations, attorney-client privilege) that directly affect which AI tools can be used with client materials. Know these requirements for every client in a regulated industry.
Related: Chapter 33, Section 1 (Disclosure in professional services), Section 5 (Personal ethics framework)
Continue to Case Study 2: Elena's Framework — A Personal AI Ethics Policy for Consulting