Case Study 2: Elena's Framework
A Personal AI Ethics Policy for Consulting
Persona: Elena (Management Consultant)
Domain: Management consulting, organizational advisory
Situation: Developing a principled, written personal AI ethics policy
Approach: Systematic framework development across disclosure, attribution, fairness, and deception
Outcome: Working policy that guides daily decisions and informs client relationships
Why She Built It
Elena did not build her personal AI ethics policy in response to a crisis. She built it in anticipation of one.
After two years of using AI tools professionally, she had noticed a pattern: most of her decisions about AI use were ad hoc. A question would arise — Should I disclose this? How do I describe this AI-assisted deliverable? Is it okay to use AI for this? — and she would resolve it in the moment based on instinct and whatever precedents came to mind. Sometimes her answers were consistent; sometimes they weren't. She could not have articulated her policy because she hadn't developed one.
What prompted the formal effort was a conversation with a colleague who had received a difficult client question about AI use and hadn't handled it well — not because the practice was indefensible, but because he hadn't thought through what he was prepared to say. "I didn't have an answer ready because I'd never thought about what my answer should be," he told her.
Elena decided to think about it before she needed to.
Her Development Process
She spent a Saturday morning working through her policy. She did not use AI to write it (the irony was not lost on her). She used a blank document and worked through each section in sequence.
She started with the question: What do I actually use AI for in my consulting work?
Her list:
- Research synthesis (summarizing literature, synthesizing industry reports)
- First-draft generation for sections of reports and presentations
- Structuring and outlining complex analyses
- Generating option sets for client consideration
- Reviewing and improving clarity of technical writing
- Thinking through problems interactively (using AI as a sounding board)
- Generating alternatives when I'm stuck on a framing problem
She then asked, for each: Who is affected by this use? What are their reasonable expectations? What would they want to know?
That analysis produced the framework below.
Her Written Policy
Section 1: Research and Drafting Assistance
What I do: I use AI tools for research synthesis, first-draft generation, outlining, and as a thinking aid throughout my consulting work.
Disclosure standard: I do not routinely disclose AI-assisted research or drafting in client deliverables. My rationale: clients retain me for strategic analysis and professional judgment, not for the unassisted production of written documents. The research synthesis and drafting support are tools in my workflow — comparable to using databases, templates, or professional editing assistance — and do not change the nature of my professional contribution.
Exception — proactive disclosure: I disclose AI involvement proactively when: (a) the client's contract terms address AI use; (b) the client is in a regulated industry (healthcare, financial services, legal) where AI tool use affects compliance obligations; or (c) I believe a reasonable client would consider the extent of AI involvement material to their assessment of the deliverable.
When asked directly: I answer honestly and completely. I describe the nature of my AI use, where my professional judgment is (and isn't) the basis of the work, and what data handling practices I apply.
Section 2: Attribution in Published Work
What I do: I occasionally publish thought leadership pieces, articles, and conference presentations under my own name that are developed with AI assistance.
Attribution standard: When AI substantially contributed to the drafting of a published piece (beyond grammar and minor language edits), I include an acknowledgment: "Prepared with AI assistance" or equivalent. I do not use language that falsely implies purely unaided authorship for substantially AI-drafted content.
Content standard: All analysis and strategic perspectives in my published work are mine. I do not publish AI-generated analysis as if it were my own independent analysis.
Venues: I follow each publication venue's AI policy. If a venue prohibits AI assistance, I comply.
Section 3: Client Data Handling
Rule: No client confidential information goes into consumer AI tools. Consumer AI means: the free or standard subscription tiers of publicly available tools, where data handling terms do not include adequate confidentiality protections.
Enterprise tools: For AI tools with appropriate enterprise data handling agreements, I follow the specific tool's data classification requirements and my firm's approved tool list.
Client materials: All client documents, client-identified data, and information about clients' business plans, personnel, or financial details stay out of consumer AI.
What I tell clients: I proactively communicate this data handling practice to clients in regulated industries and on request.
Section 4: Deception — My Bright Lines
I do not:
- Generate fake reviews, testimonials, endorsements, or social content that will be presented as authentic human experience
- Create AI personas that will interact with people under the pretense of being human
- Submit AI-generated work to venues that prohibit AI use
- Use AI to generate content for competitive bids or proposals that misrepresents the nature of my capabilities or approach
I note the distinction between concealment and active deception: not disclosing AI assistance in a context where no disclosure is expected is different from actively asserting "I wrote every word of this" when I did not. I avoid both, but the second is an absolute line.
Section 5: Competitive and Fairness Considerations
I am aware that my AI tool access provides productivity advantages in proposal and competitive contexts. My competitive strategy is to ensure that my professional judgment, strategic thinking, and client relationships — not AI-generated volume — are the source of my competitive value. I do not compete on AI-generated output volume in contexts where judgment is what clients are purchasing.
I support industry-level conversations about AI disclosure norms for competitive proposals and believe clear norms benefit all participants.
Section 6: Review Cadence
I review this policy annually, or sooner when:
- a significant regulatory development affects AI use in my practice area;
- a professional association I'm affiliated with issues updated guidance;
- I encounter a situation my current policy doesn't adequately address.
How She Uses It
The policy is not a publicly circulated document. It is a working tool for her own decisions.
When an ambiguous situation arises — and they do — she refers to the relevant section. Usually the answer is clear from the policy. Occasionally she identifies a gap and adds to it.
She has shared it with one trusted colleague who does similar work. His feedback was: "This is more explicit than anything I have. I should do this." He adapted her framework for his own practice. She found that discussing the policy with a peer was itself useful — explaining her reasoning revealed places where it was less principled than she had thought.
The policy also serves as preparation for client conversations. When a client asks about AI use, she knows exactly what she wants to say — because she has thought through the positions and written them down. A conversation like the one described in Case Study 1 of this chapter (the disclosure conversation) would have been more difficult without this clarity.
What She Would Tell Others
Elena's advice for professionals developing their own AI ethics framework:
Do it before you need it. The ad hoc approach produces inconsistent decisions and leaves you unprepared when a direct question arrives.
Be specific, not aspirational. A policy that says "I will use AI responsibly" is not a policy. A policy that specifies what disclosure standard you apply, in which contexts, for which types of AI involvement, is a policy you can actually use.
Distinguish categories that require different standards. Disclosure norms for client work, for published content, for competitive proposals, and for data handling are all different. A single rule doesn't cover all four.
Write it down. The act of writing forces clarity. Policies that exist only as mental models are fuzzy at the edges. Written policies reveal the places where your thinking is less settled than you believed.
Review it. The norms are changing. A policy written in 2024 needs updating in 2026. Build the review practice in.
Don't let perfection prevent starting. Elena's first version was imperfect. She has revised it three times in two years. The first version was still much better than no version.
Lessons
1. Having a written policy is qualitatively different from having general intentions. Written policies create accountability, enable consistency, and prepare you for situations before they arise.
2. Specificity is the useful quality. A policy that distinguishes consumer vs. enterprise tools, published vs. unpublished work, and direct questions vs. routine non-disclosure is useful. A policy that says "be ethical" is not.
3. The process of writing reveals gaps. Elena identified two areas during her policy development where she hadn't fully thought through her position. Writing the policy forced the thinking.
4. Sharing with a trusted peer improves the policy. The act of explaining your reasoning to someone who will ask genuine questions reveals places where the reasoning is weaker than you thought.
5. A policy you can articulate is a relationship asset. Clients and peers who can see that you have a considered, principled approach to AI ethics trust your work product more, not less. The framework is not just compliance — it is a signal about professional character.
Related: Chapter 33, Section 5 (Developing a personal AI ethics framework), Section 4 (Organizational ethics), Section 1 (Disclosure sliding scale)
Return to: Case Study 1: Alex's Disclosure Dilemma — The Client Who Asked