Case Study 1: Elena's Client Data Crisis

When Confidential Information Almost Went Into ChatGPT

Note

This case study is for educational purposes and does not constitute legal advice.

Persona: Elena (Management Consultant)
Domain: Management consulting, client data handling
Context: Near-miss with confidential client information and a consumer AI tool
Decision: Stopped before the breach occurred; built organizational framework
Outcome: Updated engagement practices, clear data classification policy, organizational training


Background

Elena's consulting firm had been retained for a workforce restructuring engagement by a financial services company. The engagement involved sensitive information: specific employee names and compensation data, proprietary organizational structure decisions not yet announced, performance assessments of named individuals, and strategic business plans that were not public.

Elena had been using AI tools productively throughout the engagement: for research synthesis, for drafting frameworks, for helping structure presentation materials. Her AI use had been efficient and valuable.

The near-miss occurred during a particularly intense phase of the project when she was under significant deadline pressure.


The Moment

It was a Tuesday evening. Elena was working late to prepare materials for a client executive presentation scheduled for Thursday. She had a spreadsheet open that contained compensation and role data for 47 named employees — the people whose positions were being restructured.

She needed to develop a summary of the compensation impact analysis quickly. She had two hours to produce it.

She opened ChatGPT. She copied the spreadsheet content — names, roles, compensation ranges, proposed outcomes for each person.

She was about to paste it.

She paused.

Something stopped her. It wasn't a specific trained response — it was a rising discomfort with what she was about to do. She asked herself: Who are these people? What is this data? Where is it going?

The spreadsheet contained individually identifiable compensation information about 47 real people whose jobs were at stake, generated in the context of a confidential business process. The data had been provided to her firm under a confidentiality agreement. Some of these individuals were based in the EU — their data was subject to GDPR. All of it was confidential client information.

She closed the ChatGPT tab.


The Decision Process After She Stopped

Elena spent fifteen minutes working through what she had nearly done.

The confidentiality agreement question: Her firm's engagement letter with the client included standard confidentiality provisions. Those provisions prohibited her from disclosing client confidential information to third parties. Consumer AI tool vendors are third parties. Pasting this data into ChatGPT would have constituted a disclosure to a third party — the vendor whose terms of service reserved the right to use inputs to improve the model.

The GDPR question: Several of the employees were based in the UK and EU. Their personally identifiable employment and compensation data was subject to the EU GDPR and, for the UK-based employees, the UK GDPR. Processing their personal data through a US-based consumer AI tool required a lawful basis for transfer and processing that she did not have.

The client confidentiality question: Even beyond GDPR and the contract, this was the client's confidential business information — decisions not yet announced, compensation structures not public, organizational plans that were commercially sensitive. The client had shared it with her on the basis of a professional confidentiality relationship, not with the expectation that it would be processed through an external commercial platform.

The practical consequence: If this data had been entered into ChatGPT's standard tier, it would have been subject to OpenAI's terms of service as they existed at the time — which did not provide the confidentiality guarantees the client relationship required.

Elena completed the compensation summary herself that evening. It took longer without AI assistance. The presentation was delivered on time.


The Organizational Framework She Built

After the engagement concluded, Elena brought the near-miss to her firm's management committee. She framed it not as a personal mistake she had caught but as a systemic gap: the firm had no clear guidance about what client information could or couldn't go into AI tools.

Working with the firm's leadership and its external legal counsel, she developed a data classification framework for AI tool use:

Category 1: Freely usable with AI tools Public information, firm's own methodology documents, published research, anonymized case study material. Can be used with any AI tool, including consumer tiers.

Category 2: Use only with enterprise AI tools under approved data processing terms Client-provided information that is not personally identifiable and not subject to specific regulatory protections. Examples: industry benchmark data, anonymized process descriptions, aggregated financial data without individual identifiers. Requires enterprise tier with data handling agreements.

Category 3: Review required before AI tool use Client confidential information not in Category 4. Any use with AI tools requires confirming the tool's data handling commitments against the specific confidentiality provisions of the client engagement.

Category 4: Never in AI tools without explicit legal review and specialized tool approval
- Personally identifiable information about individuals (names, compensation, performance)
- PHI under HIPAA
- Information about EU/EEA residents subject to GDPR individual rights
- Attorney-client privileged communications
- Classified or government-controlled information
- Client trade secrets explicitly identified as such
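A framework like this is easiest to apply consistently when the decision logic is written down mechanically. The sketch below is hypothetical, not the firm's actual policy: the flag names and precedence rules are illustrative assumptions showing how a four-tier classification might be encoded so the most restrictive applicable category always wins.

```python
from enum import IntEnum


class DataCategory(IntEnum):
    # Hypothetical tiers mirroring the four-category framework
    PUBLIC = 1            # freely usable with any AI tool
    ENTERPRISE_ONLY = 2   # enterprise tier with data handling agreements
    REVIEW_REQUIRED = 3   # check tool terms against engagement confidentiality
    PROHIBITED = 4        # never without legal review and approved tooling


def classify(contains_pii: bool, is_client_confidential: bool,
             is_regulated: bool, is_public: bool) -> DataCategory:
    """Return the most restrictive category that applies to a piece of data.

    The flags are deliberately coarse; a real policy would be more granular
    (HIPAA PHI, privileged communications, identified trade secrets, etc.).
    """
    if contains_pii or is_regulated:
        return DataCategory.PROHIBITED
    if is_client_confidential:
        return DataCategory.REVIEW_REQUIRED
    if is_public:
        return DataCategory.PUBLIC
    # Client-provided but anonymized or aggregated material
    return DataCategory.ENTERPRISE_ONLY


# Elena's spreadsheet: named individuals' compensation, GDPR-covered,
# client confidential -> Category 4, regardless of the other flags.
spreadsheet_category = classify(contains_pii=True, is_client_confidential=True,
                                is_regulated=True, is_public=False)
assert spreadsheet_category == DataCategory.PROHIBITED
```

The point of encoding the precedence explicitly is the same one the prose makes: staff should not have to weigh each factor fresh under deadline pressure; the presence of any Category 4 trigger ends the analysis.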

The framework was written down, all consulting staff were trained on it, and it was included in the firm's new AI use policy.


The Training Conversation

Elena ran a training session for the firm's consulting staff on the framework. The most common question she received: "Why can't we use Category 4 information if we trust the AI tool?"

Her answer: "It's not about whether we trust the AI's capabilities. It's about the legal relationship between us, the client, and the third-party tool vendor. When you paste a client's personally identifiable compensation data into ChatGPT, you're not just using a tool — you're transmitting that data to OpenAI under their terms of service, not under our client's confidentiality agreement. The client trusted us, not OpenAI. They had no say in whether their employees' data was processed through an external commercial platform. That trust creates a legal and professional obligation that our tool preferences don't override."

Another common question: "What about the GDPR component specifically for EU clients?"

Her answer: "GDPR requires a lawful basis for any processing of EU residents' personal data, including processing by vendors we engage. Consumer AI tools don't have the data processing agreements required for GDPR-compliant processing. If an EU supervisory authority investigates and finds that a consulting firm processed client employee data through consumer AI tools without an adequate legal basis, that's a GDPR enforcement risk — not just for us, but potentially for the client who provided us the data."

The training session raised awareness of issues the staff had not previously thought through. Several people mentioned they had done things similar to what Elena had nearly done, without realizing the implications.


The Contract Language Update

Elena also worked with legal counsel to develop standard AI use language for client engagement contracts going forward:

"[Firm] uses artificial intelligence tools as part of its professional workflow. Client information, including personally identifiable information, will only be processed using [Firm]'s enterprise AI tools operating under appropriate data handling agreements, and will not be entered into consumer AI tools or any platform without adequate privacy and confidentiality protections consistent with applicable law. [Firm] will provide a description of its current AI tool infrastructure to Client on request."

This language did several things: it disclosed that AI tools were used, committed to appropriate data handling, created an explicit obligation the firm would be accountable for, and invited clients to ask questions. Several clients asked about it during contract negotiations — which led to productive conversations about the firm's actual practices and, in two cases, additional client-specific requirements that were accommodated.


What the Near-Miss Cost and What It Saved

What it cost: A Tuesday evening working the old-fashioned way instead of with AI assistance. Fifteen additional minutes of uncomfortable reflection. Several weeks of policy development work after the engagement.

What it saved: The potential consequences of a confidentiality breach — which might have included contract termination, professional liability exposure, reputational harm to the firm, and potential GDPR enforcement consequences for EU-resident data. The client relationship was worth more than any single engagement; the reputational cost of a data breach would have extended well beyond this client.

The near-miss was information. The firm used it.


Lessons

1. The pause before pasting is worth practicing as a habit. Elena stopped not because she had memorized a rule but because something felt wrong when she considered what she was doing. That intuition can be cultivated, but it needs to be reinforced with actual knowledge of what the concerns are.

2. Data classification frameworks remove moment-by-moment judgment. After the framework was in place, staff didn't need to analyze whether each specific piece of information was okay to paste. The categories resolved most common cases in advance.

3. Near-misses are the most valuable training data. The firm learned more from the near-miss than it would have from a generic training on AI use policy. Real situations make abstract principles concrete.

4. The GDPR and HIPAA protections are not bureaucratic obstacles — they reflect real interests. The 47 employees in the spreadsheet had not consented to having their compensation and employment status processed through a commercial AI platform. Their privacy interests were real. The legal framework protects those interests. This framing — the laws protect real people — makes compliance motivation more than rule-following.

5. Client contract language that addresses AI use creates clarity for both parties. The conversation that language provokes is valuable: it surfaces clients' concerns, creates accountability, and prevents the ambiguity that leads to disputes.


Related: Chapter 34, Section 4 (Data Privacy), Section 3 (Trade secrets and confidentiality), Section 7 (Risk management framework)

Continue to Case Study 2: Raj's Open Source Compliance Audit — AI-Generated Code and License Risk