Chapter 38 Exercises: Deploying AI in Teams and Organizations
These exercises move from individual reflection to team-level implementation planning. Some are solo exercises; others require engaging with actual colleagues. Complete them in order — each builds on the previous.
Section 1: Team AI Assessment
Exercise 1: The AI Use Inventory
Before you can set policy, you need to understand actual use.
Interview at least three colleagues (or, if you're working solo, document your own use):
- Which AI tools do they use? How often?
- What tasks do they use AI for?
- What's working well?
- What isn't working?
- What concerns do they have?
Write a 300-word summary of your findings. What patterns emerge? What surprises you?
Exercise 2: The Skill Spectrum Map
Map your team (or yourself and three people you work with) on a simple 5-point AI literacy scale:
1. Non-user — Hasn't engaged with AI tools
2. Explorer — Has tried AI tools occasionally, inconsistently
3. Practitioner — Uses AI regularly for specific tasks, getting real value
4. Expert — Uses AI across many tasks, gets consistently good results, has developed judgment about when to use and when not to
5. Integrator — Has built AI into core workflows, contributes to team AI knowledge
For each person (or role), note: What would it take to move them one level up? What investment of time/training/support?
Exercise 3: The Use Case Sensitivity Matrix
Create a simple 2x2 matrix:
- X-axis: AI value potential (Low → High)
- Y-axis: Information sensitivity / confidentiality risk (Low → High)
Place 10 tasks your team does in this matrix. The quadrants suggest:
- High value, Low sensitivity: Tier 1 (approved, encourage use)
- High value, High sensitivity: Tier 2 (approved with safeguards, additional review)
- Low value, Low sensitivity: Optional (use if helpful, don't mandate)
- Low value, High sensitivity: Tier 3 (probably prohibited — low upside, meaningful risk)
What does this map tell you about where to focus your team's AI use?
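If you keep your task inventory in a spreadsheet or script, the quadrant logic above can be expressed as a small helper. A minimal sketch — the 1-5 scoring scale, the threshold, and the sample tasks are illustrative assumptions, not part of the chapter's framework:

```python
# Illustrative sketch: map rough value/sensitivity scores (1-5) to the
# chapter's suggested tiers. The threshold of 3 is an assumption.

def classify_task(value: int, sensitivity: int, threshold: int = 3) -> str:
    """Suggest a tier from a task's AI value and information sensitivity."""
    high_value = value >= threshold
    high_sensitivity = sensitivity >= threshold
    if high_value and not high_sensitivity:
        return "Tier 1: approved, encourage use"
    if high_value and high_sensitivity:
        return "Tier 2: approved with safeguards"
    if not high_value and high_sensitivity:
        return "Tier 3: probably prohibited"
    return "Optional: use if helpful"

# Hypothetical sample tasks with (value, sensitivity) scores
tasks = {
    "Drafting meeting summaries": (4, 1),
    "Analyzing client contracts": (5, 5),
    "Formatting internal memos": (2, 1),
}
for name, (value, sensitivity) in tasks.items():
    print(f"{name}: {classify_task(value, sensitivity)}")
```

Scoring tasks numerically forces the discussion the exercise is after: two colleagues who score the same task differently have surfaced a real disagreement about risk.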
Section 2: Policy Development
Exercise 4: Draft Your Team's Three-Tier Use Case List
Using the framework from this chapter and your use case inventory from Exercise 1:
Draft:
- 5-8 Tier 1 use cases (approved)
- 3-5 Tier 2 use cases (approved with review)
- 3-5 Tier 3 use cases (prohibited)
For each Tier 2 case, specify what the review process should be. For each Tier 3 case, explain why it's prohibited (the "why" matters for compliance and buy-in).
Exercise 5: The Data Handling Rules Exercise
For your team or organization, answer these questions:
- Which AI tools are currently being used? Are they all sanctioned?
- What categories of information do team members regularly work with? (client data, proprietary data, PII, public information, internal strategies, etc.)
- For each category, what should the rule be about sharing with AI tools?
- Does your industry have specific regulatory requirements (HIPAA, GDPR, financial regulations, legal privilege) that affect what can be shared with AI tools?
Draft a one-page data handling addendum for your team's AI policy. Be specific — "sensitive information" is not actionable; "client names, contract terms, revenue data, and employee information" is.
Exercise 6: The Disclosure Decision Exercise
For your work context:
- What kinds of work do you produce that go to external parties (clients, publishers, authorities, the public)?
- For each category, what is the relevant community's expectation around AI use disclosure? (Research this if you're unsure — what are peers in your industry saying?)
- Draft your default disclosure position: when will you proactively disclose AI use, and how?
- Draft the specific language you would use to disclose AI assistance to a client. Make it honest and confidence-inspiring rather than apologetic.
Exercise 7: The Quality Standard Specification
For the three most common types of deliverable your team produces:
Define specifically what "done" looks like for AI-assisted versions:
- What has been verified?
- Who has reviewed it?
- What editing has occurred?
- What's the test that it meets the standard?
Compare this to your current "done" standard. Are they the same? If AI assistance makes the standard harder or easier to meet, why, and is that appropriate?
Section 3: Training and Resources
Exercise 8: Build Your First Prompt Library Entry
Identify one task your team does regularly that is well-suited to AI assistance. Create a prompt library entry for it:
- Task name and description
- When to use AI for this task (and when not to)
- The full prompt template with placeholders for variable elements
- Instructions for what context to add
- A quality checklist: five things to verify or review before the output is "done"
- An example of good output (either an example you've produced or describe what it should look like)
Share this with at least one colleague and get their feedback: Is the prompt template clear? Is the quality checklist complete? Would they use this?
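If your team stores shared resources in a repository, a library entry can live as structured data so it is easy to search, version, and review. A minimal sketch; the field names and the sample entry are illustrative assumptions, not the chapter's template:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a prompt library entry as structured data.
# Field names mirror the exercise's checklist but are assumptions.

@dataclass
class PromptLibraryEntry:
    task_name: str
    description: str
    when_to_use: str
    when_not_to_use: str
    prompt_template: str          # placeholders in {braces}
    context_instructions: str
    quality_checklist: list = field(default_factory=list)

entry = PromptLibraryEntry(
    task_name="Client status update draft",
    description="First draft of a routine weekly status email.",
    when_to_use="Routine updates with no sensitive figures.",
    when_not_to_use="Updates involving contract or revenue details.",
    prompt_template=(
        "Draft a concise status update for {client} covering {topics}. "
        "Tone: professional, direct. Length: under 200 words."
    ),
    context_instructions="Paste the bullet list of this week's progress.",
    quality_checklist=[
        "All facts verified against the project tracker",
        "No client-confidential data was sent to the tool",
        "Tone matches our house style",
        "Names and dates are correct",
        "A human has read the final version end to end",
    ],
)

# Fill the template for a specific use
prompt = entry.prompt_template.format(client="Acme Co.", topics="milestones, risks")
print(prompt)
```

Keeping entries in one format also makes the colleague-feedback step concrete: a reviewer can point at a specific field ("the checklist is missing a verification step") rather than at the entry as a whole.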
Exercise 9: The Show-and-Tell Prep Exercise
Identify a real AI-assisted workflow you've used successfully. Prepare a 15-minute demonstration to show a colleague:
- The task you were doing
- The prompt you used (and why you constructed it that way)
- What the AI produced
- What you did to the output (editing, verification, refinement)
- The final result
Present this demonstration to at least one colleague. Afterward, ask them: What surprised you? What would you adapt for your own work? What questions do you have?
This exercise is the foundation of peer AI learning — it's far more effective than any formal training.
Exercise 10: The Concern Conversation
Identify one colleague who is skeptical about or resistant to AI adoption. Have a genuine conversation with them:
- What specifically concerns them?
- What experiences have they had with AI tools (good or bad)?
- What would change their mind, if anything?
- What could be done differently in how AI is being adopted on the team?
Write a 200-word reflection: What did you learn from this conversation? Did any of their concerns change how you think about AI adoption? What would you do differently as a result?
Section 4: Implementation Planning
Exercise 11: The Team AI Policy Draft
Using the template in this chapter and your work from Exercises 4, 5, 6, and 7, draft a complete team AI policy.
Your policy should:
- Be readable in 10 minutes
- Be actionable (specific enough to guide real decisions)
- Include all the core components: approved tools, three-tier use cases, data handling rules, disclosure requirements, quality standards, and governance
- Be written for the team, not for compliance — use language your colleagues will understand and trust
Share it with 2-3 colleagues for feedback before finalizing. Revise based on their input.
Exercise 12: The AI Playbook First Draft
Choose the three use cases from your Tier 1 list that are highest priority. For each, complete a playbook section using the template in this chapter.
Your playbook sections should be practical enough that a new team member could follow them without additional guidance. Test this: give your playbook section to someone who doesn't regularly use AI for this task and ask them to follow it. What breaks down? What's unclear?
Exercise 13: The Governance Structure Design
For your team (or the team you're imagining), define:
- Policy owner: Who maintains the AI policy? What does that responsibility entail?
- Escalation path: If someone has a question about whether something is appropriate, who do they ask?
- Incident process: If AI-assisted work fails (quality problem, policy violation, confidentiality breach), what happens?
- Review cadence: How often is the policy reviewed and updated?
If you're a team lead, implement this. If you're not, write a recommendation you could bring to your team lead.
Exercise 14: The Change Management Plan
Identify the three team members most likely to be resistant to AI adoption (or, if you're imagining a team, describe three types of resistant team member).
For each, develop a specific engagement strategy:
- What is the root of their resistance?
- What would make AI adoption feel safe and beneficial for them?
- What's the first conversation you'd have?
- What early win could demonstrate value in a way that's meaningful to them specifically?
Exercise 15: The 60-Day Rollout Plan
Using Alex's scenario in this chapter as a model, draft a 60-day AI adoption plan for your team:
- Days 1-14: Assessment and policy development
- Days 15-28: Shared resources and first training
- Days 29-42: Quality standards and review process
- Days 43-60: Ongoing cadence and iteration
For each phase, specify:
- What happens
- Who is responsible
- What success looks like
- What you'll do if it's not working
Bonus Exercises
Exercise 16: The Policy Benchmark
Research AI policies at three organizations in your industry (some organizations publish these; others you may need to infer from job postings, industry publications, or colleague conversations).
How do they compare to the policy you've drafted? What have they thought of that you haven't? What have you thought of that they haven't?
Exercise 17: The AI Equity Audit
Look at AI adoption in your team or organization through an equity lens:
- Who is benefiting most from AI tools? (More experienced? More technically skilled? Certain job functions?)
- Who is benefiting least?
- What are the downstream performance implications of this uneven adoption?
- What would you do to make AI's benefits more equitably distributed?
Exercise 18: The "What Could Go Wrong?" Exercise
Brainstorm the ten worst things that could happen as a result of ungoverned AI use in your team or organization. For each scenario:
- How likely is it?
- How severe would the consequences be?
- Does your draft policy prevent or mitigate it?
Revise your policy based on this exercise.
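One simple way to prioritize the ten scenarios is to score each on likelihood and severity and rank by their product. This is a standard risk-matrix heuristic, not something the chapter prescribes; the sample scenarios and 1-5 scores below are made up for illustration:

```python
# Illustrative risk ranking: score each scenario 1-5 for likelihood
# and severity, then sort by likelihood * severity (highest first).
# Scenarios and scores are hypothetical examples.

scenarios = [
    ("Client data pasted into an unsanctioned tool", 4, 5),
    ("Unverified AI output shipped to a client", 3, 4),
    ("Team member quietly automates a prohibited task", 2, 3),
]

ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)
for name, likelihood, severity in ranked:
    print(f"{likelihood * severity:>2}  {name}")
```

The top of the ranked list is where your draft policy most needs an explicit answer; anything high-severity that the policy doesn't prevent or mitigate is a gap worth closing before rollout.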