Chapter 38 Quiz: Deploying AI in Teams and Organizations

Test your understanding of organizational AI deployment concepts. Questions range from foundational to nuanced — some have clear right answers, others are designed to prompt reflection.


Question 1

What is the primary reason that individual AI competence doesn't automatically transfer when AI tools are rolled out to a team?

A) AI tools work differently for different users
B) Individual competence is built on implicit domain knowledge, judgment, and habits that others haven't developed yet
C) AI tools require special configuration for team use
D) Team members have different levels of technical sophistication

Answer **B** is correct. Individual AI effectiveness is built on months of practice, implicit domain knowledge, developed judgment about when to trust or verify, and iteration habits — none of which transfers automatically when the tools are made available to others. The technology works the same way for everyone; what differs is the human layer around it.

Question 2

A team leader notices that AI-assisted work from some team members is excellent while work from others has quality problems — wrong facts, generic phrasing, unverified claims. What is this an example of?

A) The technology failure mode
B) The policy vacuum failure mode
C) The inconsistent use failure mode
D) The trust failure mode

Answer **C** is correct. This is the inconsistent use failure mode — AI tools are being used, but without shared standards for quality, verification, and review. Different team members are applying different levels of rigor, producing outputs that vary in quality in ways that are hard to diagnose and address.

Question 3

In the three-tier use case taxonomy described in this chapter, what distinguishes a Tier 2 use case from a Tier 1 use case?

A) Tier 2 use cases are prohibited
B) Tier 2 use cases require additional review beyond standard review
C) Tier 2 use cases require no review at all
D) Tier 2 use cases only apply to senior staff

Answer **B** is correct. Tier 1 use cases are approved with standard review (the same review any work would receive). Tier 2 use cases are permissible but require additional review, because the stakes are higher, the information involved is sensitive, or AI limitations in that domain require closer oversight. Tier 3 use cases are prohibited.

Question 4

An employee shares detailed client contract terms with an external AI tool because they want to draft a summary. The organization has no AI policy. What failure mode does this represent?

A) The skill gap failure mode
B) The policy vacuum failure mode
C) The trust failure mode
D) The inconsistent use failure mode

Answer **B** is correct. Without explicit policy about what information can be shared with which tools, individuals make their own judgments — which may be reasonable or may create significant confidentiality risk. This is the policy vacuum failure mode: AI use is happening without organizational guidance, creating ungoverned risk.

Question 5

Which of the following is NOT typically a valid reason for requiring disclosure of AI use in professional work?

A) Industry or regulatory requirements
B) Client expectations about the nature of the work
C) The mere fact that AI was used for any part of the work
D) External publication or professional context requirements

Answer **C** is correct as the "not valid" option. The mere fact that AI was used for some part of the work is not by itself a valid universal trigger for disclosure — that standard would be unworkable and inconsistent with how many other tools (spell checkers, research databases, templates) are used without disclosure. Disclosure requirements are context-dependent: industry regulations, client expectations, and specific professional contexts may require disclosure; internal drafting assistance typically does not.

Question 6

What is the key principle regarding responsibility for AI-assisted work?

A) Responsibility shifts to the AI tool when it generates the content
B) Responsibility is shared equally between the AI tool and the employee
C) Responsibility remains with the person who submits the work, regardless of AI involvement
D) Responsibility shifts to the manager who authorized AI use

Answer **C** is correct. AI assistance does not change accountability. The person who submits work is responsible for its quality, accuracy, and appropriateness. "AI generated that" is not a defense for errors. This principle must be explicit in team AI policy to prevent the diffusion of accountability that erodes quality standards.

Question 7

A team is rolling out AI tools. The three early adopters who are already getting results start enthusiastically promoting AI use to skeptical colleagues. Research suggests this approach typically:

A) Is the most effective way to drive adoption
B) Tends to polarize the team — enthusiasts become more enthusiastic, skeptics become more resistant
C) Has no significant effect on adoption rates
D) Works well for technical teams but not for creative teams

Answer **B** is correct. Research on AI adoption patterns shows that early adopter evangelism tends to polarize rather than convert. Peer demonstration (showing real work in real workflows) is more effective than peer persuasion. Skeptics often experience evangelical promotion of AI tools as dismissive of their genuine concerns, which hardens rather than softens resistance.

Question 8

The "equity and fairness dimension" of organizational AI adoption refers primarily to:

A) Whether AI tools treat different demographic groups fairly
B) Whether the benefits of AI adoption are equitably distributed across team members with different skill levels and access
C) Whether AI pricing is fair for small organizations
D) Whether AI policies apply equally to senior and junior staff

Answer **B** is correct. The equity dimension in team AI adoption is about the uneven distribution of AI productivity benefits — which tend to concentrate among already-higher-performing employees with greater domain expertise and AI literacy. Without explicit attention to skill development and equitable access, AI adoption can widen existing performance gaps rather than raise overall team performance.

Question 9

An AI playbook differs from an AI policy in that:

A) An AI policy is required by law; a playbook is optional
B) An AI policy governs what's allowed; a playbook documents how to do it well
C) A playbook is for senior staff; a policy applies to everyone
D) An AI policy covers all tools; a playbook covers only one tool

Answer **B** is correct. The policy answers "what can we do?" — establishing rules, permissions, and requirements. The playbook answers "how do we do it well?" — documenting use-case-specific workflows, example prompts, quality checklists, and lessons learned. Both are valuable; neither substitutes for the other.

Question 10

A team member refuses to use AI tools, citing concerns about AI-generated content's effect on the creative industries and the ethics of using AI trained on others' work. The most appropriate response from a team leader is:

A) Require them to use AI tools anyway — business needs come first
B) Excuse them from all AI use indefinitely
C) Dismiss their concerns as irrelevant to the work context
D) Engage their concerns as legitimate, give them meaningful agency in how AI is deployed, and find appropriate use cases where they're willing to engage

Answer **D** is correct. Ethical concerns about AI are legitimate and deserve genuine engagement rather than dismissal or coercion. At the same time, a team leader must navigate these concerns alongside organizational needs. The most effective approach is to treat the concerned employee as someone with a valuable perspective, involve them in governance decisions, and find approaches to AI use that are consistent with their values where possible — rather than either forcing compliance or granting a blanket exemption.

Question 11

What does the "AI skills gap" research finding that "AI is a skill multiplier, not a skill equalizer" mean?

A) AI tools are only useful for already-skilled employees
B) AI adoption tends to disproportionately benefit already-higher-performing employees, potentially widening performance gaps
C) Less skilled employees should not be given access to AI tools
D) AI skills are more important than domain skills

Answer **B** is correct. The research finding means that AI tools tend to amplify existing capabilities rather than equalize across skill levels. An already-expert domain practitioner who uses AI effectively gets substantially more productivity benefit than a newer employee who lacks the domain knowledge to catch AI errors and provide good context. This doesn't mean less experienced employees can't benefit — it means intentional training investment is needed to ensure AI benefits are broadly distributed.

Question 12

In Alex's team rollout scenario, what is identified as the highest-value training activity?

A) A formal AI orientation workshop
B) Providing access to the company-licensed AI tools
C) A peer "show and tell" session where experienced AI users walk through their actual workflows
D) Sending team members to an external AI training course

Answer **C** is correct. The peer demonstration session — where team members who are already getting results walk through their actual workflows, prompts, and outputs — is identified as the highest-value training activity. This aligns with the research: peer demonstration is more compelling and practical than formal training because it shows real work in real contexts rather than generic AI capability.

Question 13

Which governance element answers the question "I'm not sure if I should use AI for this — who do I ask?"

A) The policy owner role
B) The escalation path
C) The incident process
D) The review cadence

Answer **B** is correct. The escalation path is specifically the governance element that gives employees a clear answer to "I'm not sure, who do I ask?" Without a clear escalation path, employees either make ungoverned individual decisions or default to not using AI at all — both suboptimal outcomes.

Question 14

Elena's quality test for client deliverables — "Would I sign my name to this?" — is primarily a test of:

A) Whether the document is grammatically correct
B) Whether the AI tool used was company-approved
C) Whether the submitting person genuinely stands behind every claim, conclusion, and recommendation
D) Whether the document follows the company's style guide

Answer **C** is correct. Elena's test is about personal accountability and genuine ownership of the content. It's not a technical test — it's an ownership test. Can the person submitting the document say, honestly, "I stand behind everything in here"? If they can't because they haven't read it carefully, verified the claims, or challenged the AI's analysis, it's not ready.

Question 15

According to this chapter, what is the most important factor distinguishing successful from unsuccessful enterprise AI deployments?

A) Which AI tools were selected
B) Which AI models were used
C) Change management — how the deployment was communicated, how concerns were addressed, and how adoption was supported
D) The AI budget and investment level

Answer **C** is correct. The research on enterprise AI deployments consistently shows that change management factors — communication, concern management, and adoption support — are more predictive of success than technical factors like tool selection or model choice. The technology works; what determines organizational outcomes is the human and organizational work around the technology.