Chapter 37 Exercises
Change Management for Compliance Transformation
Exercise 37.1: ADKAR Diagnostic
Difficulty: Introductory
A UK retail bank has implemented a new automated KYC onboarding system. Six months after go-live, adoption is at 55% — approximately half of new customer applications are still being processed through the old manual system. A quick survey of the operations team reveals the following:
- 95% of staff understand why the new system was introduced (FCA operational resilience requirements, reduction in manual errors, speed improvement)
- 68% of staff say they "would prefer to keep the old system if given a choice"
- 71% of staff say they know how to use the new system for straightforward applications
- 42% of staff say they feel confident using the new system for complex applications (non-standard ID, business customers, high-risk indicators)
- Team leaders have not made system usage a performance expectation; the old system remains accessible
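As a starting point for part (b), the survey figures above can be mapped onto the five ADKAR stages and rated mechanically. A minimal sketch; the stage assignments, the Desire figure (derived as 100% minus the 68% who would keep the old system), and the Strong/Adequate/Weak cut-offs are all illustrative assumptions, not values from the chapter:

```python
# Map the survey results onto ADKAR stages (stage assignments and the
# Desire derivation are assumptions for illustration).
ADKAR = {
    "Awareness":     0.95,        # understand why the system was introduced
    "Desire":        1 - 0.68,    # ~32% prefer the new system
    "Knowledge":     0.71,        # can handle straightforward applications
    "Ability":       0.42,        # confident on complex applications
    "Reinforcement": 0.0,         # no performance expectation; old system still live
}

def rate(score, strong=0.80, adequate=0.60):
    """Rate a stage score against assumed Strong/Adequate cut-offs."""
    if score >= strong:
        return "Strong"
    if score >= adequate:
        return "Adequate"
    return "Weak"

for stage, score in ADKAR.items():
    print(f"{stage:13s} {score:5.0%}  {rate(score)}")
```

The thresholds are deliberately crude; the exercise asks you to justify each rating qualitatively, not to rely on a cut-off.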
a) Map this situation onto the ADKAR model. At which stage is the adoption failure primarily occurring? At which stage is there a secondary problem?
b) For each ADKAR stage, rate the bank's current status (Strong / Adequate / Weak) and provide a brief justification.
c) Design three specific interventions to address the primary and secondary gaps you have identified. For each intervention, specify: what it involves, who delivers it, what timeframe it operates over, and how you would measure its effectiveness.
d) The team leaders have not enforced system usage. Is this a leadership failure, a training failure, or a governance failure? What would you recommend to address it?
Exercise 37.2: Stakeholder Communication Plan
Difficulty: Intermediate
Meridian Capital (a fictional US broker-dealer) is replacing its manual trade surveillance review process with an AI-assisted platform. The platform will:
- Generate a prioritized alert queue with AI-assigned risk scores
- Provide analysts with contextual information and pattern history for each alert
- Automatically close alerts below a specified confidence threshold (after review of the AI's rationale)
- Generate a daily automated summary report for the Head of Surveillance
The affected stakeholders include:
- Surveillance analysts (15 people): their core work will change significantly, and some feel their expertise will be devalued.
- Head of Surveillance (Rafael, in a consulting advisory role): supportive of the technology but concerned about the team's readiness.
- Trading desk (front office): their alerts will now include AI-generated pattern analysis; some are concerned about being flagged incorrectly.
- Legal and compliance leadership: need assurance that the AI outputs meet regulatory defensibility standards.
- FINRA: the firm's primary regulator. The system represents a material change to surveillance controls.
a) For each stakeholder group, identify:
- Their primary concern about this change
- Their current likely position on the ADKAR model
- The key message they need to receive
- The appropriate communication channel and frequency
b) Write the central narrative (maximum 150 words) that will be consistent across all communication to all stakeholder groups.
c) Design the communication sequence for the first eight weeks of the program. What communications occur when, to whom, through what channel?
d) FINRA has not been proactively notified of the system change. Should they be? At what point in the program? What should the notification contain?
Exercise 37.3: Training Program Design
Difficulty: Intermediate-Applied
You are designing the training program for a new AML transaction monitoring system at a fictional UK challenger bank (Apex Bank). The system will be used by:
- 15 AML analysts who review and close alerts (core system users)
- 3 team leads who manage alert queues and supervise analyst decisions
- 1 Deputy MLRO who reviews escalated suspicious activity and authorizes SAR filings
- 2 compliance technology staff who administer the system (configuration, user management, reporting)
The new system uses ML-based alert scoring. It represents a significant shift from the previous rules-based system. The team has worked with the old system for an average of 3 years.
a) Design a competency framework for each role: what specific competencies (knowledge, skills, judgment) must each role demonstrate before they are permitted to use the system in production for regulatory purposes?
b) Design the training architecture: for each role, specify (i) the format (classroom, online, one-on-one, workshop, supervised practice), (ii) the duration and timing relative to go-live, (iii) the content organized around actual workflows rather than system features, and (iv) the assessment methodology.
c) The chapter argues that a "second training wave" at 2–4 weeks post go-live is essential. Design this second wave specifically for the AML analyst role: what topics does it cover, what format does it use, and what questions or situations from the first weeks of production should it address?
d) Design a "calibration exercise" — a training scenario that specifically builds the analysts' ability to recognize when the AI system's alert scoring may not be reliable and additional judgment is required. The exercise should use realistic transaction scenarios.
Exercise 37.4: Post-Implementation Adoption Assessment
Difficulty: Applied
It is 90 days after go-live of a new regulatory reporting platform at a mid-size asset manager. You have been asked to conduct the formal hypercare closure review. You have the following data:
Adoption metrics:
- 74% of reports submitted through the new system (target: 90%)
- 26% of reports still produced manually (the team lead for the derivatives reporting team has continued manual production, citing "data quality concerns")
- New users onboarded since go-live: 4 (none received formal training)
Quality metrics:
- Error rate in system-generated reports: 1.8% (target: <1%)
- Error rate in manual reports: 3.4%
- 2 regulatory submissions required corrections (both from manual reports)
System performance:
- Average report generation time: reduced by 28% for system-generated reports
- System uptime: 99.6%
Attitude survey (conducted at 60 days, n=18):
- "I feel competent using the system for my regular reporting tasks": 71% agree
- "I trust the system's outputs for regulatory submissions": 58% agree
- "My team lead supports using the new system": 44% agree
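The quantitative metrics above can be tabulated against their targets as a first pass at the status evaluation. A minimal sketch; the uptime and survey targets marked below are assumptions, since the data states explicit targets only for the submission share and the error rate:

```python
# Tabulate hypercare metrics against targets.
# Targets marked "assumed" are not given in the exercise data.
metrics = [
    # (name, actual, target, higher_is_better)
    ("Reports via new system",      0.740, 0.900, True),
    ("System-generated error rate", 0.018, 0.010, False),
    ("System uptime",               0.996, 0.995, True),   # 99.5% target assumed
    ("Trust in outputs (survey)",   0.580, 0.800, True),   # 80% target assumed
]

def on_target(actual, target, higher_is_better=True):
    """True when the actual value meets or beats the target."""
    return actual >= target if higher_is_better else actual <= target

for name, actual, target, hib in metrics:
    status = "on target" if on_target(actual, target, hib) else "GAP"
    print(f"{name:28s} actual {actual:6.1%}  target {target:6.1%}  {status}")
```

A table like this only frames the review; the qualitative findings (the derivatives team's manual workaround, the untrained new joiners, the 44% team-lead support score) carry most of the weight in parts (a) to (e).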
a) Evaluate the program's current status against each dimension. Where is the adoption program succeeding? Where are the critical gaps?
b) Identify the three highest-priority issues requiring immediate action. For each, specify the root cause, the risk it creates, and the recommended intervention.
c) The derivatives reporting team lead's continued use of the manual process is described as driven by "data quality concerns." How do you investigate whether this is a legitimate technical issue or change resistance? What response is appropriate for each possibility?
d) Four new users have received no training. What is the compliance risk of this gap, and how should it be addressed immediately?
e) Write a hypercare closure recommendation. Should you close hypercare and declare the program stable, extend hypercare with specific conditions, or escalate to senior leadership? Justify your recommendation with reference to the data above.
Exercise 37.5: Reflective Professional Analysis
Difficulty: Applied — Reflective
Read the following scenario and write a 500-word analysis.
Scenario: You are a compliance manager at a large bank. Your CCO has approved a new automated KYC refresh system. The system will run automated periodic reviews for all existing customers, flagging those requiring enhanced due diligence review by the team.
You believe the system is well-designed. But during user acceptance testing, you notice that four of the six analysts on your team are routing their UAT alerts back through the old manual process to verify the system's outputs before accepting them. When you ask why, they say: "We don't trust it yet. What if it misses something and we've already signed off?"
The go-live is three weeks away. Your CCO wants a go/no-go recommendation.
Questions to address:
- Is the team's behavior during UAT evidence of a change management failure, a technology failure, or appropriate professional caution? How do you distinguish between these possibilities?
- Apply the ADKAR diagnostic to this situation. At which stage are the analysts stalling? What evidence do you have from the scenario?
- What specific actions would you take in the three weeks before go-live to address the situation?
- What is your go/no-go recommendation? Under what conditions would you recommend proceeding as planned? Under what conditions would you recommend a delay?
- How does this scenario relate to the chapter's argument that "the technology is the easy part"?