Case Study 2: JPMorgan's AI Training Program — Upskilling 60,000 Employees
The Imperative
In 2019, JPMorgan Chase — the largest bank in the United States by assets, with over 290,000 employees — made a commitment that reverberated across the financial services industry. The company announced a $600 million, five-year investment in technology training, with AI and data literacy as central pillars. By 2024, the program had trained over 60,000 employees in AI-related skills, from basic data literacy for branch managers to advanced machine learning for quantitative analysts.
The scale was unprecedented. Most corporate AI training programs target hundreds of employees — typically the data science team and its immediate collaborators. JPMorgan targeted tens of thousands, spanning every major business line: consumer banking, commercial banking, investment banking, asset management, and corporate functions. The ambition was not to turn every employee into a data scientist, but to create an organization where every employee could understand, evaluate, and collaborate with AI systems.
The driving force was both defensive and offensive. Defensive: financial services is one of the most heavily regulated industries, and AI systems that make lending, trading, or compliance decisions require human oversight by people who understand what the AI is doing — and what it might get wrong. Offensive: JPMorgan's leadership recognized that AI would transform every business line, and that competitive advantage would accrue to organizations that embedded AI literacy broadly, not just deeply.
As Mary Callahan Erdoes, then CEO of JPMorgan's Asset & Wealth Management division, stated in a 2023 investor presentation: "Every one of our 300,000 employees will be touched by AI in some way. The question is whether they'll be equipped to use it well or be overwhelmed by it."
Program Design: The Three-Track Architecture
JPMorgan's AI training program was designed around three tracks — remarkably similar in philosophy to the three-tier model described in Section 32.8, though developed independently.
Track 1: Digital and Data Literacy (All Employees)
Target audience: All employees, starting with client-facing roles.
Format: Online, self-paced modules of 60 to 90 minutes each, with periodic in-person workshops for reinforcement. Completion was mandatory for designated employee populations and strongly encouraged for all others.
Content areas:
- How data flows through the organization — from customer interactions to data warehouses to analytical outputs
- What AI and ML are, in plain language, with financial services examples
- How to interpret model outputs — understanding that a "90% confidence score" on a fraud alert does not mean "definitely fraud"
- Data privacy and security responsibilities — every employee's role in protecting customer data
- Recognizing AI limitations — when to trust a model's recommendation and when to apply human judgment
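The point about confidence scores rewards a concrete illustration. The sketch below uses invented numbers (not JPMorgan data) and Bayes' rule to show why a fraud model's "90% confidence" alert is far from "definitely fraud" when genuine fraud is rare:

```python
# Illustrative only: why a "90% confident" fraud alert is not "definitely fraud".
# All rates below are assumed for the example, not JPMorgan figures.

base_rate = 0.001           # 0.1% of transactions are actually fraudulent
sensitivity = 0.90          # model flags 90% of true fraud (true positive rate)
false_positive_rate = 0.01  # model also flags 1% of legitimate transactions

# P(alert fires) = P(alert | fraud) P(fraud) + P(alert | legit) P(legit)
p_alert = sensitivity * base_rate + false_positive_rate * (1 - base_rate)

# Bayes' rule: P(fraud | alert)
p_fraud_given_alert = (sensitivity * base_rate) / p_alert

print(f"P(alert fires)            = {p_alert:.4f}")
print(f"P(actually fraud | alert) = {p_fraud_given_alert:.1%}")
```

With these assumed rates, only about 8% of alerts correspond to genuine fraud, which is exactly why the curriculum stresses applying human judgment to model outputs rather than treating high scores as certainties.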
Key design decisions:
- The curriculum was built around JPMorgan-specific examples, not generic AI concepts. A branch manager learned about AI through the lens of customer analytics tools she already used. A compliance officer learned through the lens of surveillance systems he already monitored.
- The program was designed for a workforce with enormous educational diversity — from front-line tellers with high school degrees to PhDs in quantitative finance. The language, pacing, and assumed knowledge were calibrated for the broadest possible audience.
- Completion data was tracked at the individual and manager level. Managers received dashboards showing their team's completion rates, creating social accountability.
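The manager-level completion tracking described above amounts to a simple grouped aggregation. A minimal sketch, with hypothetical record shapes and names (JPMorgan's actual systems are not public):

```python
# Sketch of manager-level Track 1 completion tracking.
# Record format and data are illustrative assumptions.
from collections import defaultdict

# (employee_id, manager_id, completed_track1)
records = [
    ("e1", "mgr_a", True),
    ("e2", "mgr_a", True),
    ("e3", "mgr_a", False),
    ("e4", "mgr_b", False),
    ("e5", "mgr_b", True),
]

team_size = defaultdict(int)
team_done = defaultdict(int)
for _, manager, completed in records:
    team_size[manager] += 1
    team_done[manager] += completed  # True counts as 1

# Per-manager completion rate, as surfaced on the dashboards
dashboard = {m: team_done[m] / team_size[m] for m in team_size}
for manager, rate in sorted(dashboard.items()):
    print(f"{manager}: {rate:.0%} of team completed Track 1")
```

Publishing these rates per manager, rather than as a firm-wide aggregate, is what creates the social accountability the text describes.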
Track 2: AI for Business Leaders (Managers and Senior Leaders)
Target audience: Approximately 10,000 managers and senior leaders across all divisions.
Format: Blended — 4 to 6 hours of online pre-work, a one-day in-person workshop, and a 90-day post-workshop project.
Content areas:
- Identifying AI opportunities within their business areas — using a structured opportunity assessment framework
- Evaluating AI project proposals — distinguishing viable projects from "AI washing" (labeling existing analytics as AI)
- Managing AI-augmented teams — how roles change when AI handles routine tasks
- Risk management for AI — understanding model risk, bias risk, and regulatory risk in financial contexts
- Communicating about AI to clients, regulators, and stakeholders
The post-workshop project was a critical differentiator. Each participant was required to identify one AI opportunity in their business area, develop a one-page proposal using the framework taught in the workshop, and present it to a panel that included representatives from the central AI team. The best proposals were funded as pilot projects. This created a direct pipeline from training to action — the upskilling was not theoretical.
Business Insight. JPMorgan's post-workshop project requirement addresses one of the most common upskilling failures: training without application. By requiring each manager to apply what they learned within 90 days — and creating a pathway to actual funding — the program ensured that learning translated into organizational action. Compare this to the "AI for Managers" model in Section 32.8, where Ravi's key success metric was "number of viable AI project proposals submitted within 90 days." The principle is identical.
Track 3: Technical Upskilling (Analysts, Engineers, and Power Users)
Target audience: Approximately 5,000 to 8,000 employees with quantitative or technical backgrounds who would work directly with AI tools and models.
Format: Intensive programs ranging from 4 weeks (for analysts adding Python and basic ML to their toolkit) to 6 months (for software engineers transitioning to ML engineering roles).
Content areas varied by sub-track:
- Analytics track: Python, SQL, data visualization, basic statistical modeling, using internal ML platforms
- Engineering track: ML engineering, model deployment, MLOps practices, working with the firm's internal ML infrastructure
- Quantitative track: Advanced ML (deep learning, NLP, time series), model risk management, regulatory requirements for model validation
Internal certifications were developed in partnership with external training providers (including Coursera, Udacity, and custom programs developed with universities). Certification was tied to career progression — certain roles required specific certifications, creating a clear incentive for completion.
The Organizational Infrastructure
JPMorgan's training program did not operate in a vacuum. It was supported by several organizational infrastructure investments:
The AI & Data Science Team
JPMorgan's central AI organization — several hundred data scientists, ML engineers, and researchers — served as both a builder of AI systems and a source of training content and mentorship. Senior data scientists contributed to curriculum development, served as workshop facilitators, and mentored Track 3 participants through their projects.
The Chief Data & Analytics Office (CDAO)
The CDAO provided organizational oversight for the training program, ensuring alignment between training content and the firm's data strategy, governance requirements, and regulatory obligations. The CDAO's involvement ensured that training was not just technically accurate but institutionally aligned — teaching employees how AI worked at JPMorgan, not just how AI worked in general.
Manager Accountability
A critical and controversial design decision was making managers accountable for their teams' completion of Track 1 training. Completion rates were included in manager dashboards and discussed in leadership reviews. This created top-down pressure for participation — addressing the "optional training attracts the wrong audience" problem described in Section 32.8.
Some managers resisted, viewing the training as an unfunded mandate that competed with business deliverables. The program team countered with data: divisions with higher AI literacy scores showed faster adoption of new AI tools, fewer compliance incidents related to model misuse, and higher scores on internal innovation metrics. The data did not silence all critics, but it provided leadership with evidence to sustain the mandate.
Learning Communities
JPMorgan established internal learning communities — similar in structure to Spotify's guilds (see Case Study 1) — for employees who had completed Track 2 or Track 3 training. These communities met monthly, shared case studies from their own divisions, and served as peer support networks for applying AI in their work.
The communities proved particularly valuable for Track 2 graduates who were applying AI concepts in divisions where AI was still new. A marketing manager piloting a customer segmentation model could seek advice from a risk management colleague who had been through a similar process six months earlier. The communities created organizational memory for AI adoption — lessons learned in one division flowed to others without requiring centralized coordination.
Challenges and Adaptations
The Resistance Problem
Not all employees welcomed the training. Resistance came from several sources:
Job security fears. Some employees viewed AI training as preparation for their own replacement. "You're teaching me how the thing that's going to take my job works" was a sentiment the program team heard repeatedly, particularly from employees in operations and administrative roles. The program addressed this by reframing AI literacy as a career advantage ("people who understand AI will manage it; people who don't will be managed by it") and by including specific examples of how AI augmented rather than replaced roles at JPMorgan.
Skepticism from senior leaders. Some senior leaders viewed the training as unnecessary — they had succeeded for decades without understanding AI, and they saw no reason to start now. The program countered by presenting cases where AI-illiterate leaders had made costly mistakes: approving AI projects with unrealistic expectations, ignoring model risk warnings they didn't understand, or failing to leverage AI tools that could have improved their team's performance.
Time pressure. Front-line employees in client-facing roles had limited time for training. The program adapted by creating mobile-friendly, micro-learning modules that could be completed in 15-minute increments — during commutes, between client meetings, or during scheduled training blocks.
The Content Freshness Problem
AI evolves rapidly. Training content developed in 2020 was partially outdated by 2022, and significantly outdated by 2024 — particularly after the emergence of generative AI and large language models. The program established a curriculum review cycle: core content was reviewed quarterly, with major updates every 6 months. A small team of curriculum developers maintained the content, working with the central AI team to ensure accuracy and relevance.
The generative AI wave of 2023-2024 required the most significant content overhaul. Within months, the program added modules on prompt engineering, responsible use of large language models, and the firm's policies on using AI-generated content in client communications, research reports, and regulatory filings.
The Measurement Problem
Measuring the impact of training at this scale is inherently difficult. JPMorgan tracked several metrics:
Completion metrics:
- Track 1 completion rate: exceeded 80% across targeted employee populations by 2024
- Track 2 completion rate: approximately 70% of targeted managers
- Track 3 certifications earned: over 5,000
Behavioral metrics:
- Number of AI project proposals generated through Track 2 post-workshop projects: over 1,200 in the first three years
- Number of proposals that progressed to funded pilots: approximately 200
- Internal AI tool adoption rates in divisions with high training completion vs. low training completion
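The last behavioral metric, adoption in high- versus low-completion divisions, is a simple grouped comparison. A sketch with entirely invented division data (and note that such a comparison shows correlation, not causation):

```python
# Illustrative comparison of AI tool adoption by training completion level.
# Division names, rates, and the threshold are invented for this example.
divisions = [
    # (division, track1_completion_rate, ai_tool_adoption_rate)
    ("consumer",   0.85, 0.62),
    ("commercial", 0.78, 0.55),
    ("investment", 0.92, 0.71),
    ("asset_mgmt", 0.55, 0.34),
    ("corporate",  0.48, 0.29),
]

THRESHOLD = 0.70  # assumed cutoff separating "high" from "low" completion

high = [adopt for _, comp, adopt in divisions if comp >= THRESHOLD]
low = [adopt for _, comp, adopt in divisions if comp < THRESHOLD]

def avg(xs):
    return sum(xs) / len(xs)

print(f"High-completion divisions: {avg(high):.0%} average adoption")
print(f"Low-completion divisions:  {avg(low):.0%} average adoption")
```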
Outcome metrics:
- Reduction in compliance incidents related to AI misuse (measured by the compliance team)
- Time-to-adoption for new AI tools (measured by IT deployment teams)
- Employee confidence scores on AI-related survey questions (measured annually)
The firm has not publicly disclosed all of these metrics in detail, but presentations at industry conferences have described the program as generating positive ROI within three years of launch — driven primarily by faster AI tool adoption, reduced compliance risk, and the project pipeline from Track 2 graduates.
Lessons for Enterprise AI Upskilling
JPMorgan's experience offers several broadly applicable lessons:
1. Scale Requires Simplification
Training 60,000 people is not the same as training 600 people a hundred times. The curriculum must be simpler, the logistics must be more automated, and the delivery format must accommodate enormous variation in employee availability, learning styles, and baseline knowledge. JPMorgan's use of self-paced online modules for Track 1, with in-person workshops reserved for Track 2, reflects this scaling reality.
2. Mandatory Beats Optional
JPMorgan's decision to make Track 1 training mandatory for designated populations was controversial but effective. Optional programs consistently reach the already-converted. Mandatory programs reach the people who most need the education — the skeptics, the fearful, and the indifferent. The key is making mandatory training genuinely valuable, not just a compliance checkbox.
3. Training Must Connect to Action
The Track 2 post-workshop project — requiring every manager to identify an AI opportunity and develop a proposal — transformed the program from education to organizational change. Without the action requirement, managers would have attended the workshop, nodded appreciatively, and returned to their desks unchanged.
4. Context Matters More Than Content
Generic AI training ("what is machine learning?") is less effective than contextual training ("how is machine learning used in mortgage underwriting at our firm?"). JPMorgan's investment in firm-specific examples, firm-specific tools, and firm-specific use cases made the training immediately relevant to employees' daily work.
5. Upskilling Is Not a Project — It Is a Program
JPMorgan's program is ongoing, not one-time. The curriculum is updated regularly. New modules are added as technology evolves. Refresher training is offered for alumni. The investment is continuous, not episodic. Organizations that treat AI upskilling as a one-time initiative will find their investment obsolete within two years.
6. Executive Sponsorship Is Non-Negotiable
The program's mandate came from the CEO and the operating committee. Without that level of sponsorship, the resistance from individual managers and business unit leaders would have been overwhelming. Executive sponsorship provides the air cover that training programs need to survive organizational friction.
Discussion Questions
1. Scale vs. depth. JPMorgan trained 60,000 employees, but the depth of training varied significantly across tracks. Is it better to train 60,000 people at a basic level or 6,000 people at a deep level? Under what circumstances would you choose one over the other?
2. Mandatory training. JPMorgan made Track 1 mandatory. What are the risks of mandatory AI training? How would you address employees who view mandatory training as insulting or patronizing?
3. Measuring ROI. The chapter describes several metrics JPMorgan used to measure training impact. Which metric do you consider most meaningful? Which is most misleading? Design a metric that you believe would better capture the training's true business impact.
4. Generative AI disruption. The emergence of generative AI in 2023-2024 required significant curriculum updates. How should an ongoing upskilling program be designed to accommodate technological disruptions that cannot be predicted in advance?
5. Applicability to smaller organizations. JPMorgan invested $600 million over five years. A mid-size company with 2,000 employees cannot invest at that scale. How would you adapt JPMorgan's three-track model for a company with 1/100th the budget? What would you keep, what would you cut, and what would you modify?
6. Connection to team building. How does JPMorgan's upskilling program relate to the AI team building strategies described in this chapter? Specifically, how does broad AI literacy across the organization affect the AI team's ability to collaborate with business units, recruit internal talent, and deliver business value?
Sources:
- JPMorgan Chase & Co. (2019-2024). Annual Reports and Technology Investor Day presentations.
- Son, H. (2023). "JPMorgan Is Training Thousands of Employees on AI." CNBC.
- Noonan, L., & Edgecliffe-Johnson, A. (2023). "JPMorgan Rolls Out AI Training for Employees." Financial Times.
- Braunstein, M. (2023). "Building AI Literacy at Scale in Financial Services." MIT Sloan Management Review, Digital Supplement.
- JPMorgan Chase (2024). "2023 Environmental, Social, and Governance Report," Workforce Development section.