
Chapter 1: The AI-Powered Organization

"The question is no longer whether AI will transform your industry. The question is whether you will be the one leading that transformation — or scrambling to catch up."

— Professor Diane Okonkwo, first lecture, MBA 7620: AI for Business Strategy


The First Lecture

The lecture hall in Langford Hall seats 84. On this particular Tuesday in September, every seat is taken. A few students lean against the back wall. The course — MBA 7620: AI for Business Strategy — was supposed to cap at 60, but the waitlist grew so long that the registrar moved it to a larger room.

Professor Diane Okonkwo stands at the front, arms folded, surveying the room with the composed patience of someone who has facilitated boardroom negotiations on four continents. She is 54, British-Nigerian, with close-cropped silver hair and reading glasses she never actually uses for reading. Before joining the faculty, she spent eighteen years at McKinsey & Company, where she led the Digital & Analytics practice across EMEA and built a reputation for telling CEOs things they did not want to hear.

She lets the murmur die down on its own. Then she asks a question.

"How many of you believe that AI will fundamentally transform your industry within the next five years?"

Hands go up across the room — tentatively at first, then with growing conviction. By the time the wave crests, roughly 90 percent of the class has a hand raised.

Okonkwo nods. "Good. Now keep your hand up if you can explain — clearly, to a non-technical colleague — what a machine learning model actually does."

Hands drop. A few stay up, including one near the front that shoots up with particular confidence. That hand belongs to Tom Kowalski, 32, who spent five years as a product manager at a fintech startup in Chicago before deciding an MBA would round out his toolkit. Tom has a computer science undergraduate degree from Carnegie Mellon and the quiet self-assurance of someone who has shipped production code. He keeps his hand raised and makes eye contact with the professor.

Three rows back and to the left, NK Adeyemi — 27, Nigerian-American, formerly a brand strategist at a mid-sized consumer goods company — has both hands firmly on her laptop. She did not raise her hand for the first question and she certainly did not raise it for the second. NK enrolled in this course because her advisor said it would be "career insurance." She is skeptical of that framing. She is skeptical of most framings that involve the word "AI."

"That gap," Okonkwo says, gesturing between the imaginary ceiling of raised hands and the current sparse count, "is the most expensive gap in business today. Ninety percent of you believe AI will transform your industry. Fewer than ten percent of you can explain the basic mechanism by which it works. And you are MBA students — ostensibly the people who will be running these transformations."

She pauses.

"This course exists to close that gap. Not by turning you into data scientists — you don't need to be. But by making you literate enough to lead, to ask the right questions, to know when you're being sold snake oil, and to know when you're sitting on a genuine strategic advantage and failing to exploit it."

NK opens a new document and types: Snake oil detection — yes please.

Tom opens his notebook — he prefers paper — and writes: She's McKinsey. Frameworks incoming.

They are both right.


The AI Landscape Today

Let us begin with the world as it is, not as the press releases describe it.

By early 2026, artificial intelligence has moved from the periphery of business strategy to its center. The numbers are staggering and, for once, the reality has begun to approach the hype. According to McKinsey's annual survey on the state of AI, 72 percent of organizations reported adopting AI in at least one business function in 2024, up from 55 percent in 2023 and just 20 percent in 2017. Global corporate spending on AI — including software, hardware, and services — exceeded $200 billion in 2025, with projections suggesting it will surpass $300 billion by 2028.

Research Note: McKinsey's "The State of AI" reports (published annually since 2017) provide the most comprehensive longitudinal data on enterprise AI adoption. We will reference them frequently throughout this textbook.

But adoption is not the same as value creation. A 2024 Boston Consulting Group survey found that while 90 percent of executives described AI as a "top three priority," only 26 percent reported achieving significant financial impact from their AI investments. That 64-percentage-point gap between priority and impact is not a technology problem. It is a management problem. It is, arguably, the management problem of the current decade.

The landscape can be understood along three dimensions:

What Has Changed

Generative AI has democratized access. Before 2022, deploying AI required teams of data scientists, months of model development, and significant infrastructure investment. The release of ChatGPT in November 2022, followed by GPT-4 in March 2023, Claude in 2023-2024, Gemini, and a rapidly expanding ecosystem of large language models (LLMs), fundamentally shifted the equation. Suddenly, any employee with a browser could interact with a sophisticated AI system using natural language. By 2025, over 75 percent of Fortune 500 companies had enterprise licenses for at least one generative AI platform.

AI tools have become embedded. Microsoft Copilot is integrated into Office 365. Salesforce Einstein GPT operates within CRM workflows. Adobe Firefly generates creative assets inside Photoshop. GitHub Copilot writes code alongside developers. The era of "standalone AI tool" is giving way to the era of "AI-augmented everything." For business leaders, this means AI strategy is no longer something you bolt on — it is something woven into every operational decision.

The cost of using AI has plummeted, even as the cost of building it has soared. Training costs for frontier models have increased dramatically (GPT-4 reportedly cost over $100 million to train), but the cost of using AI — inference — has dropped by roughly 90 percent between 2023 and 2025. This means that deploying AI in production at scale is more affordable than ever, even as building cutting-edge models from scratch remains the province of a handful of companies with billions in capital.

What Has Not Changed

Data quality remains the bottleneck. The old adage "garbage in, garbage out" has not been repealed by generative AI. A 2024 Gartner survey found that poor data quality costs organizations an average of $12.9 million per year, and that 60 percent of AI projects stall or fail due to data issues rather than algorithmic limitations. We will return to this theme repeatedly — it is one of the five recurring themes of this textbook.

Organizational change is hard. Buying AI software is easy. Getting 12,000 employees to change how they work is not. McKinsey estimated in 2024 that for every dollar companies spent on AI technology, they needed to spend three to five dollars on change management, training, and process redesign. Most did not.

The talent gap persists. Despite a flood of online courses, bootcamps, and certifications, the demand for AI-literate professionals continues to outpace supply — not just for data scientists and ML engineers, but for the business translators who can bridge the gap between technical capability and business value. This textbook is designed to help you become one of those translators.

What Is Emerging

AI agents are moving from demos to deployment. By 2025-2026, the industry conversation shifted from chatbots (single-turn interactions) to agents (multi-step, autonomous workflows). AI systems that can research, plan, execute, and iterate — booking travel, conducting competitive analyses, managing customer service escalations — are beginning to appear in enterprise settings. They raise profound questions about oversight, accountability, and the future of knowledge work.

Regulation is taking shape. The EU AI Act, passed in 2024, established the world's first comprehensive AI regulatory framework, classifying AI systems by risk level and imposing requirements accordingly. China's AI regulations, various US state-level laws, and industry-specific guidance (particularly in financial services and healthcare) are creating a patchwork of compliance requirements that every global business must navigate. We will cover regulatory frameworks in depth in Chapters 36 and 37.

The environmental costs are becoming visible. Training and running large AI models requires enormous amounts of electricity and water. The International Energy Agency estimated that data center energy consumption could double between 2024 and 2030, driven largely by AI workloads. For companies with sustainability commitments, AI strategy and environmental strategy are increasingly in tension.

Business Insight: When your CEO says "We need an AI strategy," what she usually means is "We need to understand where AI creates value, what it costs, what risks it introduces, and how to organize ourselves to capture the opportunity." That is what this course — and this textbook — will teach you.


A Brief History of AI, Told as a Business Story

You do not need to memorize dates. But understanding the arc of AI's development helps you recognize the recurring cycle — hype, winter, resurgence — that has repeated with remarkable consistency.

The Dream (1950-1969)

In 1950, Alan Turing published "Computing Machinery and Intelligence," posing the question: Can machines think? The field of artificial intelligence was formally born at the 1956 Dartmouth Conference, where a group of researchers — including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon — proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

The early optimism was breathtaking and, in retrospect, naively grandiose. Herbert Simon predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do." That prediction was off by at least fifty years — and counting.

Business Insight: This pattern — bold predictions by brilliant technologists, followed by slower-than-expected progress — is the original hype-reality gap. Every generation of AI advancement has reproduced it. Learning to distinguish genuine capability from projected potential is a critical skill for any business leader evaluating AI investments.

The business impact in this era was essentially zero. AI was a research pursuit, funded by government grants and military contracts, with no commercial applications to speak of.

The First Winter (1970-1979)

When AI failed to deliver on its grand promises, funding dried up. The British government's Lighthill Report (1973) was devastating in its assessment that AI had failed to achieve its "grandiose objectives." DARPA cut funding. Corporate interest evaporated. The lesson for business leaders: technologies that overpromise and underdeliver lose institutional support, and rebuilding credibility takes years.

Expert Systems and the Second Boom (1980-1987)

AI's commercial debut came through expert systems — rule-based programs that encoded human expertise into decision trees. Companies like Digital Equipment Corporation deployed XCON, an expert system that configured computer orders and reportedly saved $40 million per year. By 1985, the AI industry exceeded $1 billion in revenue.

Definition: An expert system is a computer program that uses a knowledge base of human expertise to solve specialized problems. Unlike modern machine learning, expert systems rely on hand-coded rules ("if the customer orders a server, then recommend a compatible power supply") rather than learning from data.

The business model was simple: hire knowledge engineers to interview domain experts, encode their knowledge as rules, and deploy the resulting system. It worked for narrow, well-defined domains. It failed spectacularly for anything requiring common sense, ambiguity, or adaptation to new situations.
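The rule-based approach described here can be sketched in a few lines. This is a toy illustration, not a reconstruction of XCON; the product rules are invented:

```python
# A toy expert system in the 1980s style the text describes:
# hand-coded if/then rules captured from a domain expert, no learning.
# The rules themselves are invented for illustration.
RULES = [
    (lambda order: "server" in order, "compatible power supply"),
    (lambda order: "laptop" in order, "docking station"),
]

def recommend(order_items):
    """Fire every rule whose condition matches the order."""
    recommendations = []
    for condition, recommendation in RULES:
        if condition(order_items):
            recommendations.append(recommendation)
    return recommendations

print(recommend(["server", "rack"]))  # -> ['compatible power supply']
```

Every rule must be written and maintained by hand, which is precisely why these systems proved brittle outside narrow, stable domains.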

The Second Winter (1988-1996)

Expert systems proved brittle, expensive to maintain, and unable to scale. The market collapsed. Companies that had invested millions in AI wrote off their investments. The word "AI" became toxic in corporate settings — a pattern that would not fully reverse for two decades.

The Machine Learning Quiet Revolution (1997-2011)

While "AI" languished as a brand, the underlying science made extraordinary progress. Three developments proved transformative:

  1. The internet generated data at unprecedented scale. Machine learning algorithms are hungry for data. The explosion of digital commerce, social media, and online behavior created training datasets that earlier researchers could only dream of.

  2. Computing power followed Moore's Law. GPUs, originally designed for video games, turned out to be spectacularly well-suited for the matrix mathematics that underpin neural networks.

  3. Algorithms improved. Researchers developed support vector machines, random forests, and — crucially — refined techniques for training deep neural networks, including backpropagation improvements and better initialization methods.

The business impact was real but often invisible. Google's search engine was, fundamentally, an AI system. Amazon's recommendation engine drove 35 percent of its revenue. Netflix offered a $1 million prize in 2006 for anyone who could improve its recommendation algorithm by 10 percent. Machine learning was creating enormous business value — but under brand names like "analytics," "personalization," and "optimization," not "AI."

Deep Learning and the Modern Era (2012-2022)

In 2012, a deep neural network called AlexNet won the ImageNet competition — an annual benchmark for computer vision — by a margin so large that it effectively ended the debate about whether deep learning worked. The era of deep learning had begun.

The business implications arrived in waves:

  • Computer vision enabled automated quality inspection in manufacturing, facial recognition for security, and medical image analysis.
  • Natural language processing powered virtual assistants (Siri, Alexa, Google Assistant), chatbots, and document analysis tools.
  • Recommendation systems became more sophisticated, driving engagement at Netflix, Spotify, YouTube, and TikTok.
  • Predictive analytics became standard in finance (credit scoring, fraud detection), healthcare (readmission prediction), and supply chain (demand forecasting).

By 2020, AI was creating measurable business value — but primarily for large technology companies with the data, talent, and infrastructure to exploit it. Most traditional enterprises were still in the early stages of adoption.

The Generative AI Revolution (2022-Present)

The release of ChatGPT on November 30, 2022, was a watershed moment — not because the underlying technology was entirely new, but because it made AI accessible. For the first time, any person with an internet connection could interact with a powerful AI system using natural language and receive coherent, useful responses.

The business impact was immediate and far-reaching:

  • Content creation — writing, coding, image generation, video production — was transformed overnight.
  • Knowledge work — research, analysis, summarization, translation — became dramatically more productive.
  • Customer service — AI-powered agents could handle increasingly complex interactions.
  • Software development — code generation tools accelerated development by 30-50 percent in early studies.

The speed of adoption was unprecedented. ChatGPT reached 100 million users in two months — faster than any consumer technology in history. Enterprise adoption followed: by 2025, generative AI tools were embedded in the daily workflows of an estimated 400 million workers worldwide.

Caution

Speed of adoption is not the same as depth of adoption. Many organizations have employees using generative AI tools (often without organizational sanction or oversight) without any coherent strategy for capturing value, managing risk, or building competitive advantage. The gap between using AI and being an AI-powered organization is the central challenge this textbook addresses.


Definitions That Matter: AI, ML, Deep Learning, and Generative AI

"I've been in three board meetings this year where the CEO used 'AI' and 'machine learning' interchangeably," Professor Okonkwo tells the class. "In two of those meetings, the CTO silently winced but said nothing. In the third, the CTO did the same thing."

Imprecise language leads to imprecise thinking, which leads to imprecise strategy. Let us be precise.

Artificial Intelligence (AI)

AI is any system that performs tasks typically requiring human intelligence. This is deliberately broad. A chess program from 1997 is AI. A spam filter is AI. ChatGPT is AI. A hypothetical future system that matches or exceeds human intelligence across all domains (known as Artificial General Intelligence, or AGI) would also be AI.

For business purposes, think of AI as an umbrella term. When someone says "AI strategy," they could mean anything from deploying a chatbot to redesigning their entire operating model around data-driven decision-making. Your first job as a business leader is to ask: "Which kind of AI, specifically?"

Machine Learning (ML)

Machine learning is AI that learns from data rather than following explicit rules. Instead of programming a computer with instructions ("if the email contains the word 'lottery,' mark it as spam"), you give it thousands of examples of spam and non-spam emails and let it figure out the patterns.

Definition: Machine learning is a subset of AI in which algorithms improve their performance on a task through exposure to data, without being explicitly programmed for that specific task. The system identifies patterns, relationships, and structures in data and uses them to make predictions or decisions on new, unseen data.
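The contrast with hand-coded rules can be made concrete with a toy spam filter that learns word frequencies from labeled examples. This is a deliberately minimal sketch; the training messages are invented, and real systems use probabilistic models and vastly more data:

```python
# A minimal sketch of supervised learning: instead of writing rules like
# "if the email contains 'lottery', mark it as spam", we give the system
# labeled examples and let it learn which words signal which class.
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Score a new message by how often its words appeared in each class."""
    scores = {
        label: sum(counter[word] for word in text.lower().split())
        for label, counter in model.items()
    }
    return max(scores, key=scores.get)

training_data = [
    ("win the lottery now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("quarterly report attached", "ham"),
]

model = train(training_data)
print(classify(model, "free lottery prize"))      # -> spam
print(classify(model, "report for the meeting"))  # -> ham
```

Note that no one wrote a rule about "lottery"; the association emerged from the labeled data. That inversion — examples in, rules out — is the essence of machine learning.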

Three types of machine learning matter for business leaders:

| Type | How It Works | Business Example |
| --- | --- | --- |
| Supervised learning | Learn from labeled examples (input → correct output) | Predicting customer churn based on historical behavior |
| Unsupervised learning | Find patterns in unlabeled data | Customer segmentation based on purchasing behavior |
| Reinforcement learning | Learn by trial and error, receiving rewards or penalties | Optimizing warehouse robot navigation, dynamic pricing |

Tom Kowalski, reading ahead in the syllabus, has already noted that Chapters 8-11 cover these in depth. He writes in his notebook: Good — finally a business program that doesn't handwave the technical details.

NK Adeyemi, meanwhile, writes: Supervised = learns from examples. Unsupervised = finds patterns. RL = learns from trial/error. I can work with this.

Deep Learning

Deep learning is machine learning using neural networks with many layers. These "deep" neural networks can learn incredibly complex patterns — recognizing faces in photographs, understanding spoken language, generating realistic images — that simpler algorithms cannot.

Definition: Deep learning is a subset of machine learning that uses artificial neural networks with multiple layers (hence "deep") to learn hierarchical representations of data. Each layer learns increasingly abstract features — from edges and textures in images, to shapes, to objects, to scenes.

For business leaders, the key insight is this: deep learning is what made modern AI possible. It is the reason your phone can recognize your face, your car can (sometimes) drive itself, and your email can summarize a 20-page report in three sentences. But deep learning has specific requirements — large amounts of data, significant computing power, and specialized expertise — that shape where and how it can be deployed.

Generative AI

Generative AI creates new content — text, images, code, audio, video — rather than just analyzing or classifying existing content. This is the category that has captured the public imagination since 2022, and for good reason: it represents a qualitative shift in what AI can do.

Definition: Generative AI refers to AI systems that can produce new content (text, images, code, music, video, etc.) based on patterns learned from training data. The most prominent examples are large language models (LLMs) like GPT-4, Claude, and Gemini, which generate text, and diffusion models like DALL-E, Midjourney, and Stable Diffusion, which generate images.

The relationship among these terms is nested:

AI (broadest)
  └── Machine Learning (learns from data)
        └── Deep Learning (neural networks with many layers)
              └── Generative AI (creates new content)
                    └── Large Language Models (generates text/code)

Business Insight: When evaluating AI vendors, ask which layer of this stack they operate at. A vendor selling "AI-powered analytics" might be using simple statistical models (effective but not cutting-edge) or sophisticated deep learning (powerful but potentially opaque). The term "AI" alone tells you almost nothing about what you are actually buying.


The AI Maturity Model: Where Does Your Organization Stand?

Not all organizations are equally prepared to exploit AI. Understanding where your organization falls on the maturity spectrum is the first step toward building a credible strategy.

The following maturity model synthesizes frameworks from McKinsey, Gartner, and MIT Sloan Management Review. It describes five stages of organizational AI capability:

Stage 1: Ad Hoc (Exploring)

Characteristics: Individual employees experiment with AI tools. No organizational strategy. No governance. No centralized data infrastructure. AI use is bottom-up, fragmented, and often unsanctioned.

Typical signs:

  • Employees use ChatGPT on personal accounts for work tasks
  • Data is siloed in departmental spreadsheets and legacy systems
  • No one has the title "Chief AI Officer" or "VP of Data"
  • AI is discussed in strategy meetings as something "we should look into"

Business risk: Shadow AI — employees using AI tools without oversight — creates data security, compliance, and quality risks. Valuable use cases are discovered but not scaled.

Percentage of large enterprises at this stage (2025): ~20%

Stage 2: Opportunistic (Experimenting)

Characteristics: The organization has launched a few AI pilot projects, typically in IT or analytics. There is growing awareness at the executive level. A small team or task force has been assembled. But AI initiatives are opportunistic — driven by individual champions rather than strategic priorities.

Typical signs:

  • Two to five AI pilots running in different business units
  • An "AI task force" or "innovation lab" has been created
  • The company has purchased a few enterprise AI tool licenses
  • Some data infrastructure work has begun, but it is far from complete
  • ROI from pilots is promising but has not been rigorously measured

Business risk: Pilot purgatory — projects that succeed in controlled settings but never scale to production. Organizational learning remains localized.

Percentage of large enterprises at this stage (2025): ~35%

Stage 3: Systematic (Scaling)

Characteristics: AI has a dedicated budget, executive sponsorship, and a governance framework. The organization has moved beyond pilots to production deployments. Data infrastructure is being modernized. AI and business strategy are beginning to converge.

Typical signs:

  • A Chief Data Officer, Chief AI Officer, or VP of AI has been appointed
  • AI initiatives are linked to specific business KPIs
  • A centralized data platform (or data lakehouse) is in place or under construction
  • Model monitoring and governance processes exist
  • Cross-functional teams include both technical and business members
  • Training programs are underway to build AI literacy across the organization

Business risk: Scaling creates new challenges — model drift, data pipeline failures, integration complexity, change resistance. Organizations at this stage often underestimate the operational demands of AI in production.

Percentage of large enterprises at this stage (2025): ~30%

Stage 4: Differentiated (Transforming)

Characteristics: AI is a core competitive differentiator. The organization has developed proprietary models, unique datasets, or AI-driven processes that competitors cannot easily replicate. AI informs major strategic decisions.

Typical signs:

  • Proprietary AI models trained on unique company data
  • AI-driven products or services that generate revenue
  • Data and AI capabilities are considered in M&A decisions
  • The organization contributes to AI research and open-source projects
  • AI ethics and responsible innovation are embedded in governance

Business risk: Over-reliance on AI without adequate human oversight. Complacency — believing current advantages are durable when competitors are closing the gap.

Percentage of large enterprises at this stage (2025): ~12%

Stage 5: AI-First (Leading)

Characteristics: AI is woven into the fabric of the organization — every process, product, and decision is AI-augmented or AI-driven. The company's competitive moat is fundamentally built on AI capabilities.

Typical signs:

  • AI is embedded in every major business process
  • Real-time data flows enable continuous learning and adaptation
  • The organization attracts top AI talent as a matter of brand
  • AI shapes the company's culture, not just its technology stack
  • The company sets industry standards for AI governance and ethics

Business risk: Concentration risk — dependence on AI systems that are not fully understood. Regulatory exposure. Ethical blind spots.

Percentage of large enterprises at this stage (2025): ~3% (primarily large tech companies)

Try It: Where would you place your current or most recent employer on this maturity model? What evidence supports your assessment? What would need to change for the organization to move one stage higher? Write down your answers — we will return to this exercise in Chapter 5.
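One way to make this self-assessment concrete is to treat the "typical signs" as a weighted checklist. The signs, weights, and thresholds below are a hypothetical simplification for illustration, not part of the McKinsey, Gartner, or MIT Sloan frameworks:

```python
# A hypothetical checklist version of the five-stage maturity model.
# Each observed sign is mapped to the highest stage it indicates;
# the weights below are invented for illustration.
STAGES = ["Ad Hoc", "Opportunistic", "Systematic", "Differentiated", "AI-First"]

SIGNS = {
    "executive_sponsor": 1,       # a named leader owns data/AI
    "production_models": 2,       # AI deployed beyond pilots
    "central_data_platform": 2,   # unified data infrastructure
    "governance_framework": 2,    # model monitoring, AI use policy
    "proprietary_models": 3,      # unique models/data competitors lack
    "ai_in_every_process": 4,     # AI woven into all major processes
}

def assess(signs_present):
    """Return the highest stage indicated by the observed signs."""
    level = 0
    for sign in signs_present:
        level = max(level, SIGNS.get(sign, 0))
    return STAGES[level]

print(assess([]))                     # -> Ad Hoc
print(assess(["executive_sponsor"]))  # -> Opportunistic
```

A real assessment is, of course, a matter of judgment and evidence, not a lookup table; the point of the sketch is that maturity is determined by the strongest capability you can actually demonstrate, not by ambition.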

Tom Kowalski mentally places his former fintech startup at Stage 3, trending toward Stage 4. They had good data infrastructure, a capable ML team, and AI-driven fraud detection in production. But their customer-facing AI features were still experimental.

NK Adeyemi places her former employer — a $400 million consumer goods company — at Stage 1, generously. "We had a Tableau dashboard," she tells the student next to her. "And my manager thought that was AI."


Introducing Athena Retail Group

Throughout this textbook, we will follow Athena Retail Group — a fictional but realistic mid-market retailer — as it attempts to transform itself into an AI-powered organization. Athena's story is based on a composite of real companies the author has worked with and studied. The challenges Athena faces, the mistakes it makes, and the solutions it discovers are drawn from actual enterprise AI transformations.

Company Profile

| Attribute | Detail |
| --- | --- |
| Founded | 1987 (as a single home goods store in Portland, Oregon) |
| Revenue | $2.8 billion (fiscal year 2025) |
| Employees | 12,000 (8,200 in stores, 3,800 in corporate/distribution) |
| Stores | 340 locations across 28 states |
| E-commerce | 18% of total revenue (industry average: 25%) |
| Product Categories | Home goods, furniture, kitchenware, seasonal decor |
| Market Position | #4 in US specialty home retail, losing share to e-commerce pure plays |
| CEO | Grace Chen (appointed 2023, former COO of a major apparel brand) |
Athena is a company with a proud history, a loyal customer base, and a growing problem. Revenue growth has stalled at 2-3 percent annually. E-commerce competitors are eating into its market share. Its stores are profitable but aging. Its technology infrastructure — a patchwork of systems accumulated over three decades — is increasingly unable to support the speed and personalization that modern consumers expect.

The Announcement

Athena Update: Phase 1 — Discovery

On a Monday morning in January 2025, Grace Chen stands in front of 200 corporate employees in Athena's Portland headquarters. (Another 11,800 employees are watching via livestream, though most of them will catch the recording later, if at all.)

"I want to talk about the next chapter for Athena," Chen begins. She has a CEO's gift for making planned remarks sound spontaneous. "We've been a great company for 38 years. We've survived recessions, a pandemic, and the Amazon Effect. But surviving is not thriving. And I don't want to lead a company that merely survives."

She clicks to a slide that reads: THE ATHENA AI TRANSFORMATION INITIATIVE — $45 MILLION OVER 3 YEARS.

The number ripples through the room. Forty-five million dollars is roughly 1.6 percent of annual revenue — significant for a company with thin retail margins, but within the range that analysts would consider reasonable for a strategic technology investment.

Chen outlines the vision: AI-powered demand forecasting to reduce inventory waste. Personalized customer experiences across online and in-store channels. Automated supply chain optimization. Intelligent pricing. Employee productivity tools. "Within three years," she says, "I want Athena to be the most data-driven specialty retailer in America."

The applause is genuine but cautious. Several department heads exchange glances. The head of stores, a 22-year veteran named Janet Morrison, texts her regional managers: Big changes coming. Will fill you in when I know more. Which I don't yet.

The Hire

Three weeks later, Athena announces the appointment of Ravi Mehta as Vice President of Data & AI — a newly created role reporting directly to the CTO. Mehta, 41, comes from a mid-sized e-commerce company where he built a data science team from scratch and deployed ML models for demand forecasting and customer segmentation. He is smart, experienced, and optimistic.

His optimism lasts approximately one week.

What Ravi Found

Ravi Mehta's first month at Athena is an exercise in controlled dismay. He compiles a confidential assessment for the CEO and CTO. Key findings:

Data Infrastructure:

  • The company runs on a 15-year-old point-of-sale (POS) system that stores data in a proprietary format incompatible with modern analytics tools
  • Customer data is split across four systems: the POS, the e-commerce platform, the loyalty program (run by a third-party vendor), and the email marketing platform — with no single customer ID linking them
  • Product data is maintained in spreadsheets by the merchandising team; product catalogs across the POS and e-commerce system have different naming conventions, different category hierarchies, and approximately 12,000 duplicate entries
  • Historical sales data is available going back three years; anything older was lost in a server migration that "didn't go well" (Ravi's diplomatically restrained phrasing)
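The "no single customer ID" problem Ravi describes is concrete enough to sketch. A common first remediation step is deterministic matching on a normalized identifier such as email; the field names and records below are invented for illustration:

```python
# A hypothetical sketch of identity resolution across siloed systems:
# the same customer appears in the POS and e-commerce databases with
# no shared key, so we merge on a normalized email address.
def normalize_email(email):
    """Lowercase and strip whitespace so 'Jane@X.com ' matches 'jane@x.com'."""
    return email.strip().lower()

def link_customers(*systems):
    """Merge customer records from several systems, keyed on normalized email."""
    unified = {}
    for system_name, records in systems:
        for record in records:
            key = normalize_email(record["email"])
            unified.setdefault(key, {"sources": []})
            unified[key].update(
                {k: v for k, v in record.items() if k != "email"}
            )
            unified[key]["sources"].append(system_name)
    return unified

pos = [{"email": "Jane@Example.com ", "last_store_visit": "2025-01-10"}]
ecom = [{"email": "jane@example.com", "last_order": "2025-01-12"}]

profiles = link_customers(("pos", pos), ("ecommerce", ecom))
print(profiles["jane@example.com"]["sources"])  # -> ['pos', 'ecommerce']
```

Real identity resolution is much harder than this: customers use multiple emails, mistype names, and change addresses, so production systems layer fuzzy matching and survivorship rules on top of deterministic joins. But even this crude version shows why a unified customer view is a prerequisite for the personalization Athena is promising.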

Talent:

  • The "analytics team" consists of two business analysts who primarily build reports in Excel and Tableau
  • No one at the company has experience building or deploying machine learning models
  • The IT department is consumed by system maintenance and has no bandwidth for new data initiatives
  • The data literacy of business leaders varies from "reasonably competent" (the CFO) to "openly hostile to change" (the VP of Stores)

Culture:

  • Decisions are made by intuition and seniority, not data
  • The merchandising team has resisted previous attempts to introduce data-driven buying decisions, arguing that "retail is an art, not a science"
  • Store managers have high autonomy and low trust in corporate initiatives, particularly technology initiatives, after a botched ERP implementation in 2019

Governance:
- No data governance framework exists
- No one has the title "data owner" for any dataset
- Privacy compliance is reactive — the legal team reviews data practices only when a specific question arises
- There is no AI use policy; several employees are using ChatGPT to draft customer communications without any oversight or guidelines

Ravi summarizes his assessment in a sentence that he will repeat, in various forms, throughout the next three years: "We have a $45 million budget to become an AI-powered organization, and we don't yet have the foundation to be a data-informed one."

Business Insight: Ravi's situation is more common than exceptional. A 2024 NewVantage Partners survey found that only 24 percent of organizations described themselves as "data-driven," down from 32 percent in 2023 — a decline attributed not to technology limitations but to organizational and cultural barriers. The gap between AI ambition and data readiness is the central challenge of enterprise AI transformation.

We will follow Athena's journey throughout this textbook. In Chapter 2, Ravi will present his assessment to the executive team and propose a phased approach — sparking a debate about strategy, priorities, and the meaning of "quick wins" that will sound familiar to anyone who has led a transformation initiative.


Why Business Leaders Must Understand AI

"I'm a marketing person," NK says during the first class discussion. "I'm never going to build a model. Why do I need to understand how they work?"

Professor Okonkwo's response is worth quoting in full, because it frames the philosophy of this entire textbook:

"You're right that you probably won't build a model. But you will be asked to do several things that require understanding. You'll be asked to evaluate whether an AI solution proposed by a vendor is credible or snake oil. You'll be asked to decide whether your team's budget should fund an AI initiative or a traditional one. You'll be asked to interpret the output of an AI system and make a consequential decision based on it. You'll be asked to explain to your board why an AI project failed, or why it needs more time, or why it succeeded. You'll be asked to hire, manage, and evaluate technical people whose work you must understand well enough to challenge.

"In every one of those situations, the person who understands AI — not at the level of writing code, but at the level of asking the right questions — will make better decisions than the person who does not. And the cost difference between a good decision and a bad decision, in the context of AI investments, is typically measured in millions of dollars."

Tom, who has seen this dynamic from the other side — watching non-technical executives approve or kill projects they didn't understand — nods in agreement. "She's right," he writes in his notebook. "I've seen this go wrong."

The Cost of Ignorance

The business case for AI literacy is partly defensive — avoiding costly mistakes:

Overpaying for underwhelming solutions. Without technical literacy, business leaders cannot evaluate AI vendor claims. A 2024 Forrester report estimated that 40 percent of enterprise AI software purchases were "substantially underutilized" within 18 months, representing billions in wasted spending. When you don't understand what a product does, you can't assess whether it does it well.

Approving infeasible projects. A common failure mode: a business leader sees an AI demo, gets excited, and greenlights a project without understanding the data, infrastructure, and organizational requirements for production deployment. The project consumes months and millions before someone identifies a fundamental feasibility issue that a more literate leader would have caught in the first meeting.

Missing strategic opportunities. Ignorance is not just about avoiding bad decisions — it's about recognizing good ones. The leader who understands AI's capabilities can identify applications that competitors miss. Amazon's investment in recommendation engines, Netflix's commitment to algorithmic content curation, and Starbucks' use of reinforcement learning for personalized marketing all began with leaders who understood enough about AI to see its strategic potential before it was obvious.

Losing talent. Top data scientists and ML engineers do not want to work for leaders who don't understand their work. An organization whose leadership cannot engage meaningfully with AI strategy will struggle to attract and retain the technical talent required to execute it.

The Value of Expertise

AI literacy also has an offensive dimension — enabling value creation:

Better vendor negotiations. The leader who understands model architectures, training data requirements, and deployment considerations can negotiate more effectively with AI vendors — asking sharper questions, identifying red flags, and structuring contracts that protect the organization's interests.

Faster, more accurate prioritization. Understanding AI's strengths and limitations allows leaders to prioritize the use cases most likely to deliver value and defer those that are technically premature or strategically misaligned.

More effective cross-functional collaboration. The most successful AI initiatives are joint ventures between technical teams and business teams. When business leaders speak the language of data and models — even at a conversational level — the quality of collaboration improves dramatically.

Strategic foresight. Understanding AI's trajectory — where the technology is heading, which capabilities are maturing, which are still experimental — allows leaders to make investments today that will pay off in two to three years. This long-term view is particularly valuable in industries where AI adoption is still in early stages.

Research Note: A 2023 study published in Harvard Business Review found that companies whose senior leadership teams included at least one member with "deep AI expertise" were 2.4 times more likely to report significant value from AI investments, even controlling for company size, industry, and total AI spending. The presence of AI-literate leadership was a stronger predictor of AI success than total investment.


Five Themes That Will Recur Throughout This Book

This textbook is organized around forty chapters and dozens of concepts, tools, and case studies. But five themes weave through everything. Understanding them now will help you see the connective tissue that holds the book together.

Theme 1: The Hype-Reality Gap

AI is simultaneously overhyped and underappreciated. It is overhyped by vendors, consultants, and breathless media coverage that conflate demos with deployments. It is underappreciated by organizations that have not yet grasped how fundamentally AI can reshape their operations, products, and competitive position.

Your job as a business leader is to navigate between these extremes — maintaining urgency without succumbing to hype, exercising skepticism without missing genuine opportunities. Throughout this textbook, we will give you frameworks for distinguishing signal from noise.

Watch for this theme in: Chapter 2 (aligning AI with business objectives), Chapter 30 (vendor management), Chapter 31 (measuring AI ROI), and the IBM Watson Health case study in this chapter.

Theme 2: Human-in-the-Loop

AI is a tool, not a replacement for human judgment. The most effective AI deployments augment human decision-making rather than replacing it. This is not a philosophical preference — it is a practical observation supported by research and experience.

But "human-in-the-loop" is not a magic phrase. It requires deliberate design: Which decisions should humans make? Which should AI make? How should humans and AI collaborate? What happens when they disagree? These questions have operational, organizational, and ethical dimensions that we will explore throughout the textbook.

Watch for this theme in: Chapter 7 (prompt engineering as human-AI collaboration), Chapters 13-15 (machine learning and model evaluation), Chapter 35 (responsible AI frameworks), and the Athena storyline (where Ravi constantly balances automation with human judgment).

Theme 3: Data as Strategic Asset

"Data is the new oil" is a cliché that has been repeated so often it has lost its meaning. Let us restore it. Data — specifically, proprietary, high-quality, well-governed data — is the most durable competitive advantage in the AI era. Models can be copied. Algorithms are often open-source. But the unique data that a company generates through its operations, customer relationships, and industry expertise is difficult for competitors to replicate.

However, data is only a strategic asset if it is treated as one: curated, governed, protected, and made accessible to the people and systems that need it. Most organizations treat data as a byproduct of operations rather than a strategic resource. Transforming that mindset is one of the hardest parts of becoming an AI-powered organization.

Watch for this theme in: Chapters 3-4 (data strategy and infrastructure), Chapter 5 (data governance), and the Athena storyline (where Ravi's first challenge is getting Athena's data house in order).

Theme 4: Build vs. Buy

For every AI capability your organization needs, there is a fundamental strategic decision: should you build it in-house, buy it from a vendor, or adopt a hybrid approach? This decision involves trade-offs between cost, speed, customization, competitive advantage, and organizational capability.

The answer is rarely simple. Building gives you control and differentiation but requires talent and time. Buying gives you speed and proven technology but limits customization and creates vendor dependency. Throughout this textbook, we will develop a framework for making this decision rigorously.

Watch for this theme in: Chapter 12 (cloud AI platforms), Chapter 27 (building AI teams), Chapter 30 (vendor management), and the Athena storyline (where the build-vs-buy debate becomes a recurring source of tension between Ravi and the CTO).

Theme 5: Responsible Innovation

AI creates value. AI also creates risk — to individuals, communities, and society. Bias in hiring algorithms. Privacy violations in surveillance systems. Misinformation generated by language models. Job displacement at scale. Environmental costs of computation.

Responsible innovation is not a compliance checklist — it is a strategic discipline. Companies that ignore AI ethics face regulatory penalties, reputational damage, customer backlash, and employee attrition. Companies that integrate ethics into their AI strategy build trust, attract talent, and create more sustainable competitive advantages.

Watch for this theme in: Chapters 33-36 (AI regulation, bias and fairness, responsible AI frameworks, and privacy and security), Chapter 37 (the future of work), and throughout the Athena storyline, where ethical dilemmas arise in contexts that range from customer privacy to employee surveillance to algorithmic pricing.

Business Insight: These five themes are not independent. They interact in complex ways. The hype-reality gap influences build-vs-buy decisions. Human-in-the-loop design shapes responsible innovation. Data strategy determines which AI capabilities are feasible. Throughout this textbook, we will trace these interactions and help you develop an integrated perspective.


What This Book Will Teach You

This textbook is organized into seven parts, covering forty chapters. Here is a roadmap of what lies ahead:

Part 1: Foundations of AI for Business (Chapters 1-6)

The conceptual and strategic foundations. You are here. We will cover: what AI is and isn't (Chapter 1 — this chapter), business strategy frameworks for AI (Chapter 2), data strategy and infrastructure (Chapters 3-4), data governance (Chapter 5), and AI ethics foundations (Chapter 6). By the end of Part 1, you will have the vocabulary and conceptual framework to engage meaningfully with any AI conversation.

Part 2: Prompt Engineering and AI Tools (Chapters 7-12)

The practical skills of working with AI systems. Prompt engineering — the art and science of communicating effectively with large language models — is covered in depth (Chapters 7-9). We also cover AI-powered productivity tools (Chapter 10), code generation and low-code AI (Chapter 11), and cloud AI platforms (Chapter 12). By the end of Part 2, you will be a proficient user of modern AI tools.

Part 3: Machine Learning for Business Leaders (Chapters 13-18)

The technical foundations, taught through a business lens. Supervised learning (Chapter 13), unsupervised learning (Chapter 14), model evaluation (Chapter 15), feature engineering (Chapter 16), and the ML project lifecycle (Chapters 17-18). You won't learn to code a neural network — but you will learn to evaluate whether one is appropriate for your business problem.

Part 4: AI Applications Across the Enterprise (Chapters 19-26)

AI in practice across business functions. Marketing and sales (Chapters 19-20), operations and supply chain (Chapters 21-22), finance and accounting (Chapters 23-24), human resources (Chapter 25), and customer experience (Chapter 26). Each chapter includes real-world case studies and practical frameworks for identifying and prioritizing AI use cases in that function.

Part 5: Building and Leading AI Teams (Chapters 27-32)

The organizational dimension. Building AI teams (Chapter 27), managing AI projects (Chapter 28), change management (Chapter 29), vendor management (Chapter 30), measuring AI ROI (Chapter 31), and scaling AI across the enterprise (Chapter 32). This is the part where strategy meets execution.

Part 6: AI Governance, Ethics, and the Future (Chapters 33-38)

The broader context. AI regulation and compliance (Chapter 33), bias and fairness (Chapter 34), responsible AI frameworks (Chapter 35), privacy and security (Chapter 36), the future of work (Chapter 37), and AI strategy for the next decade (Chapter 38). This part ensures you are prepared not just for today's challenges but for tomorrow's.

Part 7: Capstone (Chapters 39-40)

Integration and application. A comprehensive AI strategy simulation (Chapter 39) and a final reflection on the AI-powered organization (Chapter 40). You will synthesize everything you've learned into a coherent strategic plan.

Try It: Scan the table of contents above and identify the three chapters most relevant to your current role or career aspirations. Write them down. As you progress through the textbook, check whether your assessment changes — it probably will.


The Road Ahead

Professor Okonkwo ends her first lecture with a story.

"In 2019, I was consulting for a European retailer — about the same size as our fictional Athena. The CEO told me, with absolute confidence, that AI was irrelevant to his business. 'We sell furniture,' he said. 'Our customers don't want algorithms. They want quality craftsmanship and good customer service.'

"He was right about what his customers wanted. He was wrong about what it took to deliver it. Three years later, his competitors were using AI to optimize inventory, personalize marketing, and predict demand with a precision that allowed them to carry less stock and still have higher in-stock rates. His company's margins had eroded by four percentage points. He wasn't replaced by AI. He was replaced by a competitor who used AI better than he did."

She surveys the room.

"That is what we are here to prevent. Not by turning you into technologists, but by turning you into leaders who understand the most important technology of your generation well enough to wield it wisely."

NK Adeyemi closes her laptop and — for the first time in the semester — thinks this course might actually be worth the credit hours.

Tom Kowalski closes his notebook and thinks — for the first time in a while — that there might be more to learn than he assumed.


Chapter Summary

This chapter established the landscape, vocabulary, and framework for the rest of the textbook:

  1. The AI landscape is characterized by rapid adoption, enormous investment, and a persistent gap between AI ambition and realized value. Generative AI has democratized access, but organizational readiness remains the primary bottleneck.

  2. The history of AI is a story of hype cycles, winters, and resurgence. Understanding this pattern helps business leaders calibrate their expectations and investment timelines.

  3. AI, ML, deep learning, and generative AI are related but distinct concepts. Precision in language leads to precision in strategy.

  4. The AI maturity model provides a framework for assessing organizational readiness. Most large enterprises are at Stage 2 (Experimenting) or Stage 3 (Scaling). Moving up the maturity curve requires investment in data, talent, process, and culture — not just technology.

  5. Athena Retail Group illustrates the typical starting point for an enterprise AI transformation: ambitious goals, significant budget, and a sobering gap between aspiration and readiness. Ravi Mehta's initial assessment reveals the foundational challenges — data quality, talent, culture, and governance — that must be addressed before AI can deliver value.

  6. AI literacy is a leadership imperative. The cost of ignorance — wasted spending, failed projects, missed opportunities, lost talent — far exceeds the investment required to build understanding.

  7. Five recurring themes — the Hype-Reality Gap, Human-in-the-Loop, Data as Strategic Asset, Build vs. Buy, and Responsible Innovation — provide a framework for analyzing AI decisions throughout the textbook and throughout your career.


Next chapter: Chapter 2: Strategy First — Aligning AI with Business Objectives, where we will explore how to translate AI's potential into a coherent business strategy — and where Ravi Mehta presents his uncomfortable assessment to Athena's executive team.