Case Study 2: The AI Leaders of Tomorrow — Five Emerging Leaders Shaping Responsible AI


Introduction

The first wave of AI leadership was dominated by technologists — the researchers who built the models, the engineers who scaled them, and the founders who commercialized them. The next wave will be led by a different kind of leader: people who combine technical understanding with domain expertise, ethical awareness, and a commitment to ensuring that AI benefits are widely shared.

This case study profiles five real-world emerging leaders who embody the principles discussed in Chapter 40 and throughout this textbook. They come from different industries, different backgrounds, and different countries. What they share is a conviction that AI leadership requires more than technical skill — it requires judgment, purpose, and the courage to build systems that serve people, not just shareholders.

These profiles are forward-looking and deliberately diverse. They represent the kind of leaders that the AI era needs most — and the kind of leaders that readers of this textbook are positioned to become.


Profile 1: Joy Buolamwini — Algorithmic Justice and the Power of Making Bias Visible

Background: Joy Buolamwini is a Ghanaian-American-Canadian computer scientist, digital activist, and founder of the Algorithmic Justice League (AJL). She earned her master's degree and PhD at the MIT Media Lab, where her research on facial recognition bias became one of the most influential studies in the responsible AI movement.

The Defining Moment: As a graduate student at MIT, Buolamwini discovered that commercial facial recognition systems from major technology companies could not detect her face. The systems, trained primarily on lighter-skinned faces, performed significantly worse on darker-skinned individuals — and worst of all on darker-skinned women. Her 2018 research paper, "Gender Shades" (co-authored with Timnit Gebru), quantified the disparity in commercial gender classification systems: error rates for darker-skinned women reached as high as 34.7 percent, compared with a maximum of 0.8 percent for lighter-skinned men.

Why She Matters for AI Leadership:

Buolamwini's contribution extends far beyond the technical finding. She translated a research result into a movement. The Algorithmic Justice League combines research, art, and advocacy to raise public awareness of AI bias. The documentary Coded Bias (2020), which centers on her work, brought the issue to mainstream audiences. Her testimony before Congress helped put facial recognition regulation on the legislative agenda.

Her work exemplifies several principles from Chapter 40:

  • Ethical courage: As a graduate student, Buolamwini challenged the AI practices of some of the world's most powerful technology companies, including Amazon, Microsoft, and IBM. IBM subsequently withdrew its general-purpose facial recognition products from the market.
  • Technical fluency serving a social purpose: Her research is rigorous — peer-reviewed, reproducible, and methodologically sound. But her communication of that research is accessible, compelling, and designed to reach policymakers and the public, not just other researchers.
  • Building coalitions: The Algorithmic Justice League is not a solo operation. It is a community of researchers, advocates, artists, and policymakers working together to address algorithmic harm.

Research Note: The "Gender Shades" study has been cited over 4,000 times in academic literature and is widely credited with catalyzing the AI fairness research field. Several major technology companies improved their facial recognition systems in direct response to the study's findings — a concrete example of how rigorous research, effectively communicated, can change industry practices.

Connection to the Textbook: Buolamwini's work connects directly to Chapter 25 (Bias in AI Systems), Chapter 26 (Fairness, Explainability, and Transparency), and the Responsible Innovation theme. Her career demonstrates that identifying AI harm is not enough — leaders must also mobilize organizations, institutions, and the public to address it.


Profile 2: Mustafa Suleyman — From DeepMind to Enterprise AI Governance

Background: Mustafa Suleyman is a British AI entrepreneur and executive. He co-founded DeepMind in 2010 (acquired by Google in 2014), later co-founded Inflection AI, and in 2024 became CEO of Microsoft AI, overseeing the company's consumer AI products and strategy. He is also the author of The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma (2023).

The Defining Contribution: Suleyman occupies a unique position in the AI leadership landscape: he has been both a builder of cutting-edge AI systems and a vocal advocate for AI governance and containment. At DeepMind, he led the Applied AI division and co-created DeepMind Health. At the same time, he has consistently argued that AI's power demands unprecedented forms of governance, accountability, and — in his framing — "containment."

Why He Matters for AI Leadership:

Suleyman's career illustrates the tension at the heart of AI leadership: the imperative to innovate and the imperative to govern. He does not resolve this tension — he lives it, publicly and often uncomfortably.

  • Strategic judgment across contexts: Suleyman's career spans an independent research lab (DeepMind), a Big Tech acquisition (Google), an AI startup (Inflection), and a major enterprise role (Microsoft AI). Each context required different strategic thinking, and his ability to adapt demonstrates the adaptive leadership described in Chapter 40.
  • The containment argument: In The Coming Wave, Suleyman argues that AI and synthetic biology represent technologies so powerful that existing governance frameworks are insufficient to manage them. He calls for "containment" — not in the sense of stopping development, but in the sense of building robust governance structures that can keep pace with technological capability. This argument connects directly to Lena Park's observation in Chapter 40: "The AI regulations we have today are version 1.0."
  • The practitioner-theorist integration: Few AI leaders have both built commercially significant AI products and written serious books about AI governance. Suleyman's integration of practice and theory — building AI systems while simultaneously arguing for their governance — represents a model for the kind of leader the field needs.

Connection to the Textbook: Suleyman's career connects to Chapters 27-30 (AI governance and regulation), Chapter 37 (Emerging AI Technologies), and the build-vs-buy theme (his experience spans both building and buying AI capabilities at different organizational scales).


Profile 3: Rumman Chowdhury — Operationalizing Responsible AI

Background: Rumman Chowdhury is a Bangladeshi-American data scientist and responsible AI leader. She has served as the Director of META (ML Ethics, Transparency, and Accountability) at Twitter, as CEO of the responsible AI company Parity, and has been recognized by Bloomberg, MIT Technology Review, and Forbes for her work on AI ethics. She holds a PhD in political science from the University of California, San Diego, with a specialization in computational social science.

The Defining Contribution: While many AI ethics advocates focus on identifying problems, Chowdhury has focused on building tools and processes that operationalize responsible AI within organizations. At Twitter, she led the company's first algorithmic bias bounty challenge — inviting external researchers to identify biases in Twitter's image cropping algorithm. At Parity, she built tools that enable companies to audit their AI systems for fairness, transparency, and accountability.

Why She Matters for AI Leadership:

Chowdhury's work addresses the gap between AI ethics principles and AI ethics practice — the same gap that the textbook addresses in Chapter 30 (Responsible AI in Practice).

  • From principles to processes: Many organizations have published AI ethics principles. Few have translated those principles into operational processes. Chowdhury's work at Twitter and Parity focused on building the mechanisms — bias bounties, audit tools, transparency reports — that turn aspirational statements into organizational practices.
  • Interdisciplinary leadership: With a PhD in political science and a career in data science, Chowdhury embodies the cross-disciplinary perspective that Chapter 40 identifies as essential for AI leadership. Her political science training gives her a framework for understanding power, institutions, and governance that pure technologists often lack.
  • External accountability mechanisms: The algorithmic bias bounty challenge was a radical act of transparency: inviting outsiders to find flaws in your own AI systems. This approach — which Chowdhury has advocated be adopted more widely — creates accountability structures that internal governance alone cannot provide. It connects directly to NK's AI Customer Advisory Board concept: bringing external perspectives into AI governance.

Connection to the Textbook: Chowdhury's work connects to Chapter 27 (AI Governance Frameworks), Chapter 30 (Responsible AI in Practice), and the Human-in-the-Loop theme — particularly the question of who gets to participate in evaluating whether AI systems are working as intended.


Profile 4: Andrew Ng — Democratizing AI Education and Practical Leadership

Background: Andrew Ng is a British-American computer scientist and entrepreneur. He co-founded Coursera, led Google Brain, served as Chief Scientist at Baidu, and founded DeepLearning.AI and Landing AI. His machine learning course, first offered online in 2011 and later hosted on Coursera, has been taken by over 5 million learners worldwide and is widely credited with democratizing AI education.

The Defining Contribution: While Ng's technical contributions are significant — his work on large-scale deep learning at Google Brain was pioneering — his most enduring impact may be in AI education and in demonstrating how AI can be deployed practically in industries beyond Big Tech. Through Landing AI, he has focused on bringing AI to manufacturing, agriculture, healthcare, and other traditional industries that lack the data volumes and technical talent of technology companies.

Why He Matters for AI Leadership:

Ng's career trajectory mirrors a transition that Chapter 40 argues is essential for the field: from AI as a capability concentrated in large technology companies to AI as a capability accessible to every organization.

  • The "100 percent technical fluency, 100 percent business focus" model: Ng holds a PhD in computer science from Berkeley and has published foundational machine learning research. Yet his career has increasingly focused on making AI practically useful for non-technical organizations — a trajectory that parallels the textbook's argument that technical fluency must serve business purpose.
  • Data-centric AI: Ng has been a leading advocate for "data-centric AI" — the argument that improving data quality produces better results than improving model architecture. This perspective aligns directly with the textbook's recurring theme of Data as a Strategic Asset. His Landing AI platform focuses on data quality tools for manufacturing and inspection — domains where data is scarce and expensive, not abundant and cheap.
  • The teaching imperative: Ng's commitment to AI education embodies Chapter 40's argument that "the most effective learning technique is teaching." By creating free, high-quality AI education at scale, Ng has expanded the pool of AI-literate professionals worldwide. His work demonstrates that AI leadership includes the responsibility to develop the next generation of AI leaders.

Business Insight: Ng's observation that "AI is the new electricity" — a general-purpose technology that will transform every industry — has become one of the most cited framings in AI business strategy. Like electricity, AI's value comes not from the technology itself but from the applications it enables. The leaders who will capture the most value from AI are not those who build the best models but those who identify the best applications in their specific industries.

Connection to the Textbook: Ng's work connects to Chapter 2 (Thinking Like a Data Scientist), Chapter 4 (Data Strategy), Chapter 22 (No-Code/Low-Code AI), and the Data as a Strategic Asset theme.


Profile 5: Mira Murati — Technical Leadership Under Pressure

Background: Mira Murati is an Albanian-American engineer who served as Chief Technology Officer of OpenAI during one of the most consequential periods in AI history. With a background in mechanical engineering from Dartmouth and prior experience at Goldman Sachs, Leap Motion, and Tesla (where she worked on the Model X), Murati brought an engineering leadership perspective to the company behind ChatGPT, GPT-4, and DALL-E.

The Defining Contribution: As CTO of OpenAI during the period of ChatGPT's launch and the November 2023 board crisis (in which CEO Sam Altman was briefly fired and reinstated), Murati navigated one of the most turbulent leadership episodes in technology history. She served as interim CEO during the crisis — a period of approximately three days during which the future of the world's most prominent AI company hung in the balance.

Why She Matters for AI Leadership:

Murati's experience at OpenAI offers a concentrated case study in adaptive leadership — leading through uncertainty at the highest stakes.

  • Technical depth in a business-critical role: Murati's engineering background enabled her to make technical decisions — about model capabilities, safety features, deployment timelines, and product strategy — that were simultaneously technical judgments and business decisions. Her career demonstrates that the CTO role in an AI company requires the integration of technical fluency and strategic judgment described in Chapter 40.
  • Leadership under extreme uncertainty: The November 2023 board crisis placed Murati in the most challenging adaptive leadership scenario imaginable: leading a company whose CEO had just been fired, whose employees were threatening mass resignation, and whose primary investor (Microsoft) was publicly maneuvering. Her ability to maintain organizational stability during this period — whatever one's views on the outcome — demonstrates the kind of composure under uncertainty that Chapter 40 identifies as essential.
  • The safety-capability tension: OpenAI's internal debates about the pace of AI development and the adequacy of safety measures reflect the central tension of AI leadership: how fast to move, how carefully to govern, and who gets to make that decision. Murati's position at the intersection of these debates — as a leader who both pushed for capability advances and advocated for safety measures — illustrates the complexity of responsible innovation at the frontier.

Connection to the Textbook: Murati's experience connects to Chapter 37 (Emerging AI Technologies), Chapter 40's discussion of adaptive leadership under uncertainty, and the Hype-Reality Gap theme — since OpenAI's products are at the epicenter of both AI capability and AI hype.


Common Themes Across the Five Profiles

These five leaders differ in background, focus area, and organizational context. But several common themes emerge:

1. None of them is "just" a technologist. Buolamwini is a computer scientist, artist, and advocate. Suleyman is a technologist and governance theorist. Chowdhury is a political scientist and data scientist. Ng is a researcher, educator, and entrepreneur. Murati is an engineer, product leader, and organizational manager. Each has integrated technical capability with a broader set of skills — exactly the integration that Chapter 40 argues is essential.

2. All of them have faced moments of ethical courage. Buolamwini challenged billion-dollar companies as a graduate student. Suleyman argues publicly that the industry he helped build needs containment. Chowdhury invited external researchers to find flaws in her own company's AI systems. Ng advocates for data quality in a field obsessed with model sophistication. Murati navigated the tensions between AI capability and AI safety at the world's most visible AI company. Ethical courage is not abstract for any of them.

3. All of them operate in multiple communities. None of these leaders works in isolation. Each has built or participated in networks that span industry, academia, policy, and civil society. The network effect of AI leadership — the amplification of individual knowledge through collective learning — is evident in every profile.

4. All of them are learning continuously. The AI landscape has changed dramatically during each of their careers, and each has adapted — changing roles, organizations, and perspectives as the field has evolved. The "disciplined information diet" described in Chapter 40 is, for these leaders, a professional survival skill.


Discussion Questions

  1. Which of the five profiles most closely matches your own career aspirations? What capabilities would you need to develop to follow a similar path?

  2. The case study identifies ethical courage as a common trait. Select one of the five leaders and analyze a specific decision where they demonstrated ethical courage. What were the costs? What were the benefits? Would you have made the same decision?

  3. Joy Buolamwini's "Gender Shades" study is an example of research that changed industry practice. What conditions enabled a graduate student's research to have such outsized impact? What does this suggest about the role of academic research in AI governance?

  4. Mustafa Suleyman argues for "containment" of AI — not stopping development, but building governance structures that can keep pace. Is this feasible? What would a governance structure capable of keeping pace with AI development actually look like?

  5. Andrew Ng's data-centric AI philosophy emphasizes data quality over model sophistication. How does this philosophy connect to Athena Retail Group's transformation — particularly the transition from seven siloed databases to a unified data platform?

  6. If you could add a sixth profile to this case study — someone not yet famous but doing important work in AI leadership — who would it be, and why? (This question has no wrong answer. It is a test of your awareness of the AI leadership landscape.)

  7. NK Adeyemi and Tom Kowalski are fictional characters. But which of the five real-world leaders does NK most resemble? Which does Tom most resemble? What does the comparison tell you about the textbook's conception of AI leadership?


This case study is based on publicly available information as of early 2026, including published research, books, interviews, conference presentations, and media reporting. The interpretations and analyses are the authors' own. Readers are encouraged to follow these leaders' ongoing work and to identify emerging leaders whose contributions are shaping the field.