Chapter 32 Further Reading: Building and Managing AI Teams


AI Team Structure and Organization

  1. Colson, E. (2019). "What AI-Driven Companies Can Teach Us About Building Algorithms." Harvard Business Review, January-February 2019. Written by Stitch Fix's Chief Algorithms Officer, this article argues for embedding data scientists in business teams rather than centralizing them — the "embedded" model described in Section 32.4. Colson's perspective reflects a company where algorithmic capability is the product, making business proximity essential. Valuable for understanding the trade-offs between organizational models. (First referenced in Chapter 6.)

  2. Davenport, T.H. & Patil, D.J. (2012). "Data Scientist: The Sexiest Job of the 21st Century." Harvard Business Review, 90(10), 70-76. The article that launched a thousand job postings. While its specifics are dated, its core argument — that organizations need professionals who can bridge data, technology, and business judgment — has proved prescient. Read alongside Davenport and Patil's 2022 follow-up, "Is Data Scientist Still the Sexiest Job of the 21st Century?" which acknowledges the role's evolution and fragmentation into the specialized roles described in Section 32.2.

  3. Ng, A. (2018). "AI Transformation Playbook." Landing AI White Paper. Andrew Ng's concise guide to implementing AI in enterprises, covering pilot project selection, building an AI team, providing broad AI training, and developing an AI strategy. The team-building section is particularly relevant, recommending a centralized AI team in early stages — consistent with the maturity-based structural evolution described in this chapter. (Also referenced in Chapter 6.)

  4. Fountaine, T., McCarthy, B., & Saleh, T. (2019). "Building the AI-Powered Organization." Harvard Business Review, 97(4), 62-73. Based on McKinsey research across hundreds of AI implementations, this article identifies the organizational practices that distinguish AI leaders from laggards. Key finding: the primary barriers to AI adoption are organizational, not technical — a theme echoed throughout this chapter. The article's discussion of the "hub and spoke" model and AI Centers of Excellence is directly relevant to Sections 32.4 and 32.9.

  5. Kniberg, H. & Ivarsson, A. (2012). "Scaling Agile @ Spotify with Tribes, Squads, Chapters & Guilds." Spotify Labs White Paper. The foundational document describing Spotify's organizational model. Required context for Case Study 1. Note that this paper describes the model as it existed in 2012; the structure has evolved significantly since. Read it as a starting point, not as a current description of Spotify's organization.


AI Talent and Hiring

  1. Tambe, P., Cappelli, P., Yakubovich, V., & Shen, L. (2019). "Artificial Intelligence in Human Resources Management: Challenges and a Path Forward." California Management Review, 61(4), 15-42. Examines the unique HR challenges created by AI — including the labor market dynamics, compensation pressures, and retention challenges described in Sections 32.1 and 32.7. The article's analysis of how AI talent markets differ from traditional labor markets is particularly relevant for HR leaders and hiring managers.

  2. Stanford Institute for Human-Centered AI (HAI). (2024). AI Index Report 2024. Stanford University. The definitive annual survey of AI trends, including talent supply-demand data, industry adoption metrics, and educational pipeline statistics. Chapter 4 ("Economy and Education") provides the empirical foundation for the talent landscape discussion in Section 32.1. Updated annually; consult the latest edition for current data.

  3. Bessen, J., Impink, S.M., Reichensperger, L., & Seamans, R. (2022). "The Role of Data for AI Startup Growth." Research Policy, 51(5), 104513. Examines how AI startups acquire and develop talent, with findings about the relationship between data assets, team composition, and organizational growth. Relevant for understanding the talent dynamics that AI teams at large enterprises compete against.


Upskilling and AI Literacy

  1. Agrawal, A., Gans, J., & Goldfarb, A. (2022). Power and Prediction: The Disruptive Economics of Artificial Intelligence. Harvard Business Review Press. The follow-up to the authors' influential Prediction Machines, this book examines how AI changes organizational decision-making and the skills needed to manage that change. The framework of AI as "cheap prediction" provides a useful lens for designing upskilling programs that help managers understand what AI does (makes predictions) and what humans must do (exercise judgment).

  2. Brynjolfsson, E. & McAfee, A. (2017). "The Business of Artificial Intelligence." Harvard Business Review, July 2017. Provides the strategic context for why AI literacy matters across the organization — not just within technical teams. The article's argument that AI creates the most value when combined with organizational redesign supports the upskilling-as-transformation perspective advocated in Section 32.8.

  3. Neeley, T. & Leonardi, P. (2022). The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI. Harvard Business Review Press. A practical guide to developing digital and AI literacy in non-technical professionals. The "30% rule" — the idea that you need to understand approximately 30% of a technical domain to collaborate effectively with specialists — aligns with the T-shaped professional concept from Section 32.3. Particularly useful for designing Tier 2 upskilling programs.


Managing Data Science Teams

  1. Muller, M., Lange, I., Wang, D., Piorkowski, D., Tsay, J., Liao, Q.V., Dugan, C., & Erickson, T. (2019). "How Data Science Workers Work with Data: Discovery, Capture, Curation, Design, Creation." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-15. Ethnographic research examining how data scientists actually work — as opposed to how they are assumed to work. Findings about the exploratory, iterative, and uncertain nature of data science work support the management adaptations recommended in Section 32.10.

  2. Saltz, J.S. & Grady, N.W. (2017). "The Ambiguity of Data Science Team Roles and the Need for a Data Science Workforce Framework." IEEE International Conference on Big Data, 2355-2361. Examines role ambiguity in data science teams — a persistent problem that this chapter addresses through clear role definitions in Section 32.2. The paper documents how unclear role boundaries lead to duplicated effort, missed responsibilities, and interpersonal conflict.

  3. Kim, M., Zimmermann, T., DeLine, R., & Begel, A. (2018). "Data Scientists in Software Teams: State of the Art and Challenges." IEEE Transactions on Software Engineering, 44(11), 1024-1038. A large-scale study of data scientists at Microsoft, examining how they collaborate with software engineers and how organizational structures affect their effectiveness. Findings about the "notebook-to-production" gap are directly relevant to the ML engineer role discussion and the cross-functional collaboration challenges in Section 32.11.


Diversity in AI

  1. West, S.M., Whittaker, M., & Crawford, K. (2019). "Discriminating Systems: Gender, Race, and Power in AI." AI Now Institute Report. Documents the diversity crisis in the AI field and its consequences — including the relationship between team homogeneity and algorithmic bias. The report's findings support Ravi's observation that diverse teams produce better AI systems (Section 32.6) and provide a research foundation for diversity-focused hiring strategies.

  2. Page, S.E. (2017). The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy. Princeton University Press. A rigorous, evidence-based argument for the cognitive benefits of team diversity. Page demonstrates mathematically that diverse teams outperform homogeneous teams on complex problems — not despite their differences, but because of them. Provides the theoretical framework for the business case for diversity in AI teams.


Vendor and Partner Management

  1. Iansiti, M. & Lakhani, K.R. (2020). Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Harvard Business Review Press. Examines how companies build AI capabilities through a combination of internal development, platform services, and partner ecosystems. The "operating model" framework is relevant for the build-vs-buy-vs-partner decisions described in Section 32.12. (Also referenced in Chapter 6.)

  2. Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F., Chu, M., & LaFountain, B. (2020). "Expanding AI's Impact With Organizational Learning." MIT Sloan Management Review and Boston Consulting Group Report. Based on a global survey of over 3,000 managers, this report examines the relationship between organizational learning practices and AI success. Key finding: organizations that invest in broad AI literacy outperform those that invest only in technical talent — empirical support for the three-tier upskilling model in Section 32.8.


AI Centers of Excellence

  1. Brock, J.K.U. & von Wangenheim, F. (2019). "Demystifying AI: What Digital Transformation Leaders Can Teach You About Realistic Artificial Intelligence." California Management Review, 61(4), 110-134. Examines how organizations structure their AI capabilities, with specific attention to Centers of Excellence and their governance roles. The article's framework for CoE maturity stages complements the CoE design guidance in Section 32.9.

  2. Deloitte AI Institute. (2023). "State of AI in the Enterprise: Getting AI Governance Right." Deloitte Insights. Annual survey examining enterprise AI adoption, with detailed sections on AI governance structures, CoE models, and talent strategies. Provides benchmarking data useful for evaluating your organization's AI team structure and maturity against industry peers.


Cross-Functional Collaboration

  1. Patil, D.J. & Mason, H. (2015). Data Driven: Creating a Data Culture. O'Reilly Media. Short, practical guide to building data-literate organizations. Covers hiring, team structure, and the cultural prerequisites for data-driven decision making. The discussion of "data translators" — people who bridge technical and business domains — anticipates the translator role described in Section 32.11. (Also referenced in Chapter 6.)

  2. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.D., & Gebru, T. (2019). "Model Cards for Model Reporting." Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 220-229. The original paper proposing model cards — standardized documentation for machine learning models. Model cards are discussed in Section 32.11 as a cross-functional communication tool. This paper provides the rationale, format, and examples needed to implement model cards in practice. (Also referenced in Chapter 27.)
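To make the model-card format concrete, here is a minimal sketch of one as a data structure. The nine section headings follow Mitchell et al. (2019); everything else (the `ModelCard` class, the `render_markdown` helper, and the sample field values) is illustrative, not from the paper or from any library:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal model card following the nine sections of Mitchell et al. (2019)."""
    model_details: dict          # developer, version, model type, license
    intended_use: dict           # primary use cases, out-of-scope uses
    factors: list                # groups/conditions results are disaggregated by
    metrics: list                # performance measures and decision thresholds
    evaluation_data: dict        # datasets used for evaluation
    training_data: dict          # datasets used for training
    quantitative_analyses: dict  # per-factor results
    ethical_considerations: str
    caveats_and_recommendations: str

    def render_markdown(self) -> str:
        """Render the card as a Markdown document for cross-functional review."""
        sections = [
            ("Model Details", self.model_details),
            ("Intended Use", self.intended_use),
            ("Factors", self.factors),
            ("Metrics", self.metrics),
            ("Evaluation Data", self.evaluation_data),
            ("Training Data", self.training_data),
            ("Quantitative Analyses", self.quantitative_analyses),
            ("Ethical Considerations", self.ethical_considerations),
            ("Caveats and Recommendations", self.caveats_and_recommendations),
        ]
        lines = ["# Model Card"]
        for title, body in sections:
            lines.append(f"## {title}")
            if isinstance(body, dict):
                lines.extend(f"- **{k}**: {v}" for k, v in body.items())
            elif isinstance(body, list):
                lines.extend(f"- {item}" for item in body)
            else:
                lines.append(str(body))
        return "\n".join(lines)

# Hypothetical example for an internal churn model.
card = ModelCard(
    model_details={"developer": "Example team", "version": "1.0"},
    intended_use={"primary": "churn scoring", "out_of_scope": "credit decisions"},
    factors=["customer tenure", "region"],
    metrics=["AUC at decision threshold 0.5"],
    evaluation_data={"dataset": "2023 holdout sample"},
    training_data={"dataset": "2021-2022 CRM extract"},
    quantitative_analyses={"AUC by region": "0.81-0.86"},
    ethical_considerations="Scores must not be the sole basis for account closure.",
    caveats_and_recommendations="Re-evaluate quarterly for drift.",
)
print(card.render_markdown())
```

The value of the structure is less the rendering than the forcing function: a business stakeholder can review "Intended Use" and "Caveats" without reading code, which is precisely the cross-functional role Section 32.11 assigns to model cards.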

  3. Sculley, D., et al. (2015). "Hidden Technical Debt in Machine Learning Systems." Advances in Neural Information Processing Systems (NeurIPS), 28. While primarily about technical debt, this paper's analysis of the organizational and process dimensions of ML system maintenance is relevant to the team management challenges in Section 32.10. The concept of "pipeline jungles" — ad hoc data pipelines that become unmaintainable — reinforces the importance of dedicated data engineering roles discussed in Section 32.2. (Also referenced in Chapter 6.)


Note: Several works referenced here were also cited in earlier chapters (particularly Chapter 6 on the Business of Machine Learning). This reflects the continuity of themes: the organizational and team-building challenges introduced conceptually in Chapter 6 are addressed operationally in Chapter 32. Readers are encouraged to revisit Chapter 6's further reading alongside these resources for a comprehensive perspective.