Appendix C: Historical Timeline — AI from 1943 to the Present

This timeline traces the major milestones in the development of artificial intelligence, from the earliest theoretical work to the generative AI revolution and emerging regulation. Events are grouped by era, with AI winters (⬇️) and hype peaks (⬆️) marked to highlight the recurring pattern of promise and disillusionment discussed in Chapter 2.


The Theoretical Foundations (1943–1955)

| Year | Event | Significance |
|------|-------|--------------|
| 1943 | Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" | First mathematical model of an artificial neuron — the conceptual ancestor of all neural networks |
| 1950 | Alan Turing publishes "Computing Machinery and Intelligence" | Proposes the "imitation game" (later known as the Turing Test) and asks: "Can machines think?" — framing the philosophical question that still drives the field |
| 1951 | Marvin Minsky and Dean Edmonds build SNARC, the first neural network machine | Early hardware implementation of a neural network, using 3,000 vacuum tubes to simulate 40 neurons |
| 1955 | Allen Newell and Herbert Simon create the Logic Theorist | Often called the first AI program — proved 38 of 52 theorems in Russell and Whitehead's Principia Mathematica |

The Golden Age (1956–1974)

| Year | Event | Significance |
|------|-------|--------------|
| 1956 | Dartmouth Conference | John McCarthy coins the term "artificial intelligence" at a workshop that formally founds the field. The proposal's ambition: every aspect of intelligence can be precisely described and simulated |
| 1958 | Frank Rosenblatt introduces the Perceptron | A single-layer neural network that could learn from data — the first practical learning machine |
| 1964 | Joseph Weizenbaum creates ELIZA | An early chatbot that simulated a Rogerian therapist. Users often attributed genuine understanding to the program — an early demonstration of our tendency to anthropomorphize AI |
| 1965 | Herbert Simon predicts: "Machines will be capable, within twenty years, of doing any work a man can do" | One of many overly optimistic predictions that contributed to unrealistic expectations |
| 1966 | The ALPAC Report | U.S. government report declares machine translation impractical, leading to severe funding cuts — an early example of how overselling AI leads to backlash |
| 1969 | Minsky and Papert publish Perceptrons | Demonstrated fundamental limitations of single-layer perceptrons, effectively freezing neural network research for over a decade |
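The learning idea behind Rosenblatt's 1958 Perceptron, noted above, fits in a few lines of code: compute a weighted sum of the inputs, threshold it, and nudge the weights whenever the output is wrong. The sketch below is a minimal modern-Python illustration of that rule (not Rosenblatt's original hardware), trained here on the logical AND function:

```python
# Minimal perceptron sketch: learn the logical AND function.
# Weights are nudged toward each misclassified example
# (Rosenblatt's learning rule). Illustrative only.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Minsky and Papert's 1969 critique applies directly to this sketch: no choice of `w` and `b` lets a single-layer perceptron compute XOR, because XOR is not linearly separable.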

⬇️ The First AI Winter (1974–1980)

| Year | Event | Significance |
|------|-------|--------------|
| 1973 | The Lighthill Report (UK) | Sir James Lighthill's devastating critique concludes that AI research has not delivered on its promises, leading the UK to cut nearly all AI funding |
| 1974–1980 | Funding cuts across the US and UK | Government agencies drastically reduce AI funding. Researchers rebrand their work to avoid the "AI" label. Progress continues quietly in narrow areas |

⬆️ The Expert Systems Boom (1980–1987)

| Year | Event | Significance |
|------|-------|--------------|
| 1980 | XCON (R1) deployed at Digital Equipment Corporation | First commercially successful expert system — configured computer orders using thousands of hand-coded rules. Saved DEC an estimated $40 million per year |
| 1981 | Japan launches the Fifth Generation Computer Project | An $850 million initiative to build "intelligent computers" using logic programming. It galvanized government AI funding worldwide |
| 1984 | Doug Lenat begins the Cyc project | An attempt to encode common-sense knowledge into a machine by manually entering millions of logical assertions — a project that continues, in modified form, today |
| 1986 | Rumelhart, Hinton, and Williams popularize backpropagation | Their influential paper demonstrates how to train multi-layer neural networks. This algorithm remains the foundation of deep learning, but its significance was not fully realized for another two decades |

⬇️ The Second AI Winter (1987–1993)

| Year | Event | Significance |
|------|-------|--------------|
| 1987–1993 | Expert systems market collapses | Companies discover that expert systems are expensive to maintain, brittle in practice, and unable to learn or adapt. The specialized hardware market (LISP machines) crashes. AI funding is cut again |
| 1988 | Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems | Introduces Bayesian networks as a framework for reasoning under uncertainty — a shift from logic-based to probabilistic AI that would prove transformative |
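The shift from logic to probability in Pearl's 1988 work, noted above, can be illustrated with the smallest possible Bayesian network: one cause (rain) and one effect (wet grass). Observing the effect, Bayes' rule inverts the arrow to update belief in the cause. The probabilities below are invented purely for illustration:

```python
# Two-node Bayesian network: Rain -> WetGrass.
# All numbers are made-up illustrative parameters.
p_rain = 0.2                # prior belief: P(Rain)
p_wet_given_rain = 0.9      # P(WetGrass | Rain)
p_wet_given_dry = 0.1       # P(WetGrass | no Rain)

# Marginal probability of the evidence: P(WetGrass)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Bayes' rule: P(Rain | WetGrass) = P(WetGrass | Rain) * P(Rain) / P(WetGrass)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
```

Seeing wet grass raises the probability of rain from 20% to about 69%; Pearl's networks chain exactly this kind of update through many interconnected variables.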

The Statistical Turn and Quiet Progress (1993–2010)

| Year | Event | Significance |
|------|-------|--------------|
| 1997 | IBM Deep Blue defeats world chess champion Garry Kasparov | A landmark moment in AI history — but also an illustration of the AI effect (Ch. 1): many dismissed it as "just brute force," not "real" intelligence |
| 1998 | Yann LeCun demonstrates convolutional neural networks (LeNet) for handwriting recognition | An early demonstration that neural networks could be practical for real-world pattern recognition — but the broader AI community remained skeptical of neural networks |
| 2002 | iRobot releases the Roomba | AI enters millions of homes in mundane form — autonomous vacuum cleaners. AI stops being just a research topic and becomes a consumer product |
| 2006 | Geoffrey Hinton and colleagues publish work on deep belief networks | Demonstrates that deep neural networks can be trained effectively, reigniting interest in deep learning after decades of dormancy. Often cited as the beginning of the deep learning renaissance |
| 2009 | Fei-Fei Li and team release ImageNet | A dataset of over 14 million labeled images that would become the standard benchmark for computer vision. Its creation involved crowdsourced labeling through Amazon Mechanical Turk |

⬆️ The Deep Learning Revolution (2011–2017)

| Year | Event | Significance |
|------|-------|--------------|
| 2011 | IBM Watson wins Jeopardy! against champions Ken Jennings and Brad Rutter | Another major milestone dismissed by some as "just search" — the AI effect in action |
| 2011 | Apple launches Siri | Voice assistants bring AI into mainstream consumer use — millions of people interact with natural language processing daily |
| 2012 | AlexNet wins the ImageNet challenge by a dramatic margin | Deep convolutional neural network reduces the error rate by over 40% relative to the previous year's best. This result is widely considered the moment that ignited the modern AI era (see Appendix B) |
| 2014 | Ian Goodfellow introduces Generative Adversarial Networks (GANs) | A new architecture in which two neural networks compete — one generating content, one evaluating it. This framework enabled realistic image and video synthesis |
| 2014 | DeepMind develops AI that learns to play Atari games from raw pixels | Demonstrates that reinforcement learning can achieve superhuman performance on complex tasks directly from visual input |
| 2016 | DeepMind's AlphaGo defeats world Go champion Lee Sedol | Go was considered far more complex than chess and resistant to brute-force approaches. AlphaGo's victory stunned the AI community and the world. Lee Sedol retired in 2019, citing AI as unbeatable |
| 2017 | "Attention Is All You Need" — the Transformer paper | Vaswani et al. introduce the Transformer architecture, based entirely on self-attention mechanisms. This paper is the technical foundation of every large language model (see Appendix B) |
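The self-attention mechanism named in the 2017 entry above reduces to a short function: each query scores every key, a softmax turns the scores into weights, and the output is a weighted average of the values. Below is a dependency-free sketch of scaled dot-product attention, far simpler than a full Transformer but faithful to the core operation:

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors (illustrative)."""
    d = len(keys[0])                     # key/query dimension
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns scores into weights that sum to 1
        peak = max(scores)
        exps = [math.exp(s - peak) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output: attention-weighted average of the value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Two positions; each query matches one key more strongly than the other
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention(Q, K, V)
```

Each output row blends both value vectors but leans toward the better-matching one; stacking many such layers, with learned projections producing the queries, keys, and values, yields the architecture behind modern large language models.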

⬆️ The Transformer Era and Generative AI (2018–Present)

| Year | Event | Significance |
|------|-------|--------------|
| 2018 | Google releases BERT; OpenAI releases GPT-1 | Pre-trained language models demonstrate that training on massive text corpora produces models that can be fine-tuned for many tasks — transfer learning applied to language |
| 2018 | Gender Shades study published (Buolamwini & Gebru) | Reveals significant accuracy disparities in commercial facial recognition across skin tones and genders, catalyzing the AI fairness research movement (see Appendix B) |
| 2019 | OpenAI releases GPT-2 | Initially withheld from full public release due to concerns about misuse, marking the first major debate about whether AI model releases should be restricted |
| 2020 | OpenAI releases GPT-3 | With 175 billion parameters, GPT-3 demonstrates surprising few-shot learning capabilities. Third-party developers build hundreds of applications on its API |
| 2021 | "Stochastic Parrots" paper published (Bender, Gebru, et al.) | Raises concerns about the environmental and social costs of ever-larger language models; the term "stochastic parrot" enters common usage (see Appendix B) |
| 2021 | DALL-E introduced by OpenAI | Generates images from text descriptions, demonstrating multimodal AI capabilities and raising questions about AI creativity (Ch. 11) |
| 2022 | Stable Diffusion released as open source | Puts powerful image generation in the hands of anyone with a computer, democratizing access but also enabling misuse |
| 2022 | ChatGPT released by OpenAI (November 30) | Reaches 100 million users in approximately two months — the fastest-growing consumer application in history. Triggers a global conversation about AI in education, work, creativity, and society |
| 2023 | GPT-4 released | Multimodal model (text and images) with substantially improved reasoning capabilities. Scores in the top percentiles on bar exams, medical licensing exams, and other standardized tests |
| 2023 | Google releases Bard (later Gemini); Anthropic releases Claude; Meta releases Llama 2 | The LLM landscape becomes multi-player. Openly released models challenge closed-source dominance |
| 2023 | Hollywood writers' and actors' strikes include AI provisions | The WGA and SAG-AFTRA strikes include negotiations over AI use in writing and acting — the first major labor actions directly addressing AI in creative industries (Ch. 10, Ch. 11) |
| 2023 | Executive Order on AI signed by President Biden | First comprehensive U.S. executive action on AI safety, requiring safety testing for powerful models and establishing AI governance frameworks |
| 2024 | EU AI Act enters into force | The world's first comprehensive AI regulation classifies AI systems by risk level and imposes requirements including transparency, human oversight, and conformity assessments (Ch. 13) |
| 2024 | AI agents and multimodal models proliferate | AI systems increasingly combine text, image, audio, and video understanding. Agent-based systems that can take actions — browsing the web, writing code, interacting with software — emerge as a major trend |
| 2025 | AI governance becomes a global priority | Multiple countries and international organizations develop AI governance frameworks. The gap between AI development speed and regulatory capacity remains a central challenge (Ch. 19) |

Patterns to Notice

As you read this timeline, notice the recurring patterns discussed in Chapter 2:

  1. The Hype Cycle: Ambitious promises lead to inflated expectations, followed by disappointment when results fall short, followed by a period of quieter but often more productive work.
  2. The AI Effect: Each major achievement (chess, Jeopardy!, Go, language generation) is initially hailed as a breakthrough and then dismissed as "not really AI."
  3. Breakthroughs from Unexpected Directions: GPUs designed for video games enabled deep learning. A machine translation paper gave us the Transformer architecture. Crowdsourced labeling (Amazon Mechanical Turk) made ImageNet possible.
  4. Demonstration Is Not Deployment: There is often a long and difficult gap between a dramatic demonstration (beating a champion, passing an exam) and reliable, equitable, real-world deployment.
  5. Technology Outpaces Governance: Regulatory action consistently lags behind technological capability, from the ALPAC report in 1966 to the EU AI Act in 2024.

A Note on Dates

Some events (particularly the AI winters) are gradual processes rather than discrete moments. The dates given represent either the commonly cited start of a period or the publication date of a key document. Historians of AI may disagree about the precise boundaries of these eras, and that disagreement itself illustrates how narratives about technology are constructed.