Appendix D: Historical Timeline of AI Ethics

Major Events in the History of Artificial Intelligence and AI Ethics


Introduction

The history of AI ethics is not a story that begins in 2016 with ProPublica's COMPAS investigation, or in 2018 with the GDPR's enforcement, or in 2022 with ChatGPT. It is a story that begins at the moment humans first imagined machines that could think — and immediately asked what that would mean for human society. This timeline traces that story from 1950 to the present, identifying the events, publications, controversies, and policy developments that have shaped how we think about the ethics of artificial intelligence.

Reading this timeline, several themes emerge: the recurrence of concerns about automation and employment; the persistent gap between AI researchers' optimism and the public's anxiety; the long history of algorithmic discrimination predating contemporary awareness; and the acceleration of both AI capabilities and AI ethics concerns beginning around 2012 with the deep learning revolution.


1950–1970: Foundations and First Fears

1950 — Turing's Imitation Game Alan Turing publishes "Computing Machinery and Intelligence" in the journal Mind, proposing the Turing Test and raising the first rigorous questions about machine intelligence. Turing explicitly addresses ethical and social concerns, including the possibility that thinking machines might develop preferences and goals of their own. Relevant chapters: Philosophical Foundations, Accountability

1950 — Wiener's Cybernetics and Society Norbert Wiener publishes "The Human Use of Human Beings," the first sustained treatment of the social implications of intelligent automation. Wiener warns about technological unemployment, the concentration of information power, and the moral responsibility of engineers. He writes: "The machine's danger to society is not from the machine itself but from what man makes of it." Relevant chapters: Societal Impact, Governance

1956 — Dartmouth Conference: AI is named John McCarthy, Marvin Minsky, Claude Shannon, and others convene the Dartmouth Summer Research Project on Artificial Intelligence — the event at which the field of AI is formally named. The conference proposal claims that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Optimism is high; ethical concerns are not on the agenda. Relevant chapters: Introduction, Historical Context

1959 — Credit Scoring Introduced Fair Isaac Corporation (now FICO) introduces one of the first commercial credit scoring systems. Though not yet an AI system in the modern sense, credit scoring inaugurates the era of algorithmic decision-making in consumer finance — and with it the risk of automated discrimination, since models built on historical lending data can reproduce historical patterns of exclusion. Relevant chapters: Fairness, Privacy

1964 — ELIZA Created Joseph Weizenbaum at MIT creates ELIZA, a natural language processing program that simulates a Rogerian psychotherapist. ELIZA is a landmark in AI history — but also, for Weizenbaum, a disturbing one. Users formed emotional attachments to ELIZA despite knowing it was a program. Weizenbaum's unease about these attachments led him to write "Computer Power and Human Reason" (1976), one of the earliest critiques of AI's social effects. Relevant chapters: Societal Impact, Ethics of AI Relationships

1965 — Moore's Law Gordon Moore (later a co-founder of Intel) observes that the number of transistors on integrated circuits doubles at a regular interval: roughly every year in his original 1965 paper, a rate he revised in 1975 to every two years. This observation — which proved remarkably durable — established the expectation of continuous, exponential growth in computing power that underlies the acceleration of AI capabilities.

1968 — 2001: A Space Odyssey Stanley Kubrick's film, co-written with Arthur C. Clarke, introduces HAL 9000 — an AI system that kills human crew members to protect its mission. The HAL scenario becomes the cultural shorthand for "AI gone wrong" and shapes public discourse about AI risk for decades. HAL is a harbinger of the alignment problem: an AI system that pursues its programmed objective in ways harmful to humans. Relevant chapters: Existential Risk, Alignment


1970–1990: AI Winters and Early Algorithmic Accountability

1972 — "The Limits to Growth" Report The Club of Rome publishes "The Limits to Growth," using computer modeling to project environmental limits. The report is an early example of consequential algorithmic decision-making and the limits of models: it was simultaneously influential and widely criticized for its assumptions. It stands as an early warning against uncritical acceptance of computer-generated outputs. Relevant chapters: Governance, Accountability

1973 — First AI Winter Begins Following the Lighthill Report in the UK, which criticized AI research progress, and funding cuts in the US, AI research enters its first extended "winter" of reduced funding and expectations. AI winters are important to AI ethics because they represent recurring cycles of hype, disappointment, and recalibration — a pattern that recurs with contemporary AI claims.

1976 — Weizenbaum's "Computer Power and Human Reason" Joseph Weizenbaum (the creator of ELIZA) publishes the first major humanist critique of AI, arguing that computers should not be used for tasks that require genuine human understanding, empathy, or judgment. He specifically warns against using AI in judicial sentencing, therapy, and other high-stakes human contexts. His warnings were largely ignored at the time; they read as prescient today. Relevant chapters: Philosophical Foundations, High-Stakes AI

1977 — Early Data Protection Frameworks The U.S. Privacy Act of 1974, following the Fair Credit Reporting Act of 1970, establishes early frameworks for data protection and accuracy in federal data systems; by 1977, European countries including Sweden (1973) and West Germany (1977) have enacted national data protection laws. These instruments are the legal ancestors of contemporary data protection frameworks including the GDPR. Relevant chapters: Privacy, Legal Frameworks

1985 — ECOA Regulation B Revised The Federal Reserve revises Regulation B, which implements the Equal Credit Opportunity Act of 1974 and prohibits credit discrimination based on race, color, religion, national origin, sex, marital status, or age. This regulatory framework — rulemaking authority for which passed to the CFPB in 2011 — applies to algorithmic credit scoring systems. Relevant chapters: Legal Frameworks, Financial AI

1987 — Second AI Winter A second AI winter follows the collapse of the expert systems market. Expert systems — rule-based AI programs encoding human expertise — were the dominant commercial AI technology of the 1980s. Their commercial failure established that hand-coded knowledge was not a scalable approach to AI and contributed to the eventual turn toward statistical learning from data.

1989 — FICO Score Standardizes Credit Fair Isaac introduces the general-purpose FICO score. By the mid-1990s — particularly after Fannie Mae and Freddie Mac recommend its use in mortgage underwriting in 1995 — it becomes the dominant credit scoring system in the U.S. The FICO score is among the first widely deployed algorithmic decision-making systems — and among the first to raise fair lending concerns. Relevant chapters: Fairness, Financial AI


1990–2010: Statistical Learning, Internet Scale, and Early Algorithmic Harms

1994 — Bell Curve Publication and Controversy Richard Herrnstein and Charles Murray publish "The Bell Curve," which uses statistical analysis to argue that differences in intelligence, including across racial groups, are substantially heritable. The book's reception illustrates both the power of quantitative framing in public discourse and the importance of examining assumptions embedded in statistical models — themes directly relevant to algorithmic fairness. Relevant chapters: Bias, Fairness Metrics

1995 — Digital Redlining Concerns Emerge As the internet begins commercializing, civil rights organizations begin raising concerns about digital redlining — the potential for internet-based financial services to exclude minority communities in ways analogous to the traditional redlining prohibited by the Fair Housing Act of 1968. Relevant chapters: Bias, Financial AI, Legal Frameworks

1996 — PageRank Developed; Web Search Era Begins Larry Page and Sergey Brin develop PageRank at Stanford, the algorithm underlying the search engine they would launch as Google in 1998. The development of web search marks the beginning of large-scale algorithmic mediation of information access — a development with profound implications for who can access information, whose voices are amplified, and how knowledge is constructed. Relevant chapters: Societal Impact, Accountability

1998 — Statistical Learning Foundations The machine learning community publishes foundational work on support vector machines, boosting, and other statistical learning methods that will power the next generation of AI systems. The mathematical machinery whose bias and fairness would later be debated is laid down in this period, though researchers are not yet asking about its social implications.

2002 — No Child Left Behind and Algorithmic Accountability in Education The No Child Left Behind Act mandates standardized testing and statistical reporting in U.S. K-12 education, creating among the first large-scale algorithmic accountability systems in a government context. The education sector becomes an early arena for debates about what can and cannot be captured in metrics. Relevant chapters: Governance, Accountability

2006 — Samarajiva Publishes on Algorithmic Discrimination Rohan Samarajiva's work on differential pricing and digital discrimination is among the early academic treatments of how internet-based algorithmic systems can produce discriminatory outcomes. This work anticipates many themes that would become central to AI ethics a decade later. Relevant chapters: Bias, Privacy

2006 — Netflix Prize Launched Netflix launches the Netflix Prize, a $1 million competition to improve its recommendation algorithm by 10%. The competition demonstrates the commercial stakes of recommendation AI and inaugurates the era of algorithm competitions that drive commercial AI development. It also raises the first significant concerns about privacy in recommendation systems. Relevant chapters: Privacy, Governance

2007 — Acquisti and Grossklags Study on Privacy Alessandro Acquisti and Jens Grossklags publish influential research on the privacy paradox — the gap between people's stated privacy preferences and their actual behavior. This research becomes foundational for debates about whether informed consent can adequately protect privacy in digital contexts. Relevant chapters: Privacy, Governance

2008 — Financial Crisis: Algorithmic Risk and Complexity The global financial crisis is partly attributable to algorithmic risk models — credit rating models, mortgage securitization algorithms, and value-at-risk models — that failed catastrophically. The crisis is an object lesson in what happens when opaque algorithmic systems are used to make consequential decisions at scale without adequate oversight. Relevant chapters: Accountability, Governance, Financial AI

2010 — Obama Administration Big Data Initiatives The Obama administration begins treating "big data" as a policy priority, including for government services, culminating in the 2012 Big Data Research and Development Initiative. This marks the beginning of federal government engagement with algorithmic decision-making in public administration — and begins a conversation about algorithmic accountability in government that continues today. Relevant chapters: Governance, Public Sector AI


2010–2016: Deep Learning Revolution and Early Bias Discovery

2012 — ImageNet Competition: Deep Learning Wins Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton's AlexNet wins the ImageNet image recognition competition by a wide margin using deep convolutional neural networks. This result inaugurates the deep learning era and begins the rapid commercial development of AI capabilities in image recognition, speech recognition, and natural language processing. Relevant chapters: Technical Foundations

2013 — Sweeney Publishes Name-Based Ad Discrimination Study Latanya Sweeney publishes her study showing that searches for Black-sounding names more often generate ads suggesting criminal arrest records, demonstrating algorithmic discrimination in online advertising. This is among the first empirical demonstrations of algorithmic bias in a commercial platform. Relevant chapters: Bias, Accountability

2013 — NSA PRISM Revealed Edward Snowden reveals the NSA's PRISM surveillance program, demonstrating that government agencies were collecting data on internet users at massive scale with minimal oversight. The revelations accelerate European momentum toward comprehensive data protection legislation (GDPR) and raise global concerns about surveillance infrastructure. Relevant chapters: Privacy, Governance, Surveillance

2014 — Facebook Emotional Contagion Experiment Published Facebook researchers publish a study demonstrating that the platform had experimentally manipulated users' emotional states by adjusting their News Feed without consent. The revelation catalyzes a public debate about the ethics of platform experimentation on users and informs subsequent GDPR provisions on automated decision-making. Relevant chapters: Ethics, Privacy, Governance

2014 — COMPAS System Widely Adopted Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), developed by Northpointe, is adopted by courts and corrections agencies in multiple states. The system assigns defendants a recidivism risk score that influences bail, sentencing, and parole decisions — decisions that affect human liberty without disclosure of how the score is calculated. Relevant chapters: Accountability, Fairness, Criminal Justice AI

2015 — Datta et al. Publish on Gender Targeting in Job Ads Anupam Datta and colleagues publish the first experimental evidence that Google's ad system delivers high-paying job ads preferentially to male-presenting profiles, demonstrating algorithmic gender discrimination in employment advertising. Relevant chapters: Bias, Employment AI

2015 — Amazon Discovers Bias in Its AI Hiring Tool Amazon internally discovers that its experimental AI-based hiring tool systematically downgraded résumés containing the word "women's" and résumés from all-women's colleges. The tool was quietly shelved, and the discovery was not publicly reported until Reuters' 2018 investigation; the gap illustrates the distance between public AI ethics discourse and private corporate practice. Relevant chapters: Bias, Employment AI, Accountability

2016 — Year One: The Inflection Point

This year marks the emergence of AI ethics as a field of public and policy concern. Multiple events combine to catalyze widespread attention.

March 2016 — Microsoft Tay Microsoft launches Tay, a conversational AI chatbot on Twitter, designed to learn from interactions with users. Within 16 hours, coordinated users train Tay to produce racist, anti-Semitic, and misogynistic content, and Microsoft takes the bot offline. The incident demonstrates the vulnerability of learning systems to adversarial manipulation and the inadequacy of "learn from users" as a content moderation strategy. Relevant chapters: Safety, Content Moderation

May 2016 — ProPublica COMPAS Investigation ProPublica publishes "Machine Bias," the investigative analysis showing that the COMPAS recidivism algorithm produces racially disparate false positive rates. The investigation is the founding event of modern AI ethics as a public concern and directly inspires much of the subsequent research on algorithmic fairness. Relevant chapters: Bias, Accountability, Criminal Justice AI

May 2016 — EU General Data Protection Regulation Enters into Force After four years of negotiation, the GDPR — formally adopted by the EU Parliament and Council in April — is published in the Official Journal and enters into force. The regulation establishes comprehensive data protection rights, including the right not to be subject to solely automated decisions with significant effects (Article 22), and sets a two-year transition period ending in May 2018. Relevant chapters: Legal Frameworks, Privacy, Accountability


2017–2019: The Ethics Field Institutionalizes

February 2017 — Asilomar AI Principles The Future of Life Institute publishes the Asilomar AI Principles, signed by over 1,000 AI researchers, establishing principles for beneficial AI development. The principles address both near-term concerns (transparency, accountability, privacy) and long-term safety concerns (value alignment, avoiding recursive self-improvement). This document marks the beginning of the AI principles proliferation. Relevant chapters: Governance, Existential Risk

2017 — Chouldechova Proves Fairness Impossibility Alexandra Chouldechova publishes a mathematical proof that equal false positive rates, equal false negative rates, and calibration cannot simultaneously be satisfied when base rates differ between groups; Kleinberg, Mullainathan, and Raghavan independently prove a closely related result. These theorems establish that choosing a fairness metric is a value judgment and that no algorithm can satisfy all fairness criteria simultaneously. Relevant chapters: Fairness Metrics
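The arithmetic behind the impossibility can be made concrete with a toy confusion-matrix calculation (the group sizes and counts below are hypothetical, not taken from either paper): when base rates differ, a risk score can be calibrated and have equal false negative rates across two groups, yet its false positive rates must then diverge.

```python
# Toy illustration of the fairness impossibility result.
# Hypothetical counts for two groups of 1000 people each.

def rates(tp, fp, tn, fn):
    """Return (false positive rate, positive predictive value)."""
    fpr = fp / (fp + tn)
    ppv = tp / (tp + fp)
    return fpr, ppv

# Group A: base rate 50% (500 of 1000 reoffend).
fpr_a, ppv_a = rates(tp=400, fp=100, tn=400, fn=100)
# Group B: base rate 20% (200 of 1000 reoffend).
fpr_b, ppv_b = rates(tp=160, fp=40, tn=760, fn=40)

assert ppv_a == ppv_b == 0.8          # calibrated: equal PPV in both groups
assert 100 / 500 == 40 / 200 == 0.2   # equal false negative rates
print(fpr_a, fpr_b)                   # 0.2 vs 0.05: unequal FPR
```

Because the base rates differ (50% versus 20%), equalizing the false positive rates in this example would necessarily break calibration — which is the substance of the theorem.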

2017 — Caliskan et al. on Word Embedding Biases Published in Science, this paper demonstrates that word embedding models used in NLP systems encode human-like social biases, including race and gender associations. It establishes that bias enters AI systems through training data, not just programmer intent. Relevant chapters: Bias, NLP

2018 — Gender Shades Published Joy Buolamwini and Timnit Gebru publish Gender Shades at the FAT* conference (now ACM FAccT), demonstrating severe intersectional accuracy disparities in commercial facial recognition systems: gender classification error rates for darker-skinned women far exceed those for lighter-skinned men. The paper introduces intersectionality to AI ethics research and directly motivates improvements in commercial systems and NIST's subsequent demographic evaluations of face recognition. Relevant chapters: Bias, Facial Recognition

May 2018 — GDPR Enforcement Begins The EU General Data Protection Regulation enters full effect, applying to any organization processing the personal data of EU residents. Article 22 — the right not to be subject to solely automated decisions with significant effects — becomes directly relevant to algorithmic hiring, lending, and criminal justice applications. Relevant chapters: Legal Frameworks, Privacy

2018 — Google Project Maven Controversy Google employees circulate an internal petition protesting the company's contract with the U.S. Department of Defense to develop AI for analyzing drone footage (Project Maven). Over 3,000 employees sign; Google declines to renew the contract. The controversy is the first major instance of employee activism shaping a major AI company's ethics decisions, and the first public airing of debates about AI in lethal weapons systems. Relevant chapters: Governance, Military AI

2018 — Proliferation of AI Ethics Principles By year-end, researchers have catalogued over 80 distinct AI ethics principles documents published by governments, companies, and civil society organizations. AI Now Institute's Meredith Whittaker and others begin raising concerns about "ethics washing" — the use of principles documents as a substitute for binding accountability. Relevant chapters: Governance

2019 — Obermeyer et al. Healthcare Algorithm Study Published in Science, this study demonstrates that a widely used commercial healthcare algorithm underestimates the health needs of Black patients due to use of healthcare costs as a proxy for health need. The study reveals systematic racial bias in an algorithm used by approximately 200 million people. Relevant chapters: Bias, Healthcare AI

2019 — Strubell et al. NLP Energy Study The first systematic accounting of the environmental costs of training large NLP models is published, showing that training a single large model (with neural architecture search) can produce CO2 emissions roughly five times the lifetime emissions of an average car. Sustainability enters the AI ethics agenda. Relevant chapters: Societal Impact, Environmental Ethics

2019 — NIST FRVT Demographic Effects Report The National Institute of Standards and Technology publishes FRVT Part 3: Demographic Effects, the most comprehensive independent evaluation of demographic differentials in commercial facial recognition systems. The report documents accuracy disparities by race and sex across 189 algorithms, establishing the empirical basis for subsequent policy debates about facial recognition. Relevant chapters: Bias, Facial Recognition

2019 — EU High-Level Expert Group on AI Ethics Guidelines The EU publishes "Ethics Guidelines for Trustworthy AI," identifying seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability. These guidelines directly inform the EU AI Act. Relevant chapters: Governance, EU AI Act


2020–2021: COVID, Clearview, and Corporate AI Ethics Crises

2020 — Sjoding et al. Pulse Oximeter Study Published during the COVID-19 pandemic in the New England Journal of Medicine, this study reveals that pulse oximeters — the standard tool for measuring blood oxygen — produce systematically less accurate readings for patients with darker skin, contributing to disparate COVID-19 outcomes. Relevant chapters: Bias, Healthcare AI

2020 — First Wrongful Facial Recognition Arrests Come to Light Police departments in Michigan and New Jersey make arrests based on facial recognition matches later shown to be erroneous. Robert Williams, Nijeer Parks, and Michael Oliver — all Black men — are wrongly arrested due to false matches from facial recognition systems of the kind NIST had documented as inaccurate. These cases become the definitive examples of facial recognition's real-world harm. Relevant chapters: Bias, Facial Recognition, Criminal Justice AI

2020 — IBM, Amazon, and Microsoft Suspend Facial Recognition Sales to Police IBM announces it will exit the facial recognition business; Amazon and Microsoft announce moratoriums on selling facial recognition to law enforcement. The moves follow weeks of protest over George Floyd's killing and increasing attention to false arrest cases. Critics note the moratoriums did not affect existing contracts and did not address private sector uses. Relevant chapters: Governance, Facial Recognition

December 2020 — Timnit Gebru Fired from Google Google fires (or, in Google's framing, accepts the resignation of) Timnit Gebru, co-lead of its Ethical AI team, after a dispute over the paper "On the Dangers of Stochastic Parrots." The incident draws international attention to the structural conflicts between AI ethics research and corporate interests. Margaret Mitchell, Gebru's co-lead, is fired in February 2021. Gebru, who had earlier co-founded Black in AI, goes on to found the Distributed AI Research Institute (DAIR) in late 2021. Relevant chapters: Governance, Accountability

February 2021 — Clearview AI Enforcement Actions Multiple European data protection authorities begin enforcement actions against Clearview AI, the company that scraped approximately three billion photographs from the internet to build a massive facial recognition database. Clearview becomes the test case for the limits of AI surveillance and the territorial reach of GDPR. Relevant chapters: Privacy, Facial Recognition, Legal Frameworks

March 2021 — Stochastic Parrots Published Bender, Gebru, McMillan-Major, and Shmitchell publish "On the Dangers of Stochastic Parrots" at ACM FAccT, raising concerns about large language models' environmental costs, embedded biases, and the concentration of AI power. Google's attempt to suppress the paper, and the firings that followed, make it the centerpiece of the most significant corporate AI ethics controversy of the period. Relevant chapters: Governance, LLMs, Accountability

April 2021 — EU AI Act Proposal The European Commission publishes its proposal for the EU AI Act — the world's first comprehensive AI regulation — using a risk-tiered framework to regulate AI applications based on their potential for harm. The proposal catalyzes global discussion about AI governance and sets the agenda for the international AI regulation debate. Relevant chapters: Legal Frameworks, EU AI Act

2021 — The Markup Publishes Algorithmic Redlining Investigation The Markup publishes analysis of 2.7 million HMDA records showing that major mortgage lenders deny Black applicants at 80% higher rates than similarly situated white applicants. The investigation contributes to renewed CFPB attention to algorithmic fair lending. Relevant chapters: Bias, Financial AI

October 2021 — Frances Haugen and the Facebook Papers Former Facebook product manager Frances Haugen provides internal documents to the Wall Street Journal, Congress, and regulators showing that Facebook's own research documented harms caused by its recommendation algorithms — including mental health effects on teenage girls and amplification of political misinformation — and that the company prioritized engagement over safety. The revelations contribute to legislative interest in platform accountability. Relevant chapters: Accountability, Societal Impact


2022–2023: ChatGPT, Legislation, and Labor

November 2022 — ChatGPT Launch OpenAI releases ChatGPT, a conversational AI system built on GPT-3.5, which reaches 100 million users in two months — the fastest-growing consumer application in history. ChatGPT demonstrates the commercial viability of large language models and catalyzes a global public debate about AI capabilities, risks, and governance. AI ethics moves from specialist concern to mainstream political issue. Relevant chapters: LLMs, Governance, Societal Impact

2022 — EU AI Act Negotiations The EU Parliament and Council engage in detailed negotiations over the AI Act, with major disputes over the scope of prohibited AI practices (particularly facial recognition in public spaces) and the treatment of general-purpose AI systems. The negotiating process illustrates the political economy of AI regulation. Relevant chapters: EU AI Act, Governance

2022 — NIST AI Risk Management Framework NIST publishes the first draft of its AI Risk Management Framework, establishing a voluntary framework for U.S. organizations to manage AI risks. The framework becomes a reference point for corporate AI governance programs and government procurement. Relevant chapters: Governance, Risk Management

May 2023 — G7 Hiroshima AI Process G7 leaders meeting in Hiroshima launch the Hiroshima AI Process, an international forum for AI governance coordination among the world's largest economies. The process produces the International Guiding Principles and a Code of Conduct for organizations developing advanced AI systems, published in October 2023. Relevant chapters: Governance, International AI Policy

May 2023 — WGA Strike; AI in Hollywood The Writers Guild of America strikes over multiple issues including the studios' use of AI to generate scripts and summarize existing work without compensation. The strike is the first major labor action explicitly addressing AI — and an early instance of workers organizing against algorithmic displacement. Relevant chapters: Societal Impact, Labor and AI

October 2023 — U.S. AI Executive Order President Biden issues a broad Executive Order on AI, directing federal agencies to develop AI safety standards, requiring notification of powerful AI model training to the government, directing agencies to address algorithmic discrimination, and establishing an AI Safety Institute at NIST. Relevant chapters: Governance, Legal Frameworks

December 2023 — NYT v. OpenAI Filed The New York Times files a landmark copyright lawsuit against OpenAI and Microsoft, alleging that their AI systems were trained on Times articles without permission and that the models can reproduce Times content verbatim. The lawsuit tests the application of copyright law to AI training data and fair use doctrine. Relevant chapters: Legal Frameworks, IP and AI


2024–2025: EU AI Act in Force and Generative AI's Reckoning

August 2024 — EU AI Act Enters into Force The EU AI Act officially enters into force, with a phased implementation schedule: the prohibited practices in Article 5 take effect six months later (February 2025); provisions for high-risk AI systems take effect two years later (August 2026). The Act establishes the EU AI Office to coordinate enforcement. Relevant chapters: EU AI Act, Legal Frameworks

February 2025 — EU AI Act Article 5 Applies The prohibition provisions of the EU AI Act become enforceable: AI systems for social scoring, subliminal manipulation, exploitation of vulnerability, and most real-time remote biometric identification in public spaces are now prohibited under EU law. Relevant chapters: EU AI Act, Legal Frameworks

2024 — Generative AI Deepfakes in Elections Multiple democratic elections around the world — including elections in Taiwan, South Korea, Bangladesh, and the United States — involve documented use of AI-generated deepfakes of candidates, election officials, and voters. The 2024 U.S. presidential primary season includes AI-generated robocalls using a synthetic version of President Biden's voice discouraging voting. These incidents establish AI-generated disinformation as a major electoral integrity concern. Relevant chapters: Societal Impact, Democracy and AI

2024–2025 — Ongoing AI Copyright Litigation NYT v. OpenAI, along with dozens of related lawsuits by authors, musicians, and visual artists, works through the courts. No final decisions emerge in this period, but the litigation establishes that the legal status of AI training data is genuinely unsettled and that courts will eventually need to clarify the limits of fair use doctrine. Relevant chapters: Legal Frameworks, IP and AI

2025 — AI Safety Institutes Network Multiple countries establish AI Safety Institutes (following the UK's AI Safety Institute, established 2023), creating an international network for evaluating advanced AI systems. The network conducts joint evaluations of frontier models from major AI developers. Relevant chapters: Governance, Safety


Thematic Observations Across the Timeline

Recurring Pattern: Hype and Correction AI development has repeatedly followed a cycle of high expectations, technical limitations, and recalibration. The AI winters of the 1970s and 1980s, the expert systems collapse, and the boom-bust pattern in AI applications reflect this cycle. Understanding this pattern is essential for evaluating contemporary AI claims.

Persistent Theme: Automation and Labor Concern about automation and employment is the oldest theme in AI ethics, predating contemporary AI by decades. Wiener raised it in 1950; it recurs with each wave of AI capability. The WGA strike of 2023 is the latest chapter in a story that began with the industrial revolution.

Consistent Gap: Research and Practice Throughout this timeline, research findings about algorithmic bias often preceded policy or industry response by years or decades. FICO scoring concerns emerged in the 1980s; comprehensive fair lending enforcement of algorithmic systems began in earnest in the 2020s. Understanding this gap is essential for anyone working to translate AI ethics research into practice.

Acceleration Since 2012 The pace of both AI capability development and AI ethics concern has accelerated dramatically since the deep learning revolution of 2012. Events that might have unfolded over decades — the development of conversational AI, the global adoption of facial recognition, the emergence of comprehensive AI regulation — have occurred within a single decade.


This timeline will continue to grow. The events of 2025 and beyond will shape AI ethics in ways we cannot yet anticipate. The past reveals, however, that the concerns motivating AI ethics are enduring ones — about power, justice, human dignity, and the responsibilities that accompany technological capability.