Case Study 28-1: The Writers Guild Strike and AI in Creative Industries
Overview
In May 2023, approximately 11,500 members of the Writers Guild of America walked off the job, beginning what would become a 148-day strike — the longest WGA strike since 1988 and, when Screen Actors Guild-AFTRA joined in July, part of the first simultaneous Hollywood dual-union strike since 1960. The economic issues driving the strike were primarily residuals in the streaming era and minimum staffing in television writers' rooms. But the AI provisions were distinct, novel, and historically significant: for the first time, a major American union negotiated binding contractual protections against AI displacement of its members' work.
The strike and its resolution offer the most fully documented case study available of how workers, organized collectively, can shape the terms of AI deployment in their industry — and of the limits of what can be achieved within the constraints of collective bargaining in a single sector.
Background: The Creative Industry and AI
The Economic Transformation of Writing Work
Before examining the strike itself, it is important to understand what AI actually threatened in the writing profession — and what the studios' interests in AI deployment were.
Professional television and film writing in Hollywood is organized through a guild (the WGA) that has, since the 1940s, negotiated minimum pay scales, credit arbitration, residuals for subsequent exhibition of work, and working conditions. The industry has a highly stratified structure: at the top are showrunners and experienced writers commanding substantial compensation; in the middle are staff writers and story editors who constitute the professional core; at the bottom is what used to be a substantial entry point — feature films that required teams of writers, and television seasons that staffed writers' rooms with 8-15 writers learning the craft.
The streaming era had already substantially disrupted this structure before AI became a bargaining issue. As platforms moved toward limited-run seasons (6-10 episodes rather than 22-24 episode broadcast seasons), mini-rooms became common: studios would hire small teams of writers for short periods to break story before production began, then dismiss the writers' room, hiring individual writers on a freelance basis for specific scripts. This reduced the total amount of writing work available and narrowed the entry pipeline for new writers. The WGA's AI demands must be understood against this background of a profession already experiencing structural contraction.
What AI Could Do to Writing Work
The studios' interest in AI was specific and financially motivated. By 2023, large language models — most visibly through ChatGPT and competing systems — had demonstrated the ability to generate coherent prose, plot outlines, character dialogue, and draft scripts at speeds no human writer could match and at a cost approaching zero per unit of output. The AI-generated content was not at the level of quality associated with skilled professional writers, but it was adequate for certain purposes: generating premise options, breaking story at a basic level, producing first drafts that could be revised, and generating content for lower-tier streaming products where production volume matters more than distinctive quality.
The scenario the WGA feared was specific and financially devastating: studios could use AI to generate first-draft scripts, then hire a single WGA-minimum writer to revise and polish the AI output, claiming credit while dramatically reducing the total writing labor required. A 10-episode season that previously required a writers' room of eight people for five months could potentially require one or two writers for a shorter period, with AI doing the heavy lifting of initial draft generation. The writing credits and the residuals — which flow to writers based on credit — would remain with the humans, but the total amount of paid work would collapse.
This is precisely the displacement-through-augmentation mechanism described in the chapter: not eliminating the category of "writer" entirely, but using AI to allow fewer writers to produce the same output, reducing total employment and total compensation in the profession.
The Strike: Key Demands and Studio Positions
The WGA's AI Proposals
The WGA went into negotiations with specific AI proposals, which represented a sophisticated understanding of both the technology and the economic threat. The core demands:
AI cannot write or rewrite literary material. This was the foundational demand — AI-generated text could not substitute for writer work on projects covered by WGA agreements. The union was not demanding that AI be banned from the industry, but that it not be used as a replacement for covered writers.
AI-generated material cannot be used as source material. This addressed a specific loophole: a studio could theoretically give writers AI-generated scripts and tell them to adapt these "source materials" into new scripts, paying adaptation rates rather than original script rates — lower rates for what would be essentially editing AI output. The WGA demanded this be prohibited.
Disclosure requirements. Writers should be told if AI-generated material is being given to them and should not be required to use AI tools in their work.
Residual and credit protections. The fundamental concern that AI-generated content would be used in ways that diluted the credit and residual structures that provide writers' long-term income.
The Studios' Positions
The AMPTP (Alliance of Motion Picture and Television Producers), representing the major studios, initially resisted WGA's AI proposals on multiple grounds. Studios argued that AI tools were still evolving and that contractual restrictions premised on current AI capabilities could become outdated quickly. They resisted restrictions that might hamper their ability to use AI in pre-development work (generating premises, brainstorming) before human writers were hired. They expressed concern that overly broad prohibitions would restrict legitimate uses — AI tools for research, for spell-checking, for scene breakdown and scheduling logistics.
Behind these stated positions were obvious financial interests. The potential labor cost savings from AI-assisted script development were substantial. And the studios were competing with each other in a streaming environment where production volume was important and profit margins were under pressure from subscriber dynamics and content costs.
The Settlement and Its Terms
What the WGA Won
When the WGA settled with the AMPTP in late September 2023 after 148 days, the AI provisions represented genuine wins — though more limited than some initial reporting suggested.
The settlement included:
Prohibition on AI writing literary material. Companies cannot use AI systems to write or rewrite literary material. This is a strong protection — it directly prevents the scenario of studios using ChatGPT to generate scripts under WGA-covered agreements.
AI content cannot be "source material." Studios cannot give writers AI-generated scripts or other AI material and characterize it as "source material" for adaptation purposes, preventing the rate reduction strategy.
No requirement to use AI. Writers cannot be required to use AI tools as part of their work without consent.
Disclosure obligation. If a company provides writers with AI-generated material, it must disclose that fact, and writers can decline to use the material.
Annual meetings. The contract required annual meetings between companies and the WGA to discuss AI developments — an ongoing mechanism for the union to monitor and respond to evolving AI capabilities.
No diminishment of credit or compensation. AI-generated content cannot be used in ways that diminish writers' credits or residuals.
What the WGA Did Not Win
Equally important is what the settlement did not accomplish. The WGA sought but did not obtain:
A prohibition on companies using AI to generate writing for non-WGA covered purposes (development work before writers are hired, for example) — studios retained the right to use AI in pre-development stages where WGA agreements do not apply.
Restrictions on AI training using writers' work — the agreement did not prohibit studios from using existing scripts and other literary material to train AI models, a significant concern for writers who saw their existing work as potential training data for systems that would displace them.
An AI royalty or participation for writers — some union advocates had proposed that writers should receive a portion of any savings realized from AI deployment in their industry; this was not achieved.
The protections apply only to WGA members working under WGA agreements — a small fraction of all writing workers. The millions of freelance writers, content creators, journalists, marketing copywriters, and others who write professionally but are not covered by collective bargaining have no equivalent protections.
SAG-AFTRA's Resolution
SAG-AFTRA, the actors' union that joined the strike in July 2023, settled in November with AI provisions focused on the specific threat facing actors: digital replicas of their likenesses and voices. The settlement required:
Informed consent before an actor's digital likeness or voice is created using AI for use in covered projects.
Compensation for the use of digital replicas, at rates that must be negotiated.
Restrictions on synthetic performers — performers generated entirely by AI rather than captured from a human — including obligations to the union when such performers are used and consent requirements when a synthetic performer replicates a recognizable real actor's likeness.
The actors' AI provisions addressed a different but related scenario: studios using AI to create digital replicas of deceased actors, to replace living actors with digital versions in new productions, or to generate background performers without hiring human extras.
Analysis: What This Case Teaches About AI and Labor
The Power of Collective Action
The WGA strike demonstrates that organized collective action can establish meaningful constraints on AI deployment — but only where workers have the institutional infrastructure to negotiate (union organization, established bargaining relationships, the ability to withhold labor effectively) and the market power to impose real costs on the other party (which Hollywood's global production economics made possible).
This is a significant constraint on generalizability. Hollywood writers are a well-educated, relatively well-compensated professional group with decades of collective bargaining history, strong internal solidarity, and work product that is genuinely irreplaceable in the short term (studios cannot simply import replacement writers from lower-cost markets because the creative product is market-specific and quality-dependent). They also had leverage because the timing — at the beginning of a new AI wave — allowed them to establish norms before AI capabilities became so entrenched that reversing course was economically impossible.
Workers in other sectors facing AI displacement often lack these conditions. Call center workers, data entry processors, administrative staff in white-collar settings — the most immediately at-risk categories — are largely non-union, in highly commoditized labor markets where replacement is easier, and have less market power to impose costs on employers considering AI deployment.
The Limits of Contractual AI Protection
The protections the WGA won are real but narrow in scope and time. They apply to the specific guild and specific production context. As AI capabilities advance, the question of what constitutes "writing" by AI versus AI-assisted human writing will become more contested. The annual meeting mechanism provides a forum for ongoing negotiation, but the balance of power in those negotiations is not fixed.
More fundamentally, the WGA provisions do not address the underlying business model pressure that drives studios toward AI adoption — the economics of streaming production, the competition for subscriber growth, the capital-market pressure for cost reduction. Those pressures will persist. The contract provisions create some friction against the cheapest possible AI displacement strategies, but they do not alter the underlying economic incentives.
The Question of AI Training Data
The WGA's failure to win protections against AI training on writers' work points to a broader unresolved issue in creative labor and AI. Virtually every major AI company has trained its models on text drawn from the internet — which includes decades of published creative work, news articles, screenplays, and other professional writing. The writers whose work trained the models received nothing; the companies that trained the models captured enormous commercial value. The legal questions about whether this training constitutes copyright infringement are before the courts in multiple jurisdictions; the ethical questions about whether it is appropriate to profit commercially from using workers' creative output as training data for systems designed to reduce demand for those workers are clear, even if the legal analysis is not.
Several lawsuits filed in 2023-2024 address precisely this question. The Authors Guild, individual authors, and news organizations have filed suits against major AI companies claiming that training on their copyrighted work without permission or compensation violates copyright law. These cases will shape the legal landscape; their resolution will have significant consequences for both AI development and creative labor.
The Creative Quality Question
One dimension of the WGA case that deserves specific attention is the quality question that underlies the labor economics. The studios' calculus on AI adoption depends critically on whether AI-generated content is "good enough" for their purposes. For the highest-prestige productions — the films and series competing for awards, for critical acclaim, for cultural significance — the answer appears to be no; audience attention and critical recognition require the distinctive human creative vision that current AI cannot replicate. But for volume content — second-tier streaming series, franchise extensions, format-driven genre productions — the quality bar may be lower, and AI-generated material, even if not excellent, may be adequate.
This creates a bifurcation risk in creative labor markets: high-end creative work remains premium, human, and well-compensated; volume creative work is increasingly AI-assisted or AI-generated, with declining human labor and compensation. For writers near the top of the profession, this may not change much. For writers in the middle and lower tiers — the majority of the profession — it represents a significant structural threat.
Implications for Business Professionals
The WGA case offers several specific lessons applicable to business professionals considering AI deployment in their organizations.
Transparency about displacement intent matters. The studios' position was weakened by their apparent reluctance to be transparent about their AI deployment intentions. Workers who believe they are being kept in the dark about plans that will affect their employment are more likely to seek protective collective action and less likely to cooperate with AI integration efforts. Organizations that communicate honestly about AI plans — including their workforce implications — are more likely to achieve productive AI integration with worker cooperation.
Negotiation is possible. Many business leaders have adopted a stance that AI deployment is inevitable and that workers simply must adapt. The WGA case demonstrates that negotiation about the terms of AI deployment is possible and can produce agreements that both protect workers and allow organizations to pursue legitimate AI productivity goals. The binary of "ban AI" vs. "accept displacement without conditions" is a false choice.
The training data issue will not go away. Organizations that use AI systems trained on workers' or creators' work without compensation or consent face ongoing legal and reputational risk. The question of appropriate compensation for the use of professional work as AI training data is actively litigated and likely to produce legal requirements eventually.
Sector-specific approaches are necessary. The appropriate AI governance framework for creative industries differs substantially from the appropriate framework for financial services, healthcare, or logistics. Business professionals should be skeptical of universal AI deployment frameworks and attentive to the specific characteristics of their sector — the nature of the work, the value of human judgment and creativity, the market power of workers, and the quality dynamics that determine whether AI substitution actually serves business interests.
Discussion Questions
- The WGA won contractual AI protections, but the vast majority of professional writers are not covered by WGA agreements. What mechanisms — if any — could extend comparable protections to non-union workers in creative industries?
- Should AI companies that train models on copyrighted creative work be required to compensate the creators whose work was used? What would a fair compensation scheme look like?
- The studios argued that restrictions on AI in pre-development stages were unreasonable because WGA agreements do not apply before writers are hired. Is this a legitimate distinction or a loophole that undermines the purpose of the protections?
- How should quality considerations factor into business decisions about AI deployment in creative work? What responsibility do companies have to audiences or customers for the quality difference between AI-assisted and fully human creative production?
- The WGA settlement requires annual meetings on AI — an ongoing negotiation mechanism rather than static rules. What are the advantages and disadvantages of this approach compared to fixed contractual provisions?