Case Study 01: Tristan Harris and the Slide That Changed Everything
A Presentation Nobody Asked For
In the spring of 2013, a twenty-seven-year-old product designer at Google named Tristan Harris sat down to write a presentation he had not been assigned and had not been asked to write. His job title was "Product Philosopher" — a role that had been created for him after Google acquired his startup Apture in 2011, and a role whose organizational standing was, as Harris would later describe it, somewhat unclear. He had the title. The structural authority that would have allowed him to act on the title's implied mandate was less certain.
What Harris wrote over several weeks in early 2013 was a 141-slide deck titled "A Call to Minimize Distraction & Respect Users' Attention." It began with a question that Harris later described as embarrassingly simple: "What responsibility do technologists have for the minds of the people who use what they build?"
The fact that this question needed to be asked — that it was not already the organizing frame of Google's product development — tells us something important about the culture in which Harris was working. Google in 2013 was a company of enormous power and genuine engineering brilliance that had produced products used by well over a billion people. It was also a company whose primary revenue mechanism — search advertising, increasingly supplemented by display advertising — was predicated on capturing user attention and monetizing it. The people who designed Google's products were, for the most part, not thinking about what their work was doing to human attention. They were thinking about what their work was doing to engagement metrics.
Harris was different, and the difference was traceable to his training. Before joining Google, he had studied at Stanford's Persuasive Technology Lab under B.J. Fogg, who had spent years mapping how technology could be designed to change human behavior. Fogg's 2003 book "Persuasive Technology" had introduced a rigorous vocabulary for what had previously been practiced intuitively: how interfaces could exploit psychological tendencies to produce desired user behaviors. Harris absorbed this framework and found himself, at Google, surrounded by products that were doing exactly what Fogg described — often without the people designing them being explicitly aware of the mechanisms involved.
This was the observation that animated the presentation. It was not, primarily, an accusation. It was a diagnosis.
What the Presentation Actually Said
The presentation's opening framing has been widely quoted: "Never before in history have a handful of people had such a direct influence over the thoughts of billions." Harris was talking about the engineers and product designers at a small number of technology companies whose products — not just Google's but Facebook's, Twitter's, Instagram's, YouTube's — were shaping how billions of people spent their attention every day.
The argument proceeded in several steps.
First, Harris laid out what he called the "magician's trick" at the heart of technology product design. The technology industry had become expert at exploiting the gap between users' immediate desires and their reflective preferences. A user who checks their phone for a moment might reflectively prefer to be having a conversation; in the moment, however, the phone captures their attention before the reflective preference can assert itself. This gap — between the impulsive and the reflective — was being systematically exploited.
Harris drew on Fogg's "captology" framework extensively. He showed how variable reward schedules produced compulsive checking behavior. He documented how the social comparison mechanics embedded in platforms generated anxiety that drove re-engagement. He traced how notification design was not simply a communication tool but an interruption machine, optimized not for when the user needed information but for when the interruption would most effectively pull them back to the platform.
The most significant section of the presentation was what Harris called "the responsibility of the gun shop owner." He argued — with considerable care, because he knew his audience — that technologists were not merely building neutral tools. They were building environments that shaped behavior at scale. A gun shop owner who sold a weapon used in a crime could not be held responsible for the crime itself, but a gun shop owner who designed their shop to maximize impulsive purchases of the most lethal weapons available would bear some share of moral responsibility for what followed. The question was whether technologists were more like the former or the latter — and Harris argued, with documentation, that they were increasingly like the latter.
The second major section of the presentation proposed what Harris called a "respect for users' attention" framework. This was the seed of what would later become Time Well Spent. Harris argued that every design decision should be evaluated against a single question: "Is this serving the user's actual goals, or is it serving the platform's engagement metrics at the user's expense?" He proposed specific design principles: notifications that respected users' stated preferences rather than the platform's re-engagement needs; interfaces that provided natural stopping points rather than infinite scroll; features that helped users do what they came to do rather than pulling them toward ever-more-engagement.
The third section was a call to action, addressed to his colleagues. Harris did not frame this as a regulatory or policy argument. He framed it as a professional ethics argument, drawing on analogies to medicine and architecture: professions that had developed explicit ethical frameworks for dealing with the power they held over people's lives. He was asking his colleagues to take seriously the idea that product design was such a profession — that it carried ethical weight proportional to its influence.
The Viral Moment
Harris sent the presentation to approximately ten colleagues in March 2013. He expected it to circulate modestly within a small group of people who shared his preoccupations.
Within days, it had been forwarded across Google's internal communication systems to thousands of employees. Within weeks, it had been read by what Harris later estimated to be at least five thousand people inside the company. It reached senior vice presidents and ultimately Google's then-CEO Larry Page, who reportedly called it "thought-provoking." Sheryl Sandberg, then COO of Facebook, shared it with her team after it was forwarded to her by a mutual connection.
The presentation generated a wave of internal discussion that Harris would later describe with some irony as "the most engaged I've ever seen a technology company about the concept of engagement." People wrote long, thoughtful responses. Teams sent invitations to discuss the ideas. A small internal group formed around Harris to explore what a design ethics function at Google might look like.
The internal attention was, by any measure, remarkable. A self-initiated document from a mid-level employee had penetrated the highest levels of one of the world's most powerful companies and generated genuine, substantive engagement from smart people who, on the evidence, found the ideas compelling.
What Changed — And What Did Not
Here is the honest account of what happened next, which Harris himself has been candid about: structurally, not very much.
Google did not change its advertising model. It did not change the engagement metrics by which it evaluated product success. It did not retool its notification systems or its algorithmic feeds. It did not create a powerful, structurally significant design ethics function with the authority to overrule product decisions. The teams that had been most engaged with the presentation returned, for the most part, to their existing priorities.
Why? Harris's own explanation, developed over several years of public reflection, points to the structural logic of the business. Google's revenue was tied to advertising. Advertising revenue was tied to engagement. Engagement was maximized by the very features Harris's presentation had identified as problematic. The people who found Harris's ideas compelling were also the people whose career advancement depended on hitting engagement metrics. There was no mechanism by which abstract ethical conviction could compete with concrete quarterly numbers.
Harris was given a somewhat expanded internal role, consulting with teams on design ethics questions. He describes this period as useful but frustrating — he could raise concerns, he could occasionally influence a feature decision, but he was working at the margins of a system whose core logic he had no power to change.
He left Google in 2015.
What the Presentation Spawned
If the presentation changed little inside Google, it changed a great deal outside. Harris has been forthcoming about the trajectory: the internal failure taught him something about where change actually comes from.
In 2013 and 2014, Harris began speaking publicly about the ideas in the presentation. His 2016 article "How Technology Hijacks People's Minds," published on Medium and later republished by Thrive Global, brought his arguments to a general audience and attracted enormous attention — the piece has been read millions of times and translated into dozens of languages. His 2017 TED talk "How a handful of tech companies control billions of minds" became one of the platform's most-viewed talks, with over 45 million views as of 2024.
In 2018, Harris co-founded the Center for Humane Technology with Randima Fernando and several other former technology industry insiders, including Aza Raskin (who had invented the infinite scroll) and Renée DiResta (a researcher on algorithmic manipulation). The CHT's stated mission was to reverse the digital attention crisis by shifting industry incentives, reforming platform design, and supporting regulatory action.
The CHT quickly established itself as the most prominent external advocacy organization focused specifically on platform design ethics. It produced research, hosted events, testified before Congress, and maintained public pressure on technology companies in a way that would have been impossible from inside any single company. Harris testified before the Senate Commerce Committee in June 2019 and before the House Energy and Commerce Committee in multiple hearings; his framing of the attention economy, honed over years, proved effective in translating technical concepts for legislators who needed to understand what they were regulating.
The 2020 Netflix documentary "The Social Dilemma," which Harris co-produced and in which he and other former technology industry insiders spoke at length about the harms of engagement-maximizing design, reached an audience of tens of millions globally and is regularly cited as having shifted public awareness of these issues more substantially than any single prior intervention.
What We Learn
Harris's trajectory from internal advocate to external critic contains several lessons that are applicable well beyond his specific story.
The first is about the structure of institutional change. Ideas that are acknowledged internally but conflict with institutional incentives can circulate widely without producing structural change. Thousands of Google employees found Harris's arguments compelling. That widespread internal acknowledgment was insufficient to change the system, because the system's core logic — the revenue model, the success metrics, the career incentives — remained intact. Individual conviction, however widely shared, is not sufficient to change institutions whose material interests run in a different direction.
The second is about the value of external pressure. Harris's public advocacy has almost certainly produced more structural change than his continued internal advocacy would have — not because his arguments are different, but because external pressure can change the political and regulatory environment in ways that internal advocacy cannot. The CHT's contribution to the regulatory environment that produced the EU's Digital Services Act (2022), the UK's Online Safety Act (2023), and multiple US state-level children's privacy bills represents a form of leverage that no internal ethicist at any single company could have generated.
The third is about honesty and its limitations. Harris's presentation was, by all accounts, factually accurate, carefully argued, and genuinely persuasive to people who read it. It did not produce the change it argued for. This is important to hold clearly: being right is not sufficient. Structural change requires structural levers, and identifying the levers is as important as being right about the diagnosis.
The fourth is about what was not in the presentation. Harris's 2013 document focused primarily on individual user harm — on what platform design was doing to individual attention. It did not, in 2013, focus extensively on the broader social and political harms — the algorithmic amplification of extremism, the erosion of shared information environments, the manipulation of elections — that later became central to public discourse about platform accountability. The framing expanded over the following decade, in Harris's own work and in the field more broadly. The 2013 presentation was the beginning of an argument, not its completion.
What a young product designer wrote in the spring of 2013, without assignment and without authorization, helped create the vocabulary in which we now discuss what social media platforms do to human attention. That is not nothing. It is also a demonstration of both the power and the limits of individual ethical action inside institutions whose incentives run in the opposite direction.
Discussion questions for this case study appear in the exercises section. Students who wish to read the full presentation can find references in the further-reading section, though the document itself remains internal to Google. Harris's public essays and testimony draw heavily on it and are publicly available.