Case Study: TikTok and Teen Mental Health — Evidence and Response

"We are running one of the largest uncontrolled experiments in human history on our children." — US Surgeon General Vivek Murthy, Advisory on Social Media and Youth Mental Health (2023)

Overview

In May 2023, the US Surgeon General issued a formal advisory on social media and youth mental health — a rare step reserved for urgent public health concerns. The advisory stated that while social media can provide benefits (connection, community, self-expression), there is "growing evidence that social media use is associated with harm to young people's mental health." The Surgeon General specifically called out features designed to maximize engagement, exposure to inappropriate content, and the normalization of harmful behaviors.

TikTok, the fastest-growing social media platform among young people, became a focal point of the debate. With over 1 billion monthly active users and a median user age significantly younger than that of other major platforms, TikTok's algorithmic recommendation system, its signature "For You Page," raised pointed questions about the intersection of platform design, data collection, and adolescent well-being.

This case study examines the evidence, the platform's response, the regulatory landscape, and the deeper governance questions at stake.

Skills Applied

  • Evaluating the quality and limitations of empirical evidence
  • Distinguishing correlation from causation in policy contexts
  • Analyzing corporate response strategies
  • Applying data governance frameworks to platform design


The Evidence Landscape

What the Research Shows

The body of research on social media and youth mental health is large, growing, and contested. Understanding the state of evidence is essential for responsible governance.

Correlational studies (many). Multiple cross-sectional studies have found associations between heavy social media use and symptoms of depression, anxiety, loneliness, sleep disruption, and body dissatisfaction among adolescents. The Surgeon General's advisory cited studies showing that adolescents who spend more than three hours per day on social media face double the risk of depression and anxiety symptoms compared to those who use it less.

Longitudinal studies (fewer, stronger). Longitudinal studies — which follow the same individuals over time — provide stronger (but not definitive) evidence. A 2019 study by Orben and Przybylski, using data from over 12,000 UK adolescents, found that social media use explained less than 0.4% of variation in well-being — a statistically significant but practically small effect. A 2022 meta-analysis by Hancock, Liu, and colleagues found small negative associations between social media use and well-being, but with substantial heterogeneity across studies.
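
To see how small "less than 0.4% of variation" is, it helps to convert variance explained back into a correlation coefficient. The sketch below is illustrative arithmetic only, using the figure quoted above.

```python
# Variance explained (R^2) relates to a Pearson correlation r by |r| = sqrt(R^2).
r_squared = 0.004        # "less than 0.4% of variation in well-being"
r = r_squared ** 0.5     # implied correlation magnitude

print(f"R^2 = {r_squared:.3f} implies |r| <= {r:.3f}")
# |r| of roughly 0.063: detectable in a sample of 12,000+ adolescents,
# but small in practical terms.
```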

Experimental studies (few, limited). True experiments — randomly assigning participants to use or not use social media — are rare and face obvious ethical and practical constraints. A widely cited 2018 study by Hunt and colleagues at the University of Pennsylvania found that limiting social media use to 30 minutes per day for three weeks significantly reduced loneliness and depression. However, the sample was small (143 participants), limited to college students (not adolescents), and short-term.

Internal corporate research (leaked). Frances Haugen's disclosures included internal Facebook research showing that 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. Instagram's own researchers wrote: "We make body image issues worse for one in three teen girls." The internal research was more alarming than published academic studies — possibly because it had access to granular platform data that external researchers cannot obtain.

The Complexity Problem

The evidence resists simple narratives. Several complicating factors make definitive causal claims difficult:

Reverse causation. Does social media use cause poor mental health, or do young people experiencing poor mental health turn to social media for connection and distraction? Both pathways are plausible, and most studies cannot distinguish between them.

Heterogeneous effects. Social media affects different young people differently. Some find community and support; others experience bullying, comparison, and exploitation. Aggregate statistics can mask these differential effects. A platform that is beneficial for 60% of young users and harmful for 15% might look "neutral" in aggregate while causing severe harm to millions.
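
The arithmetic behind this masking is worth making concrete. The sketch below uses the 60% and 15% shares from the paragraph above; the effect magnitudes are invented for illustration.

```python
# Hypothetical per-group effects on a well-being scale. The group shares
# mirror the 60% / 15% example above; the magnitudes are invented.
groups = [
    ("benefits",   0.60, +0.20),   # (label, share of users, average effect)
    ("unaffected", 0.25,  0.00),
    ("harmed",     0.15, -0.50),
]

aggregate = sum(share * effect for _, share, effect in groups)
print(f"Aggregate effect: {aggregate:+.3f}")   # +0.045, which looks roughly neutral

# A near-zero average coexists with a substantial negative effect
# concentrated in 15% of users. On a platform with hundreds of millions
# of young users, that minority is millions of people.
```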

Algorithmic amplification. The harm may not come from social media use per se but from what the algorithm serves. TikTok's For You Page algorithm learns individual preferences at extraordinary speed, creating a feedback loop: the more a user engages with a type of content, the more of that content they receive. For a teenager exploring content about body image, eating disorders, or self-harm, the algorithm can create a rapidly narrowing tunnel of increasingly extreme content. The platform's design — not just the user's choice — shapes the experience.
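
A stylized simulation makes the feedback loop concrete. This is not TikTok's algorithm, which is proprietary; it shows only the generic dynamic described above: when engagement feeds back into recommendation weights, the feed concentrates on whatever the user lingers on.

```python
import random

# Stylized engagement feedback loop (NOT TikTok's actual system).
# Topics start with equal weight; each recommendation of a topic the
# user engages with increases that topic's future probability.
topics = ["sports", "comedy", "body_image", "music"]
weights = {t: 1.0 for t in topics}
# Hypothetical per-topic engagement, e.g. normalized dwell time:
engagement = {"sports": 0.1, "comedy": 0.1, "body_image": 0.9, "music": 0.1}

random.seed(0)
for _ in range(200):
    topic = random.choices(topics, weights=[weights[t] for t in topics])[0]
    weights[topic] += engagement[topic]   # engagement feeds back into weight

share = weights["body_image"] / sum(weights.values())
print(f"Share of feed weight on body_image after 200 views: {share:.0%}")
# The feed narrows toward the highest-engagement topic without the user
# ever searching for it.
```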

Measurement challenges. "Social media use" is typically measured by self-reported time spent on platforms — a crude measure that does not distinguish between passive scrolling (associated with worse outcomes) and active interaction (sometimes associated with better outcomes), or between different types of content.

Dr. Adeyemi, discussing this evidence in class, offered a characteristically precise assessment: "The question is not whether social media affects youth mental health. Of course it does — any major social environment affects mental health. The question is how, for whom, through what mechanisms, and at what scale. Those are the questions that responsible governance requires us to answer before we act. But they are also the questions that platform companies have the most data to answer — and the least incentive to."


TikTok's Data Practices

What TikTok Collects

TikTok's data collection from young users is extensive:

Content interaction data. Every video watched, every like, share, and comment, and the duration of each view. The algorithm weights watch time heavily: a video watched to completion sends a stronger signal than one scrolled past (a hypothetical scoring sketch follows this list).

Device and network data. Device identifiers, operating system, keystroke patterns, battery state, audio settings, and connected devices. IP address, mobile carrier, time zone, and network type.

User-generated content. Videos created (including drafts that are never posted), audio recordings, text in messages, and images in profiles.

Inferred data. TikTok's algorithm creates detailed inferred profiles — estimates of age, gender, interests, emotional state, and susceptibility to certain content categories — that go far beyond what users explicitly provide. These inferences are used to optimize the recommendation system.
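
The watch-time weighting described under "Content interaction data" above can be expressed as a simple scoring function. The sketch below is hypothetical: the signal names and weights are invented, since TikTok's actual feature set and weights are not public.

```python
# Hypothetical engagement score. Signal names and weights are invented;
# TikTok's real features and weights are not public.
def engagement_score(watch_seconds: float, video_seconds: float,
                     liked: bool, shared: bool, commented: bool) -> float:
    completion = min(watch_seconds / video_seconds, 1.0)
    score = 3.0 * completion            # watch time weighted heavily
    score += 1.0 if liked else 0.0
    score += 2.0 if shared else 0.0
    score += 1.5 if commented else 0.0
    if watch_seconds > video_seconds:   # rewatching: an especially strong signal
        score += 2.0
    return score

# A video watched to completion outweighs one liked but scrolled past:
print(engagement_score(30, 30, liked=False, shared=False, commented=False))  # 3.0
print(engagement_score(2, 30, liked=True, shared=False, commented=False))    # ~1.2
```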

The Algorithm's Role

TikTok's recommendation algorithm is widely considered the most sophisticated in the social media industry. Its ability to learn individual preferences within minutes of a new user's first session makes it both compelling and concerning.

For adolescent users, the algorithm's speed creates a specific risk. A teenager who pauses slightly longer on content about dieting, body image, or self-harm sends a signal that the algorithm interprets as interest. Within a short period, the For You Page can shift toward a concentrated feed of such content. The teenager may not have deliberately sought this content — the algorithm inferred their interest from behavioral signals and amplified it.

Investigations by journalists and researchers have documented this pattern. In 2021, the Wall Street Journal created bot accounts posing as 13-year-old users and documented how quickly TikTok's algorithm served content about eating disorders, self-harm, and body dysmorphia after the accounts showed brief interest in such topics. A 2022 investigation by the Center for Countering Digital Hate found that TikTok recommended self-harm content to new teen accounts within 2.6 minutes of initial engagement.


The Regulatory Landscape

FTC and Department of Justice Actions

In 2019, the FTC fined TikTok's predecessor (Musical.ly) $5.7 million for COPPA violations — at the time, the largest COPPA penalty ever. The violations included collecting personal information from children under 13 without parental consent and failing to delete children's data upon parental request.

By 2024, the stakes had escalated. The FTC referred a new case to the Department of Justice alleging ongoing COPPA violations, including that TikTok continued to collect children's data despite knowledge that children were on the platform and failed to honor deletion requests.

State-Level Actions

Multiple US states took legislative and legal action against TikTok and other platforms, alleging harm to children:

  • Utah, Arkansas, and other states passed laws restricting minors' access to social media platforms, some requiring parental consent for users under 18.
  • A coalition of state attorneys general investigated TikTok's impact on youth mental health, alleging that the company designed its platform to be addictive to children while knowing about the mental health consequences.

International Regulatory Action

  • The UK ICO fined TikTok 12.7 million pounds in 2023 for processing children's data without parental consent and without adequate age verification.
  • The Irish Data Protection Commission fined TikTok 345 million euros for GDPR violations related to children's data, including default public settings for children's accounts and a failure to provide transparent information.
  • The EU Digital Services Act required TikTok, as a "very large online platform," to conduct systemic risk assessments specifically addressing risks to minors.

TikTok's Response

TikTok implemented a series of changes:

  • Default screen time limits for users under 18 (60 minutes per day, after which users must enter a passcode to continue); a minimal sketch of this gate appears after this list.
  • Restrictions on notifications for younger users.
  • A "Family Pairing" feature allowing parents to link their accounts to their teen's account and set controls.
  • Content filters and keyword restrictions for younger users.
  • Expanded "well-being" content guidelines and partnerships with mental health organizations.
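
Mechanically, the default screen-time limit is a daily usage counter with a passcode override, which is part of why it is easy to bypass, as noted below. A minimal sketch, with all details (limit, passcode handling, reset rule) invented for illustration:

```python
from datetime import date

DAILY_LIMIT_MINUTES = 60   # the default for under-18 accounts per the text

class ScreenTimeGate:
    """Toy daily screen-time gate with a passcode override."""

    def __init__(self, passcode: str):
        self._passcode = passcode
        self._day = date.today()
        self._minutes_used = 0.0
        self._overridden = False

    def record_usage(self, minutes: float) -> None:
        today = date.today()
        if today != self._day:   # reset the counter each day
            self._day, self._minutes_used, self._overridden = today, 0.0, False
        self._minutes_used += minutes

    def allowed(self) -> bool:
        return self._overridden or self._minutes_used < DAILY_LIMIT_MINUTES

    def override(self, passcode: str) -> bool:
        # Teen users set their own passcode, so the gate is a speed bump,
        # not a barrier.
        self._overridden = (passcode == self._passcode)
        return self._overridden
```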

These changes were significant but raised the question of whether they addressed symptoms or causes. Screen time limits can be bypassed. Content filters can be circumvented. The core algorithmic system — which learns to serve engagement-maximizing content, regardless of its impact on well-being — remained structurally unchanged.


The Deeper Governance Questions

Data as the Engine

The governance challenge is not merely that TikTok collects data from young users. It is that the data collection enables the algorithmic amplification system that creates harm. Each data point — every watch duration, every scroll speed, every paused moment — feeds a recommendation engine whose objective function is engagement, not well-being.
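
The phrase "objective function" has a precise meaning here, and contrasting two toy objectives shows what is at stake. In the sketch below, predicted_engagement and predicted_harm_risk stand in for model outputs a platform might compute; both names, and the penalty weight, are hypothetical, and reliably estimating a harm-risk term is itself an open problem (see discussion question 2).

```python
# Two hypothetical ranking objectives for a candidate video.
def engagement_objective(predicted_engagement: float,
                         predicted_harm_risk: float) -> float:
    return predicted_engagement                # harm risk is simply ignored

def wellbeing_adjusted_objective(predicted_engagement: float,
                                 predicted_harm_risk: float,
                                 penalty: float = 5.0) -> float:
    # The penalty weight is a policy choice, not an established value.
    return predicted_engagement - penalty * predicted_harm_risk

# The same high-engagement, high-risk video ranks very differently:
video = {"predicted_engagement": 0.9, "predicted_harm_risk": 0.3}
print(engagement_objective(**video))           #  0.9 -> top of the feed
print(wellbeing_adjusted_objective(**video))   # ~ -0.6 -> suppressed
```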

This connects directly to the VitraMed thread. Just as VitraMed's predictive algorithm optimized for aggregate performance while under-serving specific populations, TikTok's algorithm optimizes for aggregate engagement while potentially harming specific users. The difference is that TikTok's "patients" are children and teenagers, and the "treatment" is a content feed whose logic they cannot see.

Mira, reflecting on the parallel, noted: "At VitraMed, at least the stated purpose was health outcomes — even when the algorithm fell short. The stated purpose of TikTok's algorithm is engagement. When engagement and well-being conflict, the algorithm chooses engagement every time. That's not a bug. It's the objective function."

Children and teenagers "consent" to TikTok's data practices by agreeing to terms of service they cannot comprehend. Parental consent is mediated through Family Pairing — an opt-in feature that most parents never activate. The consent fiction identified throughout this textbook reaches its most acute expression when the data subjects are minors, the data practices are invisible, and the algorithm operates on inferred data that the user never knowingly provided.

The Accountability Gap

When a teenager's mental health is harmed by algorithmic content amplification, who is accountable? The platform, which designed the algorithm? The parent, who "should have" supervised? The regulator, who didn't act fast enough? The teenager, who "chose" to use the platform? The accountability gap distributes blame so widely that, in practice, no one is held responsible for the individual harms that aggregate into a public health crisis.


Discussion Questions

  1. The evidence on social media and youth mental health is correlational, not definitively causal. Does this mean policymakers should wait for stronger evidence before acting? What framework for decision-making under uncertainty is appropriate here? Consider the precautionary principle from Chapter 38.

  2. TikTok's algorithm optimizes for engagement. Should platforms be required to use different objective functions for minor users — optimizing for well-being rather than engagement? How would "well-being" be defined and measured?

  3. The FTC's largest COPPA fine was $5.7 million — a fraction of TikTok's daily revenue. Are financial penalties sufficient deterrents for platforms of this scale? What alternative enforcement mechanisms might be more effective?

  4. Eli observed that the debate about social media and youth mental health tends to focus on individual behavior (screen time, parental controls) rather than structural design (algorithmic amplification, business models). Why does the individual framing dominate public discourse, and what would a structural framing look like in practice?

  5. Consider the heterogeneity of effects. If a platform benefits most users but harms a minority, how should governance frameworks weigh aggregate benefit against concentrated harm? Does the answer change when the users are children?


Further Investigation

  • Read the US Surgeon General's 2023 Advisory on Social Media and Youth Mental Health (available at surgeongeneral.gov).
  • Research the Wall Street Journal's "TikTok Rabbit Hole" investigation and the Center for Countering Digital Hate's study on algorithmic content recommendations.
  • Compare TikTok's "Family Pairing" feature with similar parental control tools on Instagram and YouTube. Which provides the most meaningful governance, and why?