In This Chapter
- 1. Introduction: Why Regulation Matters — and Why It's Hard
- 2. Section 230: The 26 Words That Made the Internet
- 3. GDPR: The European Data Privacy Framework
- 4. EU Digital Services Act: The Most Ambitious Platform Regulation
- 5. UK Online Safety Act: The Duty of Care Approach
- 6. Children's Online Protection: COPPA and Its Successors
- 7. Algorithmic Accountability: What It Would Actually Require
- 8. FTC Authority and Enforcement
- 9. International Comparison: Three Regulatory Philosophies
- 10. What Effective Regulation Would Actually Require
- Velocity Media: What Regulation Would Change
- Summary
- Key Terms
Chapter 38: Regulatory Approaches — What Government Can and Cannot Do
1. Introduction: Why Regulation Matters — and Why It's Hard
The previous two chapters examined what individuals can do: the environmental design strategies of digital minimalism, and the cognitive defense tools of inoculation and lateral reading. These individual approaches are real and valuable. They are also insufficient to address harms operating at the scale and structural depth documented in Parts II through V of this textbook.
This chapter turns to the regulatory landscape. What have governments done, are doing, or are attempting to do to constrain the harms that engagement-maximizing platform design produces? What are the limits of each approach? And what would genuinely effective platform regulation require?
The regulatory question is unavoidably political. Different political philosophies produce different regulatory instincts. The United States, shaped by the First Amendment and a history of suspicion toward government information control, has been slower to regulate platforms than Europe. The European Union, shaped by memories of fascist propaganda and a different balance between individual rights and collective welfare, has been more aggressive. China has implemented direct state control that solves some problems while creating others. None of these approaches is without tradeoffs.
This chapter tries to map these tradeoffs honestly rather than advocate for any particular political position. What is clear is that the question of platform governance is one of the defining policy challenges of our era, and that the absence of regulation is itself a policy choice — one that, as this book has documented, has costs.
Some orienting observations before we begin:
Regulations are slower than platforms but more durable. Laws persist. Platform features change weekly. A regulation enacted today addresses the platform of today; it may be obsolete before it's fully enforced. But it also creates baseline expectations and accountability structures that persist across platform iterations.
Most existing regulation was designed for a different information environment. The Communications Decency Act (1996), COPPA (1998), and the foundational interpretations of the First Amendment that constrain US regulatory approaches all predate the smartphone, social media, algorithmic curation, and engagement-maximizing design. Applying 1990s frameworks to 2020s problems produces awkward results.
Regulation creates incentives, not guarantees. Platform behavior is shaped by costs and incentives. Regulation changes those costs and incentives. Effective regulation doesn't require direct government control of platform decisions — it changes the economic landscape in which those decisions are made.
Dr. Aisha Johnson's audit at Velocity Media is now producing findings. As we examine the global regulatory landscape, consider which regulations would meaningfully constrain the practices she is documenting, and which would leave the most harmful practices untouched.
2. Section 230: The 26 Words That Made the Internet
Section 230 of the Communications Decency Act (1996) contains a single sentence that has governed the legal relationship between Internet platforms and content for nearly three decades:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Twenty-six words. These words gave Internet platforms something they had never had before: immunity from liability for content created by their users. If someone posts defamatory content on Facebook, Facebook is not the publisher of that defamation and cannot be sued as one. If someone posts illegal content on Twitter, Twitter is not legally responsible for that content in the same way a newspaper would be responsible for material it chose to publish.
The immunity was designed to encourage the development of the Internet. In 1996, the web was young, and there was genuine fear that holding platforms liable for user content would either drive them out of business (they couldn't monitor everything) or drive them toward censorship (they would over-remove anything potentially problematic to avoid liability). Section 230 threaded this needle by creating a system where platforms could moderate content without thereby becoming publishers responsible for all content.
What Section 230 Actually Says
Section 230 contains two distinct immunities that are often confused:
The publisher immunity (Section 230(c)(1)): Platforms are not treated as publishers or speakers of user-generated content. A platform cannot be held liable for what users post.
The Good Samaritan immunity (Section 230(c)(2)): Platforms are immunized from liability for actions taken "in good faith" to restrict access to content that is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." This provision was designed to allow platforms to moderate without the act of moderation making them liable as editors.
Together, these provisions created a remarkable situation: platforms can moderate content (removing what they find objectionable) without thereby taking on editorial liability for what remains, while simultaneously not being held responsible as publishers for what they don't remove. This is the legal architecture that made it possible for large-scale platforms to exist.
What Section 230 Has and Hasn't Done
Section 230 has been enormously valuable for enabling the development of the Internet as a participatory medium. Without it, hosting user-generated content at scale would be legally untenable in the US context.
But Section 230 was not designed for — and does not adequately address — the specific problems created by algorithmic amplification. The statute immunizes platforms for hosting content; it says nothing about the liability implications of platforms' decisions about which content to amplify to which users at what moments.
This is the key legal and policy question around Section 230 reform: does algorithmic amplification count as "publishing" in a way that should be treated differently from passive hosting? When a platform's recommendation algorithm decides to amplify a piece of conspiratorial content to millions of users based on its predicted engagement potential, is that different from a newspaper editor choosing to publish a story? Courts have generally held that it is not: the platform remains protected by Section 230 even when it amplifies. Critics argue this interpretation is wrong, and that the active role of algorithmic amplification transforms the nature of platform liability.
Reform Proposals
Several specific Section 230 reform proposals have received significant political attention:
EARN IT Act (2020, 2022): Proposed removing Section 230 immunity for platforms that failed to implement "best practices" against child sexual abuse material (CSAM), to be defined by a government commission. Critics argued this would effectively require platforms to weaken end-to-end encryption (since encrypted content cannot be scanned) and would create a government body that could define "best practices" broadly enough to influence content moderation generally.
KOSA (Kids Online Safety Act, 2023-2024): Proposed requiring platforms to have "duty of care" to prevent harm to minors, including specific harms related to mental health, and requiring parental controls, transparency reports, and safety-by-default settings for minors' accounts. Received bipartisan support but also significant criticism from civil liberties organizations concerned about overbroad content restriction.
Platform Accountability and Transparency Act (proposed): Would require platforms to share data with researchers, limiting the current information asymmetry between platforms and researchers trying to study their effects.
Targeted liability for algorithmic amplification: Various legal scholars have proposed carving out algorithmic amplification from Section 230 immunity, treating a platform's active decision to amplify content differently from passive hosting. No major legislation has yet embodied this theory, though Supreme Court cases (Gonzalez v. Google, 2023) have raised the question.
The Section 230 debate is complicated by the fact that different critics want reform for radically different reasons: conservatives arguing that platforms censor conservative speech and should lose immunity if they over-moderate; progressives arguing that platforms amplify harmful content and should have liability that creates incentives to reduce it. These critiques point in opposite directions — more moderation vs. less moderation — making coalition-building for any specific reform difficult.
3. GDPR: The European Data Privacy Framework
The General Data Protection Regulation, which came into force in May 2018, is the world's most comprehensive data privacy law and has had global effects beyond its EU jurisdiction.
GDPR's core premise is that personal data belongs to the individual data subject, not to the organization that collects it. This premise generates a set of specific rights:
Right of access: Individuals can request all personal data an organization holds about them.
Right to erasure (right to be forgotten): Individuals can request deletion of their personal data under specified conditions.
Right to data portability: Individuals can receive their data in a machine-readable format and transfer it to another service.
Right to object: Individuals can object to processing of their data for specific purposes, including direct marketing and profiling.
Consent requirements: Processing of personal data requires either explicit consent or another lawful basis. Consent must be freely given, specific, informed, and unambiguous. Pre-ticked boxes and bundled consents (agree to everything or use nothing) are not valid.
Privacy by design and default: Privacy protections must be built into systems from inception, and default settings must be the most privacy-protective available.
Data minimization: Organizations may collect only data necessary for the specified purpose.
Breach notification: Data breaches must be notified to regulators within 72 hours and to affected individuals without undue delay.
Penalties for GDPR violations can reach 4% of global annual turnover or 20 million euros, whichever is greater — enough to be meaningful even for the largest platforms.
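The "whichever is greater" structure means the cap scales with company size but never falls below a fixed floor. A minimal sketch of that arithmetic (the turnover figures below are illustrative, not actual company financials):

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine: 4% of global annual
    turnover or EUR 20 million, whichever is greater."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# Illustrative turnover figures only:
big = gdpr_max_fine(100_000_000_000)   # 4% branch dominates (about EUR 4 billion)
small = gdpr_max_fine(300_000_000)     # the EUR 20 million floor dominates
```

The floor matters for smaller firms, where 4% of turnover would otherwise be too small to deter; for the largest platforms, the 4% branch is what gives the regulation teeth.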
What GDPR Has and Hasn't Changed
GDPR has undeniably changed platform behavior in Europe. Consent banners are ubiquitous. Privacy settings are more accessible. Data retention policies have been updated. Some data uses that were previously standard have been discontinued.
But the practical impact has been more limited than the regulation's ambitious scope suggests:
The enforcement gap is significant. GDPR enforcement is delegated to national data protection authorities (DPAs), and the most important DPA — Ireland's, because many major US tech companies have their European headquarters there — has been chronically under-resourced and slow to impose significant penalties. Large fines (Meta was fined 1.2 billion euros in 2023, Amazon 746 million euros in 2021) make headlines but represent a small fraction of revenues for the largest platforms.
The consent system is functionally broken. GDPR's consent requirements have produced enormous numbers of cookie consent banners that users click through as quickly as possible, producing nominal "consent" that is neither freely given nor genuinely informed. Dark patterns in consent interfaces (large "Accept All" buttons, hidden "Reject" options, repetitive re-asking) have been the subject of enforcement actions but remain widespread.
Data processing has not fundamentally changed. Platforms process vast amounts of personal data for behavioral advertising and algorithmic targeting. GDPR has changed the legal basis for some of this processing (shifting from consent to legitimate interests in some cases) without necessarily reducing the processing itself.
The focus on consent misses algorithmic harms. GDPR addresses data collection and processing primarily through the lens of individual consent and data subject rights. It does not directly address the harm of algorithmic engagement optimization — the problem is not primarily that platforms collect too much data, but that they use data to exploit psychological vulnerabilities. These are related but not identical problems.
4. EU Digital Services Act: The Most Ambitious Platform Regulation
The Digital Services Act (DSA), which came into force in 2023, is the most comprehensive platform regulation enacted anywhere in the world. It applies to all digital services operating in the EU, with the most demanding requirements applying to "Very Large Online Platforms" (VLOPs) with more than 45 million monthly active users — approximately 10 percent of the EU's population.
Core Requirements
Algorithmic transparency. VLOPs must provide users with explanations of why specific content has been recommended to them. Users must have the option to receive recommendations not based on profiling (a "chronological feed" alternative). Platforms must publish information about their recommendation systems' parameters.
Dark pattern prohibition. The DSA explicitly prohibits dark patterns — defined as "practices that materially distort or impair the ability of recipients of the service to make free and informed decisions." This is the first major regulation to directly address platform design manipulation rather than just content.
Mandatory risk assessments. VLOPs must conduct annual "systemic risk assessments" examining potential harms from their platforms to fundamental rights, civic discourse, gender-based violence, minors' wellbeing, and public health. These assessments must be independently audited.
Researcher data access. Platforms must provide vetted researchers with access to data necessary to assess systemic risks. This directly addresses the information asymmetry that has made independent research on platform harms so difficult.
User empowerment. Users can opt out of personalized content recommendations and must be given meaningful control over their algorithmic experience. Targeted advertising to minors is prohibited; advertising targeting based on sensitive personal attributes (religion, political beliefs, sexual orientation) is prohibited.
Transparency reporting. VLOPs must publish detailed transparency reports on content moderation activities, advertising transparency, and risk assessment outcomes.
Interoperability requirements. Under the companion Digital Markets Act, the largest messaging platforms must allow third-party access (enabling users to message across platforms), reducing network lock-in.
Enforcement and Early Implementation
The DSA is enforced by the European Commission directly for VLOPs (rather than delegating to national authorities, correcting one of GDPR's enforcement weaknesses). Penalties can reach 6% of global annual turnover; repeated violations can result in temporary suspension from the EU market.
Early implementation (2023-2024) has produced several notable developments:
Major platforms including Meta, TikTok, Google, and X (Twitter) formally designated as VLOPs have published their first algorithmic transparency documents.
The European Commission opened formal proceedings against X (Twitter) and TikTok over potential DSA violations related to risk assessment quality and dark patterns.
Several platforms launched "chronological feed" options (without algorithmic curation) in response to the opt-out requirement — a concrete design change attributable to regulation.
The researcher data access provisions have begun enabling independent researchers to study platform algorithms in ways that were previously practically impossible.
Limitations and Open Questions
Enforcement capacity. The European Commission has limited staff for enforcing DSA obligations across dozens of designated platforms. Meaningful enforcement at scale requires sustained investment and political will.
Definitional ambiguity. Terms like "dark pattern" and "systemic risk" have clear core meanings but fuzzy edges. Platforms will test those edges, and the resolution of definitional disputes through enforcement and litigation will take years.
Global jurisdiction limits. The DSA applies within the EU. Platforms can maintain different practices in non-EU jurisdictions, creating regulatory arbitrage — though the "Brussels Effect" (companies simplifying to one global policy rather than maintaining geographic variants) often extends EU regulations informally.
Algorithmic opacity is not fully solved. Requiring platforms to publish descriptions of their recommendation systems is not the same as making those systems genuinely transparent. Platform descriptions of algorithms can be technically accurate while omitting operationally significant details.
5. UK Online Safety Act: The Duty of Care Approach
The United Kingdom's Online Safety Act (OSA), which received Royal Assent in October 2023, takes a different approach from the EU's. Rather than focusing primarily on dark patterns and algorithmic transparency, the OSA focuses on categories of harmful content and places a "duty of care" on platforms to protect users from that content.
Core Provisions
Illegal content. All regulated platforms must take proactive steps to prevent, identify, and remove illegal content — including specific priority offenses: terrorism, child sexual abuse material, fraud, and several others.
Child safety. Platforms that may be accessed by children must implement age-appropriate design and protect minors from harmful content. Age assurance (verification of age) is required for platforms hosting legal but harmful content that should not be accessible to children.
Adult safety. Larger platforms must assess and mitigate risks of legal but harmful content for adults, with particular attention to suicide and self-harm content, eating disorder content, and harassment.
Transparency. Platforms must publish transparency reports on safety measures, content moderation activities, and risk management.
User empowerment. Adults must be able to control whether they see certain categories of legal but harmful content.
Controversies
The OSA has been significantly more controversial than the DSA, for several reasons:
Age verification and privacy. The requirement for age verification on platforms hosting adult content raises significant privacy concerns: age verification typically requires identifying information that creates surveillance risks. Sex workers' advocacy groups were particularly vocal critics, arguing that age verification would drive legal adult content platforms either to collect intrusive data or to shut down, without meaningfully protecting minors who could use VPNs.
End-to-end encryption. Early drafts of the OSA included provisions that critics said would require messaging platforms to scan encrypted messages for CSAM — effectively mandating backdoors in end-to-end encryption. After sustained opposition from cryptographers and privacy advocates, the government acknowledged that current technology cannot scan encrypted content without breaking encryption, but the underlying legal tension remains unresolved.
The "legal but harmful" category. The OSA's application to "legal but harmful" content — content that is legal to produce but potentially harmful to those who see it — raises fundamental speech questions. Who defines what is harmful? What is the liability standard? Critics from both civil liberties and free speech traditions objected to this framing.
Ofcom's capacity. The OSA is enforced by Ofcom, the UK communications regulator. Ofcom must develop and enforce detailed codes of practice for an enormous range of platforms — a task of a scale its existing workforce was never designed for.
6. Children's Online Protection: COPPA and Its Successors
The Children's Online Privacy Protection Act (COPPA), enacted in 1998, prohibits collecting personal information from children under 13 without verifiable parental consent. It is the oldest major US platform regulation and, by common assessment, the most inadequate.
COPPA's Design and Its Failures
COPPA's premise was that children needed special protection for their personal data. Its mechanism was consent: parental consent is required before platforms collect identifying information from children under 13.
The result was predictable. Platforms responded to COPPA not by protecting children but by requiring users to claim to be 13 or older, with no verification. The "I am 13 or older" checkbox became the most consequentially ignored form in the history of the Internet. Platforms now have legal deniability (users certified their age) while having done nothing to prevent minors from using their services.
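The structural weakness is easy to see when the age gate is written out as logic. This is a hypothetical sketch of the gate as most platforms implement it, not any platform's actual code:

```python
# Hypothetical sketch of the COPPA "age gate" as commonly implemented.
# The entire verification step is a single unverifiable self-report.
def coppa_age_gate(claimed_over_13: bool) -> bool:
    """Admit the user if they tick the box. Nothing is checked against
    any external source: the platform records the claim (legal
    deniability) but erects no actual barrier to under-13 users."""
    return claimed_over_13

# Any child who can read the checkbox can pass the gate:
assert coppa_age_gate(claimed_over_13=True)
```

What COPPA's drafters envisioned instead — verifiable parental consent before data collection — would require an out-of-band check that almost no general-purpose platform performs at sign-up.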
Instagram, TikTok, YouTube, and virtually every major platform nominally prohibit users under 13 while having systems designed to be attractive and accessible to that demographic. Internal research (documented in the Facebook Papers and TikTok investigations) has shown that platforms are well aware they have large under-13 userbases.
The FTC enforces COPPA and has levied significant fines — TikTok was fined $5.7 million in 2019 for COPPA violations, YouTube $170 million in 2019 — but these fines have not changed business models.
Proposed Updates: KOSA
The Kids Online Safety Act (KOSA), which moved significantly through Congress in 2023-2024 with bipartisan support, proposes more substantial changes:
- A "duty of care" requiring platforms to prevent specific harms to minors (anxiety, depression, eating disorders, substance abuse, sexual exploitation)
- Requirements for safety settings to be on by default for minors
- Requirements for parental controls and activity monitoring tools
- Limits on certain algorithmic features (recommending content related to suicide, eating disorders, or controlled substances to minors)
- Annual independent audits of safety measures
KOSA's broad bipartisan support reflects genuine political consensus that existing law inadequately protects children online. But the bill has also attracted criticism: civil liberties organizations argue that the "duty of care" to prevent harms to minors could effectively require platforms to restrict legal content that adults have a right to access, and that attempts to verify minors' ages create privacy and surveillance risks.
Age verification specifically — requiring users to provide identifying documents to access platforms — is simultaneously one of the most desired outcomes (actually preventing minors from accessing adult content) and one of the most privacy-problematic (government and platform databases of who accessed what content).
7. Algorithmic Accountability: What It Would Actually Require
"Algorithmic accountability" is a policy demand that appears across multiple regulatory proposals but is rarely defined precisely. What would genuine accountability for algorithmic systems actually require?
The Transparency Problem
The most commonly proposed accountability mechanism is transparency — requiring platforms to explain how their algorithms work. The DSA's transparency provisions are a version of this. But transparency requirements face a fundamental limitation: platform algorithms are not static rules that can be described; they are machine learning systems that produce outputs through processes that even their creators cannot fully explain.
A platform can publish a description of its recommendation algorithm's parameters (recency, engagement rate, user-content affinity, etc.) while the operational behavior of the algorithm remains effectively opaque. The published description is technically accurate but does not predict the algorithm's actual outputs in specific contexts.
This is not necessarily deception. Machine learning systems operating in complex real-world environments are genuinely difficult to characterize precisely. But it means that transparency requirements, by themselves, may not achieve the accountability they're designed to enable.
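A toy illustration of the gap between a published description and operational behavior (hypothetical throughout; the parameter names come from the example above, and the weights and users are invented):

```python
# The public description "we rank by recency, engagement rate, and
# affinity" is accurate for this system -- yet it does not predict
# outputs, because the weights come from an opaque learned model that
# varies per user and per context.
from typing import Callable

def make_ranker(learned_weights: Callable[[str], tuple[float, float, float]]):
    def score(user_id: str, recency: float, engagement_rate: float,
              affinity: float) -> float:
        w_r, w_e, w_a = learned_weights(user_id)  # opaque, per-user
        return w_r * recency + w_e * engagement_rate + w_a * affinity
    return score

# Stand-in for a learned model: two users get very different weightings.
score = make_ranker(lambda uid: (0.1, 0.8, 0.1) if uid == "a"
                    else (0.6, 0.1, 0.3))

# Identical item features, divergent rankings:
s_a = score("a", recency=0.5, engagement_rate=0.9, affinity=0.2)  # engagement-driven
s_b = score("b", recency=0.5, engagement_rate=0.9, affinity=0.2)  # recency-driven
```

Both users' scores are computed from exactly the parameters the published description names, so the description is "technically accurate" in the sense the text describes, while telling an outside observer almost nothing about what either user will actually see.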
Algorithmic Impact Assessments
A more substantive accountability mechanism is the algorithmic impact assessment (AIA) — a requirement that platforms systematically evaluate the potential harms of their algorithmic systems before deployment and regularly thereafter, similar to the environmental impact assessments required for construction projects.
Effective AIAs would require:
Pre-deployment testing: Before deploying a new recommendation algorithm or major algorithm change, platforms would conduct standardized tests for specified harms — exacerbation of mental health problems, amplification of misinformation, radicalization pathways, disparate impact on protected groups.
Third-party auditing: AIA results would be subject to independent verification, not just self-certification. Auditors would need access to platform systems, data, and personnel.
Public disclosure: Substantive AIA findings would be made publicly available (with appropriate protection of genuinely proprietary technical details) so researchers, journalists, civil society, and regulators can assess them.
Ongoing monitoring: AIAs would not be one-time exercises but ongoing monitoring obligations with defined metrics and reporting requirements.
The DSA's mandatory risk assessment requirements are a step in this direction, but the operationalization details — what tests must be run, what standards must be met, what constitutes an unacceptable risk — are largely left to platforms in the first implementation cycle.
The Independent Auditor Problem
Algorithmic accountability through auditing faces a deeper challenge: who is qualified to audit platform algorithms?
Effective auditing requires access to platform systems, data, and personnel that platforms have strong incentives not to provide. It requires technical expertise that is rare and expensive. And it requires independence from platform interests that may be difficult to ensure when auditors are paid by the platforms they audit.
The EU's DSA attempted to address this by creating a pool of certified auditors with defined standards. The early experience suggests that finding enough qualified, independent auditors for all designated platforms is a genuine logistical challenge.
Researcher data access (required under the DSA) is a partial complement: by giving independent researchers access to data that was previously unavailable, it enables ongoing assessment that supplements formal auditing. The collective intelligence of thousands of researchers with data access may produce more comprehensive assessment than periodic formal audits.
8. FTC Authority and Enforcement
The Federal Trade Commission (FTC) is the primary US federal agency with authority to address deceptive and unfair commercial practices, including dark patterns and exploitative platform design.
FTC Authority Over Dark Patterns
Section 5 of the FTC Act prohibits "unfair or deceptive acts or practices in or affecting commerce." Dark patterns — interface designs that trick users into actions they didn't intend, manipulate users into purchasing decisions through deceptive framing, or make cancellation more difficult than subscription — fall squarely within this authority.
The FTC's 2022 report on dark patterns ("Bringing Dark Patterns to Light") documented the problem comprehensively and signaled enforcement intent. The report identified four primary categories of concern: misleading subscription practices, hidden costs and fees, interfaces that deceive about data collection, and designs that manipulate children.
Enforcement actions followed. The FTC and DOJ sued Amazon in 2023 for using dark patterns in its Prime subscription enrollment and cancellation flow — specifically for making cancellation deliberately more complicated than enrollment. This case represented the most significant application of dark pattern enforcement to a major platform.
What the FTC Can and Cannot Do
Can do with existing authority:
- Challenge specific dark patterns as deceptive or unfair practices
- Require companies to change specific design practices
- Impose civil penalties for violations of consent orders
- Require consumer redress in some cases
Cannot do without new legislation:
- Establish comprehensive baseline privacy standards for all platforms
- Require affirmative product safety testing for algorithmic systems
- Mandate algorithmic impact assessments
- Regulate platform business models directly
- Impose structural remedies (like separations between advertising and platform operations) without specific statutory authority
The FTC's enforcement record, while growing, has been limited by capacity, resources, and the deliberate pace of administrative enforcement compared to the speed of platform innovation. The agency has also faced legal challenges to its rulemaking authority, with courts reviewing whether specific FTC rules exceed its statutory mandate.
Rulemaking in Progress
The FTC has engaged in or proposed rulemaking on several relevant fronts:
Commercial Surveillance and Data Security rulemaking (2022): An advance notice of proposed rulemaking that floated rules prohibiting or limiting certain commercial surveillance practices, including behavioral advertising targeting and the collection of personal data for certain purposes. If finalized, these rules would represent the first comprehensive US privacy regulation at the federal level.
Junk Fees NPRM (2023): While broader than platforms specifically, this proposed rulemaking would prohibit hidden fees and subscription enrollment dark patterns across industries, with direct application to subscription-based digital services.
The Biden administration FTC, under Chair Lina Khan, was significantly more aggressive on technology enforcement than previous FTCs. Whether this trajectory continues depends on political changes that this textbook cannot predict.
9. International Comparison: Three Regulatory Philosophies
The global regulatory landscape for platforms reflects three broadly different regulatory philosophies, each with characteristic strengths and weaknesses.
The European Precautionary Principle
The EU's regulatory approach is grounded in the "precautionary principle": when an activity threatens harm, precautionary measures should be taken even if causal relationships are not fully established scientifically. This justifies acting before harms are proven — regulating platforms proactively rather than waiting for documented harm.
The EU's regulatory toolkit includes GDPR (data protection), the DSA (platform obligations), the Digital Markets Act (competition), and the AI Act (algorithmic transparency and safety). This is the most comprehensive framework of any jurisdiction.
The EU approach is enabled by a different balance between individual rights and collective welfare than the US tradition — Europeans are generally more comfortable with government regulation of commercial practices that impose social costs. The EU also has greater political unity on the regulatory goals: the political debate is more about how to regulate than whether to.
The European approach produces regulations that are more comprehensive but slower to enforce, and it remains vulnerable both to regulatory capture and to well-resourced platforms substituting extensive compliance theater for substantive change.
The US Speech-Protective Tradition
US platform regulation has been constrained by the First Amendment in ways that have no direct parallel in European law. The US tradition treats restrictions on speech as presumptively unconstitutional and requires government to show compelling interests and least-restrictive means for any content-based restriction.
This constitutional framework makes European-style content regulation difficult in the US. Prohibiting platforms from recommending certain types of content would likely face First Amendment challenges. Requiring platforms to carry certain content (must-carry rules) faces similar challenges. The legal space for US platform content regulation is significantly narrower than for EU regulation.
The US approach has also been shaped by a strong tradition of industry self-regulation — the preference for market solutions over regulatory mandates — that has consistently delayed comprehensive regulation. The result is a patchwork of sector-specific regulations (COPPA for children's privacy, FTC authority over deceptive practices) rather than a comprehensive framework.
The US approach allows faster platform innovation and avoids the compliance costs of comprehensive regulation. It also produces greater harms because the costs of harmful practices are externalized to society rather than internalized by platforms.
China's State Control Model
China's approach is at the opposite pole: direct state control over platform content, algorithms, and business practices. China's Cyberspace Administration has authority to require algorithmic changes, approve new features, and penalize platforms for hosting content the state considers harmful.
The Chinese model has solved some problems (certain types of foreign propaganda, some forms of coordinated manipulation) while creating others (suppression of political dissent, surveillance infrastructure, limited civil society information freedom). It is not a model that liberal democracies can or should adopt.
But China's experience is relevant to one specific question: what is technically possible in terms of government-directed changes to platform algorithms? The answer is: quite a lot. Platforms can be required to apply specific filters, ranking adjustments, and content restrictions at the technical level. The question is not whether this is technically possible but whether it is compatible with the political values of the society implementing it.
Which Works Best?
Comparing regulatory approaches is complicated by the difficulty of establishing counterfactuals and the different political contexts in which they operate. Some tentative observations:
- The EU approach has produced the most comprehensive changes in platform behavior, but enforcement is uneven and compliance is frequently superficial.
- The US approach has been effective in specific domains (FTC enforcement against specific dark patterns, state-level privacy laws following California's CCPA) but has failed to produce comprehensive baseline protections.
- China's approach demonstrates what is technically possible under state control while illustrating why such control is incompatible with civil liberties.
The direction of travel globally is toward greater regulation: more countries are adopting data protection frameworks, and platform-specific regulation is spreading from the EU to the UK, Australia, Canada, and beyond.
10. What Effective Regulation Would Actually Require
Having surveyed the regulatory landscape, what can we say about what effective platform regulation would actually require? This is a normative question — it reflects a judgment about what outcomes we want — as well as a technical and political one.
For regulation to genuinely constrain engagement-maximization harms, it would need to address several elements that existing regulation largely misses:
Aligning Platform Incentives with User Wellbeing
The root cause of most documented platform harms is the misalignment between platforms' incentive (maximize engagement) and users' interests (wellbeing, accurate information, meaningful connection). Effective regulation would need to change this incentive structure.
One approach: mandatory wellbeing metrics alongside engagement metrics. Platforms could be required to measure and report user wellbeing outcomes (using validated instruments) and to ensure that algorithmic optimization does not systematically reduce wellbeing. This would require platforms to have visibility into harms they currently have incentives not to see.
Another approach: advertising revenue structures that don't reward engagement at the expense of wellbeing. The advertising auction model, in which ad prices are partly determined by engagement rates, creates direct financial incentives for engagement maximization. Regulatory intervention in advertising market structure could change this.
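The "mandatory wellbeing metrics" approach above can be made concrete as a non-regression guardrail on algorithm changes. The sketch below is hypothetical: the function name, the 0.05 margin, and the A/B data are illustrative assumptions, not any regulator's actual test, and the wellbeing scores are assumed to come from a validated instrument scaled to [0, 1].

```python
import statistics

def wellbeing_guardrail(control, treatment, margin=0.05):
    """Non-inferiority check: a ranking change passes only if mean
    wellbeing in the treatment arm does not fall more than `margin`
    below the control arm. Scores are assumed to come from a
    validated wellbeing instrument, scaled to [0, 1]."""
    drop = statistics.mean(control) - statistics.mean(treatment)
    return {"drop": round(drop, 4), "passes": drop <= margin}

# Hypothetical A/B experiment: per-user wellbeing scores in each arm.
control = [0.62, 0.70, 0.66, 0.64, 0.68]
treatment = [0.55, 0.58, 0.60, 0.52, 0.57]

result = wellbeing_guardrail(control, treatment)
# Mean wellbeing dropped by ~0.096, exceeding the 0.05 margin, so the
# change fails the guardrail even if engagement metrics improved.
```

The design point is that the check runs against an outcome the platform currently has no incentive to measure; regulation would supply both the measurement obligation and the consequence for failing it.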
Meaningful Algorithmic Accountability
Effective algorithmic accountability would require:
- Pre-deployment safety testing with specific harm criteria and independent verification
- Ongoing monitoring obligations with public reporting
- Researcher data access enabling independent assessment
- Liability for algorithm-amplified harms that is proportionate to the amplification involved
None of these are fully achieved by existing regulation, though the DSA's risk assessment requirements and researcher data access provisions move in this direction.
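A pre-deployment assessment obligation of the kind listed above could be enforced mechanically: no deployment until every required section of an algorithmic impact assessment is filled in. The sketch below is a toy illustration; the field names are distilled from the four elements above and are assumptions, not the DSA's actual risk-assessment template.

```python
from dataclasses import dataclass, field

# Hypothetical required sections, distilled from the accountability
# elements above (harm criteria, testing, monitoring, researcher access).
REQUIRED = ["harm_criteria", "test_results", "monitoring_plan", "data_access_contact"]

@dataclass
class ImpactAssessment:
    system: str
    sections: dict = field(default_factory=dict)

    def missing(self) -> list:
        # Sections that are absent or empty block deployment.
        return [k for k in REQUIRED if not self.sections.get(k)]

    def deployable(self) -> bool:
        return not self.missing()

aia = ImpactAssessment("feed-ranker-v9",
                       {"harm_criteria": "...", "test_results": "..."})
# aia.missing() lists "monitoring_plan" and "data_access_contact",
# so aia.deployable() is False.
```

The point of the gate is ordering: the assessment happens before deployment, not as after-the-fact documentation of a system already in production.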
Baseline User Protections
A regulatory floor for user treatment would include:
- Default privacy-protective settings (data sharing opt-in rather than opt-out, so the protective choice requires no effort)
- Prohibition of dark patterns that override user decisions or create friction for privacy-protective choices
- Meaningful transparency about what data is collected and how it is used
- Real consent (not dark pattern consent) for data uses beyond basic service provision
- User data portability that enables genuine competition
The GDPR addresses most of these in principle; the implementation and enforcement gap remains large.
Children's Online Protection That Actually Works
Effective protection of minors would require:
- Age verification that balances protection with privacy
- Default safety settings for verified minors' accounts
- Prohibition of engagement-maximizing features associated with harm to children (infinite scroll, variable-reward notification patterns, social comparison features)
- Meaningful parental controls that do not require surveillance
The technology for privacy-preserving age verification exists (zero-knowledge proof systems that can verify age without revealing identity) but has not been adopted at scale. Regulatory mandates could accelerate adoption.
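The core idea behind privacy-preserving age verification is attribute attestation: a trusted issuer verifies age once, then signs a token carrying only the boolean claim. The sketch below illustrates that separation of roles; it is a toy, not a real zero-knowledge protocol, and the HMAC shared-key scheme is an assumption for brevity (real systems would use public-key signatures or zero-knowledge range proofs so the platform never shares a key with the issuer).

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by a trusted age verifier

def issue_age_token(is_over_18: bool) -> dict:
    """After verifying a user's age once (e.g. against an ID document),
    the issuer signs a token carrying ONLY the boolean claim: no name,
    no birthdate, no account identifier."""
    claim = {"over_18": is_over_18, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_accepts(token: dict) -> bool:
    """The platform checks the issuer's signature and the claim. It
    learns that some verified adult presented a valid token, nothing else."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]

token = issue_age_token(True)
assert platform_accepts(token)
```

The regulatory relevance is the data-flow property, not the cryptography: the party that sees identity (the issuer) never learns which platforms the user visits, and the platform never sees identity.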
Interoperability to Reduce Lock-In
Network effects that make platforms "sticky" — hard to leave because your social connections are there — are currently entirely unregulated in the US (though the Digital Markets Act addresses them in the EU). Interoperability requirements — the ability to use social media functions across platforms without being trapped in any one — would reduce the leverage that network effects give platforms over users.
International Coordination
Platform harms are global. Regulatory solutions that stop at national borders are insufficient for global platforms. International coordination on minimum standards — similar to what exists in financial regulation, aviation safety, and drug safety — would prevent regulatory arbitrage without requiring harmonization of every political judgment.
Velocity Media: What Regulation Would Change
Dr. Aisha Johnson's audit at Velocity Media is documenting what the practices described in Parts II through V look like inside a real platform. The question her findings raise is: which of the regulatory frameworks described in this chapter would constrain the practices she finds?
GDPR would require more genuine consent for the data Velocity collects for behavioral advertising. The DSA's dark pattern prohibition would apply to Velocity's design features that manipulate users into spending more time on the platform. The risk assessment requirement would oblige Velocity to formally evaluate the mental health harms documented in the audit.
But none of these regulations would eliminate the fundamental tension: Velocity's business model is built on engagement maximization, and engagement maximization produces the harms the audit documents. Regulations that change the forms of engagement maximization without changing the incentive to maximize engagement are working around the edges of the problem.
What would change the incentive? Liability for harm — creating financial costs for documented harmful outcomes — would internalize the externalities that Velocity currently imposes on society. Advertising market regulation that disconnects ad revenue from engagement intensity would remove the direct financial incentive for harmful engagement maximization. And potential antitrust action that reduced Velocity's market power — so users had real alternatives — would create competitive pressure to treat users better.
These are harder regulatory interventions, requiring more political will and more regulatory capacity than transparency requirements and risk assessments. But they are the interventions that address the incentive structure rather than its symptoms.
Summary
The global regulatory landscape for platform governance is more active than it has ever been, with the EU Digital Services Act representing the most ambitious attempt to constrain platform harms through regulation. But the record of existing regulation — GDPR's enforcement gap, COPPA's circumvention through self-reported age checkboxes, Section 230's failure to address algorithmic amplification — demonstrates that well-intentioned regulation frequently fails to achieve its goals.
Effective regulation would need to address what existing regulations mostly miss: the fundamental misalignment between engagement-maximizing platform incentives and user wellbeing. Transparency, risk assessments, and dark pattern prohibitions are valuable but insufficient if the underlying economic incentives remain unchanged.
The comparison across regulatory philosophies — EU precautionary principle, US speech-protective tradition, Chinese state control — reveals that the limits of regulation are both technical and political. The technical limits are real but surmountable: what is possible under state control demonstrates what is technically feasible. The political limits are harder: meaningful regulation requires political will to overcome the lobbying power of enormously profitable platforms and the genuine political tensions between different values (speech freedom, privacy, safety, competition) that regulation necessarily implicates.
The next chapters will address what platform redesign and collective action look like — the other structural levers that, alongside regulation, would need to change to address the harms this book has documented.
Key Terms
Section 230: The US statutory provision (Communications Decency Act, 1996) that immunizes Internet platforms from liability for user-generated content and for good-faith content moderation decisions.
GDPR (General Data Protection Regulation): EU data protection law (2018) establishing data subject rights, consent requirements, and enforcement obligations for organizations that process personal data of EU residents.
Digital Services Act (DSA): EU platform regulation (2023) requiring algorithmic transparency, dark pattern prohibition, systemic risk assessments, and researcher data access for large online platforms.
Online Safety Act (OSA): UK platform regulation (2023) establishing a duty of care framework focused on protection from harmful content, age verification, and children's safety.
COPPA: Children's Online Privacy Protection Act (US, 1998) prohibiting collection of personal information from children under 13 without verifiable parental consent.
Algorithmic impact assessment (AIA): A proposed requirement for platforms to systematically evaluate potential harms from algorithmic systems before deployment and regularly thereafter.
Precautionary principle: The EU regulatory philosophy that precautionary measures should be taken when activities threaten harm, even without full scientific proof of causal relationships.
Prebunking: In regulatory context, interventions that change the information environment before harmful content is encountered. Analogous to vaccination — building resistance preemptively.
Good Samaritan immunity: Section 230's protection for platforms that moderate content in good faith, allowing moderation without thereby assuming editorial liability.