Case Study 33-1: The EU Digital Services Act — A New Framework for Platform Accountability
Overview
When the European Commission unveiled its Digital Services Act proposal in December 2020, it represented a fundamental rethinking of the relationship between governments, technology platforms, and online speech. The DSA was not merely a regulatory update — it was a statement about who bears responsibility for the information environment of democratic society. After two years of trilogue negotiations between the Commission, Parliament, and Council, the final text was adopted in October 2022. By early 2024, with phased implementation complete, the DSA had become the most comprehensive government regulatory framework for online content in the democratic world, and a template that regulators in Canada, the UK, Brazil, and elsewhere were watching closely.
This case study examines the DSA's structure, enforcement mechanisms, initial implementation experience, and what the regulation reveals about the possibilities and limits of the European approach to platform governance.
Background: The Regulatory Gap the DSA Was Designed to Fill
The DSA replaced the 2000 E-Commerce Directive, which had been drafted at a time when Facebook did not exist, YouTube had not been founded, and the dominant vision of the internet involved information retrieval rather than social interaction. The E-Commerce Directive's basic premise — that online intermediaries should enjoy liability protections similar to those of telecommunications carriers, as long as they did not have knowledge of specific illegal content — had held for over two decades. During that period, the platforms that operated under this framework grew from startups into trillion-dollar global infrastructure.
The problem the DSA's architects identified was not primarily that the liability framework was wrong in 2000. It was that the world had changed in ways the original framework did not anticipate:
Scale: The platforms had grown to a scale where their content moderation decisions affected billions of people. Facebook's Community Standards applied to more users than the laws of any single country.
Algorithmic amplification: The 2000 framework contemplated platforms as passive conduits — hosts of content that users uploaded. Modern platforms are not passive conduits. Their recommendation algorithms actively decide what each user sees, creating enormously consequential editorial functions without corresponding accountability.
Systemic risks: Coordinated disinformation campaigns, election manipulation, and radicalization pipelines were enabled by platform design choices — not just by individual bad actors posting content. The existing framework had no mechanism for requiring platforms to assess or mitigate these systemic risks.
Opacity: Despite their societal significance, major platforms operated with minimal transparency. Academic researchers had essentially no right to access data needed to study algorithmic effects. Advertisers had more data than regulators or the public.
The DSA was designed to address each of these gaps while preserving the element that had made the European approach to online speech functional: the liability exemption that had enabled the internet economy.
DSA Architecture: The Tiered Approach
The DSA's most important structural feature is its tiered approach, creating different obligations for different categories of platforms based on their size and societal significance.
Tier 1: All Digital Intermediary Services
All digital intermediary services operating in the EU face baseline obligations, including:
- Transparency about their terms of service, including clear information about content moderation policies
- Mechanisms for recipients to flag potentially illegal content
- Acting expeditiously on notices of illegal content from trusted flaggers
- Cooperation with law enforcement requests according to applicable legal frameworks
- Publication of basic transparency reports annually
These baseline obligations apply broadly, from small web hosting providers to major platforms, though the compliance burden is proportionate to size.
Tier 2: Online Platforms
Platforms that go beyond mere hosting and allow users to store and disseminate content publicly face additional requirements, including:
- Notice-and-action systems with specific requirements for handling complaints
- Out-of-court dispute settlement mechanisms
- Transparency reports with more detailed information on content moderation
- Notice to users when their content is removed or restricted
- A point of contact for authorities
Tier 3: Very Large Online Platforms and Search Engines (VLOPs/VLOSEs)
Platforms designated as VLOPs — with more than 45 million EU monthly active users — face the most significant obligations. These are the provisions that most directly address systemic misinformation risks:
Annual systemic risk assessments: VLOPs must identify, analyze, and assess systemic risks arising from their services. For misinformation and disinformation, the relevant risks include: "actual or foreseeable negative effects on civic discourse and electoral processes." Risk assessments must consider how algorithmic systems, advertising systems, and terms of service application may generate or amplify these risks.
Risk mitigation measures: Following assessment, VLOPs must adopt "reasonable, proportionate and effective mitigation measures." For election integrity and disinformation risks, this might include: reduced algorithmic amplification of unverified viral content, enhanced fact-checking partner integration, or special protocols during election periods.
Independent audits: VLOPs must submit to annual independent audits of their risk assessment and mitigation processes. Auditors evaluate whether risk assessments are methodologically sound and whether mitigation measures are actually implemented.
Researcher data access: VLOPs must provide vetted researchers access to data for research on systemic risks. This access must be "adequate" for the research purpose and provided within a reasonable time.
Enhanced advertising transparency: VLOPs must maintain a public advertising repository showing all ads shown on the platform, with targeting parameters — enabling scrutiny of political advertising practices.
Recommender system transparency: VLOPs must provide users with at least one recommendation option not based on profiling. Users must have access to information about the main parameters of recommendation systems.
Prohibition on certain targeting: The DSA prohibits advertising targeting based on sensitive personal data (health, political opinions, sexual orientation) and prohibits profiling-based advertising targeted at minors. These prohibitions apply to online platforms generally, not only to VLOPs, but scrutiny of compliance is most intense at the VLOP tier.
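The tiered structure above can be sketched as a simple designation rule. This is an illustrative simplification only: the function name and inputs are hypothetical, and in practice VLOP/VLOSE status is conferred by a formal European Commission designation decision, not an automatic calculation.

```python
# Simplified sketch of the DSA's tiered obligation structure.
# Illustration only: actual VLOP/VLOSE designation is a formal
# Commission decision, not a mechanical threshold check.

VLOP_THRESHOLD = 45_000_000  # average monthly active EU recipients

def dsa_tier(is_intermediary: bool, hosts_public_content: bool,
             eu_monthly_active_users: int) -> str:
    """Return the (simplified) DSA obligation tier for a service."""
    if not is_intermediary:
        return "out of scope"
    if hosts_public_content and eu_monthly_active_users > VLOP_THRESHOLD:
        return "Tier 3: VLOP/VLOSE (systemic risk obligations)"
    if hosts_public_content:
        return "Tier 2: online platform"
    return "Tier 1: intermediary service (baseline obligations)"

print(dsa_tier(True, True, 100_000_000))  # a very large social network
print(dsa_tier(True, True, 2_000_000))    # a mid-size platform
print(dsa_tier(True, False, 500_000))     # a web hosting provider
```

Note that each tier's obligations are cumulative: a VLOP also carries all Tier 1 and Tier 2 duties.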
Enforcement Mechanism
The DSA's enforcement structure reflects a key political compromise: member states retain primary enforcement authority for most platforms, while the European Commission has direct enforcement authority over VLOPs.
National Digital Services Coordinators (DSCs): Each member state designates a national DSC responsible for enforcement within their territory for non-VLOP platforms, and as a point of contact for their jurisdiction's users in VLOP matters. DSCs handle complaints, conduct investigations, and can impose penalties up to 6% of global annual turnover (or 1% for information provision failures).
European Commission: The Commission has exclusive direct enforcement authority over VLOPs and VLOSEs. Commission investigations can result in:
- Information requests and on-site inspections
- Interim measures in cases of serious risk (emergency powers)
- Non-compliance decisions with penalties of up to 6% of global annual turnover
- Penalties of up to 1% of annual turnover for failures to respond to information requests
- Periodic penalty payments of up to 5% of average daily turnover for continued non-compliance
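To make the scale of these penalty ceilings concrete, the arithmetic can be sketched as follows. The turnover figure is hypothetical, chosen only to illustrate magnitude; actual fines are set case by case up to these caps.

```python
# Back-of-the-envelope arithmetic for the DSA penalty ceilings.
# The turnover figure is hypothetical, used only to show scale.

global_annual_turnover = 100e9  # e.g. a platform with $100B global revenue

max_noncompliance_fine = 0.06 * global_annual_turnover  # 6% cap
max_information_fine = 0.01 * global_annual_turnover    # 1% cap
daily_turnover = global_annual_turnover / 365
max_periodic_penalty = 0.05 * daily_turnover            # 5% cap, per day

print(f"Max non-compliance fine: ${max_noncompliance_fine / 1e9:.1f}B")
print(f"Max information fine:    ${max_information_fine / 1e9:.1f}B")
print(f"Max periodic penalty:    ${max_periodic_penalty / 1e6:.1f}M per day")
```

For a platform of that size, the 6% ceiling alone amounts to a multi-billion-dollar exposure, which is why the penalty structure is widely read as a genuine deterrent rather than a cost of doing business.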
Structural separation and the VLOP problem: A design tension in the DSA enforcement framework is the division of authority: national DSCs handle most platforms, but VLOPs — the platforms with the most societal impact — are handled by the Commission. This concentrates VLOP oversight in a single, resource-constrained institution. As of 2024, the Commission's DSA enforcement team was building capacity but faced the challenge of supervising 24 VLOPs with varied business models, technical architectures, and risk profiles simultaneously.
Initial Implementation: What the First Year Revealed
The phased DSA implementation — with VLOP obligations beginning in August 2023 and full implementation for all platforms in February 2024 — provided the first real-world test of the regulatory framework.
Risk Assessments: Variation in Quality
VLOPs submitted their first risk assessments in late 2023. The Commission reviewed these assessments and its public statements suggested significant variation in quality and depth. Some assessments were described as substantive engagements with algorithmic risk; others were criticized as formulaic responses that did not meaningfully address how specific design choices created misinformation risks.
The fundamental challenge in risk assessment is that platforms are assessing themselves. Even with independent audits, the primary data and analytical capacity for understanding how algorithms work resides within the platforms. The information asymmetry between platforms and regulators — a feature of all technology regulation — is particularly acute for algorithmic systems, which even internal platform teams often struggle to fully characterize.
Researcher Access: Progress With Friction
The researcher access provision represented a significant breakthrough in principle — academic researchers had long identified the inability to access platform data as the critical bottleneck for understanding misinformation dynamics. In practice, implementation proceeded slowly. Platforms developed vetting procedures for researcher applications; academic institutions worked through formal data agreements; the first data access arrangements were established in 2024.
The quality of access varied significantly. Some platforms provided robust API access enabling quantitative analysis; others provided narrower access, raising questions about whether the access was "adequate" under the DSA standard. The Commission opened preliminary investigations into several VLOPs' researcher access provisions.
Formal Investigations: Testing the Enforcement Teeth
The Commission opened formal DSA investigations against X (Twitter) in December 2023, examining: risk assessment adequacy for illegal content and disinformation risks, recommender system transparency, advertising transparency, and actions taken during the Hamas-Israel conflict in October 2023 (where the Commission alleged X had failed to act on illegal content relating to violence and terrorism). X's responses to Commission information requests were publicly characterized by Commission officials as inadequate.
The Commission also opened investigations into Meta (relating to the Hamas-Israel conflict and election integrity concerns) and TikTok (relating to election integrity risks and data practices).
These early investigations tested the Commission's enforcement capacity and established precedents for what "adequate" DSA compliance means in practice. The investigations attracted significant attention from other platforms as indicators of regulatory intent.
Comparison to the US Approach
The contrast between the DSA's approach and the US regulatory landscape is illuminating, revealing how fundamentally different constitutional frameworks produce different regulatory outcomes.
Jurisdictional Basis
The DSA applies to all services with EU users, regardless of where they are headquartered. This extraterritorial effect is explicit and deliberate — European regulators are not willing to accept that the platforms that shape European public discourse should be regulated only by the country where they happen to be incorporated (primarily the US).
US regulation has no analog to this extraterritorial assertion. The FTC, FEC, and other agencies regulate platforms as domestic entities; there is no mechanism for the US to impose risk assessment or algorithmic accountability requirements on foreign platforms serving US users.
Section 230 vs. DSA Liability Framework
Section 230's sweeping immunity (platforms are not treated as publishers of user content) means that platforms face no legal consequence for hosting or amplifying misinformation, absent specific fraud or national security law application. The DSA takes a different path: it retains a conditional liability exemption, under which a platform loses protection for specific content once it has actual knowledge of the content's illegality and fails to act expeditiously, and it layers due diligence obligations on top. Failures to maintain adequate complaint procedures or to implement required risk mitigation measures are enforced through penalties rather than through loss of the liability shield itself.
Transparency
The US has no mandatory transparency reporting requirements for major platforms' content moderation practices. Platforms publish transparency reports voluntarily, with significant variation in what they disclose. The DSA mandates specific, standardized transparency reports covering content moderation decisions, advertising, and algorithmic systems — with the Commission reviewing and publishing the data. This creates the possibility of cross-platform comparison that voluntary disclosure does not enable.
Researcher Access
The researcher data access provision has no US analog. US academics seeking platform data typically face arbitrary decisions about API access — decisions that platforms have increasingly restricted (Twitter/X's 2023 API policy changes being the most prominent example). DSA researcher access is legally mandated, with platforms required to provide access that is "adequate" for research purposes. This represents a structural transformation in the information environment for platform research.
What the US Approach Has That the DSA Lacks
The DSA is not uniformly more protective than US regulation in every respect. The US approach to political advertising — requiring disclosure of who paid for political ads and subjecting campaign finance to FEC regulation — has mechanisms for political influence transparency that the DSA is only beginning to develop through its advertising transparency repository. The US also has a longer tradition of whistleblower protection that has enabled some of the most significant platform accountability reporting (the Facebook Papers were enabled partly by whistleblower protections).
Critical Assessments: What the DSA Gets Right and Wrong
What the DSA Gets Right
Risk-based approach: By requiring risk assessment rather than mandating removal of specific content, the DSA avoids requiring the EU to maintain an official list of "false" content — the epistemological authority problem that plagues content-removal mandates. Platforms must examine their systems' effects, not just individual content decisions.
Systemic focus: Requiring assessment of algorithmic risks addresses the mechanism through which misinformation causes harm at scale — the recommendation engine that amplifies content to millions of users — rather than focusing on individual posts that may have already spread by the time they are addressed.
Researcher access: Providing structured access for vetted researchers creates the possibility of independent, empirical assessment of platform claims about their own risk mitigation effectiveness. This is a significant accountability improvement.
Meaningful penalties: The 6% of global annual turnover maximum represents a substantial deterrent — large enough to constitute a genuine business risk for even the largest platforms.
What the DSA Gets Wrong or Leaves Unresolved
Information asymmetry: The Commission remains dependent on platforms for data about how their systems work. Even with audits and researcher access, the gap between platform technical knowledge and regulator technical capacity is substantial. Regulatory capture risk — where regulators adopt platform framings of what constitutes adequate risk mitigation — is real.
Enforcement capacity: The Commission's DSA enforcement team is building capacity for a large and technically complex portfolio. The simultaneous supervision of 24 VLOPs with different architectures, languages, and risk profiles is demanding. Underfunding of enforcement is a perennial problem in competition and technology regulation.
Definition of systemic risk: The DSA's requirement to assess and mitigate systemic risks to "civic discourse" is broad and ambiguous. Platforms have significant discretion in determining what constitutes an adequate risk assessment and mitigation. Without clearer methodological standards, assessments may systematically understate risks in ways that are difficult for regulators to challenge.
Platform architectural choices: The DSA can require risk assessment and mitigation, but it does not prohibit specific architectural choices — like engagement-maximizing recommendation algorithms — that may be the root cause of systemic risk. Mitigation requirements added on top of a fundamentally engagement-maximizing architecture may produce less change than architectural regulation.
Discussion Questions
1. The DSA's risk assessment approach requires platforms to assess "systemic risks" to civic discourse. Who should determine what constitutes an adequate risk assessment methodology — the platforms themselves, the Commission, independent researchers, or some combination? What are the arguments for and against each?
2. The DSA's enforcement relies partly on independent audits of platform risk assessments. What would a meaningful audit look like? What would auditors need access to, and what qualifications would auditors need?
3. The US has not adopted a regulatory framework comparable to the DSA. What are the constitutional, political, and industry-lobbying obstacles to DSA-style regulation in the United States? Are any of these obstacles surmountable?
4. The DSA's researcher access provisions could transform academic understanding of how platforms work. What research questions should be prioritized in the early period of DSA researcher access? What methodological challenges would researchers using platform data face?
5. Critics of the DSA argue that it imposes significant compliance burdens while being unlikely to produce meaningful change in the information environment. Evaluate this critique: what evidence would support or undermine it? What would "meaningful change" look like and how would you measure it?
Key Takeaways
- The DSA is the most comprehensive democratic government attempt to regulate very large online platforms, applying to all intermediary services with EU users regardless of headquarters location, with the heaviest obligations reserved for platforms exceeding 45 million EU monthly active users.
- The DSA's approach to misinformation is systemic and risk-based: it requires platforms to assess and mitigate risks from their algorithmic systems to civic discourse, rather than mandating removal of specific content.
- Enforcement is split between national Digital Services Coordinators (for most platforms) and the European Commission (for VLOPs), with penalties up to 6% of global annual turnover.
- The DSA's first year of implementation revealed significant variation in compliance quality, challenges in enforcing researcher access provisions, and early Commission investigations into multiple VLOPs.
- The DSA contrasts sharply with the US approach, which relies on Section 230 immunity, voluntary self-regulation, and limited government regulatory authority constrained by the First Amendment.
- Key unresolved challenges include information asymmetry between platforms and regulators, enforcement resource constraints, and the ambiguity of what constitutes adequate risk mitigation.