Chapter 38: Quiz — Regulatory Approaches


Questions with Answers

Question 1: Section 230 of the Communications Decency Act (1996) primarily provides platforms with:

A) A right to monitor all user communications for safety purposes
B) Immunity from liability for content created by their users and for good-faith content moderation
C) Authority to collect user data without individual consent
D) Exemption from antitrust regulation

Answer: B. Section 230 provides two immunities: (c)(1) protects platforms from being treated as publishers of user-generated content, and (c)(2) (the Good Samaritan provision) protects platforms from liability for good-faith decisions to restrict content. Together, they allow platforms to host user content and moderate it without assuming editorial liability.


Question 2: The phrase "26 words that created the modern Internet" refers to what specific Section 230 provision?

A) "The Internet shall be free and open to all users regardless of location or demographic."
B) "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
C) "Online platforms are not responsible for ensuring the accuracy of user-submitted content."
D) "Interactive computer services may remove offensive or illegal content without penalty."

Answer: B. These are the 26 words of Section 230(c)(1) that give platforms immunity from liability for user-generated content. This provision is often described as foundational to the development of user-generated content platforms, as without it, hosting such content would expose platforms to enormous liability.


Question 3: Under GDPR, valid consent for data processing must be:

A) Given once per year and assumed ongoing unless withdrawn
B) Implied by continued use of a service after a privacy policy update
C) Freely given, specific, informed, and unambiguous
D) Provided by a parent or guardian for adults as well as minors

Answer: C. GDPR Article 4(11) defines valid consent as freely given (no coercion or imbalance of power), specific (tied to a specific purpose), informed (users know what they're consenting to), and unambiguous (requiring a clear affirmative action, so no pre-ticked boxes); Article 7 adds further conditions on demonstrating and withdrawing consent. This standard is regularly violated by cookie consent banners that use dark patterns.


Question 4: GDPR's maximum penalty for violations is:

A) 1% of global annual turnover or 10 million euros, whichever is greater
B) 2% of global annual turnover or 10 million euros, whichever is greater
C) 4% of global annual turnover or 20 million euros, whichever is greater
D) A fixed 500 million euro penalty per violation

Answer: C. GDPR's maximum penalty is 4% of global annual turnover or 20 million euros, whichever is greater. For context, Meta's 1.2 billion euro fine in 2023 was one of the largest ever issued under GDPR.


Question 5: The EU Digital Services Act (DSA) applies its most demanding requirements to "Very Large Online Platforms" (VLOPs), defined as platforms with:

A) More than 10 million monthly active users in the EU
B) More than 45 million monthly active users in the EU (approximately the EU's population divided by 10)
C) Annual revenues exceeding 10 billion euros
D) More than 100 million global users regardless of EU presence

Answer: B. VLOPs are defined as platforms with more than 45 million monthly active users in the EU — approximately one-tenth of the EU's population. This threshold captures the very largest platforms (Meta, Google, TikTok, Amazon, X/Twitter) while not imposing the most burdensome requirements on smaller platforms.


Question 6: Which of the following is a requirement of the DSA that was NOT present in GDPR?

A) Consent for data processing
B) Right to data portability
C) Explicit prohibition of dark patterns in platform design
D) Privacy by design and default

Answer: C. GDPR focused on data protection and consent. The DSA explicitly prohibits dark patterns — interface designs that manipulate users' choices — making it the first major regulation to directly address platform manipulation techniques beyond data collection.


Question 7: The UK Online Safety Act's "duty of care" approach differs from the EU DSA's approach primarily in that it:

A) Focuses on platform transparency and algorithmic accountability rather than content
B) Focuses on categories of harmful content and requires platforms to protect users from it
C) Applies only to state-owned media companies
D) Is enforced by users through private rights of action rather than a government regulator

Answer: B. The OSA's "duty of care" framework requires platforms to prevent specified harmful content from reaching users, particularly children. The DSA focuses more on transparency, dark patterns, and systemic risk. The OSA's content-focused approach has raised more free speech concerns than the DSA's design-focused approach.


Question 8: COPPA (Children's Online Privacy Protection Act, US, 1998) primarily addresses:

A) Children's access to age-inappropriate content
B) Collection of personal information from children under 13 without verifiable parental consent
C) Platform features specifically designed to attract children
D) School districts' use of educational technology platforms

Answer: B. COPPA's core mechanism is consent: platforms may not collect personal information from children under 13 without verifiable parental consent. Its practical failure is that platforms have responded by requiring users to self-certify age with no actual verification, giving them legal deniability while not actually protecting children.


Question 9: What was the practical effect of COPPA's age verification mechanism (the "I am 13 or older" checkbox)?

A) It effectively prevented most minors from accessing regulated platforms
B) It created legal deniability for platforms while doing nothing to prevent minors from accessing their services
C) It shifted liability for underage use entirely to parents
D) It reduced the number of minors using major platforms by approximately 40%

Answer: B. The checkbox mechanism allowed platforms to say "users certified their age" while building platforms that were clearly attractive to and accessible by minors. The mechanism was a compliance theater solution that satisfied the letter of COPPA while violating its spirit. Internal platform research has repeatedly documented large under-13 userbases.


Question 10: The "Good Samaritan" immunity in Section 230 was designed to:

A) Protect platforms that report user criminal activity to law enforcement
B) Protect platforms from liability for good-faith content moderation, allowing them to moderate without becoming editors responsible for all remaining content
C) Provide immunity to users who report platform violations to regulators
D) Allow platforms to share user data with law enforcement without user consent

Answer: B. Section 230(c)(2) was designed to solve a specific problem: if platforms can be held liable for content they moderate (because the act of moderation makes them publishers), they will either stop moderating (bad) or over-moderate (bad). The Good Samaritan immunity allows platforms to moderate in good faith without thereby assuming liability for what they leave up.


Question 11: The "continued influence effect" has policy relevance for Section 230 reform because:

A) It shows that Section 230's immunity was influenced by ongoing lobbying pressure
B) It demonstrates that harmful content continues to influence users even after platforms remove it, supporting the case for prevention over removal
C) It shows that Section 230 has continued influence on subsequent legislation
D) It demonstrates that platforms' moderation practices are influenced by advertiser relationships

Answer: B. The continued influence effect — false or harmful information keeps affecting beliefs even after removal — is part of the policy argument for requiring platforms to prevent amplification of harmful content rather than simply allowing removal after-the-fact. It supports the case that "notice and takedown" models are insufficient.


Question 12: The EARN IT Act (proposed 2020, 2022) raised concerns primarily because:

A) It would have required platforms to pay substantial taxes on advertising revenue
B) It would have effectively required platforms to weaken end-to-end encryption to scan for CSAM
C) It would have transferred Section 230 immunity enforcement from courts to a commission
D) It would have required all social media users to verify their identity with government documents

Answer: B. The EARN IT Act would have conditioned Section 230 immunity on compliance with "best practices" for detecting child sexual abuse material, to be defined by a government commission. Critics argued this effectively mandated the ability to scan encrypted messages — which would require breaking end-to-end encryption — and gave a government commission potentially broad authority over content moderation practices.


Question 13: An "algorithmic impact assessment" (AIA) would most closely resemble which existing regulatory mechanism?

A) A patent filing, documenting the novelty of an algorithm
B) An environmental impact assessment, requiring systematic evaluation of potential harms before deployment
C) A financial audit, verifying that reported revenues are accurate
D) A product liability inspection, certifying that products meet safety standards after market entry

Answer: B. The AIA concept is explicitly modeled on environmental impact assessments: a requirement to systematically evaluate potential harms before deployment, with independent verification and public disclosure. Like EIAs, AIAs would shift the burden from post-hoc harm documentation to pre-deployment risk assessment.


Question 14: The EU's "precautionary principle" in platform regulation means:

A) Regulating platforms before there is scientific consensus on the harms they cause
B) Taking regulatory precautions only when harms have been proven beyond scientific doubt
C) Allowing platforms to self-regulate with government oversight as a precautionary backstop
D) Requiring government preclearance before any new platform features are deployed

Answer: A. The precautionary principle holds that regulatory action is justified when activities threaten harm even without complete scientific proof of causal relationships. Applied to platform regulation, it justifies regulating before population-scale harms are established beyond all doubt — reflecting a different risk tolerance than the US approach, which generally waits for more established evidence.


Question 15: What is the primary weakness of GDPR's consent-based data protection model for addressing platform harms?

A) The consent requirements are technically too complex for most users to understand
B) Consent-based frameworks address data collection but not the harm of algorithmic engagement optimization that uses data to exploit psychological vulnerabilities
C) GDPR applies only to European companies, allowing US platforms to collect unlimited data
D) Consent can only be given by adults, leaving children's data unprotected

Answer: B. The chapter's key critique of GDPR is that consent frameworks address how data is collected rather than what it's used for. The primary harm of engagement-maximizing platforms is not that they collect too much data per se, but that they use data to identify and exploit psychological vulnerabilities. Even platforms with fully GDPR-compliant data collection could use that data to harm users.


Question 16: The "Brussels Effect" in technology regulation refers to:

A) The EU's ability to impose GDPR penalties on non-EU companies with EU operations
B) The tendency for EU regulations to become de facto global standards because multinationals simplify to one global practice
C) Brussels-based lobbying by technology companies against EU regulation
D) The EU Commission's authority to override national data protection authority decisions

Answer: B. The Brussels Effect describes how EU regulation often achieves global reach through market power rather than formal jurisdiction: companies with global operations often find it more efficient to apply EU standards everywhere than to maintain geographically differentiated practices. GDPR produced privacy changes globally that the EU did not formally mandate outside its borders.


Question 17: What did the FTC's 2023 lawsuit against Amazon allege regarding dark patterns?

A) That Amazon manipulated product search results to favor Amazon-branded products
B) That Amazon used dark patterns to make Prime subscription enrollment easy and cancellation difficult
C) That Amazon collected children's data without COPPA-required parental consent
D) That Amazon's price display interfaces deceptively obscured total costs

Answer: B. The FTC's complaint alleged that Amazon made Prime subscriptions difficult to cancel — involving multiple screens, unclear options, and deliberate friction — while making enrollment quick and easy. This asymmetric friction is a classic dark pattern: making the platform-favored action easy and the user-favored action hard.


Question 18: Interoperability requirements in platform regulation are designed primarily to address:

A) The technical incompatibility between different operating systems
B) The lock-in effect created by network effects, where platforms become hard to leave because your social connections are there
C) The inability of users in different age cohorts to communicate with each other
D) The problem of misinformation spreading from one platform to another

Answer: B. Interoperability requirements — allowing users to communicate and maintain connections across different platforms — are designed to reduce the network effect lock-in that gives dominant platforms leverage over users and prevents competitive entry. The EU Digital Markets Act includes interoperability requirements for messaging platforms.


Question 19: The fundamental misalignment that the chapter argues existing regulation mostly fails to address is:

A) The misalignment between platform terms of service and actual platform behavior
B) The misalignment between platform incentives (maximize engagement) and user interests (wellbeing, accurate information)
C) The misalignment between regulatory standards in different countries
D) The misalignment between platform algorithms and content creator interests

Answer: B. The chapter argues that the root cause of most platform harms is the fundamental misalignment between the platform incentive (maximize engagement, which drives advertising revenue) and user interests (wellbeing, accurate information, meaningful connection). Regulations that change the forms of engagement maximization without changing the underlying incentive are working around the problem rather than addressing it.


Question 20: Which regulatory mechanism would most directly address the fundamental incentive misalignment identified in Question 19?

A) More detailed transparency reporting requirements
B) Additional consent requirements for data processing
C) Liability for algorithm-amplified harms, creating financial costs proportionate to documented harmful outcomes
D) Mandatory inclusion of a "chronological feed" option

Answer: C. If platforms bear financial liability for harms their algorithms amplify — if engagement-maximizing recommendations that produce documented harm generate legal costs — the incentive calculation changes. Currently, the costs of harmful engagement maximization fall on users and society; the benefits (revenue) accrue to platforms. Liability would internalize some of those costs, changing the economic calculation that currently drives harmful design.


Question 21: The DSA's researcher data access requirement addresses what specific problem in platform accountability?

A) The problem that researchers cannot trust platform-provided transparency reports
B) The information asymmetry where platforms have detailed data about their effects and independent researchers have very little
C) The problem that academic research on social media cannot be published in peer-reviewed journals
D) The regulatory gap where researchers can study platforms but cannot recommend regulatory action

Answer: B. One of the most significant obstacles to accountability is the information asymmetry between platforms (which have detailed, proprietary data about user behavior, algorithm effects, and harm metrics) and independent researchers (who have been largely dependent on what platforms choose to share). The DSA's researcher data access requirement directly addresses this by requiring platforms to provide vetted researchers with data access necessary to assess systemic risks.


Question 22: Based on the chapter's analysis, effective platform regulation that addresses the root cause of documented harms would require:

A) Primarily technical standards for algorithm design
B) Primarily content moderation requirements
C) Primarily transparency and disclosure requirements
D) Changes to platform incentive structures, including liability for harm and potentially advertising market regulation, alongside transparency and accountability mechanisms

Answer: D. The chapter concludes that while transparency, risk assessments, and dark pattern prohibitions are valuable, they are insufficient if the underlying economic incentives remain unchanged. Effective regulation would need to change platform incentive structures — through liability for harm, advertising market regulation that disconnects revenue from harmful engagement intensity, and competitive pressure from interoperability — alongside the accountability mechanisms that existing regulation focuses on.