Chapter 41 Quiz
Ethics of Truth, Deception, and the Epistemic Commons
Instructions: Answer each question, then reveal the answer using the toggle below.
Question 1
According to Kant's categorical imperative, why is lying always morally wrong, even when it might produce better consequences?
Reveal Answer
Kant's categorical imperative grounds the wrongness of lying in two related arguments. First, using the universalizability test: if everyone lied whenever it was convenient, the very institution of communication would collapse — statements would lose their meaning because no one could rely on them. The maxim "lie when convenient" cannot be universalized without self-contradiction. Second, using the humanity formula: lying treats the deceived person as a mere means to the liar's ends, bypassing their rational autonomy without their consent. Even if lying produces better consequences in a specific case, it is intrinsically wrong because it violates the rational infrastructure of communication and fails to respect the dignity of the person deceived.
Question 2
What is the distinction between "sincerity" and "accuracy" in Bernard Williams's account of the virtues of truth?
Reveal Answer
Williams distinguishes two fundamental virtues of truth. **Sincerity** is the virtue of not asserting what one does not believe — it is the commitment to say only what you actually think is true. **Accuracy** is the virtue of taking adequate care to ensure your beliefs are actually true before asserting them — it is an epistemic diligence requirement. A person can be sincere (genuinely believes what they say) but lack accuracy (they formed their belief carelessly without adequate investigation). In the digital age, this distinction is crucial: many sharers of misinformation are sincere but lack accuracy. Ethical communication requires both virtues.
Question 3
What is "misleading implicature," and how does it relate to Grice's theory of conversational norms?
Reveal Answer
Paul Grice identified conversational implicature as the process by which speakers convey meaning beyond the literal content of their words, relying on shared assumptions about cooperative communication. When a speaker says something that is literally true but that violates cooperative norms (e.g., relevance, completeness), listeners draw inferences about what the speaker means to convey. **Misleading implicature** involves deliberately exploiting these conversational norms to create false impressions without making literally false statements. For example, saying "I would never steal from a friend" in response to "Did you take the money?" implies one is innocent without literally saying so. The statement may be true while the implication is false. This is a form of deception that does not constitute lying in the strict sense but may be equally culpable morally.
Question 4
What philosophical foundations support the claim that there is a positive right to accurate information?
Reveal Answer
Three major philosophical traditions provide foundations for a positive epistemic right. From a **Kantian perspective**: rational autonomy — the capacity to make genuinely informed choices — is a precondition for human dignity. Denying individuals accurate information compromises their capacity for rational self-determination. From a **consequentialist/democratic theory perspective**: democratic self-governance requires an informed citizenry; without access to accurate information, citizens cannot make the choices democratic theory requires of them, so democratic welfare grounds a positive duty to ensure information access. From a **contractualist perspective** (Scanlon): no reasonable person could reject a principle requiring that members of a political community have access to accurate information about matters significantly affecting their lives. Each foundation points toward positive duties on governments, institutions, and media organizations to provide and protect access to accurate information.
Question 5
Define testimonial injustice and provide an example of how it might occur in a social media context.
Reveal Answer
**Testimonial injustice** (Miranda Fricker's term) occurs when a speaker is assigned less credibility than they deserve because of prejudice — typically prejudice related to their identity characteristics such as race, gender, class, or accent. The wrong is done specifically to the person in their capacity as a knower: their reliability as an epistemic agent is denied on identity-based rather than evidence-based grounds. In a social media context: a member of a marginalized community posts about a local environmental hazard affecting their neighborhood. The post is automatically flagged as potentially misleading by a content moderation system trained primarily on mainstream sources that lack coverage of the issue, and it receives limited organic distribution because the account has few followers and low "authority" signals. Meanwhile, the same claim made by a credentialed environmental consultant would receive different treatment. The credibility discount the original poster faces is not based on the quality of their evidence but on their status — a form of testimonial injustice mediated by algorithmic systems.
Question 6
What is the "selection problem" in fact-checking, and why does it have ethical significance?
Reveal Answer
The **selection problem** refers to the fact that no fact-checking organization can check every claim made in public discourse; choices must be made about which claims to investigate. These selection decisions are inherently consequential: by choosing to check some claims and not others, fact-checkers implicitly determine whose statements are subjected to public scrutiny. The ethical significance is that the selection process has distributive consequences. Research shows that selection tends to favor: claims by politically powerful figures, widely circulating claims in accessible languages, and claims about which checkable evidence exists. This systematically under-checks claims by less powerful speakers, claims in non-dominant languages, and claims in communities with less media visibility. The selection process thus replicates existing patterns of epistemic inequality — holding some speakers to epistemic accountability while leaving others comparatively unchecked. This is an epistemic justice concern: who gets fact-checked is not epistemically neutral.
Question 7
Explain the "harm asymmetry" concept in platform content moderation. Why might under-moderation and over-moderation not be symmetrically bad outcomes?
Reveal Answer
The **harm asymmetry** refers to the observation that the harms of under-moderation (allowing harmful false content to circulate) and over-moderation (suppressing legitimate speech) are not necessarily equal in magnitude, type, or distribution, even though both represent failures of content moderation. Under-moderation harms can include: preventable deaths from health misinformation, distortion of democratic elections, silencing of marginalized voices through harassment, real-world violence from incitement. Over-moderation harms can include: suppression of legitimate political speech, silencing of minority viewpoints, impairment of accountability journalism. These harms are not symmetrically bad because: (a) they fall on different people, (b) they have different immediate and long-term severity, and (c) their severity varies by context. In a public health emergency, the harms of allowing life-threatening health misinformation to spread likely outweigh the harms of over-labeling some legitimate content. A proportionate moderation approach should incorporate this context-sensitive asymmetry rather than treating all moderation errors as equivalent failures.
Question 8
What is epistemic paternalism? How does it create tension with Mill's principle of epistemic liberty?
Reveal Answer
**Epistemic paternalism** refers to interventions in individuals' epistemic processes — their access to information, their exposure to particular content, or their belief-formation processes — that override or circumvent their epistemic autonomy on the grounds that the intervention serves their interests or others'. This creates direct tension with Mill's arguments in *On Liberty*. Mill argued that the free competition of ideas, including false ones, is the best available mechanism for arriving at truth: false ideas are most effectively defeated in open debate, not by suppression; suppressing ideas risks suppressing truths not yet recognized; and even false ideas, by forcing the defense of correct beliefs, keep truth "alive" rather than reducing it to dead dogma. Platform content moderation that removes or labels misinformation is a form of epistemic paternalism: it substitutes the platform's (or fact-checker's) judgment about what is true for the individual user's own epistemic process. Mill's framework would permit this only when the false content causes demonstrable harm to others — not merely because it might lead users to form false beliefs about their own interests.
Question 9
What is the "tragedy of the epistemic commons," and how does it parallel Garrett Hardin's original commons tragedy?
Reveal Answer
Garrett Hardin's "tragedy of the commons" describes how rational individual behavior can destroy a shared resource: each user captures the full benefit of their use while distributing the cost of depletion across all users, leading to overuse. The **tragedy of the epistemic commons** applies this logic to the shared information environment. Individual sharing decisions capture private benefits (social currency, engagement, in-group approval) at low individual cost, while the costs — degradation of the shared epistemic environment through the spread of misinformation — are distributed across all users of that environment. When millions of individuals make sharing choices that prioritize engagement over accuracy, the aggregate effect degrades the quality of the epistemic commons on which everyone depends for belief formation. The key parallel: the tragedy is not caused by malicious intent, but by misaligned incentives. Most misinformation sharers are not trying to deceive; they are responding rationally to social and psychological incentives that do not reward epistemic care. Correcting the tragedy requires institutional redesign — changing the incentive structure — rather than merely appealing to individual virtue, just as Hardin's commons tragedy requires institutional solutions beyond appeals to voluntary restraint.
Question 10
How does Fricker's concept of hermeneutical injustice apply to the digital age?
Reveal Answer
**Hermeneutical injustice** occurs when a social group lacks the conceptual resources to understand or articulate their own experience because their experience has been systematically excluded from the collective development of concepts and language. In the digital age, this manifests in at least two ways. First, content moderation systems that identify "misinformation" are trained on existing knowledge structures, which may encode the biases of dominant epistemic communities. Claims or experiences that are heterodox within mainstream frameworks — even if legitimate — may be labeled as misinformation because the systems lack the conceptual nuance to distinguish legitimate heterodoxy from pseudoscience. Communities whose knowledge traditions differ from those encoded in training data face hermeneutical injustice: their ways of knowing are not recognized by the systems that govern epistemic access. Second, some digital communities develop new concepts and vocabulary that later enter mainstream discourse — #MeToo is an example where digital networks enabled the development of shared concepts for experiences that had previously lacked collective names. This illustrates both the hermeneutical injustice that preceded the digital tools and the capacity of digital networks to remediate hermeneutical injustice by enabling marginalized communities to develop shared conceptual resources.
Question 11
Why might consequentialist reasoning ultimately support relatively firm truth-telling norms, rather than licensing lying whenever the expected consequences favor it?
Reveal Answer
Several considerations push toward firm truth-telling norms even from a consequentialist starting point. First, **rule consequentialism**: even if individual lies sometimes produce better consequences than the truth, a general practice of "lying when I calculate good consequences will follow" may produce worse aggregate consequences than near-universal truth-telling, because the latter sustains trust while the former erodes it. Second, **epistemic limitations**: humans are notoriously poor predictors of consequences, especially in the complex, multi-agent environments characteristic of information ecosystems. A lie told with good intentions can spread further than the truth, mutate into worse falsehoods, and produce far more harm than intended. This uncertainty about consequences provides a consequentialist argument for firm truth-telling rules that function as epistemic guardrails. Third, **trust externalities**: individual lies impose negative externalities on others who rely on the general trustworthiness of communication; a consequentialist calculation that only considers the direct consequences of a specific lie, without accounting for these systemic effects, is incomplete.
Question 12
What is the "private censorship problem" in platform content moderation, and how does it differ from traditional concerns about government censorship?
Reveal Answer
The **private censorship problem** refers to the concern that when private companies make content moderation decisions that effectively silence speakers — removing content, suspending accounts, reducing algorithmic reach — they exercise power over public discourse comparable to government censorship, but without the constitutional constraints, democratic accountability, or transparency requirements that govern state censorship. It differs from traditional government censorship concerns in several ways: (1) private companies are not bound by constitutional free speech protections in most jurisdictions; (2) platform moderation decisions are not made through democratic processes; (3) the scope and opacity of algorithmic moderation make it difficult for affected speakers to understand or contest decisions; (4) platforms' market dominance means there may be no practical alternative for reaching a public audience. However, the private censorship concern can be overstated: private editorial decision-making (by newspapers, publishers, bookstores) has always been permitted without this constituting "censorship" in the constitutionally significant sense. The distinctive concern is not that platforms are private, but that their scale and dominance make their editorial decisions functionally public in a way that earlier private editorial decisions were not.
Question 13
What is the "duty of correction," and how does it relate to the original duty of epistemic care?
Reveal Answer
The **duty of correction** holds that when we learn that information we have previously shared was false or misleading, we have a responsibility to make reasonable efforts to correct the record — notifying those we shared with, correcting public posts, issuing retractions or updates. It relates to the duty of epistemic care (the duty to take reasonable care in assessing information before sharing) as a secondary obligation: if the primary duty (epistemic care before sharing) is fulfilled and information turns out to be false despite good-faith efforts, the duty of correction takes over. If the primary duty was not fulfilled — if content was shared carelessly — the duty of correction is correspondingly more urgent. The duty of correction is generally weaker than the duty of care (it does not require exhaustive efforts to correct every person who may have encountered the false information) but is not trivial. It is amplified by platform influence: a public figure with millions of followers who has shared misinformation has a stronger duty of correction than a private individual, both because the epistemic harm was greater and because their correction would reach more people. The correction asymmetry (corrections are less effective than original claims) makes timely correction especially important.
Question 14
In the context of the epistemic commons, why should epistemic infrastructure be treated as democratic infrastructure?
Reveal Answer
The **epistemic commons** — the shared informational environment of accurate knowledge, reliable institutions, and sound epistemic norms — is a precondition for democratic self-governance. Democracy requires that citizens be able to form well-grounded beliefs about political candidates, policies, and public affairs; make genuine informed choices among options; and hold their representatives accountable based on accurate assessments of their performance. These democratic functions depend on epistemic infrastructure: independent journalism to investigate and report on public affairs; scientific institutions to produce reliable knowledge about empirical questions; educational systems that cultivate epistemic virtues; and public forums with sufficient epistemic quality to enable genuine deliberation. When epistemic infrastructure deteriorates — through the collapse of local journalism, underfunding of scientific institutions, degradation of educational quality, or the pollution of public forums with misinformation — democratic capacity deteriorates with it. Citizens who cannot access or evaluate accurate information cannot govern themselves effectively. Treating epistemic infrastructure as democratic infrastructure means recognizing that public investment in these institutions is not merely a cultural expenditure but a requirement of democratic maintenance, comparable to investment in courts, electoral systems, and other democratic institutions.
Question 15
What distinguishes "malinformation" from ordinary misinformation, and what are the unique ethical challenges it poses?
Reveal Answer
**Malinformation** refers to the disclosure of true information with the intent to harm — for example, publishing accurate private information to damage a person's reputation, leaking true information about a political figure with the intent to manipulate an election, or releasing accurate corporate communications to sabotage a business negotiation. Malinformation poses unique ethical challenges precisely because its content is true. Standard justifications for restricting misinformation — that false information harms people by producing false beliefs — do not apply when the information is accurate. Yet malinformation can cause significant harm: to individuals' privacy, to democratic processes, to legitimate institutional interests. The ethical analysis must therefore focus not on truth value but on intent, context, and consequences. The key questions are: (1) Does the person whose information is disclosed have a legitimate privacy interest? (2) Is the harm caused proportionate to any public interest served by disclosure? (3) Is the disclosure serving the audience's genuine epistemic interests, or is it weaponizing true information for private or political gain? Whistleblowing involves true information disclosed in ways that may harm specific parties; its ethics depend on whether the public interest in the disclosure outweighs the private harm and whether disclosure is the minimum necessary for the legitimate epistemic purpose.
Question 16
How do Elinor Ostrom's principles for governing common-pool resources apply to the epistemic commons?
Reveal Answer
Elinor Ostrom showed that commons can be effectively governed without privatization or top-down regulation when users develop their own institutional arrangements. Her eight principles map onto the epistemic commons as follows:
1. **Clearly defined boundaries**: epistemic communities need defined membership and scope — who participates in which information spaces and under what norms.
2. **Congruence with local conditions**: moderation norms should be calibrated to the specific epistemic and cultural context of each community, not imposed uniformly from outside.
3. **Collective-choice arrangements**: the norms governing shared information spaces should be developed with meaningful participation from those governed by them.
4. **Monitoring**: epistemic quality needs ongoing monitoring, e.g., through external fact-checking and algorithmic audits.
5. **Graduated sanctions**: epistemic violations should be met with proportionate responses — labels, reduced distribution, eventual removal — rather than immediate maximum penalties.
6. **Conflict resolution**: clear, accessible mechanisms for contesting moderation decisions.
7. **Rights recognition**: legitimate spaces for epistemic community governance must be recognized by external authorities (platforms, governments).
8. **Nested enterprises**: epistemic governance at multiple scales (community, platform, national, international) should be coordinated rather than redundant.
Question 17
What is the "calibration problem" in epistemic paternalism, and why does it complicate the ethical case for content moderation?
Reveal Answer
Even when the ethical case for epistemic intervention is strong in principle — because false content causes harm to others — there is a practical **calibration problem**: we lack reliable knowledge about which specific interventions effectively reduce epistemic harm and which are counterproductive. Research on correction effects shows mixed results: warning labels sometimes reduce engagement with labeled content but may increase the content's perceived credibility among those who distrust the labeling authority. The "backfire effect" (corrections strengthening false beliefs) has not been reliably replicated in most laboratory studies, but effects vary significantly by population and context. Content removal may effectively limit the spread of a specific falsehood while generating grievance and "censorship" narratives that are more epistemically damaging than the original false content would have been. This calibration problem means that the ethical case for intervention cannot rest only on the in-principle harm-prevention justification. It must also engage with empirical evidence about the effectiveness of specific interventions in specific contexts. An intervention that is ethically justified in principle but consistently counterproductive in practice may do more epistemic harm than no intervention. This requires a commitment to ongoing empirical evaluation of intervention effects — something that platform policies have historically been poorly designed to enable.
Question 18
What role does the "selection problem" in fact-checking play in perpetuating or challenging epistemic injustice?
Reveal Answer
The **selection problem** — fact-checkers must choose which claims to investigate from among far more claims than they can check — directly interacts with epistemic justice concerns. Selection decisions are not epistemically neutral: by choosing to check powerful politicians' claims more often than equivalent claims by less powerful actors, fact-checkers reinforce the pattern of subjecting some epistemic actors to scrutiny while implicitly crediting others. This can perpetuate testimonial injustice by asymmetrically policing the epistemic claims of already-credibility-discounted groups. Conversely, if selection decisions systematically target claims from marginalized communities whose heterodox views challenge mainstream frameworks, fact-checking can become an instrument of epistemic oppression — using the authority of credentialed fact-checkers to delegitimize claims that deserve serious engagement. Addressing epistemic injustice in fact-checking requires: deliberate attention to selection criteria that do not simply follow the existing hierarchy of attention and credibility; outreach to communities whose claims are systematically under-checked; and transparency about selection methodology that allows communities to identify and challenge systematic biases in what gets checked.
Question 19
According to the chapter, what specific obligations do individuals with large social media platforms have that ordinary users do not?
Reveal Answer
The chapter argues that epistemic responsibility should be **scaled to epistemic influence** — those who have large platforms exercise power over others' beliefs at scale, and this power generates commensurate responsibilities. Specifically, individuals with large platforms (politicians, celebrities, journalists, academics) have:
1. A heightened **duty of epistemic care** before sharing: they must invest more effort in verifying information before sharing because the downstream epistemic harm of sharing misinformation is proportionally greater.
2. A stronger **duty of correction**: if they share misinformation, their larger platform means the misinformation reached more people, and their correction can reach more people too — so the obligation to correct is both more urgent and more actionable.
3. A **duty of epistemic framing**: they must be especially careful not to share true information in misleading ways, since their framing is particularly likely to be adopted by followers.
4. A **duty of disclosure**: conflicts of interest that might bias their epistemic judgments must be disclosed to their audience.
This scaling of responsibility does not deprive public figures of the right to be wrong; it attaches special obligations of care proportionate to their capacity for epistemic impact.
Question 20
What did the COVID-19 lab-leak hypothesis case reveal about the relationship between scientific consensus and platform content moderation?
Reveal Answer
The COVID-19 lab-leak hypothesis case revealed several important limitations of using "scientific consensus" as a standard for content moderation. In early-to-mid 2020, several major platforms and fact-checkers classified content suggesting that SARS-CoV-2 might have originated from a laboratory leak as misinformation or "conspiracy theory," citing the position of mainstream public health authorities. By 2021-2023, the FBI and Department of Energy had assessed the lab-leak hypothesis as at least possible, senior scientists acknowledged the question was genuinely open, and consensus positions had shifted significantly toward uncertainty. This case demonstrates:
1. **Scientific consensus is not equivalent to settled fact** — consensus can be wrong, can be premature, and can lag behind emerging evidence.
2. **Platforms that moderate based on consensus can suppress legitimate scientific inquiry** — particularly heterodox views that later prove well-founded.
3. **Fact-check verdicts can persist** long after the evidentiary basis that supported them has shifted, creating lasting epistemic distortions.
4. **Moderation systems need robust correction mechanisms** as compelling as the original labeling.
5. **The difference between "contradicts current consensus" and "is false" is crucial**, and platforms often conflate them.
Question 21
How does Hannah Arendt's analysis of totalitarianism relate to the pessimistic scenario for the future of truth in the digital age?
Reveal Answer
Hannah Arendt observed that totalitarian societies did not require citizens to believe in the regime's propaganda — they required citizens to be willing to accept any claim from authority, regardless of evidence, and to abandon the capacity for independent judgment. The goal of totalitarian propaganda was not to create specific false beliefs but to destroy the very habit of truth-seeking: a population that has given up trying to know the truth is maximally manipulable. The chapter's pessimistic scenario echoes this analysis in a digital context. The risk is not simply that citizens will believe specific false things — it is that the proliferation of synthetic media, the fragmentation of epistemic authorities, and the deliberate manipulation of information environments may produce a population that has given up trying to determine what is true, retreating instead into tribal epistemologies where claims are accepted or rejected based on source identity rather than evidence. This condition — epistemic nihilism or cynicism — is the precondition for authoritarian manipulation: if citizens cannot distinguish true from false, they are susceptible to whoever makes the most confident and emotionally resonant assertions. The digital-age analogue to Arendt's observation is that the threat to democracy may come not from censorship of truth but from the overwhelming of truth-tracking capacities by the sheer volume and sophistication of competing manipulations.
Question 22
What is the relationship between the epistemic commons concept and the classical economic theory of public goods?
Reveal Answer
Economists define **public goods** by two properties: non-excludability (it is difficult or impossible to prevent anyone from using the good) and non-rivalry (one person's use does not diminish the supply available to others). Information is a classical public good: a true belief, once disseminated, cannot be "used up," and it is difficult (though not impossible) to exclude people from accessing publicly available information. The **epistemic commons** applies the public goods framework to the collective stock of shared knowledge, institutions, and epistemic norms that members of a society rely on. Like other public goods, the epistemic commons suffers from the problem of **underprovision**: because individuals cannot fully capture the benefits of contributing to epistemic quality (producing careful, accurate journalism, for example), and because they cannot be excluded from the benefits of others' epistemic contributions, private incentives tend to undersupply epistemic public goods relative to their social value. This undersupply problem provides a market-failure justification for public investment and public governance of epistemic infrastructure, parallel to the standard market-failure justification for public provision of national defense, basic research, and other public goods. The tragedy-of-the-commons dynamic (sharing misinformation is individually rational but collectively harmful) is the negative externality side of the same public goods problem: individual choices impose epistemic costs on others that are not reflected in the individual's incentives.
Question 23
Evaluate the claim that "spin" is always unethical. What is the strongest defense of spin as a legitimate communicative practice, and what is the strongest critique?
Reveal Answer
**Strongest defense of spin as legitimate**: In genuinely adversarial and transparent contexts — formal debates, legal proceedings, official position papers — spin (presenting information in the most favorable possible light) is a recognized and accepted convention. Audiences understand that advocates present their strongest case; the adversarial structure is publicly marked; and truth is expected to emerge from the contest of competing advocates before a neutral arbiter (judge, audience, voter). In this context, spin is not deception but advocacy — a legitimate mode of communication whose conventions are mutually understood.
**Strongest critique of spin**: The defense of spin assumes conditions (transparent adversarial structure, neutral arbiter, audience sophistication) that frequently do not obtain in real-world communication. On social media, the "adversarial" frame is absent — users typically do not know they are receiving one-sided advocacy. No neutral arbiter is present to synthesize competing spins into truth. Powerful actors can spin far more effectively than less-resourced opponents, systematically distorting the information environment in their favor. And the cumulative effect of pervasive spin degrades the epistemic quality of public discourse even when individual acts of spin are locally defended.
**Assessment**: Spin is ethically permissible in explicitly adversarial, marked, and balanced contexts, but ethically problematic when it exploits the absence of these conditions — particularly in the diffuse, unmarked environment of social media where the gap between what is literally said and what is intended to be conveyed operates without the corrective mechanisms that make adversarial advocacy epistemically defensible.
Question 24
What three collective-action solutions to the tragedy of the epistemic commons does the chapter identify, and what are the strengths and weaknesses of each?
Reveal Answer
The chapter identifies three standard solutions to commons problems applied to the epistemic domain:
**1. Privatization** — assigning control of information spaces to private entities who then have incentives to maintain quality. Strength: private controllers have direct incentives to maintain quality if their audience values it. Weakness: private controllers may maximize engagement rather than accuracy; they may be unaccountable to the public; and market competition may produce a "race to the bottom" in which attention-capturing but epistemically low-quality content wins.
**2. Regulation** — external rules limiting uses of the epistemic commons that degrade its quality (truth-in-advertising laws, misinformation liability, platform accountability standards). Strength: external requirements can mandate quality without depending on market incentives. Weakness: regulation can suppress legitimate expression, entrench incumbent power, be captured by political actors with interests in manipulating the information environment, and may be slow to adapt to new epistemic threats.
**3. Cooperation through norms and institutions** — developing shared epistemic norms and institutions maintained through social rather than governmental enforcement (equivalent to Ostrom's common-pool resource management). Strength: norm-based governance is flexible, resistant to political capture, and can operate without coercive enforcement. Weakness: it requires participants to internalize epistemic values, which is a long-term project; it may be insufficiently robust against well-resourced bad actors; and it tends to work best within well-defined communities, becoming harder to sustain across diverse, fragmented publics.
The chapter suggests cooperation through norms and institutions is the most promising long-term solution, but acknowledges that regulation and market mechanisms have roles in the short and medium term.
Question 25
What does the chapter mean by the claim that "the future of truth is not determined by forces beyond individual control"?