Chapter 41: Exercises

Ethics of Truth, Deception, and the Epistemic Commons


Part A: Analytical Exercises

Exercise 1: Classifying Deception

For each of the following statements, identify whether it represents (a) an outright lie, (b) a technically true but misleading statement, (c) misleading implicature, (d) selective emphasis, (e) strategic omission, or (f) legitimate persuasion. Justify your classification with reference to the conceptual distinctions developed in Section 41.2.

  1. A pharmaceutical company's advertisement states: "Nine out of ten doctors recommend Brand X pain reliever." (The company commissioned a survey that polled only 10 doctors, none of whom were pain specialists.)
  2. A politician says: "Crime dropped 12% during my administration." (They omit that violent crime increased 8% while only property crime fell enough to produce the overall 12% figure.)
  3. A job applicant says "I've never been convicted of a crime" — which is true, but only because the conviction resulting from their prior plea deal was later expunged.
  4. A social media influencer posts glowing reviews of a supplement without disclosing that they are being paid to promote it.
  5. A news headline reads: "Scientists Find Link Between Coffee and Cancer." (The study found a weak statistical association in one animal model, with no established causal mechanism.)
  6. A candidate says "My opponent supported a bill that cut education funding." (The opponent voted for a broad budget compromise in which education funding was reduced by 0.3% while overall funding to schools increased through other channels.)

Exercise 2: Applying the Categorical Imperative

Consider the following scenario: A journalist is interviewing a source who has inside information about corruption at a government agency. The source asks whether the journalist is recording the conversation. The journalist is recording it (lawfully, in a one-party consent jurisdiction) but knows that the source will be less forthcoming if they know they are being recorded. The journalist says nothing and allows the source to assume they are not being recorded.

a) Analyze this situation using Kant's universalizability test. What is the maxim the journalist is acting on? Can it be universalized without contradiction?

b) Analyze it using Kant's humanity formula. Is the journalist treating the source as an end or as a mere means?

c) Analyze it from a consequentialist perspective. What consequences follow from the journalist's behavior in this instance? What consequences would follow from a general practice of journalists behaving this way?

d) What would Bernard Williams's account of sincerity require in this situation? Does the journalist violate the virtue of sincerity by saying nothing?

e) What is your all-things-considered ethical judgment? Does the public interest value of the resulting investigation alter the ethical analysis?

Exercise 3: The Right to Know — Philosophical Reconstruction

Construct a philosophical argument for a positive right to accurate information. Your argument should:

a) Identify the normative foundation (deontological, consequentialist, contractualist, or virtue-ethical) on which the right rests.

b) Specify who the right-bearer is (all persons, all citizens, particular classes of affected persons).

c) Specify who the duty-bearer is (governments, corporations, media organizations, individuals).

d) Articulate the content of the right — what specifically it entitles the right-bearer to.

e) Identify at least two genuine competing considerations that limit or complicate the right.

Then write a two-paragraph rebuttal of your argument from a libertarian perspective, and respond to that rebuttal.

Exercise 4: Epistemic Injustice Analysis

Read the following scenario carefully, then answer the questions below.

Scenario: In 2021, researchers at a prominent university's medical school published a preprint suggesting that a widely used industrial chemical was associated with elevated cancer rates in communities adjacent to manufacturing facilities. Many of these communities were predominantly low-income and largely composed of racial minorities. The preprint received limited mainstream media attention. Several months later, a corporate-funded study published in a peer-reviewed journal found no such association, and this study received extensive coverage. When community members and local advocacy groups attempted to raise the issue on social media, their posts were flagged by automated content moderation systems as "health misinformation" because they referenced the unreviewed preprint rather than the peer-reviewed study. Corporate communications teams used this labeling to dismiss community concerns in public forums.

a) Identify the specific forms of epistemic injustice, using Fricker's framework, that appear in this scenario.

b) Is the content moderation system's behavior in this scenario a form of testimonial injustice? Defend your answer.

c) What hermeneutical resources would the affected communities need to effectively articulate and advocate for their epistemic interests?

d) What reforms to content moderation policies would Fricker's framework suggest?

e) How does this scenario illustrate the "whose misinformation gets corrected?" problem discussed in Section 41.4?


Part B: Applied Ethical Dilemmas

Exercise 5: The Platform Moderation Trolley Problem

You are the head of content policy at a major social media platform. Consider the following three scenarios. For each, state what action you would take and provide a rigorous ethical justification drawing on at least two of the frameworks discussed in the chapter.

Scenario A: A viral post claims that a particular vaccine batch has caused hundreds of deaths and calls for immediate withdrawal of the vaccine from the market. Independent investigation by your team finds that the post is based on a misreading of VAERS data — the deaths mentioned occurred after vaccination, but causation has not been established. The claims are spreading rapidly and have led several hospitals to pause vaccination programs. Removal would suppress speech that some users defend as legitimate medical inquiry.

Scenario B: A sitting head of government posts a claim that the upcoming election will be "rigged" because of mail-in voting fraud. The claim is contradicted by election security officials and substantial evidence, but the official has millions of followers and the claim is being widely shared domestically and internationally. Removing the post would be seen by the official's supporters as political interference; leaving it may contribute to undermining confidence in the election outcome.

Scenario C: An anonymous account with 200 followers posts a detailed thread claiming that a prominent scientist has fabricated data in a major climate study. The thread contains some legitimate technical criticisms alongside several serious mischaracterizations. The scientist denies the allegations. The post is gaining traction in climate-skeptic networks but has not yet crossed into mainstream platforms. Should you remove it, label it, reduce its distribution, or leave it alone?

Exercise 6: The Fact-Checker's Dilemma

You are a senior editor at a fact-checking organization. Your team has thoroughly investigated a claim made by a prominent political figure and your verdict is "Mostly False." However, two members of your team raise the following concerns before publication:

  • Researcher A argues that the claim, while factually imprecise, reflects a genuine and widespread public concern that deserves to be addressed rather than merely debunked. Publishing "Mostly False" will alienate the communities whose concerns the claim expresses, reducing their trust in fact-checkers generally.

  • Researcher B argues that the claim is made by a politician who has not previously been fact-checked by your organization, while politicians of the opposing party have been checked frequently. Publishing this fact-check contributes to a perception of political bias that will undermine your organization's credibility.

a) Respond to Researcher A's concern. Does the epistemic function of fact-checking require engaging with the underlying concerns that false claims express, or is it sufficient to demonstrate factual inaccuracy?

b) Respond to Researcher B's concern. Is the selection of which claims to fact-check a relevant ethical consideration, or is it irrelevant to the ethical analysis of publishing a specific fact-check?

c) Draft a response to a public complaint that your organization is politically biased. The response should be honest about the limitations and difficulties of fact-checking without abandoning the epistemic value of the practice.

Exercise 7: The Whistleblower's Truth

A government employee discovers that their agency has been systematically falsifying air quality data in communities near industrial sites, exposing residents to illegal levels of pollutants. The falsification was ordered by agency leadership under pressure from industry lobbyists. The employee has documented proof.

a) Apply the concept of "malinformation" to this scenario. The employee possesses true information whose disclosure will harm the government agency and its leadership. Does the ethics of malinformation apply here? Why or why not?

b) Construct the strongest case that the employee has an obligation, not merely a right, to disclose this information.

c) Construct the strongest case against disclosure. What legitimate interests would argue for maintaining confidentiality?

d) Apply Mill's harm principle. Does the harm to others caused by the falsification justify disclosure even if the employee must violate their employment contract and potentially face legal consequences?

e) Using the concept of the epistemic commons, explain what harm the government agency's falsification has done to the shared information environment, and how disclosure might repair that harm.


Part C: Debate-Style Exercises

Exercise 8: Resolved — Platforms Have a Moral Obligation to Remove Demonstrably False Health Information

Prepare arguments for both the affirmative and the negative positions. Your arguments should:

  • Reference at least three philosophical frameworks (e.g., Kantian ethics, consequentialism, Mill's liberty principle, virtue ethics, contractualism).
  • Engage with the strongest version of the opposing argument.
  • Address the "who decides what's true?" problem.
  • Acknowledge the harm asymmetry between under- and over-moderation.

After preparing both arguments, write a 300-word synthesis identifying where the strongest version of each position lies and what empirical questions would most affect the outcome of the debate.

Exercise 9: Resolved — Anonymous Speech Online Should Be Protected as an Epistemic Right

Anonymous speech has a long history as a vehicle for truth-telling when disclosure would carry disproportionate risks (whistleblowers, dissidents in authoritarian regimes, individuals with stigmatized conditions sharing health information). At the same time, anonymity enables misinformation, harassment, and coordinated manipulation campaigns to avoid accountability.

a) Construct the strongest argument for protecting anonymous speech as an epistemic right.

b) Construct the strongest argument that epistemic responsibility requires some form of accountability for online speech that is not compatible with full anonymity.

c) Propose a middle-ground position that accounts for both the epistemic value of anonymity and the epistemic risks of unaccountable speech.

Exercise 10: Resolved — It Is Never Ethical to Share a Story You Have Not Personally Verified

a) Defend this position by reference to the duty of epistemic care developed in Section 41.9.

b) Refute this position by identifying categories of cases in which sharing unverified information is ethically permissible or even obligatory.

c) Develop a more nuanced principle that captures the genuine epistemic duty of care without collapsing into an unworkable absolute prohibition.


Part D: Reflective Exercises

Exercise 11: Personal Epistemic Audit

Conduct a one-week audit of your own information-sharing behavior on social media. For each post, article, or claim you share, answer the following questions:

  1. Did I verify this claim before sharing it?
  2. If not, why not — was it because I lacked time, because I assumed it was true, because I agreed with it politically, or because I simply didn't think about it?
  3. Was the source of the information one with a track record of accuracy?
  4. Did I add any framing, commentary, or context that might have created impressions beyond what the original content warranted?
  5. If the information turned out to be false or misleading, did I correct the record?

At the end of the week, write a 500-word reflection on what your audit reveals about your epistemic practices and responsibilities. Where do you fall short of the duties discussed in Section 41.9? What practical steps could you take to improve?

Exercise 12: Mapping Your Epistemic Ecosystem

Draw a diagram of your personal "epistemic ecosystem" — the sources of information you regularly rely upon, the institutions you trust, the social networks through which information reaches you, and the practices you use to evaluate what you encounter.

a) Identify at least three epistemic vulnerabilities in your ecosystem — places where false or misleading information might enter without adequate checking.

b) Identify the trusted sources in your ecosystem. On what basis do you trust them? Are your trust assignments epistemically justified?

c) How does your epistemic ecosystem compare to those of people with very different political, cultural, or social backgrounds? What shared epistemic infrastructure do you rely on in common?

d) Based on your analysis, what is the most important change you could make to your epistemic practices or sources to reduce your vulnerability to misinformation?

Exercise 13: The Ethics of Correction

Consider a time when you shared information that turned out to be false or significantly misleading. (If no such instance comes to mind, consider a hypothetical in which you are most likely to make such a mistake.)

a) What would the duty of correction, as described in Section 41.9, require of you?

b) What practical obstacles would make fulfilling that duty difficult (the original post may be difficult to find, your followers may have already moved on, correction may require admitting error publicly)?

c) Is the duty of correction stronger when the false information you shared was politically convenient for you than when it was neutral? Why or why not?

d) Design a social norm — something that could realistically be adopted on a major social media platform — that would make the practice of correction more common and more visible.


Part E: Applied Policy Analysis

Exercise 14: Designing an Epistemic Bill of Rights

Draft a proposed "Epistemic Bill of Rights" for citizens in a democracy. Your bill of rights should:

a) Include at least five specific rights, with a brief justification for each.

b) Specify corresponding duties that are generated by each right (for governments, platforms, media organizations, and individuals).

c) Address potential conflicts between proposed rights (e.g., the right to accurate information vs. the right to privacy).

d) Consider how the rights would be enforced — what institutional mechanisms would give them effect.

Compare your bill of rights with those proposed by other students. What points of consensus and disagreement emerge?

Exercise 15: Evaluating Community Notes

The "Community Notes" program on X (formerly Twitter) allows users to collaboratively add context to potentially misleading tweets. Study the publicly available data on Community Notes (available at communitynotes.twitter.com/docs/resources) and answer the following:

a) What epistemic virtues does the Community Notes model instantiate? Which epistemic vices does it risk?

b) Does the crowdsourced model address the "who decides what's true?" problem more satisfactorily than expert-based fact-checking? What are its advantages and disadvantages relative to expert fact-checking?

c) Apply Fricker's framework: are there systematic biases in who writes Community Notes and what content gets noted? What would Fricker predict about the epistemic justice implications of a crowdsourced note-writing system?

d) Design a modification of the Community Notes model that would address at least one of the epistemic justice concerns you have identified.
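For part (c), a simple tabulation can turn the bias question into something checkable. The records below are invented stand-ins — take the actual export schema from the documentation at the URL above, not from this sketch:

```python
from collections import Counter

# Invented sample records: (topic of the noted post, note status).
# A real analysis would derive these from the public TSV exports.
notes = [
    ("health", "helpful"), ("health", "helpful"), ("health", "not_helpful"),
    ("politics_A", "helpful"), ("politics_A", "not_helpful"),
    ("politics_B", "helpful"), ("politics_B", "helpful"),
    ("politics_B", "helpful"), ("politics_B", "not_helpful"),
]

by_topic = Counter(topic for topic, _ in notes)
helpful_by_topic = Counter(topic for topic, status in notes if status == "helpful")

# Rate at which notes on each topic reach "helpful" status; large gaps
# between comparable topics are a first symptom worth investigating.
rates = {t: helpful_by_topic[t] / by_topic[t] for t in by_topic}
```

Note that a gap in helpful-rates is evidence of bias only relative to a baseline of how often each community's content gets noted at all — which is exactly the Fricker-style question part (c) raises.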

Exercise 16: Platform Ethics Audit

Choose one major social media platform (Instagram, TikTok, YouTube, Facebook, X, LinkedIn, or another of your choice) and conduct a structured ethical audit of its content moderation policies. Your audit should:

a) Describe the platform's stated policies on misinformation, with specific reference to what types of false content are prohibited, reduced in distribution, or labeled.

b) Evaluate these policies against the ethical frameworks discussed in the chapter. Which frameworks do they appear to be based on? Which concerns do they fail to address?

c) Identify at least two documented cases in which the platform's moderation decisions were contested. What do these cases reveal about the difficulties of implementing the policies in practice?

d) Recommend three specific changes to the platform's policies that would better serve the ethical goals articulated in this chapter, with justification.


Part F: Synthesis Exercises

Exercise 17: The Harm Asymmetry in Practice

The chapter introduces the concept of "harm asymmetry" — the observation that the harms of under-moderation and over-moderation are not symmetrically distributed. Apply this concept to the following three content categories:

a) Anti-vaccine claims during an active public health emergency.

b) Unverified allegations of criminal misconduct against a private individual.

c) Contested historical claims about events (e.g., genocide denial, disputed casualty figures in historical conflicts).

For each category: (1) describe the specific harms of under-moderation, (2) describe the specific harms of over-moderation, (3) assess which type of harm is more serious and why, and (4) propose a moderation approach calibrated to the specific harm asymmetry in that category.

Exercise 18: Epistemic Commons Governance Proposal

Drawing on Elinor Ostrom's principles for governing common-pool resources, design a governance framework for the epistemic commons. Ostrom identified eight principles for successful commons governance, including clearly defined boundaries, congruence between rules and local conditions, collective-choice arrangements, monitoring, graduated sanctions, conflict resolution mechanisms, recognition of rights to organize, and nested enterprises.

Apply each of Ostrom's principles to the epistemic commons governance challenge. For each principle, identify: (1) what the equivalent institution or mechanism would be in the epistemic domain, (2) what existing initiatives partially fulfill this function, and (3) what reforms would be needed to more fully realize the principle.

Exercise 19: From Ethics to Policy

The chapter makes philosophical arguments about epistemic responsibility, the right to know, and the ethics of platform moderation. But philosophical arguments must be translated into institutional design, legal frameworks, and policy mechanisms to have real-world effect.

Choose one of the philosophical positions developed in the chapter — e.g., the positive right to accurate information, the duty of correction, the proportionate moderation principle — and develop a concrete policy proposal that would give effect to that philosophical position. Your proposal should include:

a) A clear statement of the policy objective, derived from the philosophical position. b) Identification of the relevant institutional actors (government agencies, private platforms, civil society organizations) and their roles. c) Specific mechanisms — legal requirements, incentive structures, professional standards, technical interventions — through which the policy would operate. d) An analysis of potential objections to the policy, and responses to those objections. e) A proposed evaluation framework: how would we know if the policy is working?

Exercise 20: The Ethics of Algorithmic Curation

Algorithms that curate social media feeds make consequential epistemic choices: they determine which information users encounter, in what order, and in what volume. These choices are not ethically neutral — they shape belief formation at massive scale.

a) Argue that algorithmic curation constitutes a form of epistemic paternalism, even when users have consented to the use of a curating algorithm in general terms.

b) Apply the harm asymmetry framework to the specific decision of whether to algorithmically reduce the distribution of content that has been flagged as potentially misleading but not conclusively false.

c) Should users have a right to know how the algorithm curating their information environment works? Construct this as a question of epistemic rights and epistemic autonomy.

d) Design an "epistemically responsible" algorithm — one that optimizes not only for engagement but for epistemic quality. What signals would such an algorithm use? What tradeoffs would it make?
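For part (d), a minimal ranking sketch can make the tradeoffs concrete. Everything here is assumed for illustration — the signal names, weights, and scoring rule are hypothetical design choices to critique, not any platform's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    # All fields are hypothetical signals, normalized to [0, 1].
    engagement: float            # predicted clicks/shares
    source_track_record: float   # historical accuracy of the source
    flagged: bool                # flagged as potentially misleading
    has_correction: bool         # carries a correction or context note

def epistemic_score(post: Post, quality_weight: float = 0.6) -> float:
    """Blend engagement with epistemic-quality signals.

    quality_weight is the tradeoff the exercise asks about:
    0.0 reproduces pure engagement ranking.
    """
    quality = post.source_track_record
    if post.flagged:
        quality *= 0.5                       # demote, rather than remove
    if post.has_correction:
        quality = min(1.0, quality + 0.2)    # make corrections visible
    return (1 - quality_weight) * post.engagement + quality_weight * quality

feed = [
    Post(engagement=0.9, source_track_record=0.2, flagged=True,  has_correction=False),
    Post(engagement=0.5, source_track_record=0.9, flagged=False, has_correction=False),
]
ranked = sorted(feed, key=epistemic_score, reverse=True)
# With quality_weight=0.6, the accurate-source post outranks the
# high-engagement flagged post (0.74 vs. 0.42); at quality_weight=0.0
# the order flips.
```

Each constant encodes a contestable ethical judgment — how much to demote flagged content, whether corrections deserve a boost — which is one way to see that curation design is never epistemically neutral.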


Part G: Case-Based Ethical Reasoning

Exercise 21: The Satire Boundary Problem

Consider the following spectrum of satirical content:

  1. A clearly labeled satirical article from The Onion attributing absurd statements to a real politician.
  2. A meme that presents a fabricated quote by a real public figure in a realistic style with no satire label.
  3. A satirical news segment on a late-night comedy show that includes some false factual claims mixed with genuine criticism.
  4. A deepfake video showing a real politician making statements they never made, posted on a platform that allows "clearly marked" synthetic content, but with a "satire" label in small text below a realistic-looking thumbnail.

For each item: (a) assess the epistemic harm potential, (b) assess the legitimate expressive value, and (c) recommend a moderation approach that appropriately balances these considerations.

Then: identify the principles that emerge from your analysis that might serve as general guidelines for platform moderation of satirical and parodic content.

Exercise 22: The Expert Under Fire

A climate scientist with decades of research experience and a strong publication record posts a thread on social media explaining the scientific consensus on climate change. Her thread is heavily cited, widely shared, and rated as accurate by fact-checkers.

Three years later, a different climate scientist with comparable credentials publishes a peer-reviewed paper suggesting that one specific projection in the previous consensus summary was overstated, and that the mechanism for a particular feedback loop is different from what was previously understood. This new paper generates significant controversy within the climate science community.

a) During the three-year period before the new paper, should platforms have moderated content that disputed the original scientist's thread? Using what standards?

b) After the new paper is published, what moderation status should the original thread carry? Should it be labeled? Corrected? Removed?

c) How does this scenario illustrate the difference between "scientific consensus" and "settled science," and why does this distinction matter for platform moderation policy?

d) Does the duty of correction apply to the original scientist, the platforms that amplified her original thread, or the fact-checkers who validated it? What does responsible correction look like in this scenario?

Exercise 23: The Aggregation Problem

A news website publishes 500 articles over the course of a year. An independent analysis finds that:

  • 450 articles are factually accurate and fairly presented.
  • 35 articles contain minor factual errors, all subsequently corrected.
  • 10 articles contain significant factual errors that were not corrected.
  • 5 articles are technically accurate but substantially misleading due to selective emphasis.
  • In political stories, the framing choices systematically favor one side.

a) On balance, is this news outlet trustworthy? How should we aggregate the epistemic evaluation of different aspects of an outlet's performance?

b) A platform uses this analysis to reduce the distribution of all articles from this outlet. Is this a proportionate response? What alternative approaches might be more appropriate?

c) Apply Williams's virtues of sincerity and accuracy to the news outlet as an institution. Does it exhibit these virtues? Can institutions, as opposed to individuals, exhibit epistemic virtues?

d) A reader who encounters this analysis is trying to decide whether to continue using the outlet as a source. What decision rule would best serve their epistemic interests?
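As one concrete starting point for part (a), the analysis above can be collapsed into a single score. The per-category weights below are arbitrary assumptions meant to be questioned, not a standard metric — the exercise is precisely whether any such aggregation is defensible:

```python
# Article counts from the independent analysis above.
counts = {
    "accurate": 450,
    "minor_error_corrected": 35,
    "significant_error_uncorrected": 10,
    "misleading_but_accurate": 5,
}
assert sum(counts.values()) == 500

# Hypothetical per-category credibility weights in [0, 1]. Corrected
# minor errors are penalized far less than uncorrected significant
# ones, reflecting the duty of correction (Section 41.9).
weights = {
    "accurate": 1.0,
    "minor_error_corrected": 0.9,
    "significant_error_uncorrected": 0.0,
    "misleading_but_accurate": 0.2,
}

score = sum(counts[k] * weights[k] for k in counts) / sum(counts.values())
print(f"aggregate credibility score: {score:.3f}")  # 0.965 with these weights
```

Notice what the scalar hides: the systematic framing bias in the fifth finding never enters the score at all, because it is not a per-article count — one reason part (a) resists a purely numerical answer.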

Exercise 24: Epistemic Responsibility at Scale

A public health researcher with 850,000 social media followers discovers a new study suggesting that a widely used household cleaning product may be associated with elevated risk of a serious autoimmune condition. The study is published in a reputable peer-reviewed journal, but the researcher recognizes that the findings are preliminary, based on a relatively small sample, and have not been replicated.

The researcher faces a choice: share the information now, because followers should know about potential risks; or wait for replication, because premature amplification of preliminary findings can cause disproportionate alarm.

a) What does the duty of epistemic care, scaled to the researcher's large following, require in this situation?

b) If the researcher decides to share, what epistemic responsibilities attach to how they frame the information? Draft an exemplary version of how the researcher might share this information responsibly.

c) If the researcher decides not to share until replication, what responsibility do they bear to their followers regarding their decision-making process?

d) Does the answer change if the study involves a product that the researcher's institution has a financial relationship with (creating a potential conflict of interest)? How should conflicts of interest be disclosed in epistemic contexts?

Exercise 25: The Correction Asymmetry

Research in communication science has documented a "correction asymmetry": corrections to misinformation do not fully undo the belief change caused by the original misinformation, even when corrections are equally prominent and trusted. This asymmetry has several implications.

a) If corrections are systematically less effective than original claims, what additional obligations does this place on platforms, fact-checkers, and individuals before sharing unverified content?

b) Does the correction asymmetry strengthen or weaken the case for pre-emptive content moderation (removal or labeling before content spreads widely)?

c) A politician makes a false claim that generates significant media coverage; the correction, issued the following week, receives minimal coverage. Evaluate the politician's epistemic responsibility, given the likely correction asymmetry.

d) How should we factor the correction asymmetry into the design of epistemic institutions — journalism, fact-checking, platform content policies?
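A toy numeric model can make the asymmetry in part (c) vivid. All numbers here are invented for illustration — what the research documents is the structure (corrections reach fewer people and persuade imperfectly), not these specific values:

```python
def residual_false_belief(reach_claim: float,
                          persuasion: float,
                          reach_correction: float,
                          correction_efficacy: float) -> float:
    """Fraction of the audience left believing the false claim.

    reach_*: fraction of the audience exposed (0..1).
    persuasion: P(believe | exposed to claim).
    correction_efficacy: P(belief undone | believer sees correction).
    Simplifying assumption: correction exposure is independent of
    claim exposure.
    """
    believers = reach_claim * persuasion
    undone = believers * reach_correction * correction_efficacy
    return believers - undone

# The claim gets heavy coverage; the correction a week later gets
# little, and even a seen correction only partially undoes belief.
residual = residual_false_belief(
    reach_claim=0.6, persuasion=0.5,
    reach_correction=0.1, correction_efficacy=0.7,
)
# 0.30 of the audience comes to believe the claim; only 0.021 of that
# is undone, leaving 0.279 still believing.
```

Even a fairly persuasive correction (70% efficacy here) removes just 7% of the induced false belief, because it reaches a sixth of the claim's audience — the structural point behind parts (a) and (c).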

Exercise 26: Building Epistemic Infrastructure

The chapter argues that "epistemic infrastructure" — journalism, scientific institutions, educational systems, and public forums — should be understood as democratic infrastructure requiring public investment and governance.

a) Construct a model of what robust epistemic infrastructure would look like, identifying the key institutions, their functions, their funding mechanisms, and their accountability structures.

b) Identify the most serious current threats to epistemic infrastructure in your country (declining local journalism, underfunded public scientific institutions, erosion of educational quality, etc.).

c) Propose three specific policy interventions — at the level of local, national, or international governance — that would strengthen epistemic infrastructure. For each, address: who pays, who governs, who benefits, and who might object.

d) Is public investment in epistemic infrastructure compatible with freedom of the press and editorial independence? Where are the risks of government involvement in the information ecosystem, and how might they be mitigated?

Exercise 27: The Global Dimension

Most of the frameworks discussed in this chapter were developed in Western philosophical traditions and tested against examples primarily drawn from Western democracies. Consider the following questions:

a) How might Kantian ethics, Mill's harm principle, and Fricker's epistemic injustice framework apply differently in political contexts characterized by authoritarian governance, where "official truth" is enforced through censorship?

b) The WHO's "infodemic" framework, developed during COVID-19, attempted to define and combat health misinformation globally. What challenges arise when applying a single global standard of epistemic responsibility across diverse cultural and political contexts?

c) Content moderation policies developed by U.S.-based platforms are applied globally, often with significantly less investment in non-English content moderation. From the perspective of epistemic justice, evaluate this asymmetry.

d) Propose a framework for cross-cultural epistemic ethics — principles for governing the information environment that could command assent across diverse political and cultural traditions. What level of abstraction and what core commitments would be necessary?

Exercise 28: Looking Forward

The chapter presents both optimistic and pessimistic scenarios for the future of the epistemic commons. This exercise asks you to go beyond these scenarios.

a) Identify three specific technologies that are likely to significantly affect the epistemic commons over the next decade (e.g., advances in AI-generated content, new forms of provenance verification, changes in platform business models, developments in neuroscience with implications for belief formation). For each technology, assess both the epistemic risks and the epistemic opportunities.

b) What role should democratic governance play in shaping the development and deployment of these technologies, with reference to the epistemic commons?

c) If you could make one change — to law, technology, culture, or education — that would do the most to protect the epistemic commons over the next 20 years, what would it be? Justify your choice by reference to the frameworks developed in this chapter.

d) Write a 250-word letter to a student who will study this material 25 years from now, describing what you hope — and fear — the epistemic environment will look like in their time, and what obligations you believe your generation carries to theirs.

Exercise 29: Comprehensive Ethical Framework

Develop your own comprehensive ethical framework for evaluating cases involving truth, deception, and the epistemic commons. Your framework should:

a) Identify the most important ethical principles governing communication in a democratic society, drawing on but not limited to the frameworks discussed in this chapter.

b) Specify how these principles should be prioritized when they conflict (e.g., how should the principle of epistemic autonomy be weighed against the principle of harm prevention?).

c) Apply your framework to at least three of the cases discussed in the exercises above, demonstrating that your framework generates consistent and defensible judgments.

d) Identify the most significant limitations of your framework — where its guidance is uncertain, where it may be culturally specific, or where it leaves important questions unresolved.

Exercise 30: Capstone Reflection

Having completed this textbook, write a 750-word reflective essay addressing the following questions:

a) What is the most important thing you have learned about the relationship between information, democracy, and individual responsibility?

b) Has studying this material changed how you engage with information — what you read, what you share, whom you trust, how you evaluate claims?

c) What do you believe are the two or three most important institutional reforms needed to protect the epistemic commons in your country?

d) As an individual citizen, scholar, or professional, what specific commitments do you intend to make to the epistemic commons — in your information-seeking practices, your professional conduct, your political engagement, or your community participation?

e) Is the future of truth ultimately hopeful or alarming? What most influences your assessment?