Quiz: Power, Knowledge, and Data

Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.


Section 1: Multiple Choice (1 point each)

1. Foucault's concept of power/knowledge (pouvoir/savoir) holds that:

  • A) Knowledge is always used by the powerful to oppress the powerless.
  • B) Power and knowledge are inseparable — knowledge is produced within power relationships and reinforces those relationships.
  • C) Power derives exclusively from controlling access to information.
  • D) Knowledge is neutral and objective, but powerful actors sometimes misuse it.
Answer **B)** Power and knowledge are inseparable — knowledge is produced within power relationships and reinforces those relationships. *Explanation:* Section 5.1.1 explains that Foucault did not simply argue that power *uses* knowledge (as Option A implies) or that knowledge is neutral but exploitable (Option D). His claim is deeper: that knowledge itself is produced *within* power relations. The psychiatrist diagnosing a patient, the statistician defining a demographic category, and the data analyst choosing which metrics to track are all simultaneously producing knowledge and exercising power. Option C reduces power to information control, missing Foucault's insight that power operates through the very categories and norms knowledge creates.

2. Which of the following best illustrates disciplinary power as Foucault described it?

  • A) A government passes a law making it illegal to use encryption without a backdoor for law enforcement.
  • B) Employees modify their browsing habits because they know their employer's monitoring software might be recording their screens, even during periods when it may not be active.
  • C) A social media company sells user data to advertisers without users' knowledge.
  • D) A data broker compiles profiles of consumers and sells them to insurance companies.
Answer **B)** Employees modify their browsing habits because they know their employer's monitoring software might be recording their screens, even during periods when it may not be active. *Explanation:* Section 5.1.2 defines disciplinary power as the internalization of surveillance norms — people regulate their own behavior because they believe they *might* be observed. Option B perfectly captures this: the employees change their behavior not because they are being punished (sovereign power, Option A) or because population-level data is being analyzed (biopower), but because the *possibility* of observation is enough to discipline them. Option A is sovereign power (legal prohibition). Options C and D describe data extraction practices but not the self-regulation mechanism that defines disciplinary power.

3. Biopower, as described in Section 5.1.3, primarily operates through:

  • A) Direct commands issued by authorities to individuals.
  • B) Statistics, demographics, and population-level management.
  • C) Economic incentives offered to individual consumers.
  • D) Legal frameworks that criminalize undesirable behavior.
Answer **B)** Statistics, demographics, and population-level management. *Explanation:* Section 5.1.3 explains that biopower governs *populations* as biological entities rather than disciplining individual bodies. It operates through population analytics, predictive policing, public health surveillance, and insurance actuarial models. The "characteristic move" of biopower is to convert individual lives into statistical patterns and then govern the patterns. Options A and D describe sovereign power. Option C describes market mechanisms, not biopolitical governance.

4. According to Section 5.2.2, which of the following is a reason why transparency alone cannot resolve information asymmetry?

  • A) Organizations always refuse to disclose any information about their algorithms.
  • B) Users generally prefer not to know how their data is used.
  • C) Even when algorithms are disclosed, they may be too complex for non-experts to evaluate meaningfully.
  • D) Transparency laws are unconstitutional in most democratic countries.
Answer **C)** Even when algorithms are disclosed, they may be too complex for non-experts to evaluate meaningfully. *Explanation:* Section 5.2.2 identifies "complexity opacity" as one of four reasons transparency is insufficient. A machine learning model with millions of parameters is technically "transparent" if published, but it is not *comprehensible* to non-experts. Option A is incorrect because the chapter discusses what happens when organizations *do* disclose, not a blanket refusal. Option B is not an argument the chapter makes. Option D is factually incorrect — many democracies have transparency laws.

5. Eli's grandmother says the cameras and microphones on lampposts remind her of informant networks used during Jim Crow. This comparison most directly illustrates:

  • A) Data colonialism, because surveillance technologies are being exported from powerful nations to vulnerable communities.
  • B) The continuity of disciplinary power across historical contexts — the mechanism of behavioral modification through possible observation operates regardless of the specific technology.
  • C) Epistemic injustice, because her experience is being dismissed by authorities.
  • D) Information asymmetry, because she does not know what data the lampposts collect.
Answer **B)** The continuity of disciplinary power across historical contexts — the mechanism of behavioral modification through possible observation operates regardless of the specific technology. *Explanation:* Section 5.1.2 uses Eli's grandmother's comparison to show that the panoptic mechanism — behavior modification through the possibility of observation — is not unique to digital technology. She recognizes the same power dynamic in lampposts that she experienced with informant networks. While Option D (information asymmetry) may also be present, the comparison she draws is specifically about how the *possibility* of being watched changes behavior — the core of disciplinary power. Option A misapplies data colonialism. Option C could apply in other contexts but is not what the passage illustrates.

6. Nick Couldry and Ulises Mejias's concept of data colonialism argues that:

  • A) Only people in former colonial territories are subjected to data extraction by technology companies.
  • B) The extraction of behavioral data by platforms constitutes a new form of colonialism that appropriates human life for capital accumulation.
  • C) Data collection is unethical only when it is conducted by companies headquartered in historically colonial powers.
  • D) The term "colonialism" should be reserved exclusively for historical relationships involving territorial conquest.
Answer **B)** The extraction of behavioral data by platforms constitutes a new form of colonialism that appropriates human life for capital accumulation. *Explanation:* Section 5.4.1 explains that Couldry and Mejias draw a structural parallel between historical colonialism (which appropriated land and labor) and platform economics (which appropriates behavioral data). The parallel involves appropriation of raw material, unequal exchange, ideological justification, and erasure of autonomy. Option A is incorrect — data colonialism operates globally, exploiting populations within wealthy nations as well. Option C imposes a geographic restriction the framework does not make. Option D represents one critique of the framework (Section 5.4.2), not the framework's own claim.

7. The "transparency paradox" described in Section 5.2.2 refers to the fact that:

  • A) The more transparent a system is, the more vulnerable it becomes to hacking.
  • B) Transparency requirements can be formally satisfied while the most consequential information remains practically inaccessible due to complexity, information overload, or strategic disclosure.
  • C) People who demand transparency about data systems are usually hiding something themselves.
  • D) Governments are inherently less transparent than corporations.
Answer **B)** Transparency requirements can be formally satisfied while the most consequential information remains practically inaccessible due to complexity, information overload, or strategic disclosure. *Explanation:* Section 5.2.2 describes four ways transparency fails: complexity opacity (algorithms too complex to evaluate), information overload (76 working days to read all privacy policies), strategic disclosure (revealing "over 100 factors" without specifying them), and power-preserving transparency (knowing how you are exploited does not give you power to stop it). The paradox is that formal transparency can coexist with practical opacity. Options A, C, and D are not arguments made in the chapter.

8. Sousveillance, as described in Section 5.3.3, refers to:

  • A) The use of encryption to protect data from surveillance.
  • B) The monitoring of authorities by the public, rather than the monitoring of the public by authorities.
  • C) The practice of deliberately generating false data to confuse surveillance systems.
  • D) Government agencies surveilling each other to prevent abuses of power.
Answer **B)** The monitoring of authorities by the public, rather than the monitoring of the public by authorities. *Explanation:* Section 5.3.3 defines sousveillance (a term coined by Steve Mann) as "watching from below" — citizens monitoring authorities rather than the reverse. Examples include body cameras on police officers, citizen journalism, and leak platforms like WikiLeaks. Option C describes obfuscation, not sousveillance. Option A describes a privacy-protective technology but not the concept of sousveillance. Option D describes inter-governmental oversight, not public monitoring of power.

9. Miranda Fricker's concept of hermeneutical injustice is most relevant to which data governance challenge?

  • A) Companies charging different prices to different customers based on their data profiles.
  • B) People experiencing harms from algorithmic systems but lacking the conceptual vocabulary to name and communicate what is happening to them.
  • C) Data brokers selling personal information without the data subject's knowledge.
  • D) Government surveillance programs operating in secret.
Answer **B)** People experiencing harms from algorithmic systems but lacking the conceptual vocabulary to name and communicate what is happening to them. *Explanation:* Section 5.5.1 defines hermeneutical injustice as occurring "when people lack the conceptual resources to understand their own experiences." The chapter notes that before terms like "surveillance capitalism," "dark patterns," and "algorithmic bias" existed, people who experienced these phenomena had difficulty articulating what was happening. Eli describes this directly: "People felt surveilled, but when they complained, they were told the sensors were 'just for traffic.' It wasn't until we learned the technical vocabulary ... that we could name what was happening." Options A, C, and D describe real injustices but not specifically the *hermeneutical* gap — the absence of vocabulary to make sense of experience.

10. The data justice framework described in Section 5.6 asks all of the following questions EXCEPT:

  • A) Who benefits from data systems and who bears the costs? (distributive justice)
  • B) Who participates in decisions about data governance? (procedural justice)
  • C) How can data systems be made more profitable while reducing legal liability? (efficiency justice)
  • D) Whose experiences and knowledge are represented in data systems? (recognition justice)
Answer **C)** How can data systems be made more profitable while reducing legal liability? (efficiency justice) *Explanation:* Section 5.6 identifies four dimensions of data justice: distributive (who benefits and who bears costs), procedural (who participates in decisions), recognition (whose experiences are represented), and epistemic (whose interpretations are treated as authoritative). There is no "efficiency justice" dimension — the framework deliberately centers justice concerns over commercial optimization. Options A, B, and D are the actual dimensions of the framework.

Section 2: True/False with Justification (1 point each)

For each statement, determine whether it is true or false and provide a brief justification.

11. "According to Foucault, power is something that is possessed by dominant groups and wielded against subordinate groups."

Answer **False.** *Explanation:* Section 5.1.1 explains that Foucault challenged this very understanding. He argued that power is not simply "possessed" by one group and "wielded" against another. Instead, power operates through the production of knowledge, the establishment of norms, and the creation of categories. Power is relational and distributed — it operates through relationships, institutions, and practices, not as a possession. The chapter notes that Foucault's concept goes "beyond 'power over'" to include the power to define what counts as knowledge, what categories are used, and what norms are established.

12. "The chapter argues that obfuscation — deliberately generating misleading data — is always ethically justified as a form of resistance to surveillance."

Answer **False.** *Explanation:* Section 5.3.3 presents the ethics of obfuscation as an open debate with multiple perspectives. One view holds obfuscation is legitimate self-defense against disproportionate surveillance. Another argues it undermines the integrity of beneficial data systems (accurate search results, effective public health surveillance). A third asks whether the ethical judgment depends on *who* is obfuscating and *why* — a political dissident obscuring search history is different from a corporation obscuring pollution data. The chapter does not settle this debate or declare obfuscation always justified.

13. "Information asymmetry, as described in Section 5.2, is primarily a problem of individual ignorance that can be solved through better digital literacy education."

Answer **False.** *Explanation:* The chapter characterizes information asymmetry as a *structural* feature of the data economy, not a matter of individual ignorance. Section 5.2.2 explains that even with perfect transparency, the power asymmetry persists due to complexity opacity, information overload, strategic disclosure, and power-preserving transparency. The problem is not that individuals need to learn more — it is that the architecture of data systems creates inherent imbalances that individual education cannot overcome. A person who fully understands how their insurance company uses social media data still cannot change the practice unilaterally.

14. "The data colonialism framework has been criticized for potentially trivializing historical colonialism by equating voluntary platform use with colonial domination."

Answer **True.** *Explanation:* Section 5.4.2 explicitly presents this critique. Critics argue that historical colonialism involved physical violence, forced labor, and genocide, and that equating these with data extraction may trivialize historical suffering. They also note that platform users have agency — they can (in principle) choose not to use platforms — whereas colonized subjects had no such option. The chapter acknowledges these critiques have merit while noting that the framework's proponents argue the structural parallel illuminates otherwise invisible dynamics.

15. "The chapter argues that state data power and corporate data power operate through identical mechanisms."

Answer **False.** *Explanation:* Sections 5.3.1 and 5.3.2 describe distinct mechanisms for corporate and state data power. Corporate data power operates through market power (data-driven monopolies), epistemic power (controlling what people know and believe), and labor power (asymmetric workplace surveillance). State data power operates through surveillance, classification (defining legal categories like citizen/non-citizen), conscription (compelling data production through census or tax reporting), and legitimation (using data to justify policy). While the two forms can overlap and reinforce each other, they operate through different mechanisms and raise different governance challenges.

Section 3: Short Answer (2 points each)

16. Dr. Adeyemi tells Mira: "The categories you use, the metrics you prioritize, the questions you ask — all of that is power." Explain what she means using the power/knowledge framework. Then identify one example from your own experience where the choice of data categories shaped outcomes.

Sample Answer: Dr. Adeyemi is applying Foucault's power/knowledge framework to Mira's work in Institutional Research. Her point is that the data Mira's office produces does not neutrally describe the university — it actively shapes it. When the office decides that "graduation rate" is a key metric, resources flow toward programs that improve graduation rates. When it defines "student success" as GPA rather than, say, community engagement or personal growth, departments oriented toward measurable grades receive more support than those focused on harder-to-quantify outcomes. The power lies not in giving orders but in defining what counts — what gets measured, what gets funded, what gets cut.

An original example: university course evaluations that use numerical ratings (e.g., "Rate this instructor 1-5") produce a specific kind of knowledge that favors popular teaching styles and can disadvantage instructors who teach challenging material or use unconventional methods. The choice to quantify teaching quality through student satisfaction scores — rather than, say, long-term learning outcomes or intellectual risk-taking — shapes hiring, tenure, and teaching practices.

*Key points for full credit:*

- Explains the power/knowledge connection: data categories shape reality, not just describe it
- Recognizes that the power lies in defining what counts
- Provides a relevant original example showing how measurement choices produce consequences

17. Eli describes how his Detroit neighborhood's community organizations conducted their own noise surveys to challenge ShotSpotter data. Explain how this action constitutes "counter-data" and why it is significant as a form of resistance. What does this example reveal about the relationship between data and credibility?

Sample Answer: Counter-data involves communities producing their own data to challenge official narratives and institutional datasets. Eli's neighborhood organizations used the tools of data collection — systematic noise surveys — to contest the conclusions drawn from ShotSpotter's institutional data. This is significant because it challenges the monopoly on data production that institutions typically hold. When the city relies solely on ShotSpotter to determine where gunshots occur, the technology's classifications (and its errors) become authoritative. Community-produced noise surveys introduce a competing dataset that can expose biases, errors, or gaps in the institutional data.

This example reveals a deep connection between data and credibility. ShotSpotter data was treated as authoritative partly because it was produced by a technology company and used by law enforcement — institutions with epistemic authority. Community observations, even when systematic, faced an uphill battle for credibility precisely because they were produced by residents without institutional backing. By adopting the formal methodology of data collection (surveys rather than anecdotes), the community organizations positioned their knowledge on more equal epistemic footing. This echoes Fricker's analysis of testimonial injustice: the community's lived experience was initially given less credibility than a corporate sensor network, and counter-data was a strategy for overcoming that injustice.

*Key points for full credit:*

- Defines counter-data and applies it correctly to the example
- Explains why community data production is a form of resistance to institutional power
- Connects the example to credibility and epistemic authority

18. The chapter describes "power-preserving transparency" — a situation where knowing how you are being exploited does not give you the power to stop it. Provide the example the chapter uses, then explain why this phenomenon poses a challenge to governance approaches that rely primarily on disclosure and informed consent.

Sample Answer: The chapter uses the example of learning that your insurance company uses your social media activity to price your premiums. Even with this knowledge, your options are limited: you can stop using social media (a significant personal and social cost) or accept the surveillance (no change in the power dynamic). Transparency has not shifted the balance of power — it has merely made the exploitation visible.

This poses a fundamental challenge to governance models based on disclosure and consent because such models assume that informed individuals can make meaningful choices. But if the only choices are "accept the system or withdraw from essential services," the consent is structurally coerced regardless of how well-informed the individual is. Disclosure-based governance works well when informed consumers can switch to competitors or negotiate terms. It fails when: (a) there are no meaningful alternatives (as with essential digital infrastructure), (b) the costs of opting out are disproportionately high, or (c) the power asymmetry is so great that knowledge alone cannot close it. This is why the data justice framework (Section 5.6) argues for structural change — changing the systems that produce asymmetry — rather than relying solely on individual empowerment through transparency.

*Key points for full credit:*

- Correctly identifies the insurance/social media example
- Explains why transparency without power redistribution preserves the status quo
- Connects to the limitations of consent-based governance

19. Compare epistemic power (Section 5.3.1) with epistemic injustice (Section 5.5). How are these two concepts related? Could a platform exercise epistemic power without committing epistemic injustice? Could epistemic injustice occur without epistemic power?

Sample Answer: Epistemic power, as described in Section 5.3.1, is the ability to shape what people know and believe — for example, Google's search rankings determining what information is accessible, or Facebook's news feed determining which events are visible. Epistemic injustice, from Section 5.5, occurs when someone is wronged in their capacity as a knower — either through testimonial injustice (their testimony is given less credibility due to prejudice) or hermeneutical injustice (they lack the conceptual resources to understand their own experiences).

The two concepts are related but distinct. Epistemic power creates the conditions under which epistemic injustice can occur at scale. A platform that controls information access (epistemic power) can systematically suppress the voices of marginalized communities (testimonial injustice) or fail to surface the vocabulary that would help people name their experiences (hermeneutical injustice).

A platform could exercise epistemic power without committing epistemic injustice — for example, a well-designed public library database exercises epistemic power (shaping information access) but strives to do so equitably. Conversely, epistemic injustice can occur without platform-scale epistemic power: a doctor dismissing a patient's reported symptoms based on racial bias commits testimonial injustice through interpersonal prejudice, not institutional epistemic power. However, when epistemic power and epistemic injustice combine — as they do in algorithmic systems that both control information and systematically devalue certain voices — the effects are amplified dramatically.

*Key points for full credit:*

- Accurately defines both concepts
- Identifies how epistemic power creates conditions for epistemic injustice
- Answers both "Could..." questions with reasoning

Section 4: Applied Scenario (5 points)

20. Read the following scenario and answer all parts.

Scenario: CityScope and the Neighborhood Safety Score

CityScope, a data analytics company, has partnered with the city of Lakewood to create a public "Neighborhood Safety Score" for every census tract. The score integrates data from police dispatch records, 911 call logs, code enforcement complaints, social media posts geotagged within each tract, and property value trends. Each neighborhood receives a score from 0 to 100, published on CityScope's website and available via API to real estate platforms, insurance companies, and lenders. CityScope describes the tool as "an objective, data-driven safety assessment that empowers residents and businesses to make informed decisions."

Within six months of launch, residents of predominantly Black and Latino neighborhoods notice that their scores are significantly lower than comparable neighborhoods with predominantly white populations. Real estate agents report that the scores are depressing property values in low-score areas. Insurance companies have begun using the scores to justify premium increases. Community organizations in affected neighborhoods argue that the scores reflect historical over-policing (more police presence means more recorded incidents, not necessarily more crime) and discriminatory code enforcement patterns.

CityScope responds that its methodology is "transparent" — the data sources and weighting formula are published on its website — and that the company "simply reflects the data."
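To make the scenario's "transparent methodology" concrete, here is a minimal sketch of what a published weighting formula like CityScope's might look like. Every name, weight, and number below is a hypothetical illustration — the scenario does not specify them. The point the sketch demonstrates is that a fully disclosed formula still embeds design choices: how heavily police dispatch records and code enforcement complaints count determines whose recorded history drives the score.

```python
# Hypothetical sketch of a published "Neighborhood Safety Score" formula.
# All weights and field names are illustrative assumptions, not taken from
# the chapter. Each indicator is a 0-1 rate normalized against the citywide
# maximum for that data source.
WEIGHTS = {
    "police_dispatches": 0.35,  # embeds historical policing intensity
    "calls_911": 0.25,
    "code_complaints": 0.20,    # embeds enforcement patterns
    "negative_geotags": 0.10,
    "property_decline": 0.10,   # feedback loop: low scores depress values
}

def safety_score(indicators):
    """Return a 0-100 score (higher = 'safer') as a weighted penalty sum."""
    penalty = sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)
    return round(100 * (1 - penalty), 1)

# Two tracts with the same 911-call, geotag, and property trends, differing
# only in how intensively they are policed and code-enforced. The formula
# rewards lower *recording* intensity, not lower crime.
over_policed = {"police_dispatches": 0.8, "calls_911": 0.5,
                "code_complaints": 0.6, "negative_geotags": 0.3,
                "property_decline": 0.4}
lightly_policed = {"police_dispatches": 0.2, "calls_911": 0.5,
                   "code_complaints": 0.1, "negative_geotags": 0.3,
                   "property_decline": 0.4}
```

Under these assumed weights, the over-policed tract scores roughly 40 while the otherwise-similar, lightly policed tract scores roughly 71 — a gap produced entirely by enforcement intensity. Publishing `WEIGHTS` makes the formula formally transparent without making those design choices any less consequential.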

(a) Analyze CityScope's claim that it "simply reflects the data" using the power/knowledge framework from Section 5.1. What forms of power are embedded in the score's design, and why is the claim of objectivity problematic? (1 point)

(b) Identify the information asymmetries in this scenario. Who knows what, and who is disadvantaged by what they do not know? (1 point)

(c) Evaluate CityScope's transparency claim using the four limitations of transparency described in Section 5.2.2. Which limitations apply here? (1 point)

(d) Apply the data colonialism framework (Section 5.4) to this scenario. In what ways does CityScope's relationship with affected communities parallel the dynamics Couldry and Mejias describe? Where does the analogy break down? (1 point)

(e) Using the data justice framework (Section 5.6), propose four specific reforms — one for each dimension (distributive, procedural, recognition, epistemic). For each reform, explain which dimension it addresses and how it would change the power dynamics. (1 point)

Sample Answer:

**(a)** CityScope's claim that it "simply reflects the data" is a textbook example of the power/knowledge dynamic Foucault describes. The score does not neutrally reflect reality — it actively constructs it. The choice to weight police dispatch records heavily embeds historical policing patterns (which reflect resource allocation decisions, not objective crime distribution) into a measure presented as objective. The decision to include code enforcement complaints incorporates discriminatory enforcement patterns. The decision to include property values creates a feedback loop: low scores depress values, which further lower scores. CityScope exercises the power to *define* what safety means, which data sources constitute evidence of safety, and how that evidence should be weighted — and then presents these choices as objective measurement. This is precisely Foucault's insight: the categories used to produce knowledge are themselves exercises of power.

**(b)** Key information asymmetries include:

- **CityScope vs. residents:** CityScope knows the precise weighting formula, the data cleaning decisions, the edge cases, and the model's known limitations. Residents see only a number (0-100) and a published methodology that most lack the technical expertise to evaluate critically.
- **CityScope vs. downstream users:** Real estate agents, insurers, and lenders use the score as an authoritative input, but may not understand (or care about) its biases. The score launders subjective choices into an objective-seeming number.
- **Affected residents vs. decision-makers:** Residents in low-score neighborhoods experience the score's consequences (higher insurance, lower property values) but have the least power to challenge or modify the methodology. Insurance companies and lenders who benefit from the score have no incentive to question it.
- **Historical asymmetry:** The data sources themselves (police records, code enforcement) encode decades of discriminatory practices that current residents had no role in creating but whose consequences they now bear.

**(c)** All four limitations of transparency apply:

- **Complexity opacity:** The weighting formula is published, but evaluating whether it produces fair outcomes requires statistical expertise most residents lack.
- **Information overload:** Publishing the methodology does not make it comprehensible to affected communities who must also navigate daily life, work, and other concerns.
- **Strategic disclosure:** CityScope discloses data sources and weights but may not disclose known limitations, alternative design choices considered and rejected, or internal analyses showing disparate impact.
- **Power-preserving transparency:** Even residents who fully understand the methodology's biases cannot change it. They can see how the system harms them without having the power to alter the system.

CityScope's transparency is formally correct but functionally insufficient.

**(d)** The parallels with data colonialism include: CityScope *appropriates* community-generated data (911 calls, social media posts, property transactions) as raw material for a product that benefits the company commercially. The exchange is *unequal*: communities generate the data and bear the consequences of the scores, while CityScope and its clients capture the value. The *ideological justification* — "empowering residents with objective data" — mirrors the colonial "civilizing mission," presenting extraction as a benefit to the extracted. The analogy breaks down in important ways: residents are not physically coerced, they operate within a democratic system where they can (in theory) challenge the score politically, and CityScope does not exercise sovereign territorial control. However, the structural dynamic — appropriation of lived experience as raw material for external profit, justified by an ideology of objectivity — maps closely onto Couldry and Mejias's framework.

**(e)** Four reforms, one per data justice dimension:

1. **Distributive justice:** Require that any revenue CityScope generates from the Neighborhood Safety Score be shared with the communities whose data feeds the model. If insurance companies pay for API access, a percentage should fund community development in low-score neighborhoods. This addresses who benefits and who bears costs.
2. **Procedural justice:** Establish a community oversight board with binding authority over the score's methodology, composed of elected representatives from the neighborhoods most affected by the scores. This ensures that those who bear the consequences participate in the decisions that produce them.
3. **Recognition justice:** Incorporate community-generated data sources alongside institutional ones — community safety surveys, resident-reported positive indicators (mutual aid networks, community event participation, neighborhood satisfaction) — so that the score reflects residents' own understanding of their neighborhoods, not only institutional records shaped by policing and code enforcement patterns.
4. **Epistemic justice:** Fund independent audits conducted by researchers chosen by the community oversight board, and require CityScope to respond publicly to audit findings. This redistributes epistemic authority — the power to evaluate and interpret the data — from CityScope alone to the communities affected by its products.

Scoring & Review Recommendations

| Score Range | Assessment | Next Steps |
| --- | --- | --- |
| Below 50% (0-13 pts) | Needs review | Re-read Sections 5.1-5.3 carefully, redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas, focus on Part B exercises for applied practice |
| 70-85% (20-23 pts) | Solid understanding | Ready to proceed to Chapter 6; review any missed topics briefly |
| Above 85% (24-28 pts) | Strong mastery | Proceed to Chapter 6: Ethical Frameworks for the Data Age |

| Section | Points Available |
| --- | --- |
| Section 1: Multiple Choice | 10 points (10 questions x 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions x 1 pt) |
| Section 3: Short Answer | 8 points (4 questions x 2 pts) |
| Section 4: Applied Scenario | 5 points (5 parts x 1 pt) |
| **Total** | **28 points** |