Quiz: How Algorithms Shape Society

Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.


Section 1: Multiple Choice (1 point each)

1. According to the chapter, the social definition of an algorithm emphasizes:

  • A) The efficiency and elegance of the computational procedure.
  • B) The number of steps required to produce output from input.
  • C) The classifications, rankings, recommendations, or decisions produced and their consequences for people.
  • D) The programming language and hardware on which the algorithm runs.
Answer: **C)** The classifications, rankings, recommendations, or decisions produced and their consequences for people.

*Explanation:* Section 13.1.2 defines an algorithm in its social dimension as "a computational process that takes data about people or their behavior as input and produces a classification, ranking, recommendation, or decision that affects their opportunities, resources, or treatment." This shifts emphasis from *how* the algorithm works to *what* it does and to whom. Options A and B describe the technical view; Option D describes implementation details irrelevant to the social definition.

2. The chapter's table contrasting the technical view and the social view of algorithms states that in the social view, an algorithm's outputs should be evaluated by:

  • A) Efficiency, speed, and computational cost.
  • B) Fairness, transparency, and accountability.
  • C) The number of users served and throughput rate.
  • D) Whether the code compiles without errors.
Answer: **B)** Fairness, transparency, and accountability.

*Explanation:* The table in Section 13.1.2 explicitly contrasts the technical evaluation criterion ("evaluated by efficiency and accuracy") with the social evaluation criterion ("evaluated by fairness, transparency, and accountability"). This distinction is foundational to Part 3's argument that algorithmic systems must be assessed not just on technical performance but on their social impacts.

3. Collaborative filtering, as described in Section 13.3, recommends content to a user based primarily on:

  • A) The attributes and metadata of the content itself (genre, topic, keywords).
  • B) The behavior of other users who have similar preference patterns.
  • C) The user's demographic information (age, gender, location).
  • D) A manual curation process by human editors.
Answer: **B)** The behavior of other users who have similar preference patterns.

*Explanation:* Section 13.3 explains that collaborative filtering works by identifying users with similar preference histories and recommending items that similar users have liked. The logic is: "Users like you also liked X." Content-based filtering (Option A) recommends based on item attributes. Demographic-based approaches (Option C) are a simpler form not emphasized as a primary type. Manual curation (Option D) is not algorithmic filtering.
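To make the mechanism concrete, here is a minimal sketch of user-based collaborative filtering. It is not from the chapter: the ratings matrix and function names are hypothetical, and real systems use far larger data and more robust similarity measures.

```python
import numpy as np

# Hypothetical user-item ratings (rows = users, columns = items).
# 0 means the user has not interacted with the item.
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1: similar tastes to user 0
    [1, 0, 5, 4],   # user 2: different tastes
])

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(user, ratings, k=1):
    """Recommend unseen items by weighting other users' ratings by
    their similarity to `user`: 'users like you also liked X'."""
    sims = np.array([cosine_similarity(ratings[user], ratings[v])
                     for v in range(len(ratings))])
    sims[user] = 0.0                        # exclude the user themself
    scores = sims @ ratings / sims.sum()    # similarity-weighted averages
    scores[ratings[user] > 0] = -np.inf     # skip already-seen items
    return np.argsort(scores)[::-1][:k]

print(recommend(0, ratings))  # [2]: item 2 is loved by similar user 1
```

Note that nothing in this sketch inspects item attributes (Option A) or user demographics (Option C); the only signal is other users' behavior.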

4. The "filter bubble" concept, discussed in the chapter, refers to:

  • A) A technical firewall that prevents users from accessing certain websites.
  • B) The tendency of recommendation algorithms to progressively narrow the range of content a user encounters, reinforcing existing preferences and limiting exposure to diverse perspectives.
  • C) A deliberate censorship policy implemented by social media platforms.
  • D) The process by which spam filters remove unwanted email.
Answer: **B)** The tendency of recommendation algorithms to progressively narrow the range of content a user encounters, reinforcing existing preferences and limiting exposure to diverse perspectives.

*Explanation:* The filter bubble, a term coined by Eli Pariser, describes how personalization algorithms can create informational echo chambers. As the algorithm learns what a user engages with, it shows more of the same and less of the different, potentially narrowing the user's information diet. Options A, C, and D describe unrelated concepts. The chapter discusses filter bubbles in the context of recommendation systems and algorithmic gatekeeping.
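A toy simulation (hypothetical, not from the chapter) makes the feedback loop visible: the recommender shows more of whatever the user engages with, engagement updates the recommender's estimates, and a small initial preference compounds into a narrow feed.

```python
import random

random.seed(0)
TOPICS = ["politics", "sports", "science", "arts"]

# The recommender's running estimate of interest in each topic.
weights = {t: 1.0 for t in TOPICS}

def recommend():
    """Sample a topic in proportion to estimated interest."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def user_engages(topic):
    """The user starts out only slightly more interested in one topic."""
    return random.random() < (0.6 if topic == "politics" else 0.4)

for _ in range(500):
    shown = recommend()
    if user_engages(shown):
        weights[shown] *= 1.05   # engagement reinforces the estimate

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})
# The feed tends to collapse toward whichever topic got early engagement,
# shrinking exposure to everything else: a filter bubble in miniature.
```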

5. According to Section 13.4, the fundamental challenge of content moderation at scale is:

  • A) Content moderation technology is too expensive for platforms to afford.
  • B) The volume of content produced (hundreds of millions of posts per day) makes human review of all content impossible, while automated systems lack the contextual judgment needed for accurate moderation decisions.
  • C) Governments refuse to provide clear guidelines about what content should be removed.
  • D) Users consistently support the removal of all controversial content.
Answer: **B)** The volume of content produced (hundreds of millions of posts per day) makes human review of all content impossible, while automated systems lack the contextual judgment needed for accurate moderation decisions.

*Explanation:* Section 13.4 presents the "scale problem" as the central challenge: platforms like Facebook and YouTube receive hundreds of millions of posts daily. Human review at that scale is impossible (Facebook would need to hire more moderators than many countries' entire workforces). Automated systems, while fast, cannot reliably assess context, satire, cultural significance, or the nuances of language. This creates a structural gap between the need for judgment and the capacity for judgment.
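The scale problem is easy to verify with back-of-the-envelope arithmetic. The numbers below are assumptions for illustration, not figures from the chapter:

```python
# Hypothetical volumes: a large platform's daily posts and the number of
# items one trained moderator can carefully review in a working day.
posts_per_day = 350_000_000
reviews_per_moderator_per_day = 200

moderators_needed = posts_per_day / reviews_per_moderator_per_day
print(f"{moderators_needed:,.0f} moderators working every day")  # 1,750,000
```

Even under generous throughput assumptions, full human review requires a workforce in the millions, which is why platforms rely on automated triage despite its weak contextual judgment.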

6. Eli's critique of predictive policing in his Detroit neighborhood highlights which of the textbook's recurring themes?

  • A) The Consent Fiction — residents did not consent to being algorithmically targeted.
  • B) The Accountability Gap — no single actor bears clear responsibility for the consequences.
  • C) The Power Asymmetry — the system is deployed on communities that have no power to inspect, challenge, or opt out of it.
  • D) All of the above.
Answer: **D)** All of the above.

*Explanation:* Eli's critique engages all three themes. Residents did not consent to predictive policing (Consent Fiction). The algorithm's errors fragment accountability across the company, the police department, the city council, and the data itself (Accountability Gap). And the system exercises power over communities that lack the resources, information, or institutional standing to challenge it (Power Asymmetry). The chapter presents Eli's experience as an example of how the three themes compound.

7. The chapter argues that the word "algorithm" often functions as a:

  • A) Precise technical descriptor that helps the public understand how decisions are made.
  • B) Rhetorical shield against accountability, wrapping human value judgments in the language of scientific neutrality.
  • C) Legal term with a clear definition in U.S. federal law.
  • D) Synonym for "artificial intelligence" in all contexts.
Answer: **B)** Rhetorical shield against accountability, wrapping human value judgments in the language of scientific neutrality.

*Explanation:* Section 13.1.4 explicitly analyzes the language of algorithms, arguing that terms like "algorithm," "data-driven," and "predictive analytics" obscure power relationships. When an institution says "the algorithm decided," it shifts accountability away from the humans who designed, deployed, and chose to rely on the system. The word "algorithm" sounds scientific and neutral, making the decision harder to challenge than one framed as a human judgment.

8. Which of the following best describes the concept of "algorithmic gatekeeping" as used in this chapter?

  • A) The use of passwords and encryption to restrict access to algorithmic systems.
  • B) The power of algorithmic systems to determine what information, opportunities, or resources individuals can access, effectively controlling the gates through which social goods flow.
  • C) The practice of hiring specialized gatekeepers to oversee algorithmic development teams.
  • D) A security protocol that prevents unauthorized modifications to algorithmic code.
Answer: **B)** The power of algorithmic systems to determine what information, opportunities, or resources individuals can access, effectively controlling the gates through which social goods flow.

*Explanation:* Section 13.5 defines algorithmic gatekeeping as the exercise of sorting, filtering, and prioritizing power by algorithms that control access to information, economic opportunity, social services, and other resources. Unlike traditional gatekeepers (editors, hiring managers, bank officers), algorithmic gatekeepers operate at massive scale, are often invisible to those they affect, and may be unaccountable to any specific authority.

9. According to the chapter, the COMPAS risk assessment tool is used in which domain of algorithmic decision-making?

  • A) Search and information access
  • B) Hiring and employment
  • C) Criminal justice
  • D) Healthcare
Answer: **C)** Criminal justice.

*Explanation:* Section 13.2.2 discusses COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) as a risk assessment tool used in criminal justice. It predicts the likelihood that a defendant will reoffend, and its scores influence bail decisions, sentencing, and parole. The chapter introduces COMPAS here and notes that it will be examined in depth in Chapter 14.

10. Virginia Eubanks's concept of "digital poorhouses," referenced in Section 13.2.2, describes:

  • A) Low-cost computing devices distributed to underserved communities.
  • B) Automated systems in social services that subject low-income populations to levels of surveillance, scoring, and automated judgment that wealthier populations would never tolerate.
  • C) Online platforms that provide financial literacy education to people in poverty.
  • D) Data centers located in economically depressed areas to provide local employment.
Answer: **B)** Automated systems in social services that subject low-income populations to levels of surveillance, scoring, and automated judgment that wealthier populations would never tolerate.

*Explanation:* Section 13.2.2 cites Eubanks's *Automating Inequality* (2018), which documents how automated welfare eligibility systems, homelessness prediction algorithms, and child protective services tools disproportionately affect low-income people. The "digital poorhouse" concept highlights that algorithmic sorting in social services creates a two-tier system: wealthy populations interact with algorithms as consumers (recommendations, personalization), while low-income populations are subjected to algorithms as subjects (surveillance, scoring, denial of benefits).

Section 2: True/False with Justification (1 point each)

For each statement, determine whether it is true or false and provide a brief justification.

11. "Algorithms are objective because they are mathematical — they process data without the emotions, biases, or subjective judgments of human decision-makers."

Answer: **False.**

*Explanation:* Section 13.1.3 directly addresses and rejects this claim. While algorithms are formally mathematical — they execute calculations correctly — the *inputs* to those calculations (data, features, objectives, design choices) are products of human decisions and social structures. An algorithm that computes perfectly on biased data produces biased results. Mathematical precision does not equal social objectivity. The chapter argues that presenting algorithms as "objective" is a rhetorical strategy that obscures the value judgments embedded in every algorithmic system.

12. "Content moderation errors — such as failing to remove hate speech or incorrectly removing legitimate speech — affect all communities equally."

Answer: **False.**

*Explanation:* Section 13.4 discusses how content moderation errors are distributed unequally. Automated systems perform significantly worse on content in non-English languages, non-Western cultural contexts, and marginalized dialects. Communities that speak less-represented languages are more likely to have harmful content left up (because the system cannot detect it) and legitimate content taken down (because the system misinterprets it). The human cost of moderation also falls disproportionately on low-wage workers in the Global South.

13. "The Consent Fiction in algorithmic systems extends beyond data collection consent to encompass consent to being judged, scored, and sorted by algorithmic systems."

Answer: **True.**

*Explanation:* Section 13.2.3 explicitly extends the Consent Fiction from Part 2 (consent to data collection) into Part 3 (consent to algorithmic decision-making). The chapter argues that even if data collection consent were perfect, it would not necessarily cover being algorithmically judged based on that data. Consenting to share health data for "treatment optimization" does not obviously include consent to being scored by a predictive model or ranked against other patients.

14. "Dynamic pricing algorithms always benefit consumers by offering lower prices during periods of low demand."

Answer: **False.**

*Explanation:* Dynamic pricing is discussed in Section 13.2.1 as an example of algorithmic decision-making in everyday life. While dynamic pricing can sometimes produce lower prices during low-demand periods, it also produces higher prices during high-demand periods — which may disproportionately affect people with less schedule flexibility (workers who cannot choose when to commute, families who must travel during school holidays). The algorithm optimizes for revenue, not consumer welfare, and the benefits and costs are unevenly distributed.
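A minimal sketch of a surge-style pricing rule shows where the asymmetry comes from. All numbers here are hypothetical, and real systems are far more complex:

```python
BASE_FARE = 10.00

def surge_price(demand, supply):
    """Scale the base fare by the demand/supply ratio, bounded to [1x, 3x].
    The rule maximizes revenue; it knows nothing about rider welfare."""
    multiplier = max(1.0, min(demand / max(supply, 1), 3.0))
    return round(BASE_FARE * multiplier, 2)

# Off-peak: flexible riders can wait for this window and save.
print(surge_price(demand=50, supply=100))    # 10.0 (floored at 1x)

# Rush hour: commuters with fixed schedules must pay whatever it says.
print(surge_price(demand=300, supply=100))   # 30.0 (capped at 3x)
```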

15. "The chapter describes the shift from human to algorithmic decision-making as a transfer of authority — and argues that authority without accountability is power without responsibility."

Answer: **True.**

*Explanation:* Section 13.1.3 makes this argument directly, framing the algorithmic turn as a transfer of authority from human decision-makers (who can be questioned, appealed, and held accountable) to algorithmic systems (which cannot). The chapter identifies this as the Accountability Gap "in its purest form" — when authority is exercised algorithmically, responsibility fragments across developers, deployers, users, and the data itself, and no single actor bears clear accountability.

Section 3: Short Answer (2 points each)

16. The chapter presents a table comparing "What They Say" and "What It Means" for common algorithmic language (Section 13.1.4). Choose two entries from that table and explain, in your own words, why the euphemistic framing matters for governance and accountability. How does the language obscure the power dynamics at work?

Sample Answer "Predictive analytics" translates to "the system guesses what you'll do in the future." The euphemistic framing matters because "predictive analytics" sounds like a neutral description of a statistical capability, while "guessing what you'll do" foregrounds the uncertainty, the presumption, and the power relationship involved. A system that "performs predictive analytics" sounds like it discovers truth; a system that "guesses whether you'll commit a crime" sounds like it makes judgments under uncertainty — which is what it actually does. The neutral language makes it harder for the public and policymakers to question the system's authority. "Personalization" translates to "the system decides what reality to show you." This matters because "personalization" implies a service being customized for your benefit — you're getting a *better* experience. But "deciding what reality to show you" foregrounds the gatekeeping power: the system is not just curating content; it is shaping your informational environment and, by extension, your understanding of the world. Framing this as "personalization" obscures the fact that someone else (the platform's algorithm, optimizing for engagement) is determining the boundaries of your information diet. *Key points for full credit:* - Selects two specific entries from the table - Explains the power dynamic concealed by each euphemism - Connects the framing to governance or accountability implications

17. Explain the concept of the "algorithmic turn" as presented in this chapter. Why does the chapter describe it as more than simply a shift from analog to digital processes? What specifically changes about institutional accountability when decisions are delegated to algorithms?

Sample Answer The "algorithmic turn" refers to the historical moment when institutions began systematically delegating consequential decisions — about credit, hiring, criminal justice, healthcare, and social services — to computational systems. It is more than digitization (converting paper records to databases) because it involves a transfer of *decision-making authority* from humans to code. What changes about accountability is fundamental. When a human judge makes a sentencing decision, we can scrutinize the judge's reasoning, appeal the decision, and ultimately hold the judge responsible. When an algorithm contributes to that decision, accountability fragments: the developer claims the tool is advisory, the judge claims the algorithm is just one factor, the deploying jurisdiction claims it's improving consistency, and the training data is treated as a neutral reflection of reality. No single actor bears clear responsibility. The algorithmic turn thus creates what the chapter calls the Accountability Gap — authority is exercised, but responsibility is diffuse. *Key points for full credit:* - Defines the algorithmic turn as delegation of consequential decisions to computational systems - Distinguishes it from mere digitization - Explains how algorithmic decision-making fragments accountability

18. According to the chapter, why are algorithms "a profoundly inadequate substitute for reasoning" in tasks that require moral judgment, contextual understanding, or empathy (Section 13.1.3)? Provide a specific example — from the chapter or your own — of a decision that requires reasoning beyond statistical pattern recognition.

Sample Answer:

The chapter argues that algorithms compute rather than reason — they identify statistical patterns in training data and apply those patterns to new cases. While this is powerful for well-defined tasks (spam detection, image recognition), it is inadequate for decisions that require moral judgment, contextual understanding, or empathy because these capacities involve more than pattern matching. Moral judgment requires weighing competing values. Contextual understanding requires grasping the particular circumstances of a case. Empathy requires recognizing the human stakes of a decision.

Example: A child protective services algorithm might flag a family for investigation based on statistical risk factors — prior contact with the system, poverty indicators, neighborhood characteristics. But the decision about whether a child is actually at risk requires understanding the family's specific circumstances: Is the prior contact a pattern of harm or a resolved situation? Is the poverty indicator a sign of neglect or a sign of economic hardship in a family that is doing its best? A human caseworker can ask questions, observe interactions, and exercise judgment about context. An algorithm applies a score. The score may be accurate in aggregate, but for any individual family it cannot substitute for the contextual reasoning that the decision demands.

*Key points for full credit:*

- Distinguishes between computation (pattern matching) and reasoning (contextual judgment)
- Identifies what algorithms cannot do: moral judgment, contextual understanding, empathy
- Provides a specific, well-developed example

19. The chapter discusses Australia's "Robodebt" scandal as an example of algorithmic decision-making in social services. Using only the information provided in Section 13.2.2, explain what happened, who was affected, and what this case reveals about the risks of automated decision-making in public benefit systems.

Sample Answer:

The Robodebt scandal involved an automated system used by the Australian government to identify welfare recipients who allegedly owed debts. The system used an automated income-averaging process that incorrectly flagged approximately 470,000 welfare recipients as owing money to the government, triggering aggressive debt collection efforts. The program was later found to be unlawful, and the Australian government settled a class-action lawsuit for $1.2 billion.

The case reveals several critical risks of automated decision-making in public benefit systems. First, the people most affected — welfare recipients — were those with the least power to challenge automated decisions, illustrating the Power Asymmetry. Second, the automated system processed hundreds of thousands of cases without the human review that might have identified the systemic error. Third, the consequences of algorithmic errors in social services are not abstract — they involve real people facing aggressive debt collection demands for debts they do not owe, with potential impacts on their mental health, financial stability, and trust in government institutions.

*Key points for full credit:*

- Accurately describes the Robodebt system and its scale
- Identifies the population affected (welfare recipients)
- Connects to broader themes (Power Asymmetry, risks of automation)
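The core flaw of income averaging can be shown in a few lines. The figures below are invented for illustration; only the mechanism (spreading annual income evenly across fortnights and comparing it with truthfully reported fortnightly income) follows the description above:

```python
FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income):
    """Robodebt-style assumption: income was earned evenly all year."""
    return annual_income / FORTNIGHTS_PER_YEAR

# A casual worker earns $26,000, all of it during 13 fortnights of work,
# and truthfully reports $0 for the 13 fortnights spent on benefits.
actual = [2000.0] * 13 + [0.0] * 13
avg = averaged_fortnightly_income(sum(actual))   # 1000.0 per fortnight

# Averaging "finds" unreported income in every fortnight the person
# truthfully reported less than the average, raising a debt that
# does not exist.
false_flags = sum(1 for income in actual if income < avg)
print(f"Averaged: ${avg:.0f}/fortnight; {false_flags} fortnights falsely flagged")
```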

Section 4: Applied Scenario (5 points)

20. Read the following scenario and answer all parts.

Scenario: HealthScore Pro

A national health insurance company introduces "HealthScore Pro," an algorithmic system that analyzes members' claims data, prescription histories, fitness app data (voluntarily shared by 40% of members for a premium discount), and demographic information to generate a "Wellness Score" from 1 to 100.

The Wellness Score is used for three purposes: (1) determining premium discount eligibility (scores above 75 receive a 10% discount), (2) prioritizing members for enrollment in preventive care programs (limited slots allocated to lowest-scoring members first), and (3) flagging members for "care optimization outreach" — phone calls from health coaches.

The insurer's marketing materials describe HealthScore Pro as "an innovative wellness tool that empowers members to take charge of their health journey." The algorithm's specific weights and logic are proprietary. Members can see their score but not how it was calculated.

(a) Identify the algorithmic decision-making functions of HealthScore Pro. For each function, classify it using the six domains from Section 13.2.2 and assess whether the stakes are low, moderate, or high. (1 point)

(b) Decode the insurer's marketing language ("innovative wellness tool that empowers members to take charge of their health journey") using the framework from Section 13.1.4. What is the language obscuring? Rewrite the description in language that makes the social reality visible. (1 point)

(c) Analyze the Consent Fiction in this scenario. Consider the 40% of members who share fitness app data and the 60% who do not. Is consent meaningfully voluntary when sharing data is tied to a financial incentive? What about the algorithmic scoring itself — did members consent to being scored? (1 point)

(d) Apply the Accountability Gap framework. If a member's Wellness Score is inaccurate — perhaps because their claims data reflects under-utilization of healthcare due to access barriers rather than good health — who is responsible? Map the potential accountability claims of the insurer, the algorithm developers, the fitness app company, and the member. (1 point)

(e) Propose three governance measures that should be implemented before this system is deployed. For each, explain what harm it prevents and which principle (transparency, accountability, consent, or fairness) it serves. (1 point)

Sample Answer:

**(a)** HealthScore Pro performs three algorithmic functions:

- **Premium discount determination** — maps to the healthcare/insurance domain; stakes are **high** because it directly affects the cost of essential coverage, and members with lower scores effectively pay more.
- **Preventive care program enrollment** — maps to the healthcare domain; stakes are **high** because access to preventive care can affect long-term health outcomes.
- **"Care optimization outreach" targeting** — maps to the healthcare domain; stakes are **moderate** because receiving a phone call from a health coach is less consequential than the other two, but being algorithmically selected for outreach still involves profiling and could feel intrusive.

**(b)** Decoded: "An innovative wellness tool" means "a proprietary scoring algorithm." "Empowers members" means "assigns scores that determine what members pay and what services they receive." "Take charge of their health journey" implies the member has agency, but in reality the algorithm has agency — the member cannot inspect, challenge, or understand their score. A more transparent description: "HealthScore Pro is a proprietary algorithm that assigns each member a numerical score based on claims data, prescriptions, demographic information, and optionally shared fitness data. This score determines premium discounts, access to preventive care programs, and whether a member is contacted by a health coach. Members can see their score but cannot see how it was calculated."

**(c)** Consent is compromised in multiple ways. For the 40% who share fitness data, the 10% premium discount creates a financial incentive that may make the choice less than fully voluntary — particularly for members on tight budgets who cannot afford to forgo the discount. The 60% who do not share fitness data are still scored using claims, prescriptions, and demographics — they did not consent to being algorithmically scored and ranked. The Consent Fiction extends here to the domain of algorithmic decision-making: signing up for health insurance presumably involves consent to claims processing, but it does not obviously include consent to being assigned a proprietary Wellness Score that determines premiums and access to care programs. Members consented to *insurance*, not to *algorithmic scoring*.

**(d)** Accountability fragments across multiple actors. The insurer claims the algorithm is a "wellness tool" and that members can improve their score through healthy behavior. The algorithm developers claim the system is performing as designed. The fitness app company claims it merely shared data as permitted by the user agreement. The member is told to "take charge" — implying the score is their responsibility. If a member's low score reflects under-utilization of healthcare due to access barriers (they avoid the doctor because of transportation challenges or cannot afford copays), no actor bears clear responsibility for the resulting harm. The insurer sees low utilization as low engagement; the algorithm encodes that as low wellness; the member pays higher premiums. The Accountability Gap is structural: the system converts a social condition (access barriers) into an individual judgment (low Wellness Score) with financial consequences, and no one is accountable for the translation.

**(e)** Three governance measures:

1. **Score transparency and explanation** (transparency): Members should be able to see not just their score but the factors that contributed to it and their relative weights. This prevents the black-box problem and enables members to identify and contest errors.
2. **Bias audit before deployment** (fairness): The system should be audited for disparate impact across race, income level, age, and disability status before deployment. If members with fewer healthcare interactions score lower, the system may systematically disadvantage populations facing access barriers; the audit ensures it does not encode structural inequality as individual wellness.
3. **Meaningful opt-out without penalty** (consent): Members should be able to opt out of being scored without forgoing premium discounts or access to programs. If opting out is not possible, the scoring must be governed by clear rules, independent oversight, and a contestation process, so that members have genuine choice.
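To see how the translation described in part (d) happens mechanically, here is a deliberately simple scoring function. Every feature, weight, and threshold below is invented for illustration; the scenario's actual logic is proprietary and unknown:

```python
def wellness_score(claims_per_year, rx_adherence, fitness_days_per_week):
    """Toy score: rewards visible 'engagement' with the health system."""
    score = 50.0
    score += min(claims_per_year, 6) * 3.0    # utilization read as engagement
    score += rx_adherence * 25.0              # fraction of prescriptions filled
    score += fitness_days_per_week * 2.5      # zero for the 60% who don't share
    return max(1, min(round(score), 100))

# Member A: good access, routine checkups, shares fitness data.
print(wellness_score(4, 0.9, 4))   # 94 -> above 75, gets the discount

# Member B: same underlying health, but no transport to appointments,
# unaffordable copays, and no shared fitness data. The access barrier
# is encoded as "low wellness," and the member pays more.
print(wellness_score(0, 0.5, 0))   # 62 -> below 75, no discount
```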

Scoring & Review Recommendations

| Score Range | Assessment | Next Steps |
| --- | --- | --- |
| Below 50% (0-13 pts) | Needs review | Re-read Sections 13.1-13.3 carefully; redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas; focus on Part B exercises |
| 70-85% (20-23 pts) | Solid understanding | Ready to proceed to Chapter 14; review any missed topics |
| Above 85% (24-28 pts) | Strong mastery | Proceed to Chapter 14: Bias in Data, Bias in Machines |

| Section | Points Available |
| --- | --- |
| Section 1: Multiple Choice | 10 points (10 questions × 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions × 1 pt) |
| Section 3: Short Answer | 8 points (4 questions × 2 pts) |
| Section 4: Applied Scenario | 5 points (5 parts × 1 pt) |
| **Total** | **28 points** |