Exercises: How Algorithms Shape Society

These exercises progress from concept checks to challenging applications. Estimated completion time: 3-4 hours.

Difficulty Guide:

  • ⭐ Foundational (5-10 min each)
  • ⭐⭐ Intermediate (10-20 min each)
  • ⭐⭐⭐ Challenging (20-40 min each)
  • ⭐⭐⭐⭐ Advanced/Research (40+ min each)


Part A: Conceptual Understanding ⭐

Test your grasp of core concepts from Chapter 13.

A.1. Section 13.1.1 provides the technical definition of an algorithm (finite steps, definiteness, input, output, effectiveness), and Section 13.1.2 provides the social definition. Explain, in your own words, why the technical definition is "radically insufficient" for understanding how algorithms shape society. Use a specific example not found in the chapter.

A.2. The chapter distinguishes between an algorithm as a "set of instructions" (technical view) and a "system of power" (social view). For each of the following, identify which view is emphasized and explain why the other view would reveal something important that the first obscures:

  • (a) A bank's marketing material describes its new lending model as "a data-driven approach to faster decisions."
  • (b) A computer science textbook explains how collaborative filtering computes the cosine similarity between user preference vectors.
  • (c) A city council resolution states that "predictive policing technology has been deployed to optimize public safety resource allocation."

A.3. Define algorithmic gatekeeping as presented in Section 13.5. Then explain how algorithmic gatekeeping differs from traditional gatekeeping (e.g., a newspaper editor deciding which stories to publish). Identify at least two characteristics that make algorithmic gatekeeping more powerful and one characteristic that makes it potentially less accountable.

A.4. The chapter introduces the concept of the "algorithmic turn" (Section 13.2). Explain what this term means and why it represents more than simply a shift from analog to digital processes. What specifically changes about accountability when decisions move from human to algorithmic systems?

A.5. Section 13.3 describes three types of recommendation systems: collaborative filtering, content-based filtering, and hybrid approaches. In two to three sentences each, explain the core logic of collaborative filtering and content-based filtering. Then identify one weakness of each approach that the hybrid model attempts to address.
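Before answering A.5, it may help to see the core logic of collaborative filtering in code. The sketch below is a toy illustration only: the user names and rating matrix are invented, and real systems use far larger data and more sophisticated models. It computes the cosine similarity mentioned in A.2(b) and recommends items liked by the most similar user.

```python
import math

# Toy user-item rating matrix (hypothetical data): rows are users,
# columns are items; 0 means "not rated".
ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 0, 5, 1],
    "carol": [1, 1, 5, 4],
}

def cosine(u, v):
    """Cosine similarity between two preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Collaborative filtering: find the user most similar to Alice,
# then surface items that user rated but Alice has not.
target = ratings["alice"]
best = max((u for u in ratings if u != "alice"),
           key=lambda u: cosine(target, ratings[u]))
suggestions = [i for i, (mine, theirs) in
               enumerate(zip(target, ratings[best]))
               if mine == 0 and theirs > 0]

# Content-based filtering would instead compare item *feature* vectors
# (e.g., genre tags) against items the user already liked -- no other
# users' data needed, which is exactly the trade-off A.5 asks about.
print(best, suggestions)
```

Note how the recommendation depends entirely on other users' behavior: the "cold start" weakness (a new user matches no one well) is one gap the hybrid model tries to close.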

A.6. The Consent Fiction, as extended in Section 13.2.3, moves from the domain of data collection to the domain of decision-making. Explain this extension. Why is it significant that the Consent Fiction now encompasses not just what data is collected about you but what decisions are made about you?

A.7. Using the table from Section 13.1.4 ("What They Say" vs. "What It Means"), decode the following statement from a hypothetical company press release: "Our platform leverages advanced predictive analytics and personalization to deliver optimized, data-driven experiences." Rewrite this sentence in language that makes the social reality visible.


Part B: Applied Analysis ⭐⭐

Analyze scenarios, arguments, and real-world situations using concepts from Chapter 13.

B.1. Consider the following scenario:

A mid-size city implements a "Smart Parking" system. Sensors detect which parking spaces are occupied, and a mobile app directs drivers to open spots. The system also implements dynamic pricing: parking costs more during high-demand periods and less during low-demand periods. The city argues this reduces congestion, emissions from circling for parking, and revenue shortfalls. Data collected includes: license plate numbers (via cameras), app user accounts, GPS coordinates, time-stamped parking transactions, and duration of stay.

Using the six-question Applied Framework from Chapter 1 and the concepts from Chapter 13, analyze this system. Specifically address: (a) What algorithmic decisions are being made? (b) Who benefits and who bears the costs of dynamic pricing? (c) What forms of the Consent Fiction are present?
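To make the dynamic-pricing element of B.1 concrete, here is a deliberately simplified pricing rule. The formula, rates, and caps are all hypothetical (the scenario does not specify the city's actual model); the point is that even a transparent, linear rule raises the distributional questions in part (b).

```python
def dynamic_price(occupied: int, capacity: int,
                  floor: float = 1.00, ceiling: float = 8.00) -> float:
    """Toy dynamic-pricing rule: hourly rate rises linearly with occupancy.

    At 0% occupancy the rate sits at `floor`; at 100% it reaches `ceiling`.
    Real systems are more complex, but even this sketch shows who is priced
    out of downtown parking at peak demand.
    """
    utilization = occupied / capacity
    return round(floor + (ceiling - floor) * utilization, 2)

print(dynamic_price(10, 100))   # quiet morning: near the floor rate
print(dynamic_price(95, 100))   # near capacity: near the ceiling rate
```

As you apply the six-question framework, notice that every input to this function comes from the surveillance infrastructure the scenario describes, and that the driver neither sees the formula nor consented to it.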

B.2. Section 13.4 discusses content moderation at scale. Read the following statement carefully:

"Content moderation is fundamentally a solved problem. AI systems can now detect hate speech, misinformation, and violent content with over 95% accuracy. The remaining errors are a small price to pay for the enormous volume of harmful content removed."

Identify at least four problems with this argument, drawing on the chapter's discussion of the scale problem, contextual judgment, cultural variation, and the human cost of content moderation. For each problem, cite the relevant section of the chapter.

B.3. The chapter discusses six domains of consequential algorithmic decision-making (Section 13.2.2). Choose one domain not extensively discussed in the chapter (e.g., housing, education, immigration, or insurance). Research a specific algorithmic system used in that domain and analyze it using the framework from Section 13.1.2: What data does it take as input? What consequential decision does it produce as output? Who is affected? Who is accountable?

B.4. Mira's experience with VitraMed's patient risk model illustrates the transition from algorithmic sorting in theory to algorithmic sorting in practice. Trace Mira's arc in this chapter: What does she understand at the beginning? What realization does she reach by the chapter's end? How does her professional position at VitraMed complicate her ability to act on what she discovers?

B.5. Section 13.4.3 describes the "human cost" of content moderation — the psychological toll on human moderators who review violent, abusive, and disturbing content. Analyze this situation through the lens of the Power Asymmetry. Who designs the content moderation system? Who profits from it? Who bears the psychological burden? How does the geographic and economic positioning of content moderators (often in lower-income countries) relate to the broader themes of this textbook?

B.6. Eli connects predictive policing to his neighborhood in Detroit. Explain why predictive policing creates a feedback loop (the concept is introduced briefly in Section 13.2.2 and will be expanded in Chapter 14). How does the algorithm's prediction of "high-crime areas" potentially create the very conditions it claims to predict?


Part C: Real-World Application Challenges ⭐⭐-⭐⭐⭐

These exercises ask you to investigate your own algorithmic environment.

C.1. ⭐⭐ Algorithm Audit: Your Feed. Select one platform you use regularly (Instagram, TikTok, YouTube, Twitter/X, or a news aggregator). Spend 30 minutes scrolling through your algorithmic feed. For every piece of content shown to you, record: (a) the type of content (news, entertainment, ad, personal post), (b) whether you interacted with it (liked, commented, watched to completion), and (c) your best guess at why the algorithm showed it to you. After 30 minutes, write a one-page analysis: What patterns do you see? What is the algorithm optimizing for? What content is not being shown to you, and why might that matter?

C.2. ⭐⭐ The Recommendation Experiment. Create a new, blank account on a streaming platform (YouTube, Spotify, or Netflix). Do not log into any existing account. Document what the platform recommends to you when it knows nothing about you — these are the algorithmic defaults. Then interact with specific types of content for 20-30 minutes (e.g., exclusively watch cooking videos, or only play jazz). Document how the recommendations change. Write a short report on what you learn about how quickly the algorithm profiles you and what assumptions it makes.

C.3. ⭐⭐⭐ Algorithmic Decision Mapping. Following the model from Section 13.2.1, map every algorithmic decision you encounter in a single day. For each one, identify: (a) what decision was made, (b) what data was likely used, (c) whether you were aware of the decision at the time, (d) whether you could have opted out, and (e) what the consequence of the decision was (trivial, moderate, or significant). Present your findings in a table and write a one-paragraph reflection on the ratio of algorithmic decisions you were aware of versus those you were not.

C.4. ⭐⭐⭐ Price Discrimination Investigation. Dynamic pricing algorithms adjust prices based on user data. Conduct a small experiment: search for the same product or service (airline ticket, hotel room, ride-share fare, or e-commerce item) from two different devices, locations, or browsing profiles. Document any price differences. Write a one-page analysis connecting your findings to the concepts of algorithmic decision-making and the Consent Fiction. Note: clearly state your methodology and any limitations of your experiment.


Part D: Synthesis & Critical Thinking ⭐⭐⭐

These questions require you to integrate multiple concepts from Chapter 13 and think beyond the material presented.

D.1. The chapter argues that algorithms "do not reason — they compute" (Section 13.1.3). Some AI researchers disagree, arguing that advanced AI systems exhibit forms of reasoning. Evaluate the chapter's claim. Is the distinction between reasoning and computing clear-cut, or is it a spectrum? Does it matter for the governance questions raised in this chapter whether algorithms "truly reason" or merely process statistical patterns? Write a two-paragraph analysis.

D.2. Consider the following thought experiment:

A university replaces its human admissions committee with an algorithmic system. The algorithm considers GPA, test scores, extracurricular activities, personal essay (analyzed by NLP), recommendation letters (analyzed by NLP), demographic factors, and zip code. It produces a single "admissions score" from 0 to 100. Students above 75 are admitted automatically; students below 40 are rejected automatically; students between 40 and 75 are reviewed by a reduced human committee.

Analyze this system using at least four concepts from Chapter 13: algorithmic gatekeeping, the Consent Fiction, the Accountability Gap, and the algorithmic turn. Who benefits from this system? Who is harmed? What is gained and what is lost when admissions moves from human judgment to algorithmic scoring?

D.3. Section 13.5 discusses algorithmic gatekeeping on platforms. The chapter notes that platforms argue they are neutral intermediaries while simultaneously curating content through recommendation algorithms. Write a short essay (300-500 words) evaluating this claim. Can a platform that algorithmically ranks, filters, and recommends content credibly claim neutrality? What would "neutral" even mean in this context? Draw on at least three specific examples from the chapter.

D.4. Dr. Adeyemi asks: "Even if data collection consent were perfect — fully informed, genuinely voluntary — would that consent extend to being algorithmically judged based on that data?" (Section 13.2.3). Write a response to this question in 200-400 words. Consider: Does consent to share data imply consent to be scored, ranked, or sorted by that data? If not, where does the gap lie? What would meaningful consent to algorithmic decision-making look like?


Part E: Research & Extension ⭐⭐⭐⭐

These are open-ended projects for students seeking deeper engagement. Each requires independent research beyond the textbook.

E.1. Algorithmic Accountability Reporting. ProPublica's "Machine Bias" investigation (2016) pioneered algorithmic accountability journalism. Research two additional examples of journalistic investigations that revealed algorithmic harms (e.g., The Markup's investigations into Facebook's ad platform, or The New York Times's investigation into facial recognition). For each, write a 500-word analysis covering: What system was investigated? What was found? What methodological approach did the journalists use? What changed as a result?

E.2. The Platform Moderation Workforce. Section 13.4 discusses content moderation's human cost. Research the working conditions of content moderators at a specific company or in a specific region (e.g., content moderation outsourcing in Kenya, the Philippines, or India). Write a 1,000-word report addressing: (a) who does this work and under what conditions, (b) what psychological impacts have been documented, (c) what legal protections exist (or are absent), and (d) how this labor arrangement reflects the Power Asymmetry and global inequality themes of this textbook.

E.3. Recommendation System Design Challenge. Section 13.3 describes how recommendation systems work. Design (on paper) a recommendation system for a university library's digital collection. Specify: (a) what data you would use as input, (b) what filtering approach you would employ and why, (c) what you would optimize for (engagement? diversity of sources? academic relevance?), (d) what you would not optimize for and why, and (e) what governance mechanisms you would build into the system. Your design should explicitly address the filter bubble problem and the risk of algorithmic gatekeeping.


Solutions

Selected solutions are available in appendices/answers-to-selected.md.