Chapter 28 Exercises: Algorithmic Management — When the Boss Is an AI
Exercise 28.1 — Algorithmic Management Audit: Mapping Your Own Encounters
Type: Individual reflective research Time: 40–50 minutes
Instructions
Algorithmic management is not limited to Amazon warehouses and Uber. It appears in many contexts that students encounter. This exercise asks you to map your own encounters with algorithmic management systems.
Part A: Identify three algorithmic management systems in your life
Think broadly. Consider:
- Work experiences (present or past)
- School — grading systems, learning management platforms, attendance tracking
- Food delivery apps (if you've worked for them)
- Ride-share platforms (if you've driven for them)
- Any platform-based work (TaskRabbit, Upwork, Fiverr, Amazon Mechanical Turk)
- Sports performance tracking (athletic performance apps, workout apps used in a team context)
For each system, document:
1. What does the system track?
2. How does it communicate feedback or direction?
3. Can you contest its assessments? How?
4. Does a human review the system's recommendations before they affect you?
Part B: Apply the black box test
For each system, ask: if you received a negative consequence (a lower score, deactivation, critical feedback) from this system, could you get a meaningful answer to the question "Why did this happen?" Write a brief assessment of how opaque or transparent each system is.
Part C: Reflection (200–300 words)
Which of your identified systems most closely resembles the algorithmic management described in this chapter? What features make it similar or different from the warehouse or gig economy contexts?
Exercise 28.2 — Python Lab: Modifying the Task Assignment Algorithm
Type: Technical/analytical coding exercise Time: 60–90 minutes Prerequisites: Basic Python familiarity; Chapter 28 Python code section
Instructions
The chapter includes a simplified warehouse task assignment algorithm. This exercise asks you to modify the algorithm to incorporate worker protections, then analyze the trade-offs.
Setup: Copy the chapter's Python code into a new file or Jupyter notebook.
Modification 1: Add accommodation support
The current Worker class does not include any accommodation for workers with documented disabilities or medical limitations. Add the following:
- A has_accommodation boolean field to the Worker class
- An accommodation_type string field (e.g., "reduced rate target", "extended break time")
- Modify the check_and_issue_warnings method to skip or adjust warnings for workers with accommodations
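Since the chapter's code is not reproduced here, the sketch below assumes a minimal Worker dataclass and a simple rate-based warning rule. The has_accommodation and accommodation_type fields and the check_and_issue_warnings name come from the exercise itself; RATE_TARGET, the tasks_per_hour field, and the 25% adjustment factor are illustrative assumptions. One possible shape for Modification 1:

```python
from dataclasses import dataclass, field

RATE_TARGET = 30.0  # assumed baseline tasks-per-hour target; the chapter's value may differ

@dataclass
class Worker:
    worker_id: str
    tasks_per_hour: float = 0.0
    warnings: list = field(default_factory=list)
    has_accommodation: bool = False  # new field per Modification 1
    accommodation_type: str = ""     # e.g. "reduced rate target"

def check_and_issue_warnings(worker: Worker) -> None:
    """Issue a rate warning, adjusting the target for documented accommodations."""
    target = RATE_TARGET
    if worker.has_accommodation and worker.accommodation_type == "reduced rate target":
        target *= 0.75  # assumed adjustment; a real system would store this per worker
    if worker.tasks_per_hour < target:
        worker.warnings.append(
            f"rate below target ({worker.tasks_per_hour:.1f} < {target:.1f})"
        )
```

Note the design choice: the accommodation adjusts the threshold rather than suppressing warnings entirely, so accommodated workers still receive feedback against a target appropriate to them.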
Modification 2: Add an idle time context field
The current system treats all idle time identically. Add:
- A way for workers to log "excused idle time" (approved bathroom break, equipment malfunction, helping an injured colleague)
- Modify the warning logic to distinguish excused from unexcused idle time
- Note: In a real system, this would require a mechanism for workers to record excused time in real time — design that mechanism as part of your solution
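One way to sketch Modification 2, assuming the chapter's code tracks idle minutes per worker. The IdleLog class, log_idle method, and IDLE_LIMIT_MINUTES threshold below are all hypothetical names introduced for illustration, not the chapter's:

```python
from dataclasses import dataclass, field

IDLE_LIMIT_MINUTES = 15.0  # assumed per-shift threshold for unexcused idle time

@dataclass
class IdleLog:
    excused: float = 0.0    # minutes of idle time with a logged reason
    unexcused: float = 0.0  # minutes with no logged reason
    reasons: list = field(default_factory=list)

    def log_idle(self, minutes: float, reason: str = "") -> None:
        """Record idle time; any entry with a reason code counts as excused.

        The real-time mechanism assumed here: workers select a reason code
        on their handheld scanner when they go idle.
        """
        if reason:
            self.excused += minutes
            self.reasons.append(reason)
        else:
            self.unexcused += minutes

    def warning_due(self) -> bool:
        # Only unexcused idle time counts toward the warning threshold.
        return self.unexcused > IDLE_LIMIT_MINUTES
```

A design question this sketch surfaces: if workers self-report reasons, who verifies them, and does adding a verification step reintroduce the surveillance the modification was meant to soften?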
Modification 3: Add a warning appeal mechanism
The current code generates automated warnings without appeal. Add:
- An appeal_warning method that allows a worker to flag a specific warning for human review
- A simple pending_appeals queue that human supervisors would review
- Output text indicating that appeals are pending when the simulation ends
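A minimal sketch of Modification 3, using the appeal_warning and pending_appeals names the exercise specifies; the Warning dataclass and the end-of-simulation report function are illustrative assumptions:

```python
from collections import deque
from dataclasses import dataclass

# Queue of (warning, worker's note) pairs awaiting human supervisor review.
pending_appeals: deque = deque()

@dataclass
class Warning:
    worker_id: str
    reason: str
    under_appeal: bool = False

def appeal_warning(warning: Warning, note: str) -> None:
    """Flag a specific warning for human review instead of letting it stand automatically."""
    warning.under_appeal = True
    pending_appeals.append((warning, note))

def end_of_simulation_report() -> str:
    """Per the exercise, report pending appeals when the simulation ends."""
    if pending_appeals:
        return f"{len(pending_appeals)} appeal(s) pending human review"
    return "no appeals pending"
```

Whether a warning under appeal still counts toward discipline while it waits in the queue is itself a policy choice worth discussing in your written analysis.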
Analysis (300–400 words):
After implementing your modifications, answer:
1. Did your modifications reduce the algorithm's efficiency (throughput per worker)? How significant is the trade-off?
2. Are your modifications technically feasible in a real warehouse management system? What would the implementation challenges be?
3. What does the process of designing these modifications reveal about the original algorithm's design choices?
4. Who would need to decide to implement these modifications — the engineer? The manager? The legal team? What does the answer reveal about whose interests are centered in algorithm design?
Exercise 28.3 — The Deactivation Scenario: Role-Play and Legal Analysis
Type: Small group role-play with written legal analysis Time: 50–60 minutes
Instructions
Scenario: Marcus has been a DoorDash delivery driver for 8 months. Until yesterday, his overall rating was 4.62 stars — above the 4.60 minimum threshold. Yesterday, his account was deactivated. The email from DoorDash says: "Your Dasher account has been deactivated because your rating has fallen below the minimum threshold required to Dash."
Marcus believes this is wrong. He made four deliveries yesterday. One customer gave him 1 star even though it was the restaurant that had prepared the food incorrectly (he knows this because he called to apologize and the customer told him). This single review dropped his rating from 4.62 to 4.59.
Marcus has an appeal process: he can email DoorDash's support team. He has already done so. The response was: "Thank you for contacting Dasher support. Our review has confirmed that your account has been deactivated in accordance with our policies. This decision is final."
Group roles (4–5 people):
- Marcus
- A DoorDash policy representative
- An employment lawyer specializing in gig economy work
- A worker organizer from the Gig Workers Collective
- (Optional) An NLRB investigator
Round 1 (15 minutes): Marcus attempts to appeal his deactivation through the policy representative. What arguments can Marcus make? What does the representative say?
Round 2 (15 minutes): The lawyer and organizer advise Marcus on his options. What legal recourse does he have? What organizing strategies might help?
Written Analysis (Individual, 300–400 words):
1. What legal rights, if any, does Marcus have in this situation under U.S. law?
2. How would this situation differ if Marcus were in the EU under the Platform Work Directive?
3. What structural change to DoorDash's algorithmic rating system would have prevented this situation?
4. Is the 1-star review that triggered the deactivation "Marcus's fault"? What does this question reveal about how algorithmic systems assign individual responsibility for structural problems?
Exercise 28.4 — Emotional Labor Monitoring: An Ethics Workshop
Type: Small group discussion with written position paper Time: 45–55 minutes
Instructions
Background: A large bank is considering implementing a sentiment analysis system for its customer service call center. The system would analyze every call in real time and assign an "empathy score" to each agent based on vocal patterns, word choice, and response timing. Agents whose empathy scores fall below a monthly threshold would receive mandatory coaching. Agents with consistently low scores would face performance review.
Discussion questions for group (20 minutes):
1. What is being measured in the empathy score? Is "empathy" reducible to vocal patterns and word choice?
2. The bank argues that emotional labor is already being evaluated in performance reviews — the only change is making the evaluation more consistent and continuous. Is this argument valid?
3. If the sentiment analysis system is racially biased — if it systematically scores Black workers' vocal patterns lower than white workers' for equivalent service quality — is this an algorithmic discrimination claim? How would a worker prove this?
4. Workers know their calls are already recorded "for quality assurance." Does adding automated sentiment scoring change the ethical picture? If so, how?
5. Should workers be informed that their "empathy scores" are being calculated and that these scores affect their performance evaluation?
Position Paper (Individual, 300–400 words): Write a position paper as if you were either: (a) the bank's Chief People Officer recommending or opposing implementation, or (b) the union representing call center workers, explaining your support or opposition. Use specific arguments and evidence from the chapter.
Exercise 28.5 — Historical Comparison: Taylor's Foreman and the Algorithmic Manager
Type: Comparative analysis essay Time: 45–60 minutes (can be assigned as homework)
Instructions
This exercise asks you to systematically compare the traditional human supervisor in a Taylor-era factory with the algorithmic manager in a contemporary warehouse or gig economy platform.
Comparison dimensions:
Knowledge asymmetry:
- What did the Taylor-era foreman know that the worker did not?
- What does the algorithmic manager know that the worker does not?
- How has the nature and scale of the knowledge asymmetry changed?
Discipline mechanisms:
- How did the Taylor-era foreman discipline workers?
- How does the algorithmic manager discipline workers?
- What are the advantages and disadvantages of each approach from the perspective of: (a) the employer, (b) the worker, (c) the legal system?
Appeals and contestation:
- In the Taylor era, how could workers contest supervisory decisions?
- How can workers contest algorithmic decisions today?
- Has the move to algorithmic management made contestation easier or harder?
Visibility:
- What was visible to the Taylor-era foreman that was invisible to workers?
- What is visible to the algorithmic manager that is invisible to workers?
- What is visible to workers about their management system in each era?
Accountability:
- Who was responsible for a Taylor-era foreman's unfair decision?
- Who is responsible when an algorithm makes an unfair decision?
- How does responsibility shift when management is automated?
Essay length: 600–800 words. Make an argument — don't just describe. Your essay should defend a thesis about whether algorithmic management represents a continuity with or a rupture from traditional workplace supervision.