Quiz: Labor, Automation, and the Gig Economy

Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.


Section 1: Multiple Choice (1 point each)

1. Algorithmic management differs from traditional management primarily in that:

  • A) It eliminates all forms of worker monitoring.
  • B) It uses continuous data collection, real-time metrics, and automated decision-making to direct, evaluate, and discipline workers without human review.
  • C) It gives workers more control over their working conditions.
  • D) It is limited to the gig economy and does not affect traditional employees.
Answer **B)** It uses continuous data collection, real-time metrics, and automated decision-making to direct, evaluate, and discipline workers without human review. *Explanation:* Section 33.1.1 defines algorithmic management as the use of data-driven automated systems to direct, evaluate, and discipline workers. Unlike traditional management (which involves human observation, communication, and judgment), algorithmic management operates through continuous quantified surveillance and automated decisions — often without meaningful human review, communication, or appeal. It affects both gig workers and traditional employees.

2. Which of the following is NOT identified in the chapter as a form of workplace surveillance?

  • A) Keystroke logging and screen capture
  • B) Emotional analytics through facial expression analysis
  • C) Worker-controlled data dashboards
  • D) GPS location tracking of company vehicles
Answer **C)** Worker-controlled data dashboards. *Explanation:* Section 33.2.1 lists keystroke logging, screen capture, mouse tracking, email monitoring, location tracking, and emotional analytics as forms of workplace surveillance. Worker-controlled data dashboards are proposed as a solution (not a surveillance tool) — they would give workers access to their own data. The distinction matters: surveillance is employer-controlled data collection about workers; dashboards would be worker-controlled access to that data.

3. Demand for employee monitoring software increased by approximately what percentage in the first months of the COVID-19 pandemic?

  • A) 10%
  • B) 30%
  • C) 60%
  • D) 100%
Answer **C)** 60%. *Explanation:* Section 33.2.2 cites Top10VPN (2020) reporting that demand for employee monitoring software increased by over 60% in the first months of the pandemic. The pandemic shift to remote work created conditions for surveillance infrastructure that, in many organizations, proved more invasive than anything in the physical office.

4. Sofia Reyes identifies five dimensions of gig worker data asymmetry. Which dimension describes the fact that workers can see individual pay amounts but cannot access the logic that determines those amounts?

  • A) Earnings data
  • B) Rating data
  • C) Algorithmic data
  • D) Market data
Answer **C)** Algorithmic data. *Explanation:* Section 33.3.2 describes algorithmic data as the rules by which the algorithm allocates work, sets prices, and determines pay — all of which are proprietary. Workers experience the algorithm's outputs (individual pay amounts) but cannot see its logic (how those amounts are calculated). Earnings data (A) refers to aggregate compensation information; rating data (B) to customer evaluations; market data (D) to supply/demand dynamics.

5. Veena Dubal's research on "algorithmic wage discrimination" documented:

  • A) Platforms paying all workers identical rates for identical work.
  • B) Platforms offering personalized pay to individual workers based on predictions of what each worker will accept.
  • C) Platforms publicly disclosing their wage algorithms to workers.
  • D) Platforms allowing workers to set their own rates.
Answer **B)** Platforms offering personalized pay to individual workers based on predictions of what each worker will accept. *Explanation:* Section 33.3.3 describes Dubal's research finding that platforms used behavioral data to identify workers who would accept lower rates and offered them lower pay for substantially similar work. Drivers in the same market, at the same time, performing the same work received different pay. Dr. Adeyemi characterized this as "the Consent Fiction applied to compensation" — consent without information, alternatives, or bargaining power.

6. The most consistent finding in automation research is:

  • A) Automation eliminates entire occupations rapidly and completely.
  • B) Automation has no meaningful effect on employment.
  • C) Automation displaces specific tasks within jobs rather than eliminating entire jobs.
  • D) Automation creates more jobs than it destroys in every sector simultaneously.
Answer **C)** Automation displaces specific tasks within jobs rather than eliminating entire jobs. *Explanation:* Section 33.4.2 identifies task displacement (not job displacement) as the most consistent finding. A radiologist whose diagnostic tasks are automated may shift to patient consultation and complex case management — the job changes but does not disappear. This distinction matters because it suggests more nuanced policy responses than either "robots will take all jobs" or "nothing will change."

7. The chapter identifies a unique feature of generative AI compared to previous automation waves. This feature is that generative AI:

  • A) Only affects manual labor jobs.
  • B) Affects non-routine cognitive tasks that were previously considered automation-resistant.
  • C) Has no effect on any current jobs.
  • D) Only affects jobs in the technology sector.
Answer **B)** Affects non-routine cognitive tasks that were previously considered automation-resistant. *Explanation:* Section 33.4.2 notes that previous automation waves primarily affected routine tasks (both manual and cognitive). Generative AI introduces a new variable by affecting non-routine cognitive tasks — writing, analysis, coding, design — that were previously considered safe from automation. Early evidence suggests it may compress the wage distribution rather than simply displacing low-wage workers.

8. Sofia Reyes's proposed "right to collective data" would give workers the right to:

  • A) Delete all data the platform has collected about them.
  • B) Access aggregate data about working conditions on their platform, enabling collective bargaining.
  • C) Sell their individual data to the highest bidder.
  • D) Refuse all data collection by the platform.
Answer **B)** Access aggregate data about working conditions on their platform, enabling collective bargaining. *Explanation:* Section 33.6.2 describes the "right to collective data" as the right to access aggregate information about working conditions — average earnings, rating distributions, deactivation rates — that would enable collective bargaining and organized advocacy. This right is distinct from individual data access because it provides the aggregate view necessary for understanding systemic patterns and negotiating collectively.

9. The chapter's key insight about the relationship between surveillance and productivity is:

  • A) Surveillance always increases both productivity and job satisfaction.
  • B) Surveillance optimizes for measurable output, not valuable output — potentially rewarding performative activity over genuine contribution.
  • C) Surveillance has no measurable effect on worker behavior.
  • D) Surveillance decreases productivity in all cases.
Answer **B)** Surveillance optimizes for measurable output, not valuable output — potentially rewarding performative activity over genuine contribution. *Explanation:* The Key Insight box in Section 33.2.3 states that "surveillance optimizes for *measurable* output, not *valuable* output." A worker thinking about a difficult problem may appear "idle" to a keystroke logger but produce more value than one typing continuously. Surveillance incentivizes performative typing, mouse jiggling, and metric gaming — behaviors that increase measured productivity while potentially decreasing actual productivity.

10. Sofia Reyes argues that data rights should be recognized as labor rights. Her first pillar is:

  • A) Workers consume data through their labor.
  • B) Workers produce data through their labor and should have rights over that data.
  • C) Workers should be paid directly for their data.
  • D) Data collection in the workplace should be prohibited entirely.
Answer **B)** Workers produce data through their labor and should have rights over that data. *Explanation:* Section 33.6.1 presents Sofia's three-pillar argument: (1) workers produce data (every keystroke, delivery, and ride generates data that platforms monetize), (2) data is used to manage workers (the mechanism through which management power is exercised), and (3) data asymmetry undermines existing labor rights (you cannot bargain effectively without information). The first pillar establishes that data is a product of labor, and therefore workers should have rights over it.

Section 2: True/False with Justification (1 point each)

11. "Gig platform companies classify workers as independent contractors because those workers genuinely exercise the same level of autonomy and control as traditional independent contractors."

Answer **False.** *Explanation:* Section 33.3.1 documents that the "flexibility" of gig work is largely illusory. Algorithmic management exercises functional control over workers: platforms determine pay, evaluate performance, allocate work, and can terminate the relationship at will. The "flexibility" to set hours is constrained by surge pricing and algorithmic nudges. The "freedom" to reject tasks is constrained by acceptance rate requirements. Information asymmetry means workers cannot make informed decisions. This functional control resembles an employment relationship, not genuine independent contracting.

12. "Research shows that workplace surveillance consistently increases both short-term productivity and long-term innovation."

Answer **False.** *Explanation:* Section 33.2.3 reports that while surveillance does increase measurable output in the short term (Bernstein, 2012), it decreases creativity and innovation, erodes trust, increases turnover, and incentivizes counterproductive gaming behaviors. The short-term compliance gains come at the cost of the creative risk-taking, idea-sharing, and psychological safety that drive long-term innovation. Surveillance optimizes for what is measurable, not what is valuable.

13. "Under current US law, employers face minimal legal restrictions on monitoring employees' digital activity on company systems."

Answer **True.** *Explanation:* Section 33.2.4 states that employer surveillance is broadly legal in the United States. The Electronic Communications Privacy Act (1986) permits employers to monitor communications on company systems. Many states have minimal restrictions. Notable exceptions include Connecticut and Delaware, which require employers to notify employees of email monitoring. The EU provides stronger protections through the GDPR, but US workers have limited legal recourse against workplace surveillance.

14. "The chapter argues that automation taxes should prevent all automation to protect existing jobs."

Answer **False.** *Explanation:* Section 33.5.2 describes automation taxes as proposals to align the private incentives of firms (which save money by automating) with the social costs of displacement (borne by workers and communities). Such taxes would not prevent automation but would "slow it where the social costs exceed the private benefits." The goal is to ensure that automation proceeds at a pace and in ways that allow for adjustment, not to block technological change entirely.

15. "Sofia Reyes found that every gig worker she interviewed was able to successfully access their data through CCPA requests."

Answer **False.** *Explanation:* Section 33.7.1 reports the opposite: "No worker had successfully accessed their data." Several workers had submitted CCPA requests, but the data they received was either incomplete (aggregate summaries rather than raw data) or unusable (massive CSV files with no documentation or context). The gap between legal rights and practical accessibility is a central finding of Sofia's investigation.

Section 3: Short Answer (2 points each)

16. Explain what Dr. Adeyemi means by calling the Consent Fiction in algorithmic management "operating at industrial scale" (Section 33.7.1, quoting Sofia's conclusion). How does the scale of the consent fiction in the gig economy differ from the consent fiction in, say, a social media platform's terms of service?

Sample Answer: When Sofia says the Consent Fiction is "operating at industrial scale," she means that millions of gig workers have formally agreed to data practices they do not understand, cannot access, and cannot challenge — and that this formal agreement functions as the legal foundation for an entire labor system. The scale differs from social media consent in two ways. First, the stakes are higher: social media consent governs leisure activity, while gig economy consent governs livelihood — your income, your working conditions, your economic survival. Second, the power asymmetry is more acute: a social media user can (at least theoretically) leave a platform; a gig worker who depends on the platform for income faces economic consequences for opting out. The consent is formally identical (clicking "I agree" to terms you haven't read), but its consequences are fundamentally different when it governs your ability to earn a living rather than your ability to scroll a feed.

*Key points for full credit:*
- Explains the "industrial scale" concept (millions of workers, entire labor system)
- Distinguishes from social media consent by stakes (livelihood vs. leisure) and power asymmetry
- Connects to the structural definition of the Consent Fiction

17. Using the Worker Data Rights Assessment framework from Section 33.8, evaluate a specific workplace surveillance system (real or hypothetical) across all six dimensions: data collection, transparency, access, voice, contest, and portability.

Sample Answer: Consider a call center that uses an AI-powered system to monitor employee calls, score "customer empathy" through voice tone analysis, and rank agents based on call resolution time and empathy scores.

- **Data collection:** The system collects voice recordings, call duration, resolution outcomes, and AI-derived "empathy scores" — potentially disproportionate to legitimate management needs (empathy scoring is scientifically unvalidated).
- **Transparency:** Agents may know their calls are recorded but likely do not know that AI analyzes their voice tone for "empathy" or how the scoring algorithm works.
- **Access:** Agents can probably see their aggregate scores but cannot access the raw voice analysis data or the algorithm's reasoning for specific scores.
- **Voice:** Agents have no input into the design of the empathy scoring system — the metrics, thresholds, and consequences are set without worker consultation.
- **Contest:** If an agent receives a low empathy score, they likely cannot challenge the AI assessment through a meaningful process with human review.
- **Portability:** Agents cannot take their performance data or empathy scores with them if they leave for another employer.

Assessment: This system fails on five of six dimensions — only basic awareness of call recording (partial transparency) meets the minimum standard.

*Key points for full credit:*
- Applies all six dimensions to a specific system
- Identifies gaps at each dimension with specific reasoning
- Demonstrates understanding of the framework's evaluative purpose

18. Section 33.4.3 identifies three requirements for "responsible analysis" of automation's employment effects: specificity, complementarity, and institutional attention. Explain each requirement and why the chapter argues that generalized claims about automation (e.g., "AI will eliminate 300 million jobs") are irresponsible.

Sample Answer: **Specificity** requires asking "which tasks within which jobs in which sectors are most susceptible to automation, over what timeframe, and with what distributional consequences" rather than making generalized claims about total job losses. A claim about "300 million jobs" treats wildly different occupations, tasks, and contexts as interchangeable.

**Complementarity** recognizes that technology can substitute for human labor (replacing workers) or complement it (making workers more productive). Whether a technology acts as substitute or complement depends on design choices, organizational decisions, and policy environments — not just on the technology's capabilities. Generalized claims assume pure substitution and ignore the significant complementarity effects that shape actual outcomes.

**Institutional attention** recognizes that the impact of automation depends not just on technological capability but on labor regulations, educational systems, safety nets, tax policy, and worker bargaining power. The same technology can produce very different employment outcomes in different institutional environments. Claims that ignore institutional context treat technology as deterministic when it is actually contingent on human choices.

Generalized claims are irresponsible because they collapse these crucial distinctions into a single number, creating either unnecessary panic or false reassurance while obscuring the policy choices that will actually determine outcomes.

*Key points for full credit:*
- Defines all three requirements
- Explains why each makes generalized claims inadequate
- Connects to the chapter's argument about responsible public discourse on automation

19. Compare Sofia Reyes's proposed "right to explanation" and "right to contest" for workers (Section 33.6.2) with the GDPR's existing right to explanation for automated decisions. Why does Sofia argue that applying existing data rights to the workplace is "radical"?

Sample Answer: The GDPR already provides EU citizens with a right to explanation of significant automated decisions (Article 22) and rights to access, rectify, and port their personal data. Sofia's proposed worker rights — the right to explanation of how algorithmic management makes decisions about task allocation, performance evaluation, pay, and termination, and the right to contest those decisions through meaningful human review — are structurally similar.

But Sofia argues that applying these rights to the workplace is "radical" because "the workplace is the one domain where data power is most concentrated and data rights are least protected." Despite existing frameworks that guarantee data rights to citizens, workers are systematically excluded from exercising those rights in practice. Platforms' terms of service, the classification of workers as independent contractors, and the opacity of proprietary algorithms create barriers that effectively nullify the theoretical rights. The "radicalism" lies not in the novelty of the rights themselves but in their application to the domain of labor — where the power asymmetry between data collector (employer/platform) and data subject (worker) is most extreme and where economic dependence makes consent least meaningful.

*Key points for full credit:*
- Notes that similar rights exist in the GDPR
- Explains why applying them to the workplace is described as "radical"
- Identifies the structural barriers (terms of service, classification, opacity) that prevent exercise of existing rights

Section 4: Applied Scenario (5 points)

20. Read the following scenario and answer all parts.

Scenario: FlexWork Delivery

FlexWork is a food delivery platform operating in 15 US cities with 50,000 active delivery workers classified as independent contractors. FlexWork's algorithmic system assigns deliveries, sets delivery fees, tracks worker location via GPS, monitors delivery speed, calculates a "reliability score" based on acceptance rate, on-time percentage, and customer ratings, and deactivates workers whose reliability score falls below a threshold.
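To make the scenario concrete, here is a minimal sketch of how a score like FlexWork's "reliability score" might be computed. The weights, deactivation threshold, and `WorkerStats` fields are hypothetical illustrations only; the platform's actual formula is undisclosed, which is exactly the asymmetry at issue in this scenario.

```python
from dataclasses import dataclass

@dataclass
class WorkerStats:
    acceptance_rate: float      # fraction of offered deliveries accepted (0-1)
    on_time_rate: float         # fraction of deliveries completed on time (0-1)
    avg_customer_rating: float  # mean customer rating on a 1-5 scale

# Hypothetical weights and threshold: the scenario does not disclose
# FlexWork's real values, and neither can the workers inspect them.
WEIGHTS = {"acceptance": 0.3, "on_time": 0.4, "rating": 0.3}
DEACTIVATION_THRESHOLD = 0.70

def reliability_score(w: WorkerStats) -> float:
    """Combine the three inputs into a single score between 0 and 1."""
    return (WEIGHTS["acceptance"] * w.acceptance_rate
            + WEIGHTS["on_time"] * w.on_time_rate
            + WEIGHTS["rating"] * (w.avg_customer_rating / 5.0))

def is_deactivated(w: WorkerStats) -> bool:
    """Automated deactivation with no human review, as in the scenario."""
    return reliability_score(w) < DEACTIVATION_THRESHOLD

worker = WorkerStats(acceptance_rate=0.85, on_time_rate=0.92,
                     avg_customer_rating=4.6)
score = reliability_score(worker)  # ~0.899, above the cutoff
```

Note that a worker sees only the final score and the deactivation outcome; the weights and threshold, however simple, remain invisible to them.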

A group of FlexWork delivery workers in Chicago has organized and is demanding:

1. Disclosure of how the reliability score is calculated
2. Access to aggregate earnings data across all Chicago workers
3. A human review process for deactivation decisions
4. The ability to take their reliability score and delivery history with them if they switch to a competing platform

FlexWork's CEO responds: "Our algorithm is proprietary technology that represents millions of dollars of investment. Disclosing it would destroy our competitive advantage. Besides, our drivers agreed to these terms when they signed up."

(a) Map each of the workers' four demands to the corresponding worker data right from Sofia Reyes's framework (Section 33.6.2). For each demand, explain the specific data asymmetry it seeks to address. (1 point)

(b) Evaluate the CEO's response using the Consent Fiction framework. Identify at least three ways in which the workers' "agreement" to FlexWork's terms does not constitute meaningful consent. (1 point)

(c) FlexWork classifies its workers as independent contractors. Using the evidence from Section 33.3.1, identify at least three ways in which FlexWork's algorithmic management is inconsistent with independent contractor status. (1 point)

(d) Propose a governance mechanism — short of fully disclosing the algorithm — that would balance FlexWork's legitimate interest in protecting proprietary technology with the workers' legitimate interest in understanding and contesting the system that manages them. Reference the concept of "trusted intermediaries" or independent auditors. (1 point)

(e) FlexWork's deactivation algorithm may produce disparate impacts along racial lines if customer ratings (a component of the reliability score) are influenced by racial bias in customer evaluations. Design an equity audit that would test for this disparity. Describe what data you would need, what analysis you would perform, and what action should follow if a disparity is found. (1 point)

Sample Answer:

**(a)** Mapping to Sofia's framework:

1. Disclosure of reliability score calculation = **Right to explanation** — addresses the algorithmic data asymmetry (workers experience outputs but cannot see logic).
2. Aggregate earnings data = **Right to collective data** — addresses the earnings data asymmetry (workers know only their own earnings, preventing comparison and collective bargaining).
3. Human review for deactivation = **Right to contest** — addresses the evaluation/discipline function of algorithmic management (automated decisions with no human review or appeal).
4. Portability of score and history = **Right to portability** — addresses lock-in and the behavioral data asymmetry (the platform retains all performance data, so workers start from zero on competing platforms).

**(b)** The CEO's appeal to consent fails in at least four ways:

- **No negotiation:** Workers cannot negotiate FlexWork's terms. They accept the algorithm or they don't work. This is not a bilateral agreement; it is a condition of access.
- **No transparency:** You cannot meaningfully consent to a system whose rules you cannot see. The reliability score calculation is undisclosed, so workers cannot know what they are agreeing to.
- **Dynamic terms:** FlexWork can change its algorithm at any time without worker input. "Consent" given at signup does not extend to future modifications the worker cannot anticipate.
- **Economic dependency:** For workers who rely on FlexWork as primary income, the "choice" to accept terms or not work is not a genuine choice but a coerced agreement under conditions of economic necessity.

**(c)** Three inconsistencies with independent contractor status:

- FlexWork's algorithm determines pay (delivery fees are set by the platform, not negotiated by the worker), which is characteristic of an employment relationship.
- FlexWork evaluates and disciplines workers through the reliability score and deactivation — exercising the supervisory control that defines employment.
- FlexWork controls what information workers see (destination not revealed until acceptance, aggregate data withheld) — constraining the "independent" decision-making that would characterize genuine contracting.

**(d)** A governance mechanism balancing proprietary protection with worker rights: **independent algorithmic audit**. A qualified, independent auditing firm — bound by non-disclosure agreements that protect proprietary details — would conduct regular audits of the reliability score algorithm. The auditor would verify that the algorithm operates as described, test for discriminatory patterns, and publish a summary report (without disclosing proprietary details) certifying whether the algorithm meets standards of fairness, transparency, and due process. Workers would receive the summary findings; FlexWork's proprietary details would remain confidential. This mirrors the role of financial auditors, who verify corporate accounts without disclosing trade secrets.

**(e)** Equity audit design:

- **Data needed:** Reliability scores disaggregated by worker race/ethnicity; customer rating distributions disaggregated by worker race/ethnicity; deactivation rates by race/ethnicity; controlling variables (delivery time, order accuracy, market area).
- **Analysis:** Statistical comparison of customer ratings and deactivation rates across racial groups, controlling for objective performance metrics. If Black drivers receive systematically lower customer ratings than white drivers with equivalent delivery times and accuracy, this suggests customer racial bias is influencing the reliability score.
- **Action if disparity found:** Remove or reduce the weight of customer ratings in the reliability score; implement rating calibration that adjusts for documented bias; establish a review process for deactivation decisions that includes human assessment of whether the reliability score reflects performance or bias; report findings publicly and to workers.
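The analysis step of such an audit can be sketched with a deterministic toy calculation. The counts, group labels, 1.25 practical-significance ratio, and the choice of a two-proportion z-test are illustrative assumptions, not methods prescribed by the chapter; a real audit would also control for objective performance metrics (delivery time, order accuracy), for example by stratifying on them.

```python
import math

# Hypothetical audit counts. In a real audit these would come from platform
# deactivation records joined with worker demographic data.
n_a, deact_a = 2000, 100   # group A: 100 of 2000 workers deactivated (5.0%)
n_b, deact_b = 2000, 180   # group B: 180 of 2000 workers deactivated (9.0%)

p_a, p_b = deact_a / n_a, deact_b / n_b

def two_proportion_z(p1, n1, p2, n2):
    """Normal-approximation z statistic for a difference in two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(p_b, n_b, p_a, n_a)

# Flag a disparity only if it is both practically large (group B deactivated
# at more than 1.25x group A's rate, an illustrative threshold) and
# statistically significant at the 5% level (|z| > 1.96).
disparity_found = (p_b / p_a) > 1.25 and abs(z) > 1.96
```

If `disparity_found` is true for any protected group, the remedial actions listed above (reweighting customer ratings, human review of deactivations, public reporting) would follow.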

Scoring & Review Recommendations

| Score Range | Assessment | Next Steps |
| --- | --- | --- |
| Below 50% (< 14 pts) | Needs review | Re-read Sections 33.1-33.3, redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas, focus on Part B exercises |
| 70-85% (20-23 pts) | Solid understanding | Ready to proceed to Chapter 34 |
| Above 85% (24+ pts) | Strong mastery | Proceed to Chapter 34: Environmental Data Ethics and Climate |

| Section | Points Available |
| --- | --- |
| Section 1: Multiple Choice | 10 points (10 questions x 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions x 1 pt) |
| Section 3: Short Answer | 8 points (4 questions x 2 pts) |
| Section 4: Applied Scenario | 5 points (5 parts x 1 pt) |
| **Total** | **28 points** |