Case Study 28-1: Uber's Psychological Manipulation of Drivers — Algorithmic Nudging

Overview

In 2017, The New York Times published an investigative report by Noam Scheiber revealing the extent to which Uber had studied behavioral psychology and designed its app interface to manipulate driver behavior, applying techniques borrowed from video game design and cognitive psychology to extend working hours, steer drivers toward specific locations, and maintain driver supply during high-demand periods.

This case study examines Uber's algorithmic nudging program in detail, connecting it to the broader analysis of algorithmic management and exploring the specific ways in which psychological manipulation, disguised as helpful notifications and gamified incentives, constitutes a form of surveillance and control.


Background: The Uber Driver Management Problem

Uber's core operational challenge is supply management: ensuring that enough drivers are available in specific locations at specific times to meet rider demand. Because Uber classifies drivers as independent contractors rather than employees, it cannot mandate where and when they work.

This creates a distinctive algorithmic management problem. Traditional employers can schedule employees; Uber cannot schedule contractors. Traditional employers can direct employees to specific locations; Uber cannot direct contractors. If Uber wants drivers in a specific place at a specific time, it must create conditions in which drivers choose to be there — while engineering those choices so thoroughly that the behavior is predictable and reliable.

Uber's solution was a sophisticated behavioral modification program, developed with the assistance of behavioral economists and psychologists, that used the driver app interface to create psychological conditions that reliably produced the desired driver behavior.


The Nudging Techniques

The Times investigation, supplemented by subsequent reporting and academic analysis, documented several specific techniques:

Income Targeting Notifications

Uber's algorithm knew drivers' income history and goals. When a driver approached a self-set earnings goal — say, $200 for the day — and was about to log off, the app would notify them: "You're $10 away from your daily goal! Keep going?"

The behavioral science behind this is the goal-gradient effect combined with loss aversion: people experience disproportionately strong motivation as they approach a goal, and framing the remaining gap as small makes quitting feel like leaving something on the table.

The manipulation is subtle but consequential: Uber's algorithm knows when drivers are likely to stop working and deploys psychological pressure at precisely that moment to extend driving time. The driver experiences this as a "helpful notification"; from a behavioral management perspective, it is a real-time behavioral intervention.
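
The trigger logic behind such a notification is simple to express. The sketch below is a hypothetical illustration in Python, not Uber's actual code; the DriverSession fields, the nudge_on_logoff function, and the 10% proximity threshold are all assumptions made for this example.

```python
# Hypothetical goal-proximity nudge, assuming the platform tracks a
# driver's self-set daily goal and running earnings. Illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverSession:
    earnings_today: float  # dollars earned so far in this session
    daily_goal: float      # driver's self-set target, e.g. 200.0

def nudge_on_logoff(session: DriverSession, proximity: float = 0.10) -> Optional[str]:
    """Return a nudge if the driver tries to go offline while within
    `proximity` (as a fraction of the goal) of their target."""
    gap = session.daily_goal - session.earnings_today
    if 0 < gap <= proximity * session.daily_goal:
        # Frame the remaining gap as small: quitting now feels like
        # leaving money on the table (goal-gradient effect).
        return f"You're ${gap:.0f} away from your daily goal! Keep going?"
    return None  # far from the goal, or past it: no intervention

print(nudge_on_logoff(DriverSession(earnings_today=190.0, daily_goal=200.0)))
# -> You're $10 away from your daily goal! Keep going?
```

Note that the nudge fires only at the logoff attempt, matching the point above: the intervention is timed to the moment the driver is most likely to stop.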

The Consecutive Ride Bonus Structure

Uber designed its bonus structure around "streaks": consecutive ride completions within a time window. A driver who completed, say, 10 rides in 3 hours earned a bonus; if they declined a ride or let the window expire, the streak ended. The structure borrowed directly from video game design: loss aversion (don't lose the streak) and sunk cost effects (I've already done 7 rides; I can't stop now) kept drivers working toward the bonus long after they might otherwise have stopped.

The critical feature: the driver's decision to keep working or stop was restructured by the app to feel like a decision about the streak, not a decision about whether continued driving was in their interest. The algorithm had made quitting feel like losing.
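
The streak mechanic can be pictured as a small state machine in which progress is forfeited, not merely paused, by a decline or a lapsed window. The Python sketch below is an assumption-laden illustration (the class, its method names, and the $30 bonus are invented for this example), not a reconstruction of Uber's implementation.

```python
# Hypothetical streak-bonus state machine. Target, window, and bonus
# amounts are illustrative, echoing the numbers in the text above.
from typing import Optional

class StreakBonus:
    def __init__(self, target: int = 10, window_secs: float = 3 * 3600,
                 bonus: float = 30.0):
        self.target = target
        self.window_secs = window_secs
        self.bonus = bonus
        self.completed = 0
        self.started_at: Optional[float] = None

    def complete_ride(self, now: float) -> Optional[float]:
        """Record a completed ride; return the bonus if the target is hit."""
        if self.started_at is None:
            self.started_at = now
        elif now - self.started_at > self.window_secs:
            self.completed = 0       # window lapsed: all progress forfeited
            self.started_at = now
        self.completed += 1
        if self.completed == self.target:
            self.completed, self.started_at = 0, None
            return self.bonus        # streak complete: pay out and reset
        return None

    def decline_ride(self) -> None:
        self.completed = 0           # a single decline zeroes the streak
        self.started_at = None

    def status(self) -> str:
        # Loss-aversion framing: progress is shown as something to lose.
        return (f"{self.completed}/{self.target} rides: don't lose your "
                f"${self.bonus:.0f} streak!")
```

The design choice worth noticing is decline_ride(): a single refusal zeroes out accumulated progress, which is exactly what converts "should I keep driving?" into "can I afford to lose the streak?"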

Phantom Supply Signals

In some markets, Uber's app showed drivers visualizations of rider demand — heat maps showing high-demand areas glowing red. Internal Uber documents obtained in litigation revealed that some of these visualizations were algorithmic constructs, not real-time representations of actual demand. The maps showed "hot spots" that were designed to move drivers toward areas where Uber wanted supply — not necessarily where actual surge demand existed.

Drivers who followed the heat map were following an algorithmically manufactured geography of demand. They were making "independent" decisions based on information that Uber had curated to produce specific behavioral outcomes.
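
One way to make the idea of a manufactured geography concrete is to model the displayed heat as a blend of measured demand and a supply-steering term. The function below is speculative, a sketch of the kind of construct the litigation documents describe rather than Uber's actual algorithm; the steering_weight parameter is an invention of this example.

```python
# Speculative sketch: displayed "heat" as a weighted blend of measured
# demand and where the platform wants supply to move. Not Uber's algorithm.

def displayed_heat(actual_demand: float, desired_supply_shift: float,
                   steering_weight: float = 0.5) -> float:
    """All inputs in [0, 1]. steering_weight=0 would show pure demand;
    anything above 0 mixes in the platform's supply-positioning goals."""
    return ((1 - steering_weight) * actual_demand
            + steering_weight * desired_supply_shift)

# A zone with modest real demand but high platform interest can render
# "hotter" than a zone with more real demand and no platform interest:
print(displayed_heat(actual_demand=0.3, desired_supply_shift=0.9))  # 0.6
print(displayed_heat(actual_demand=0.5, desired_supply_shift=0.0))  # 0.25
```

Under this framing, the driver sees a single undifferentiated signal and has no way to decompose it into its demand and steering components, which is the information asymmetry at issue.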

The Quest for "Five Stars"

Uber's rating system — where drivers need to maintain a minimum rating to avoid deactivation — created a continuous psychological background condition: the awareness that every passenger was a rater, that any 1- or 2-star rating could affect their livelihoods, and that maintaining the threshold required constant performance of the correct emotional labor.

Uber's driver guides instructed drivers on how to behave: offer mints, ask about music preferences, be "friendly but not intrusive." The result was a standardized emotional labor script. Drivers who followed it were "choosing" to behave in this way; they were also following an algorithm-enforced template from which deviation would be punished.
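
Mechanically, the deactivation pressure described above reduces to a rolling-average threshold check. The sketch below is illustrative: the 4.6-star threshold and 500-ride window are assumptions (reported thresholds vary by market), and the class is invented for this example.

```python
# Hypothetical rolling-rating check. The threshold and window size are
# assumptions; they are not confirmed Uber parameters.
from collections import deque

class DriverRating:
    def __init__(self, threshold: float = 4.6, window: int = 500):
        self.threshold = threshold
        self.recent = deque(maxlen=window)  # only the last `window` ratings count

    def add_rating(self, stars: int) -> None:
        self.recent.append(stars)

    def at_risk(self) -> bool:
        """True if the rolling mean has fallen below the deactivation line."""
        if not self.recent:
            return False
        return sum(self.recent) / len(self.recent) < self.threshold
```

Because every passenger's rating enters the same rolling window, every ride is effectively a high-stakes evaluation, which is the "continuous psychological background condition" described above.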


The Consent Problem

Uber drivers agreed to terms of service that disclosed the existence of the bonus structure and the rating system. They did not consent to behavioral manipulation programs designed by behavioral psychologists to extend their working hours by exploiting cognitive biases.

The distinction matters: drivers could reasonably be expected to understand that bonuses incentivize more work. They could not reasonably be expected to know that the bonus structure — the streak mechanic, the proximity framing, the timing of notifications — had been engineered by psychologists specifically to exploit cognitive biases that make goal abandonment feel disproportionately aversive.

This is a form of informed consent failure that goes beyond the usual workplace surveillance consent analysis: the failure is not merely that monitoring went undisclosed, but that the behavioral modification mechanism did. The driver who is being manipulated by a streak bonus is in a fundamentally different epistemic position from the driver who simply knows that completing more rides earns more money.


The Academic Response and Uber's Defense

The Scheiber investigation amplified a growing body of academic work on what researchers called "algorithmic labor control" and "dark patterns in employment platforms." A year before the Times report, Alex Rosenblat and Luke Stark had published "Algorithmic Labor and Information Asymmetries" (2016) in the International Journal of Communication, arguing that Uber's information practices, including selective disclosure of demand data, manufactured heat maps, and behavioral nudges, constituted a systematic information asymmetry that enabled Uber to extract value from drivers while maintaining the fiction of contractor independence.

Uber's response was that the nudging was "transparency" — giving drivers information they had requested or found useful. The streaks were opt-in, Uber argued; no driver was required to participate. The notifications were helpful reminders, not manipulation.

The response illustrates a fundamental disagreement about what manipulation means: Uber's view is that manipulation requires coercion; critics' view is that systematic exploitation of cognitive biases, even without coercion, constitutes manipulation if the exploited party does not know their biases are being exploited.


Regulatory Consequences

The revelations about Uber's behavioral management practices contributed to regulatory momentum around gig worker protections. California's AB5 (2019), which created a presumption of employment for workers at companies like Uber, was partly motivated by findings that gig platforms exercised employer-level control over worker behavior while denying employer status. Proposition 22 (2020), a ballot initiative funded by Uber, Lyft, DoorDash, and other platforms, exempted gig platforms from AB5 while creating a new framework for some worker protections.

The EU Platform Work Directive (2024) specifically addresses algorithmic management practices, including requirements that platforms disclose the parameters that influence workers' access to work opportunities and earnings, a direct response to practices like Uber's selective demand data and manufactured heat maps.


Discussion Questions

  1. Uber describes its nudging practices as "helpful information" for drivers making independent decisions. At what point does information provision become manipulation? What criteria would you use to draw this line?

  2. The streak bonus is described as "opt-in." Is opting out of the streak bonus a meaningful option for a driver who needs to maximize earnings and fears that non-participation will affect their algorithm-managed work assignments? What does this reveal about "choice" in algorithmically managed employment?

  3. Compare Uber's behavioral nudging of drivers to the performance management systems analyzed in Chapter 26. What is similar? What is different? Is the nudging more or less problematic than explicit performance metrics?

  4. Uber's manufactured heat maps showed drivers "demand" that was partly algorithmic construction rather than real-time representation. How does this differ, ethically, from other forms of information framing that influence behavior?

  5. The EU Platform Work Directive would require Uber to disclose the parameters that influence driver work assignments and earnings. Would full disclosure of Uber's nudging techniques — including the behavioral psychology underlying them — reduce their effectiveness? Should that matter in evaluating whether disclosure is required?