In This Chapter
- Opening: The House That Watches
- 15.1 What "Smart" Means: Connected + Data-Collecting
- 15.2 Amazon Echo and Alexa: The Always-On Microphone
- 15.3 Smart TVs: Automatic Content Recognition and the Viewing Record
- 15.4 Smart Home Data: When Your House Knows Your Schedule
- 15.5 Connected Cars: Telematics and Driving Behavior Scoring
- 15.6 Wearables: The Quantified Body
- 15.7 IoT Security: The Attack Surface of the Surveillant Home
- 15.8 The Consent Problem: Devices in Homes with Others
- 15.9 Jordan's Scenario: The Warehouse Scanner
- 15.10 Smart Devices and the Data Pipeline
- Summary: The Extended Surveillance Space
- Key Terms
- Discussion Questions
Chapter 15: Smart Devices and the Internet of Things
Opening: The House That Watches
Imagine walking into a home in 2026. You speak, and a small cylindrical device acknowledges your voice and turns on the lights. The television knows what you watch and when. The thermostat knows your schedule and adjusts accordingly. The refrigerator logs the food you consume. Your watch measures your heart rate and sleep. Your car logs the route you drove to get here.
This is not science fiction. For a substantial and growing segment of the population in wealthy countries, it is the unremarkable reality of daily domestic life. Smart devices — connected, data-collecting, behavioral-monitoring — have migrated from novelty to infrastructure in less than a decade.
The word "smart" in consumer technology marketing has a specific meaning: connected to the internet and data-collecting. A "smart" television is not smarter in any cognitive sense than a conventional television; it is connected to a server that monitors what you watch, when, and how long. A "smart" thermostat is not a more sophisticated temperature regulator; it is a behavioral monitoring device that learns your household's movement patterns and transmits them to a cloud server. The "smart" prefix signals capability, convenience, and intelligence in the user-facing marketing. It signals data collection in the product's actual operation.
This chapter examines the Internet of Things as surveillance infrastructure — not incidentally or accidentally, but as the designed purpose of a commercial ecosystem built on the extension of the data pipeline from digital behavior into physical space, physical bodies, and physical homes.
15.1 What "Smart" Means: Connected + Data-Collecting
The Internet of Things (IoT) is the term for the expanding universe of physical objects connected to the internet. In 2023, estimates placed the number of connected IoT devices globally at approximately 15 billion — nearly twice the number of human beings on earth. By 2030, projections suggest 25–30 billion connected devices.
These devices range from the familiar (smartphones, laptops, tablets) to the newly "smart" (televisions, speakers, thermostats, refrigerators, washing machines, door locks, garage doors, baby monitors, security cameras) to the wearable (smartwatches, fitness trackers, health monitors) to the industrial (connected manufacturing equipment, smart utility meters, fleet tracking systems) to the ubiquitous infrastructure (smart city traffic systems, connected building management systems).
What they share is the same economic logic as the web tracking ecosystem described in Chapters 11–14: connectivity generates behavioral data; behavioral data is monetized through the data pipeline. The IoT expands this logic from the screen into every dimension of physical experience.
💡 Intuition: Think of the difference between a library card catalog and a modern library system that tracks who has checked out each book, how long they kept it, which pages they dog-eared, and which other books they picked up and put back. The first system provides access without surveillance. The second provides the same access and generates behavioral data as a byproduct. The "smart" device ecosystem is the second system, applied to your home, your body, and your car.
The Data Value Proposition
IoT devices generate several categories of data that are commercially valuable:
Behavioral pattern data: Smart devices reveal the rhythms and patterns of daily life with granularity that no prior technology could achieve. When you wake, when you leave, when you return, what you eat, what you watch, how much you exercise, how well you sleep — all of these patterns have commercial value.
Physical context data: Unlike web tracking, which captures digital behavior, IoT captures physical context: what the temperature is in your home when you're anxious, what you were watching when you ordered food, what your heart rate is when you read certain news content.
Predictive intelligence: The combination of behavioral patterns and physical context enables predictions about future behavior, health trajectories, and consumption decisions with precision that web data alone cannot achieve.
Cross-device integration: IoT data can be linked to web behavioral data, social media data, and purchase data to create comprehensive profiles that span physical and digital life.
15.2 Amazon Echo and Alexa: The Always-On Microphone
Of all the IoT devices that have entered the home, the voice-activated speaker is among the most significant from a surveillance perspective — because of the one feature that defines its value proposition: the "always-on" listening mode.
The Amazon Echo, introduced in 2014, and its successors were built around a key capability: the device is always listening for a "wake word" (the default is "Alexa," but users can choose alternatives). When the wake word is detected, the device begins recording and transmitting audio to Amazon's servers for processing. The response — the weather report, the timer, the music playback — comes from Amazon's cloud.
This architecture requires continuous audio monitoring to function. The device's microphone cannot simply "not listen" and still detect the wake word. Some version of audio processing must be continuous.
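This architecture can be sketched in a few lines. The sketch below is illustrative, not Amazon's implementation: the frame contents, the keyword-spotting stand-in, the end-of-utterance marker, and the pre-roll size are all invented for the example. The point it demonstrates is structural — every frame passes through the detector, even though only the audio around the wake word leaves the device.

```python
from collections import deque

WAKE_WORD = "alexa"       # hypothetical wake word for this sketch
PRE_ROLL_FRAMES = 3       # frames kept in the rolling buffer at all times

def detect_wake_word(frame: str) -> bool:
    """Stand-in for the on-device keyword-spotting model."""
    return WAKE_WORD in frame.lower()

def process_stream(frames):
    """Continuously buffer audio; transmit only after the wake word fires.

    Returns the list of frames that would be sent to the cloud.
    """
    buffer = deque(maxlen=PRE_ROLL_FRAMES)  # every frame is processed
    transmitted = []
    listening = False
    for frame in frames:
        buffer.append(frame)
        if not listening and detect_wake_word(frame):
            listening = True
            transmitted.extend(buffer)  # a short pre-roll goes to the cloud too
            continue
        if listening:
            if frame == "<silence>":    # toy end-of-utterance marker
                listening = False
            else:
                transmitted.append(frame)
    return transmitted

stream = ["chatter", "more chatter", "Alexa", "what's the weather",
          "<silence>", "private talk"]
sent = process_stream(stream)
# "private talk" never leaves the device -- but the microphone processed it.
```

Note what the pre-roll buffer implies: even a correctly functioning device transmits a moment of audio captured before the wake word, and a false positive in `detect_wake_word` would ship conversation that no one intended to share.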
What Amazon Collects
Amazon's privacy disclosures for Alexa (as of 2023) acknowledge collection of:
- All voice interactions that occur after the wake word is detected, transmitted to Amazon's servers and retained (by default, indefinitely, though users can delete recordings)
- Ambient audio in some documented cases where the device activated without the intended wake word — false activations that transmitted audio clips to Amazon unexpectedly
- Usage patterns — when the device is used, what commands are given, which features are activated
- Smart home device data — if the Echo is connected to other smart home devices (lights, locks, thermostats), Amazon receives data about their states and activations
- Third-party skill data — when users enable third-party "skills" (voice apps from external developers), those developers receive data from user interactions with their skills
The business model extends beyond direct data monetization. Alexa is, in part, a retail discovery and purchase channel for Amazon's e-commerce business. Voice interaction data informs product recommendations, advertising targeting, and supply chain planning. A household that frequently asks Alexa about certain product categories is more valuable to Amazon as an advertising and commerce target.
False Activations and the Privacy Margin
A recurring concern with always-on voice devices is false activation — cases where the device activates (and begins recording and transmitting) without the user having said the wake word. Research by academics and journalists has documented that voice assistants falsely activate anywhere from a few times to dozens of times per day, depending on the acoustic environment.
A 2020 investigation by researchers at Northeastern University and Imperial College London tested multiple smart speaker brands and found that none reliably activated only on their intended wake word. Fragments of conversation — specific phoneme combinations, television dialogue, or background speech — regularly triggered recording without any user intent. In one notable case documented by an Amazon customer in Germany, a device recorded and transmitted weeks of household audio without any intentional activation.
Amazon's default data retention policy — indefinitely saving all voice interactions — means that inadvertently recorded conversations remain on Amazon's servers unless users actively delete them. Most users do not delete recordings, both because many do not know the recordings exist and because the deletion process requires navigating account settings that most users never access.
⚠️ Common Pitfall: A common response to always-on microphone concerns is "I don't care — I don't have anything to hide." This response applies the "nothing to hide" framework — examined in Chapter 3 — to a context where its limitations are particularly clear. The false activation problem means the device may record conversations that were private by intent: medical discussions, relationship conversations, financial negotiations, or any communication the participants considered confidential. The "nothing to hide" framework assumes the surveillance is limited to what you consciously decide to share; the false activation problem means the surveillance may capture what you specifically chose not to share.
Third-Party Skill Data: The Ecosystem Problem
When users enable third-party Alexa "skills" (voice apps from external developers), the data flow extends beyond Amazon to the skill developer. Third-party skill developers receive transcripts of user voice interactions with their skills and, in some cases, behavioral data about those interactions.
The skill marketplace as of 2023 included over 100,000 skills, developed by companies ranging from major retailers and news organizations to small developers and startups. The data practices of these developers are governed by Amazon's policies but audited imperfectly. A user who enables a health information skill has given that skill's developer access to voice interaction data that may include health questions they would never have typed into a search engine.
Amazon's disclosure requirements for skill developers are less stringent than for Amazon's own data collection. Users who check Amazon's privacy disclosures and feel comfortable with them may be unaware that enabling a third-party skill creates a separate data relationship with a developer whose privacy practices they have not reviewed.
15.3 Smart TVs: Automatic Content Recognition and the Viewing Record
The modern smart television is, in a commercial sense, not primarily an entertainment device. It is a behavioral monitoring platform that happens to also show content. Its most significant data collection function is Automatic Content Recognition (ACR) — a technology that allows the television to identify what content is being displayed on its screen.
How ACR Works
ACR technology operates by periodically capturing still images of what is shown on the screen, generating a "fingerprint" (a compressed representation of the image), and comparing that fingerprint against a database of known content. The comparison identifies the program, the specific episode, the specific timecode, and the channel or platform. This process occurs multiple times per second, producing a continuous log of exactly what is being watched.
The ACR process is not limited to content delivered through the TV's own smart platform. It monitors whatever appears on screen — broadcast television, cable, streaming services, content from an external gaming console, content from a connected Blu-ray player. If it appears on screen, ACR can identify it.
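The fingerprint-and-match step can be sketched with a toy model. The average-hash fingerprint, pixel values, and database entries below are invented stand-ins for the proprietary perceptual features and content databases real ACR systems use; what the sketch preserves is the mechanism — compress each captured frame, then find the nearest known fingerprint.

```python
def fingerprint(frame):
    """Average-hash style fingerprint: threshold pixels against their mean.

    `frame` is a flat list of grayscale pixel values standing in for a
    captured screen image; real ACR uses more robust perceptual features.
    """
    mean = sum(frame) / len(frame)
    return tuple(1 if p > mean else 0 for p in frame)

def hamming(a, b):
    """Count differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def identify(frame, reference_db, max_distance=2):
    """Match a captured frame against fingerprints of known content."""
    fp = fingerprint(frame)
    best = min(reference_db, key=lambda key: hamming(fp, reference_db[key]))
    return best if hamming(fp, reference_db[best]) <= max_distance else None

# Toy reference database keyed by (title, episode, timecode) --
# the granularity at which ACR reports viewing.
db = {
    ("Show A", "S1E1", "00:12:04"): fingerprint([10, 200, 30, 220, 15, 210, 25, 205]),
    ("Show B", "S2E3", "00:01:10"): fingerprint([200, 10, 220, 30, 210, 15, 205, 25]),
}

captured = [12, 198, 33, 219, 14, 211, 26, 204]   # noisy capture of Show A's frame
match = identify(captured, db)                     # -> ("Show A", "S1E1", "00:12:04")
```

The Hamming-distance threshold is what makes the identification robust to compression artifacts, overlays, and screen-capture noise — the captured frame does not need to match the reference exactly, only approximately.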
The data this produces is extraordinarily granular:
- What program you watched, including exact start and stop times
- Whether you watched live or recorded content
- Whether you fast-forwarded through commercials
- What commercials you did watch, and for how long
- Which streaming services you use and what you watch on each
- How much time per day you spend watching vs. using other functions
Who Collects ACR Data
Most major smart TV manufacturers — Samsung, LG, Vizio, Sony, TCL — have implemented ACR in some form. But the data collection ecosystem extends beyond the manufacturer. Several platforms that power smart TVs (including Roku, Amazon Fire TV, and Google TV) also collect viewing data. And television manufacturers often partner with specialized ACR data companies — most notably Samba TV and Inscape (a Vizio subsidiary) — that focus specifically on monetizing viewing data.
Samba TV, for example, operates an ACR platform deployed in tens of millions of TVs across multiple brands. It sells viewing data to advertisers, content companies, and market research firms. Its clients can purchase reports on what percentage of viewers who saw a specific advertisement then searched for the advertised product online — closing the loop between television advertising exposure and web behavioral response in a way that was impossible before connected TV.
What Smart TV Data Is Used For
Cross-device advertising targeting: If your smart TV knows you watched a specific program, and if your identity is linked to your mobile device through cross-device tracking, advertisers can show you mobile ads related to what you watched on TV. The convergence of television viewing data and web behavioral data creates a more complete behavioral profile than either source alone.
Content licensing and programming decisions: Television networks and streaming services purchase aggregate viewing data to understand audience behavior at a level of granularity that traditional ratings services (Nielsen) cannot provide. What percentage of viewers completed an episode? At what point did viewers stop watching? Which programs do viewers watch in combination?
Insurance and health risk modeling: Television viewing patterns — how much you watch, what genres, what time of day — correlate with health-relevant behaviors. Sedentary viewing patterns, in particular, have been incorporated into some health risk scoring models.
📊 Real-World Application: In 2017, Vizio paid $2.2 million to the FTC and the New Jersey Attorney General to settle charges that it had collected detailed viewing data from 11 million smart TVs without adequate disclosure or consent. The settlement required Vizio to obtain affirmative consent before collecting and sharing ACR data. The case was instructive both for revealing the scope of smart TV data collection and for demonstrating the FTC's willingness to act — but $2.2 million represented a small fraction of the revenue generated by the data collection practice.
15.4 Smart Home Data: When Your House Knows Your Schedule
Beyond the television and voice speaker, the smart home encompasses a broader ecosystem of connected devices that, together, can reveal more about domestic life than any prior technology.
The Smart Thermostat
The Nest thermostat (now Google Nest), launched in 2011, was one of the first consumer IoT devices to achieve mainstream adoption. Its commercial value proposition was energy efficiency: it learned household patterns and optimized heating and cooling accordingly. But its data collection value was equally significant.
A smart thermostat knows:
- When people are home (presence detection through motion sensors and temperature patterns)
- When people wake up and go to sleep
- Daily and weekly schedule patterns
- Sensitivity to temperature variations
- How often the home is vacant and for how long
When Google acquired Nest in 2014 for $3.2 billion, privacy advocates noted that the acquisition gave Google access to real-time presence data from inside users' homes — data that complemented Google's existing web behavioral and location data with information about domestic schedules and presence that no web interaction could reveal.
Google subsequently changed Nest's privacy policy to permit sharing data between Nest and Google, raising concerns from users who had purchased the device under Nest's original privacy terms. The episode illustrated function creep in the IoT context: a device purchased for energy management became a component of a broader behavioral surveillance ecosystem.
Smart Home Data and Insurance Pricing
Insurance companies have been particularly active in seeking behavioral data from connected home devices. Home insurance companies are interested in behavioral data that might predict claims risk. Health insurance companies are interested in domestic behavioral data that correlates with health outcomes.
Home insurance: Some home insurers offer discounts to customers who install connected security systems, smoke detectors, or water leak sensors that report device status to the insurer. In exchange, the insurer receives real-time data about risk indicators in the home. The discount reflects the reduced claim risk these devices signal; the behavioral monitoring, however, gives the insurer data and leverage that extend well beyond that specific risk disclosure.
Health insurance: Connected home data — sleep patterns, physical activity levels, meal preparation frequency, social activity — can be combined with wearable health data to construct comprehensive behavioral health profiles. Some insurance models explicitly encourage wearable device data sharing in exchange for premium discounts.
The insurance data use case illustrates the broadest form of IoT surveillance's commercial logic: behavioral data generated in the private space of the home flows outward to insurers who use it to price risk and shape coverage decisions. The home, historically the most protected space in privacy law, becomes a data source for decisions made by external institutions.
15.5 Connected Cars: Telematics and Driving Behavior Scoring
The automobile has become one of the most data-intensive environments outside the smartphone. Modern connected cars are equipped with dozens of sensors, a persistent internet connection, and sophisticated software systems that generate and transmit behavioral data continuously.
What Connected Cars Collect
The data collected by modern connected vehicles includes:
Location and routing data: GPS tracking with continuous position logging, route history, destinations visited, time spent at locations.
Driving behavior: Speed, acceleration patterns, braking patterns, cornering behavior, time of day driving occurs, mileage.
Vehicle systems data: Fuel consumption, engine status, maintenance indicators, tire pressure, battery status (in EVs).
Infotainment system data: What music was played, what navigation destinations were entered, what apps were used, contact information synced from connected phones.
Passenger behavior: In vehicles with in-cabin cameras (increasingly standard for driver monitoring and safety systems), data about driver attention, phone use while driving, and in some systems, passenger presence and behavior.
Insurance Telematics
The insurance application of connected car data is particularly significant because it has already achieved mainstream commercial deployment. Telematics insurance — also known as "usage-based insurance" or "pay-as-you-drive" insurance — uses driving behavior data to price insurance individually rather than through demographic and actuarial group classifications.
Programs like Progressive's "Snapshot," State Farm's "Drive Safe & Save," and Allstate's "Drivewise" use a telematics device (plugged into the OBD-II port or via smartphone) or the car's built-in connectivity to collect:
- Miles driven (more miles = more exposure = higher risk)
- Time of day driving (late-night driving correlates with higher accident rates)
- Hard braking events (correlates with aggressive or inattentive driving)
- Hard acceleration events
- Speed patterns
The data is used to calculate individual risk scores, which determine premium adjustments — lower premiums for careful, low-mileage drivers; higher premiums (or program termination) for drivers whose behavior suggests elevated risk.
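The scoring logic can be illustrated with a toy model. The weights, normalizations, and premium bands below are invented for the example and do not reflect any insurer's actual formula; the structural point is that a handful of telematics signals are collapsed into a single score that moves money.

```python
def risk_score(miles, hard_brakes, night_fraction, speeding_events):
    """Toy usage-based-insurance score in [0, 1]: higher means riskier.

    Weights and caps are illustrative, not any insurer's actual model.
    """
    per_100mi = 100.0 / max(miles, 1)   # normalize event counts by exposure
    return (
        0.3 * min(miles / 1000.0, 1.0)                    # mileage exposure
        + 0.3 * min(hard_brakes * per_100mi / 5.0, 1.0)   # hard-braking rate
        + 0.2 * night_fraction                            # late-night share
        + 0.2 * min(speeding_events * per_100mi / 5.0, 1.0)
    )

def premium_adjustment(score, base_premium=1200.0):
    """Map a score to a discount or surcharge band (illustrative)."""
    if score < 0.25:
        return base_premium * 0.85   # careful, low-mileage: 15% discount
    if score < 0.60:
        return base_premium          # no change
    return base_premium * 1.20       # elevated risk: 20% surcharge

careful = risk_score(miles=400, hard_brakes=2, night_fraction=0.05,
                     speeding_events=0)
risky = risk_score(miles=1500, hard_brakes=40, night_fraction=0.5,
                   speeding_events=20)
```

Even this toy version exposes the interpretive gap the chapter returns to: a "hard braking event" enters the formula as pure risk, with no way to distinguish aggressive driving from braking to avoid a child in the road.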
The privacy implications are layered. Telematics programs are opt-in — drivers choose to participate in exchange for potential premium savings. But the choice is, in practice, constrained: drivers who decline telematics may be priced by actuarial group risk factors that are less favorable than their individual driving record would support. The "choice" to opt out of behavioral monitoring may carry financial costs.
The Automaker as Data Broker
Beyond insurance applications, automobile manufacturers have themselves become major data collectors and data sellers. Connected car data — location history, driving behavior, music preferences, navigation destinations, maintenance records — is collected by manufacturers and, increasingly, sold or licensed to data brokers and third parties.
A 2023 investigation by the Mozilla Foundation rated 25 major automobile brands on privacy practices and found all 25 failed the investigation's minimum privacy standards. The investigation found that most manufacturers collected extensive data beyond what was necessary for vehicle operation, shared or sold data to third parties without adequate disclosure, and used data for purposes including the sale of behavioral intelligence to insurers, marketers, and data brokers.
General Motors' OnStar platform was found to have sold driver behavioral data — including speed, braking, and GPS — to LexisNexis and Verisk Analytics, both of which sell the data to insurance companies for risk scoring. GM customers had technically consented to data sharing through OnStar's terms of service, but surveys found that most had no awareness that their driving data was being sold to insurance data brokers who would share it with their insurers.
🌍 Global Perspective: European data protection law has been more aggressive than U.S. law in scrutinizing connected vehicle data. GDPR's data minimization and purpose limitation principles apply to vehicle data, and several European data protection authorities have issued guidance on what connected car data is lawful to collect and share. Germany's data protection authority (BfDI) has been particularly active, finding that some standard connected car data practices violate GDPR's proportionality requirements. The EU's proposed Data Act (2022) would give vehicle drivers the right to access the data their cars generate — a significant departure from the current U.S. model in which this data belongs to the manufacturer.
15.6 Wearables: The Quantified Body
Wearable devices — fitness trackers, smartwatches, health monitors — bring the IoT into intimate contact with the body, generating behavioral and physiological data that has no precedent in commercial surveillance history.
What Wearables Collect
Contemporary wearables (Fitbit, Apple Watch, Garmin, Oura Ring) typically collect:
- Physical activity: Steps, movement patterns, exercise type and duration, active vs. sedentary time
- Physiological signals: Heart rate (continuous in many devices), heart rate variability, blood oxygen saturation (SpO2), skin temperature, electrodermal activity (EDA, measuring stress responses)
- Sleep: Duration, stages (light/deep/REM), interruptions, estimated sleep quality
- Location: GPS for outdoor activities, often continuous
- Menstrual cycle: Cycle tracking apps integrated with wearable data (cycle length, symptoms, predictions)
The physiological data that wearables collect goes far beyond what prior consumer devices could access. Continuous heart rate variability is a sensitive indicator of autonomic nervous system function — reflecting stress, anxiety, cardiovascular health, and mental state with more granularity than any self-reported measure. Sleep architecture (the distribution of sleep stages) correlates with cognitive function, depression risk, and neurological conditions. These are clinical-quality measurements that, until recently, required clinical settings.
Health Data and Commercial Use
The commercial use of wearable health data raises distinct concerns because of the sensitivity of health information and the regulatory frameworks that have historically protected it.
Health Insurance: As noted in the insurance telematics discussion, some insurers offer premium incentives for wearable data sharing. Vitality, a wellness insurance program operating through multiple carriers, offers premium discounts and rewards tied to activity data from Apple Watch and other wearables. Participants who maintain target activity levels earn rewards; those who fall short forgo them. The program has been praised for promoting health behaviors and criticized for commodifying health data and penalizing users who become ill or disabled and cannot meet activity targets.
HIPAA and Wearable Data: The Health Insurance Portability and Accountability Act (HIPAA) governs health data in clinical contexts — data held by healthcare providers and health plans. HIPAA does not cover wearable device companies like Fitbit or Apple because they are not healthcare providers or health plans. This creates a significant regulatory gap: highly sensitive health data — comparable in clinical value to medical records — is collected by companies that are not subject to the health data protections that apply in clinical settings.
Data Sales and Transfers: Fitbit's privacy policy (pre-Google acquisition) permitted the sale of anonymized data to research and commercial partners. Several studies have documented that claimed "anonymization" of wearable datasets can be reversed using linkage attacks — combining movement patterns, location history, and physiological signals to tie individual records back to identities, even in datasets that contain no explicit identifiers.
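The linkage mechanism can be shown with a deliberately small example. The datasets, names, and step counts below are fabricated, and real attacks use fuzzy matching over far richer signals — but the logic is the same: a behavioral pattern that is unique in two datasets acts as an identifier, whether or not a name is attached.

```python
# "Anonymized" wearable export: no names, just fine-grained daily step counts.
anon_steps = {
    "user_001": (8200, 400, 12000, 300, 9100),
    "user_002": (3000, 3100, 2900, 3050, 3000),
}

# A separate, identified dataset (e.g. a public running-club leaderboard)
# that happens to contain overlapping activity records.
identified = {
    "Dana R.": (8200, 400, 12000, 300, 9100),
    "Sam K.":  (5000, 5100, 4900, 5050, 5000),
}

def link(anon, known):
    """Re-identify anonymous records whose pattern matches exactly one
    named record -- uniqueness is what turns behavior into identity."""
    matches = {}
    for anon_id, pattern in anon.items():
        hits = [name for name, p in known.items() if p == pattern]
        if len(hits) == 1:
            matches[anon_id] = hits[0]
    return matches

reidentified = link(anon_steps, identified)
# user_001's "anonymous" record now carries a name again.
```

Removing the username from `anon_steps` accomplished nothing, because the step pattern itself is as distinctive as a name — which is why stripping identifiers is not the same as anonymization.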
Google's acquisition of Fitbit in 2021 for $2.1 billion was scrutinized by regulators in the EU, the US, and Australia primarily for its health data implications. The EU conditionally approved the acquisition with commitments from Google not to use Fitbit health data for advertising targeting for 10 years — an acknowledgment of the concern without fully addressing the structural problem.
15.7 IoT Security: The Attack Surface of the Surveillant Home
The IoT surveillance landscape raises not only data privacy concerns but security concerns. Connected devices with inadequate security are vulnerable to unauthorized access — by malicious actors as well as by the devices' own operators.
Default Passwords and Unpatched Firmware
The earliest and most pervasive IoT security problem was default credentials: devices shipped with factory-set usernames and passwords (typically "admin/admin" or "admin/password") that most consumers never changed. The 2016 Mirai botnet attack — which recruited hundreds of thousands of connected cameras, DVRs, and routers into a distributed denial-of-service army — demonstrated the catastrophic consequences of default credential failures at scale. Mirai was able to recruit devices because their owners had never changed the factory defaults, and the credentials were publicly known.
The Mirai attack disrupted major internet services including Twitter, Netflix, and CNN, but it was not the most privacy-significant IoT security failure. More personally damaging are attacks on specific devices in specific homes: security cameras accessed by unauthorized parties, baby monitors watched by strangers, smart doorbells used to monitor residents' schedules.
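The failure Mirai exploited can be illustrated from the defensive side: an audit that checks whether a device is still running publicly known factory credentials. The credential list and rules below are a simplified illustration, not a complete security checklist; Mirai's scanner iterated over a hardcoded table of exactly this kind.

```python
# A few of the publicly known factory defaults of the kind Mirai's
# scanner iterated over (illustrative subset).
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
}

def audit_device(username: str, password: str) -> list[str]:
    """Flag basic credential weaknesses on a connected device."""
    findings = []
    if (username, password) in KNOWN_DEFAULTS:
        findings.append("factory-default credentials")
    if len(password) < 8:
        findings.append("short password")
    return findings

print(audit_device("admin", "admin"))
print(audit_device("alice", "correct-horse-battery"))
```

The asymmetry this sketch makes visible is the policy problem: the check is trivial, yet because the burden fell on consumers who never knew the defaults existed, hundreds of thousands of devices failed it — which is precisely what laws like California's SB 327 now target.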
Baby Monitors and Intimate Surveillance Vulnerabilities
Baby monitors represent a category of IoT device whose security failures have particularly intimate consequences. Connected video baby monitors — which allow parents to view their baby's crib remotely through a smartphone app — have been repeatedly compromised by unauthorized access. In multiple documented cases, strangers have accessed insecure baby monitors to observe sleeping infants, to speak to them through the monitor's speaker, or to observe family activity in the child's room.
The security vulnerabilities exploited in these cases range from default credentials to unencrypted communication channels to authentication bypass vulnerabilities in the apps and cloud services that connect parents to their cameras. In many cases, the vulnerability exists not in the camera hardware itself but in the cloud service infrastructure — the company's backend systems that route video streams to parent applications.
When a baby monitor is compromised, the harm is not primarily about data collection (the privacy violation paradigm of Chapters 11–14). The harm is direct access to the physical environment of the home — a type of surveillance invasion that the traditional data privacy framework does not fully capture.
IoT Security Regulation
U.S. IoT security has been addressed through a patchwork of state-level legislation, most notably California's SB 327 (2018), which requires IoT device manufacturers to implement reasonable security features and prohibits default passwords that are shared across all devices of the same model. The FTC has brought enforcement actions against companies with grossly inadequate IoT security practices.
The EU's Cyber Resilience Act (proposed in 2022 and adopted in 2024) takes a more comprehensive approach, requiring IoT devices sold in the EU market to meet minimum cybersecurity requirements throughout their supported lifecycle, including automatic security updates and vulnerability disclosure obligations.
15.8 The Consent Problem: Devices in Homes with Others
IoT devices in shared living spaces create a consent problem that is structurally different from the individual consent problems described in Chapters 11–14. When a connected device is placed in a space shared by multiple people, the consent (or lack thereof) of the device owner extends over non-owners who share the space.
Household members: If one member of a household installs an Amazon Echo, every person in that household — including children, partners, or family members who did not choose to install the device — is subject to the device's always-on audio monitoring.
Guests: A visitor to a smart home may be subject to continuous monitoring — by cameras, by microphones, by behavioral sensors — without knowing that monitoring devices are present or having any opportunity to consent. Smart doorbells record everyone who approaches the door; smart TVs' ACR captures the viewing of every guest.
Renters: Landlords who install smart thermostats, connected door locks, or security systems in rental properties may be collecting behavioral data from tenants who did not install the devices and did not consent to the monitoring. The power asymmetry between landlord and tenant — which affects the tenant's practical ability to refuse or remove devices — further complicates consent.
Children: Children in homes with IoT devices are subject to continuous monitoring without meaningful capacity to consent. This creates special concerns that are addressed, in part, by the Children's Online Privacy Protection Act (COPPA) in the United States, but COPPA's coverage of IoT devices is contested and incomplete.
The IoT consent problem is structural: the device owner's decision to purchase and connect a device does not — and cannot — capture the consent of everyone who will be affected by it. The bilateral consent model of the GDPR and similar frameworks is insufficient for a surveillance apparatus whose reach is spatially defined rather than individually enrolled.
📝 Note: Some scholars have proposed a model of "environmental consent" for IoT contexts — a legal standard that would require disclosure of IoT monitoring in shared spaces (rental properties, commercial spaces, hotel rooms) regardless of the device owner's commercial relationship with the monitored persons. The UK's Information Commissioner's Office has issued guidance on IoT monitoring in shared spaces; U.S. law has not yet developed equivalent doctrine.
15.9 Jordan's Scenario: The Warehouse Scanner
Jordan had worked at the Hartwell Regional Distribution Center for eight months. The job was straightforward — receiving incoming packages, scanning items into the system, loading them onto outgoing vehicles. The work was physical and the hours were long, but the pay was reliable.
The scanner Jordan used was not just a barcode reader. The distribution company had recently upgraded to "smart scanners" — handheld devices that connected to the warehouse management system and tracked not just what Jordan scanned but where in the warehouse Jordan was at any given moment, how long it took to complete each task, how many steps Jordan took per hour, how often Jordan paused, and whether Jordan's pace fell below target thresholds.
The scanners were connected to a floor management dashboard visible to supervisors. The dashboard showed, in real time, each worker's location, productivity rate, and deviation from expected pace. Workers who fell below 95% of expected pace for more than 10 minutes received an automated notification on their scanner: "Your pace rate is below target. Please accelerate."
Jordan had read the employment agreement but hadn't internalized the monitoring section. It was only after a conversation with a coworker named DeShawn, who had worked at Amazon fulfillment centers before, that Jordan understood what the scanners were doing.
"At Amazon," DeShawn said, "they call it 'time off task.' If you're not scanning, you're TOT. Too much TOT and you get automatically flagged for a performance review. They don't even need a supervisor to notice. The algorithm notices."
Jordan thought about this on the walk home. "The algorithm notices." Not a person, not a manager, not someone who understood why Jordan had stopped for a minute — to help a new coworker, to deal with a spillage, to respond to a safety situation. The algorithm registered deviation from expected pace and initiated a process.
In the next class, Jordan raised this with Dr. Osei. "The IoT in my warehouse is different from the IoT in someone's home, right? At work it's legitimate to monitor productivity."
Dr. Osei considered this. "The legitimacy question is contested. There's a significant legal and ethical literature on how much work performance monitoring is appropriate. But the structural question you're raising — about who generates the data, who owns it, and who makes decisions based on it — is the same question whether it's a warehouse scanner or a smart speaker. You generated data about your work performance through your physical movement. The algorithm decided what that data meant. You had no access to the analysis, no opportunity to contest the interpretation, no visibility into the model. That's the structure we've been describing in every chapter."
🔗 Connection: Jordan's warehouse experience is the IoT's labor surveillance dimension — the extension of the behavioral monitoring infrastructure into workplace settings. This connection anticipates Chapter 19 (Workplace Surveillance), which examines how the same commercial IoT infrastructure described in this chapter is deployed in employment contexts where power asymmetries and consent problems are particularly acute.
15.10 Smart Devices and the Data Pipeline
The IoT is not a separate surveillance system; it is an extension of the data pipeline described in Chapters 11–14. Smart device data flows into the same data management platforms (DMPs), data brokers, and behavioral analytics systems that process web behavioral data, social media data, and location data.
The integration mechanisms are multiple:
Device ID linking: Connected devices use identifiers that can be linked to user accounts, web cookies, and mobile advertising IDs through identity resolution. Your Amazon Echo is linked to your Amazon account; your Amazon account is linked to your web browsing behavior on Amazon; your web browsing behavior is linked to your email address, purchase history, and behavioral profile.
Cross-context enrichment: Smart home data enriches web behavioral profiles: the fact that you're home during the day (from smart thermostat presence detection) combined with job-search browsing (from web tracking) produces an inference about employment status that neither source could produce alone.
Platform consolidation: Google's ownership of Nest, Android, Google Chrome, Gmail, YouTube, and Google Maps means that a single company has simultaneous visibility into your web behavior, your physical location, your television viewing, your home presence, and your voice interactions. This consolidation creates a surveillance profile with cross-domain depth that independent surveillance systems could not achieve.
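The identity-resolution step in the first mechanism — linking records from different contexts whenever they share an identifier — can be sketched as a union of records over shared keys. This is an illustrative toy, not any vendor's actual pipeline; all the record fields and identifier values below are invented.

```python
# Toy identity-resolution sketch: merge records from different
# surveillance contexts whenever they share any (key, value) pair.

def resolve_identities(records: list[dict]) -> list[dict]:
    """Union records that share an identifier; return merged profiles."""
    parent = list(range(len(records)))  # union-find over record indices

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    seen: dict[tuple, int] = {}  # (key, value) -> first record holding it
    for i, rec in enumerate(records):
        for kv in rec.items():
            if kv in seen:
                parent[find(i)] = find(seen[kv])  # shared ID: same person
            else:
                seen[kv] = i

    profiles: dict[int, dict] = {}
    for i, rec in enumerate(records):
        profiles.setdefault(find(i), {}).update(rec)
    return list(profiles.values())

records = [
    {"echo_device_id": "dev-1", "account": "a@example.com"},  # smart speaker
    {"account": "a@example.com", "cookie": "c-42"},           # web browsing
    {"cookie": "c-42", "ad_id": "maid-7"},                    # mobile app
    {"thermostat_id": "t-9"},                                 # unlinked device
]
merged = resolve_identities(records)
# Three contexts collapse into one cross-device profile; the thermostat
# record stays separate only until it shares an identifier with the rest.
```

The point of the sketch is how little is needed: one shared identifier per hop is enough to chain a voice assistant, a browser, and a phone into a single profile.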
Summary: The Extended Surveillance Space
The Internet of Things extends the commercial surveillance apparatus from digital behavior into physical space, physical bodies, and physical homes. Smart speakers listen continuously. Smart TVs watch what you watch. Smart thermostats track when you're home. Connected cars map where you go and how you drive. Wearables measure your heartbeat, your sleep, and your stress.
The data generated by these devices flows through the same commercial pipeline described in Chapters 11–14: to data brokers, to advertising networks, to insurance companies, to employers, to government agencies with appropriate legal process. The IoT does not create a new commercial surveillance system; it is the current frontier of the same system's expansion into physical space.
The consent problems are compounded in the IoT context. The bilateral, individual consent model of GDPR and U.S. privacy law was designed for digital transactions between individuals and platforms. It does not map cleanly onto an always-on microphone in a shared home, a connected car that monitors all passengers, or a warehouse scanner that tracks all workers. The architecture of surveillance has extended its reach; the legal architecture governing it has not kept pace.
Jordan's warehouse scanner is a version of this story — the same structure, the same asymmetry, applied to a body moving through physical space rather than a cursor moving through a webpage. The surveillance capitalism logic arrived, eventually, in every space.
Key Terms
Automatic Content Recognition (ACR) — Technology embedded in smart TVs that identifies what content is displayed on screen, generating a continuous record of viewing behavior.
Always-on microphone — A device configuration in which the microphone continuously monitors ambient audio to detect a wake word, requiring some level of audio processing at all times, even before any recording is sent to the cloud.
Internet of Things (IoT) — The universe of physical objects connected to the internet, including smart home devices, wearables, connected vehicles, and industrial sensors.
Telematics — The use of connected vehicle data — location, speed, driving behavior — for insurance pricing, fleet management, and behavioral monitoring.
Usage-based insurance (UBI) — Insurance pricing model that uses individual behavioral data (driving behavior, activity levels) rather than demographic group statistics to determine premiums.
Wake word — A specific phrase (e.g., "Alexa," "Hey Google") that triggers recording and cloud processing on a voice assistant device.
Discussion Questions
- The chapter argues that "smart" in consumer technology marketing primarily signals "data-collecting." Is this framing too cynical? Are there genuine consumer benefits of IoT devices that cannot be achieved without the data collection those devices perform?
- Amazon Echo's always-on microphone creates a surveillance presence in the home that most residents have chosen to install. What distinguishes this voluntary surveillance from the panopticon, where observation is imposed? Does the voluntary nature of IoT installation change its surveillance character?
- Smart TV ACR collects viewing data from all content on the screen — broadcast, cable, streaming, gaming — without the user's specific acknowledgment. Vizio paid a $2.2 million FTC settlement for inadequate consent. Was this an adequate regulatory response? What would be?
- The insurance telematics case involves an explicit trade: behavioral monitoring in exchange for lower premiums. Is this a genuine choice, or is it constrained in ways that compromise its voluntariness? Who bears the cost of declining the trade?
- Jordan's warehouse scanner tracks physical movement and generates automated performance assessments. How does this differ — if it does — from a supervisor watching Jordan work? What is the relevant surveillance distinction, and what are its implications?
Chapter 15 of 40 | Part 3: Commercial Surveillance
Backward references: Chapter 11 (Data Economy), Chapter 13 (Social Platforms)
Forward references: Chapter 17 (The Surveilled Home), Chapter 20 (The Quantified Self)