In This Chapter
- Opening Scene: The Porch at 4:12 AM
- 16.1 From Doorbell to Data Network: The Rise of Ring
- 16.2 The Neighbors App: Crowdsourced Crime Watching
- 16.3 Ring's Law Enforcement Partnerships: The Fusion Center at Your Door
- 16.4 Race, Space, and the Digital Neighborhood Watch
- 16.5 Privacy in Semi-Public Spaces
- 16.6 Who Watches the Watchers: Ring's Own Data Practices
- 16.7 Jordan's Neighborhood
- 16.8 Regulatory Landscape and Reform Debates
- 16.9 The Normalization Question
- 16.10 Thinking Structurally About Privatized Surveillance
- Chapter Summary
- Key Terms
- Discussion Questions
Chapter 16: Ring Doorbells and the Privatization of Public Watching
Opening Scene: The Porch at 4:12 AM
It is four in the morning. A delivery driver for a third-party logistics company — the kind of work Jordan Ellis does on weekends — parks briefly in front of a house on Maple Street to check a delivery address on his phone. The street is quiet. He is there for ninety seconds.
In the morning, a post appears on the Neighbors app, uploaded from that house's Ring doorbell camera: "Suspicious activity on Maple St. Dark sedan, parked for nearly two minutes at 4 AM. Driver appeared to be casing the block. Anyone else notice this?" The post accumulates fourteen reactions and six comments within an hour. Two neighbors post that they have "flagged" the image for the local police department, which has a partnership with Ring. The driver's face is clearly visible. His license plate is captured. He has not been told any of this is happening.
He is Black.
This scenario is not hypothetical. Researchers and journalists have documented hundreds of nearly identical incidents since Ring's Neighbors app launched in 2018. The details change — the car, the street, the reason someone was standing in a driveway — but the structure remains constant. A private device, owned by a private citizen, captures footage of a person in a semi-public space. That footage flows into a platform controlled by a corporation. That corporation shares information with police. The person being watched consents to none of it, knows none of it, and has no mechanism for appeal.
This chapter examines Ring — the Amazon-owned doorbell camera company — as a case study in the privatization of public watching. Ring represents something new in the architecture of surveillance: not a government program imposed on citizens, not a corporate system monitoring employees, but a consumer product that turns ordinary homeowners into nodes of a distributed surveillance network. To understand Ring is to understand how surveillance infrastructure can be built not by central mandate but by millions of individual purchasing decisions, each made with the best of intentions, each contributing to a structure none of the individual buyers fully grasped they were creating.
16.1 From Doorbell to Data Network: The Rise of Ring
A Company Built on Fear
Ring was founded in 2013 by Jamie Siminoff, who has said he was inspired by hearing his doorbell ring while working in his garage and wanting to see who was at his door without stopping what he was doing. That origin story — convenience and mild anxiety — captures something essential about Ring's appeal. The product solved a real problem that real people had.
But Ring's marketing quickly moved from convenience to fear. The company's advertising leaned heavily on crime narratives, featuring footage of package thefts, attempted break-ins, and prowlers. Ring built partnerships with local news stations, providing footage for crime segments. It developed what critics would later call an "ecosystem of fear" — a marketing strategy that inflated perceptions of crime risk in order to drive device sales.
This is not unique to Ring. The home security industry has long relied on fear as its primary sales driver. What was new was the network. Traditional home security cameras recorded locally to a hard drive or DVR. Ring's cameras sent footage to the cloud — specifically, to Amazon's cloud infrastructure after Amazon acquired Ring in 2018 for approximately $1 billion. That shift from local to cloud storage transformed what the device was. It was no longer just a camera. It was a connected node in a data network owned by the world's largest retailer.
Scale and Reach
By the early 2020s, Ring had sold tens of millions of devices in the United States alone. Estimates from the Electronic Frontier Foundation and from academic surveillance researchers suggested that in many American suburbs and mid-sized cities, Ring cameras covered the majority of residential blocks — overlapping fields of view creating a de facto surveillance network denser than most municipalities' official CCTV systems (as discussed in Chapter 8).
The geographic distribution of Ring cameras is not random. Ring cameras tend to cluster in middle-class suburban neighborhoods — the kind of neighborhoods where homeowners have both the disposable income to purchase the devices and the property anxiety that Ring's marketing cultivates. This clustering has implications for who watches and who gets watched, which we will examine in Section 16.4.
💡 Intuition Check: Before reading on, consider this: when you purchase a Ring doorbell, you are making a choice about your own home. But your camera's field of view almost certainly captures the public sidewalk, the street, your neighbor's driveway, and possibly portions of neighboring properties. At what point does your private security decision become a public surveillance system? Who should have a say?
Amazon's Acquisition and Strategic Logic
Amazon's 2018 acquisition of Ring was not primarily about home security hardware. It was about data and about delivery logistics. Ring cameras can identify when packages are delivered — and when they are stolen. Amazon had a strategic interest in understanding what happened to packages after they left its warehouses. Ring footage also integrates with Amazon's broader smart home ecosystem, including Alexa voice assistants and Amazon Key (an in-home delivery service that uses Ring cameras to verify deliveries).
But Ring's data value extends further. Ring cameras capture enormous quantities of footage of ordinary life — of streets, driveways, visitors, routines. That footage, in aggregate, represents a detailed behavioral map of daily life in residential America. What Amazon does with that aggregate data — beyond the specific law enforcement partnerships discussed in Section 16.3 — remains largely opaque. Ring's privacy policies, like those of most tech companies examined in Chapter 11, are written to preserve maximum corporate flexibility while minimizing legal exposure.
📊 Real-World Application: In 2019, researchers at William & Mary analyzed Ring camera placement in public Instagram posts and found that Ring cameras in many neighborhoods created overlapping coverage of 70–85% of street-level traffic. A person walking down a typical suburban block might be captured by five to twelve separate Ring cameras, owned by five to twelve separate individuals, all feeding data to a single corporate infrastructure. This density exceeds the coverage of most urban CCTV networks studied in Chapter 8 — and it was built entirely by private consumer choice, not public investment or democratic decision-making.
16.2 The Neighbors App: Crowdsourced Crime Watching
Platform Architecture
Ring's hardware is just the capture layer. The social layer — where the surveillance data becomes socially and politically consequential — is the Neighbors app. Neighbors is a neighborhood-watch platform, built into Ring's ecosystem, where users can share footage clips, post alerts about "suspicious activity," and interact with posts from other users in their geographic area. Non-Ring users can also join Neighbors, viewing posts without contributing footage.
The app's design choices are not neutral. Like the social media platforms examined in Chapter 11, Neighbors is designed to maximize engagement, and the content that drives engagement on a neighborhood-watch platform is content about crime and threat. The app's algorithm amplifies posts about suspicious activity. The interface presents a constantly refreshing feed of threat reports. The overall effect, documented in academic studies of Nextdoor (a predecessor platform with similar dynamics), is that users experience their neighborhood as more dangerous than independent crime data would suggest.
This perception gap matters enormously. If Ring cameras captured footage and that footage remained local, the surveillance impact would be significant but bounded. The Neighbors app transforms individual footage into collective threat narratives — and those narratives shape behavior, policy, and policing in ways that extend far beyond any individual camera.
Who Gets Reported
The most extensively documented problem with the Neighbors app — and with digital neighborhood watch platforms generally — is racial bias in who gets reported as "suspicious."
Journalists at Vice and the Washington Post, along with academic researchers studying Nextdoor (which launched before Neighbors and developed similar problems), have documented the same pattern: Black and brown residents of predominantly white neighborhoods are dramatically overrepresented in "suspicious activity" reports, relative to both their share of the population and any objective behavioral difference.
The pattern is structurally predictable. Broken-windows theory, examined in Section 16.4, holds that visible disorder — even minor, low-level disorder — triggers community anxiety and ultimately more serious crime. The digital broken-windows equivalent is this: in a neighborhood with a shared surveillance platform, the definition of "suspicious" is not neutral. It is calibrated to the appearance and behavior of whoever the community perceives as "belonging." A white resident walking to their car at 4 AM may be invisible. A Black resident doing the same thing becomes a post.
Researchers at Upturn, a Washington, D.C.-based nonprofit that has studied Ring's partnerships with police, documented hundreds of Neighbors posts that used explicitly racial language in describing "suspicious" individuals, and many more that described individuals in ways — "unfamiliar face," "didn't seem like they lived here" — that functioned as racial codes. Ring eventually introduced automated filtering to flag racially explicit language in posts, but researchers noted that the filtering was trivially circumvented and did not address implicit racial framing.
⚠️ Common Pitfall: Students sometimes argue that digital neighborhood watch platforms are neutral tools — it's the users who introduce bias, not the platform. This analysis is insufficient. Platform design choices — what content to amplify, what notifications to send, what emotional registers to encourage — shape user behavior. When a platform is designed to maximize reports of "suspicious activity," it guarantees that existing community biases about who is suspicious will be amplified, not merely reflected. Platform neutrality is never actually neutral.
The "Suspicious Person" Report as Digital Infrastructure
Older forms of the suspicious person report — calls to 911, calls to non-emergency police lines — left a limited record and required the reporting person to engage with an operator who might push back on vague descriptions. The Neighbors app changes this in three ways.
First, it lowers the friction of reporting. Posting on Neighbors requires less commitment than a phone call, and the social dynamics of a community platform (others react, comment, validate) reward reporting in ways that a 911 call does not.
Second, it creates a persistent record. Posts on Neighbors may remain visible for days, weeks, or longer. A person incorrectly identified as "suspicious" remains associated with that report even after the incident is resolved or forgotten.
Third, it connects to official infrastructure. Through Ring's law enforcement partnerships, Neighbors posts can be — and are — viewed by police departments, creating a pathway from a neighbor's racial anxiety to a police record without any of the procedural safeguards that would normally accompany a police action.
16.3 Ring's Law Enforcement Partnerships: The Fusion Center at Your Door
The Partnership Model
In 2018, Ring launched what it called its Law Enforcement Partnerships program, ultimately building agreements with more than 2,000 law enforcement agencies across the United States. Under these partnerships, police departments could request footage from Ring users in a specific geographic area through a portal that Ring provided. The system worked as follows: a police department investigating a crime would submit a request through Ring's law enforcement portal, specifying a geographic radius and a time window. Ring would then contact homeowners with cameras in that area and ask them to voluntarily share footage.
Initially, Ring presented these partnerships as privacy-protective — footage sharing was voluntary, and Ring would not provide footage without user consent or a warrant. This framing was substantially misleading.
First, the partnerships gave police direct access to Ring's map of camera locations — information about which addresses had cameras and where they were pointed. This intelligence alone was valuable to law enforcement and was shared without any user consent.
Second, the structure of the request was not neutral. Homeowners received messages from their local police department, through an official-looking Ring portal, asking them to share footage in connection with a crime investigation. The social pressure to cooperate with a police request — even a voluntary one — is substantial, particularly for homeowners who are themselves anxious about neighborhood crime.
Third, and most importantly, the framing of "voluntary sharing" obscured the fact that footage shared by Ring users captured not just those users' property but the neighbors, pedestrians, and passersby who appeared in the footage without any choice in the matter. The homeowner's "voluntary" consent covered the footage of everyone in the frame.
📊 Real-World Application: A 2019 investigation by Motherboard/Vice obtained documents showing that Ring had instructed its customer service representatives to promote law enforcement partnerships proactively — telling customers about the partnerships when they called for other reasons, and framing the partnerships as a selling point. Ring was actively marketing police access to its user base.
The Warrant Question
The partnership system created a legal structure designed to minimize the need for warrants. Police who obtain footage through voluntary sharing face far fewer evidentiary restrictions than police who use footage obtained through a warrant or subpoena. Defense attorneys have argued in several cases that Ring footage obtained through the partnership program was obtained in ways that, had police used traditional methods, would have required more rigorous legal oversight.
The Electronic Frontier Foundation and the ACLU have both argued that the Ring partnership system creates an end-run around Fourth Amendment protections. The Fourth Amendment protects citizens against unreasonable searches and seizures by government actors. When a private company facilitates mass footage sharing with police, and when that company has financial incentives to maintain its police partnerships, the distinction between government surveillance and private surveillance begins to dissolve.
In 2022, Ring disclosed that it had repeatedly provided footage to police under an "emergency" exception without user consent or a warrant. Following extensive criticism, Amazon later announced that Ring would discontinue the tool through which police requested footage via the app — a change that closed the most visible channel but did not address the fundamental architecture of the partnership program.
🎓 Advanced Concept: Third-Party Doctrine and Ring Data
The third-party doctrine, established in Smith v. Maryland (1979) and related cases, holds that information voluntarily shared with a third party loses Fourth Amendment protection. If you give information to your bank, your phone company, or — under older interpretations — your doorbell camera's cloud service, the government can obtain that information from the third party without a warrant.
Ring's architecture is designed, whether intentionally or not, to exploit this doctrine. When Ring users upload footage to Amazon's cloud, they have technically "shared" that footage with a third party (Amazon/Ring). Under the third-party doctrine, police can potentially obtain that footage from Ring through a subpoena — a much lower bar than a warrant — even without using the partnership portal.
The Supreme Court's 2018 decision in Carpenter v. United States complicated the third-party doctrine by ruling that police need a warrant to obtain historical cell phone location data. Whether Carpenter's logic extends to doorbell camera footage remains an unresolved question that will likely be litigated as Ring footage becomes more common in criminal cases.
Geofence Warrants and Ring's Ecosystem
A geofence warrant instructs a technology company to provide data about all devices present within a specific geographic area during a specific time window. Google has been the primary target of geofence warrants (for location data from Android phones), but the same logic applies to Ring: police can use warrants to compel Ring to provide footage from all cameras within a geographic area.
In 2020 and 2021, reporting by the New York Times revealed that geofence warrant requests to Google had increased by 1,500% between 2017 and 2019. Advocates for civil liberties argued that geofence warrants are inherently overbroad — they capture data about everyone in an area, not just the person under investigation — and that courts have not yet developed adequate legal standards for limiting them.
Ring's camera network, combined with geofence warrant authority, creates what surveillance scholars have called a "fusion center at your door" — a reference to the post-9/11 law enforcement fusion centers that aggregated data from multiple agencies and databases. The doorbell camera functions as an always-on, private-sector node in a surveillance architecture that connects to official law enforcement through multiple pathways.
16.4 Race, Space, and the Digital Neighborhood Watch
Broken Windows in Silicon Valley Form
Broken-windows theory, developed by criminologists James Q. Wilson and George Kelling in the 1980s, proposed that visible signs of disorder — broken windows, graffiti, litter — signal that an area is uncontrolled and invite further disorder and crime. The theory was enormously influential in American policing, driving "zero tolerance" and "quality of life" policing strategies in the 1990s and 2000s. It was also extensively criticized for targeting low-income and minority communities, for criminalizing poverty and homelessness, and for producing aggressive policing that damaged community trust without significantly reducing crime.
The Neighbors app can be understood as broken-windows theory implemented in consumer technology. The platform encourages residents to monitor and report "signs of disorder" — strangers, vehicles they don't recognize, people who "don't belong." This monitoring is framed as crime prevention, just as broken-windows policing was framed as crime prevention. And it encodes the same assumption: that visible deviance from neighborhood norms predicts crime, and that residents are accurate judges of what constitutes deviance.
The problem is that "deviance from neighborhood norms" is not an objective measure. In predominantly white neighborhoods, the presence of Black residents — even longtime residents — can trigger "suspicious activity" reports simply because they deviate from the neighborhood's dominant visual norm. Surveillance technology does not eliminate this bias. It scales it, automates it, and connects it to official enforcement infrastructure.
The Nextdoor Problem
Nextdoor, a neighborhood social network that preceded Ring's Neighbors app, developed the same racial-profiling dynamics so early and so visibly that its experience serves as a baseline. By 2016, Nextdoor had become notorious for "racial profiling" posts — reports of suspicious individuals that were explicitly or implicitly racially coded. Researchers found that Black residents in integrated neighborhoods were dramatically overrepresented in suspicious activity reports, even when controlling for crime rates and other variables.
Nextdoor responded by redesigning its crime and safety reporting interface to require reporters to provide more specific behavioral descriptions before racial identifiers would be accepted — a modest friction intervention that research suggested reduced explicitly racial posts without eliminating implicit racial framing. Ring introduced similar measures in 2020.
Neither intervention addressed the underlying dynamic: a platform designed to amplify community anxiety about threat, operating in a society with deep racial hierarchies about who belongs where, will produce racially biased surveillance. The design fix treats the symptom without touching the structural cause.
🌍 Global Perspective: The privatized neighborhood surveillance model has spread internationally, but with important national variations. In the United Kingdom, where CCTV density is already high (see Chapter 8), Ring and similar devices have faced greater regulatory scrutiny from the Information Commissioner's Office (ICO), which has ruled that homeowners whose cameras capture public spaces may be subject to data protection obligations under GDPR. Australian privacy regulators have similarly issued guidance holding that cameras capturing public footpaths create legal obligations for the camera owner. In the United States, by contrast, footage captured from private property of public spaces generally has no legal protection for those depicted, because people in public spaces have a diminished expectation of privacy under Fourth Amendment doctrine. This creates a stark asymmetry: the American legal framework provides the most permissive environment for privatized surveillance, while offering the least protection for those surveilled.
Who Gets Watched and Who Gets to Watch
The geography of Ring ownership mirrors the geography of American inequality in revealing ways. Ring cameras are clustered in middle-class homeowner neighborhoods. They are sparse in lower-income neighborhoods, in renter-dominated areas, and in neighborhoods of color. The surveillance network that Ring has built — through millions of individual consumer decisions — replicates and amplifies existing patterns of spatial inequality.
This creates a system of visibility asymmetry (a concept introduced in Chapter 1 and examined throughout this text) that tracks racial and class lines. Wealthy and predominantly white neighborhoods are surveilled by their own residents, for their own security, with footage flowing to platforms those residents control and trust. Lower-income neighborhoods have less self-surveillance infrastructure but are more heavily surveilled by external systems — police cameras, code enforcement, housing inspectors — systems that are not controlled by residents and that do not serve residents' interests.
The person who is watched most is not the person who watches most. Surveillance power is distributed unequally, and those inequalities are not random.
16.5 Privacy in Semi-Public Spaces
The Conceptual Problem
Privacy law, particularly in the United States, has traditionally operated through a binary framework: either a space is public (no reasonable expectation of privacy) or it is private (legal protections apply). The street is public. Your home is private. The line between them is, theoretically, your front door.
Ring cameras trouble this binary in two ways. First, they are positioned precisely at the threshold — the door, the porch, the driveway — capturing spaces that are neither fully public nor fully private. Your driveway is legally accessible to the public but practically functions as an extension of your private space. Your front porch is visible from the street but is also where you have quiet conversations, receive sensitive mail, argue with family members.
Second, Ring cameras routinely capture spaces that are clearly private — neighbors' yards, living room windows visible from the street — and footage of those spaces flows into Ring's cloud and Ring's law enforcement systems without any consent from those neighbors.
Traditional privacy law's response to this is inadequate. Courts have generally held that anything visible from a public vantage point is not protected. The "plain view" doctrine in Fourth Amendment jurisprudence holds that police do not need a warrant to act on evidence that is visible without any special intrusion. Ring cameras dramatically extend what is "visible" in this legal sense — they see more, at more times, than any human observer standing on the public sidewalk could see. But the law does not yet account for this difference.
The Reasonable Expectation Test and Its Limits
The "reasonable expectation of privacy" test, established in Katz v. United States (1967), holds that Fourth Amendment protections apply when a person has a subjective expectation of privacy that society recognizes as reasonable. The test has been criticized for its circularity — what society "recognizes" as reasonable is itself shaped by what surveillance is normal, which is shaped by what legal decisions have previously allowed.
Ring illustrates this circularity concretely. As Ring cameras become ubiquitous in residential neighborhoods, courts may determine that people in those neighborhoods have diminished reasonable expectations of privacy — because surveillance is now normal. Normalization of surveillance reduces privacy expectations, which reduces legal privacy protections, which further enables surveillance, which further normalizes it. This feedback loop, which surveillance scholars call the "normalization ratchet," is one of the most important structural dynamics in contemporary surveillance architecture.
✅ Best Practice: What Homeowners Should Consider
If you own or are considering purchasing a Ring or similar device, these considerations can help you use it more responsibly:
- Review camera positioning. Point cameras toward your own property, not toward neighbors' windows, yards, or driveways. Most Ring cameras have adjustable field-of-view zones that can be configured to blur or exclude neighboring properties.
- Opt out of law enforcement partnerships. Ring's settings include the ability to decline automatic footage requests from partner police agencies. Review your settings and configure them consciously rather than accepting defaults.
- Be cautious with Neighbors posts. Before posting about "suspicious" activity, ask yourself whether the behavior would seem suspicious if performed by a person of a different race or economic background. Ask whether you are reacting to actual threatening behavior or to unfamiliarity.
- Review Ring's data practices. Read Ring's current privacy policy, particularly regarding what footage may be retained, who may access it, and under what circumstances footage may be shared with third parties.
- Consider alternatives. Local video storage (cameras that record to a home device rather than the cloud) provides similar security benefits without feeding footage into a corporate data infrastructure.
16.6 Who Watches the Watchers: Ring's Own Data Practices
Ring Employees and Internal Access
In January 2020, news reports revealed that Ring had fired four employees for improperly accessing customer video footage. The employees had used their access to Ring's cloud systems to view footage of customers — footage that included private homes, bedrooms, and intimate moments. Ring characterized this as a data misuse issue affecting a small number of bad actors. Critics argued it revealed a structural problem: Ring employees had broad access to customer footage because Ring's business model requires that access (for quality control, for customer service, for AI training, and for other purposes that Ring has disclosed in various degrees of specificity).
This episode illustrates a fundamental asymmetry in surveillance systems: those who control the infrastructure of watching are themselves subject to minimal watching. Ring's employees who accessed private footage were caught through internal systems; the customers who were watched had no way of knowing it was happening. The surveillance relationship runs one way.
🔗 Connection: This dynamic — surveillance of the watched by the watcher, without surveillance of the watcher by the watched — is the foundational visibility asymmetry introduced in Chapter 1 and examined through the panopticon model in Chapter 2. Ring's internal data misuse scandal is a concrete instantiation of what Bentham's panopticon structurally encodes: the inspector cannot be inspected.
Ring's Relationship with Amazon
Ring operates as an Amazon subsidiary, and its data practices must be understood within Amazon's broader data ecosystem. Ring footage, Ring account data, and Ring's neighborhood behavioral maps potentially connect to Amazon's commercial data operations — purchase history, browsing data, Alexa voice recordings, and the other behavioral data Amazon aggregates about its customers.
Amazon has maintained that Ring footage is not used to target advertising. Privacy researchers have noted that Amazon's privacy policy, like all major tech companies' policies, is drafted to preserve maximum flexibility. What Amazon does — and does not do — with Ring-adjacent data is not independently verifiable by users.
The core issue is not whether Amazon is currently doing something nefarious with Ring footage. The issue is structural: a single company controls an enormous network of home cameras, the footage those cameras capture, the neighborhood social platform where that footage circulates, and the law enforcement portal through which that footage reaches police. That concentration of data power is dangerous independent of any specific misuse.
16.7 Jordan's Neighborhood
Jordan Ellis's apartment is in a mixed neighborhood at the edge of Hartwell University's campus — the kind of neighborhood where students, longtime residents, and recently arrived young professionals coexist uneasily. The neighborhood has a Ring network, organized through the Neighbors app. Jordan knows about it because Marcus, their roommate, signed up for the app after a package was stolen from the building's front steps.
Jordan's neighbor across the street, Mr. Tate, is a 68-year-old Black man who has lived on the block for thirty years. He retired from the postal service. He walks his dog every morning at 6 AM, waters his garden in the evening, and occasionally sits on his porch reading in the afternoon.
In the three months since Jordan has been following the Neighbors app (initially out of curiosity, increasingly out of a discomfort they struggle to name), Mr. Tate has appeared in Neighbors posts four times. In one post, he was described as "an older Black man I didn't recognize walking around looking at houses." In another, someone posted footage of his dog, describing it as "an aggressive unleashed animal" — the dog was on a leash; the leash was simply not visible at that angle in the footage. In a third post, Mr. Tate was photographed by a Ring camera while unlocking his own front door and described as "suspicious activity."
Each time, Jordan has typed out a response — "That's Mr. Tate, he's been here longer than most of us" — and then hesitated. The social dynamics of the app are uncomfortable. Correcting a post requires identifying yourself as someone who disagrees with the person who posted it. And Jordan has noticed something else: the people who post most actively on the Neighbors app are the people who seem to matter most to the neighborhood's informal social structure. Disagreeing with them has costs.
This is the political economy of the digital neighborhood watch. It is not just a surveillance system. It is a social system in which surveillance capacity confers status and power, and in which the targets of surveillance have diminished voices.
📝 Note for Students: The scenario above is fictional but is drawn from patterns documented in academic research on Neighbors and Nextdoor. A 2021 study published in the Journal of Urban Technology analyzed thousands of Nextdoor posts and found that Black residents were mentioned in suspicious person reports at a rate seven times higher than their neighborhood population share would predict. The Neighbors app has not been subject to equivalent systematic academic study because Ring has not made its data available to researchers — a form of data opacity that is itself a surveillance issue.
16.8 Regulatory Landscape and Reform Debates
What Law Currently Does (and Does Not) Cover
Federal law provides limited protection against the surveillance enabled by Ring cameras. The Fourth Amendment protects against government surveillance without warrants; Ring cameras are private devices. The Electronic Communications Privacy Act (ECPA) regulates electronic surveillance but was written in 1986, before networked home cameras existed. State wiretapping laws vary significantly but generally do not restrict the recording of activity visible from public property.
Several cities have taken local action. Oakland, California, passed a "surveillance oversight ordinance" requiring city council approval before any new surveillance technology — including private systems that integrate with police — can be deployed citywide. Portland, Oregon, passed ordinances restricting city use of footage from private cameras. These efforts represent the leading edge of municipal surveillance governance, but they remain the exception rather than the norm.
At the federal level, the Fourth Amendment Is Not For Sale Act, introduced in Congress in 2021, would have prohibited law enforcement from purchasing data from commercial data brokers — including, potentially, data obtained through Ring's partnership program — without a warrant. As of this writing, the legislation has not passed.
🎓 Advanced Concept: The Governance Gap
There is a fundamental mismatch between the pace of surveillance technology deployment and the pace of regulatory response. Ring went from startup to national surveillance infrastructure in approximately seven years. Congressional action, when it comes, typically lags technology deployment by a decade or more. This governance gap — the space between what technology enables and what law constrains — is structurally advantageous to surveillance technology companies. They can build, deploy, and normalize surveillance networks before legal frameworks catch up. By the time regulation arrives, the technology is so embedded in daily life that restricting it seems radical. This is not accidental. The governance gap is part of the business model.
Reform Proposals
Privacy advocates have proposed a range of reforms to Ring's surveillance architecture:
Data minimization requirements: Mandating that Ring cameras store footage locally rather than in the cloud by default, with cloud upload as an opt-in feature.
Mandatory warrant requirements: Requiring Ring to obtain a warrant before sharing any footage with law enforcement, regardless of whether users consent.
Transparency requirements: Requiring Ring to publish regular transparency reports detailing the number of law enforcement requests received, the number fulfilled, the geographic distribution of requests, and the demographic characteristics of individuals depicted in shared footage.
Racial bias audits: Requiring independent audits of the Neighbors app to assess whether the platform's design choices produce racially disparate surveillance outcomes.
User notification: Requiring Ring to notify users when their footage has been requested by or shared with law enforcement.
None of these proposals has been enacted at the federal level as of this writing. The political economy of surveillance regulation — in which the tech industry has significant lobbying power and in which crime fear reliably mobilizes political opposition to privacy protections — makes federal reform difficult.
16.9 The Normalization Question
How Ring Changed What Feels Normal
When Ring launched, the idea that your neighbor's doorbell might be recording your movements and that footage might be accessible to police would have struck most Americans as invasive and unusual. Today, Ring cameras are as common in many American neighborhoods as mailboxes. The surveillance has been normalized through the quotidian: through commercials that show smiling families, through unboxing videos on YouTube, through the matter-of-fact way that news programs use Ring footage.
This normalization is not neutral. It changes what people expect, what they accept, and what they think is possible. If everyone has a Ring camera, then no one has a reasonable expectation of privacy on any residential street — and the erosion of that expectation changes both the legal framework (as discussed in Section 16.5) and the social fabric. People change their behavior when they know they are being recorded. They are more guarded. They are less likely to have the informal, spontaneous interactions that constitute community life. They are more likely to treat neighbors as threats to be surveilled rather than people to know.
Surveillance scholars use the concept of the "chilling effect" (introduced in Chapter 7) to describe how the awareness of being watched changes behavior. The chilling effect traditionally describes how surveillance of political activity suppresses dissent. Ring's neighborhood surveillance creates a more diffuse but equally real chilling effect on the kind of unstructured, unmonitored public life that democratic communities depend on.
The Consent-as-Fiction Problem
Every clip posted from a Ring camera to the Neighbors app, and every clip shared with police, contains footage of people who have not consented to being recorded, analyzed, or shared. This is legally permissible — you have no legal right to prevent a neighbor from recording the public street from their property — but it reveals the degree to which consent, as a framework for thinking about surveillance, is insufficient.
As explored in Chapter 3, consent as a framework for legitimizing data collection requires that consent be meaningful: informed, voluntary, and specific. In Ring's ecosystem, none of these conditions obtain. People moving through residential neighborhoods cannot meaningfully consent to camera capture because they typically do not know cameras are present, cannot verify what cameras do with footage, and cannot choose alternative routes without significant burden. Consent-as-fiction, here, describes a surveillance architecture that is nominally voluntary (no one is forced to walk past Ring cameras) but practically mandatory (short of avoiding all residential neighborhoods, there is no way to opt out).
16.10 Thinking Structurally About Privatized Surveillance
Individual Choice and Collective Architecture
Ring's story is, in one sense, a story about individual choices made by individual homeowners who wanted to feel safer. In another sense, it is a story about how individual choices aggregate into collective surveillance architectures that have consequences no individual chooser anticipated or approved.
This is the structural lens introduced in Chapter 1 and developed throughout this text. To analyze Ring's Neighbors app as a product of bad individual decisions — racially biased users, unethical Ring employees, overeager police officers — is to miss the more important story. The architecture itself produces racially biased surveillance, even with well-intentioned users. The architecture itself enables mass law enforcement data gathering, independent of any individual sharing decision. The architecture itself erodes public privacy expectations, regardless of any individual homeowner's intent.
Structural analysis asks: what does the system produce as a whole, and who designed those features in? Ring's founders and Amazon's engineers made choices — to use a cloud-based architecture, to build a law enforcement portal, to design an app that amplifies threat reports — that shape the system's collective behavior. Those choices were not made by the homeowners who bought the devices.
The Future of Privatized Surveillance
Ring represents a template for privatized surveillance infrastructure that will only become more common. As cameras become cheaper and more capable, as AI-powered video analysis becomes more accessible, and as smart home ecosystems become more interconnected, the architecture of privatized neighborhood surveillance will grow more sophisticated and more integrated.
Some likely near-term developments:
Facial recognition integration. Ring and Amazon have not publicly integrated facial recognition into Ring cameras, but the capability exists. Several smaller manufacturers have deployed consumer doorbell cameras with facial recognition. The technical barrier to Ring doing so is minimal; the regulatory and public relations barriers have so far been sufficient to prevent it.
Drone and mobile surveillance. Fixed cameras are increasingly supplemented by mobile surveillance — including autonomous drones that can follow a person or vehicle from one camera's field of view to the next. Private surveillance drones are already in use in some residential communities in the United States.
AI-powered behavioral analysis. Camera footage can be analyzed by AI systems that flag "anomalous" behavior — movement patterns, gait, time of presence — without any human reviewer. These systems encode assumptions about what "normal" behavior looks like that are likely to reproduce and amplify the racial and class biases documented in current Neighbors posts.
Each of these developments will raise the stakes of the questions this chapter has raised: who watches, who gets watched, who benefits, who is harmed, and who has the power to decide.
Chapter Summary
Ring doorbells represent a fundamental transformation in the architecture of neighborhood surveillance. By turning private homeowners into nodes of a distributed surveillance network, Ring has created surveillance infrastructure denser and more intimate than most municipal CCTV systems — built through consumer choice rather than democratic decision-making, owned by a corporation rather than a public body, and integrated with law enforcement through partnerships that blur the line between private and government surveillance.
The Neighbors app, Ring's social platform, concentrates and amplifies existing racial biases in assessments of who "belongs" in a neighborhood, producing digital suspicious person reports that flow to police without procedural safeguards. Ring's own data practices — cloud storage, employee access, law enforcement partnerships — create multiple pathways through which intimate footage of private life reaches parties who have no claim on it.
The legal framework governing this architecture is inadequate, shaped by doctrines (third-party doctrine, plain view, reasonable expectation of privacy) that were not designed for a world of networked home cameras. The normalization of Ring cameras is itself a surveillance outcome — it changes what people expect, what courts protect, and what future surveillance deployments will seem reasonable.
Understanding Ring requires understanding the difference between individual choices and structural outcomes, between what any single homeowner decides and what the system as a whole produces. That distinction — between the individual and the structural — is one this text will return to repeatedly as we move deeper into the architecture of surveillance.
Key Terms
Privatized surveillance infrastructure: Surveillance systems built and operated by private corporations or individuals rather than government entities, often integrated with government enforcement mechanisms through formal or informal partnerships.
Crowdsourced surveillance: Surveillance systems that rely on many distributed participants — each contributing footage, reports, or observations — to produce aggregate monitoring coverage. Ring's Neighbors app is a prototypical example.
Geofence warrant: A court order compelling a technology company to provide data about all devices (or cameras) present within a specified geographic area during a specified time window.
Digital redlining: The process by which algorithmic or platform systems reproduce and amplify existing racial and class-based inequalities in access, treatment, or surveillance.
Fusion center: An information-sharing hub that aggregates data from multiple law enforcement agencies and, increasingly, private sources. The term describes both formal post-9/11 intelligence centers and, metaphorically, private systems (like Ring's law enforcement portal) that perform similar aggregation functions.
Normalization ratchet: The feedback loop by which surveillance normalization reduces privacy expectations, which reduces legal protection, which enables further surveillance normalization.
Broken-windows theory: The criminological theory that visible signs of disorder predict and invite further disorder, used to justify "quality of life" policing strategies and, by extension, neighborhood surveillance platforms that flag low-level deviance.
Discussion Questions
1. Ring's law enforcement partnership program is voluntary for homeowners — they can decline to share footage. Does this voluntary structure adequately protect the privacy of non-homeowners who appear in the footage? What would a genuinely consent-based alternative look like?
2. The Neighbors app has been criticized for enabling racial profiling, but Ring argues that the platform itself is neutral and that users introduce bias. Evaluate both positions using the framework of platform design and algorithmic amplification developed in this chapter.
3. How does the normalization ratchet apply to Ring cameras? Identify a specific future surveillance technology and explain how Ring's normalization might make it easier to deploy and harder to resist.
4. Jordan notices that disagreeing with Neighbors posts has social costs. How does this dynamic — where surveillance power confers social status — relate to the concept of visibility asymmetry?
5. Compare Ring's neighborhood surveillance network to the CCTV systems discussed in Chapter 8. What are the key differences in terms of accountability, governance, and privacy impact? Is privatized surveillance more or less concerning than government-operated surveillance?