In This Chapter
- Opening: Three Headlines from the Future
- Section 1: The Trajectory — From Reactive to Predictive Surveillance
- Section 2: AI Surveillance Systems — Capabilities, Limitations, Bias Amplification
- Section 3: Expanding the Biometric Frontier
- Section 4: The Social Credit Model — Not Just China
- Section 5: Neural Surveillance — The Ultimate Frontier
- Section 6: Ambient Surveillance and the End of Off
- Section 7: Three Scenarios for Surveillance in 2050
- Section 8: Jordan Writes About 2050
- Chapter Summary
- Key Terms
- Discussion Questions
Chapter 38: The Future of Surveillance: Predictive Policing, AI, and Brain-Computer Interfaces
Opening: Three Headlines from the Future
The following are not predictions. They are extrapolations — exercises in following present trajectories to their logical extensions. Whether any of them materialize depends on choices that are being made right now.
HEADLINE A (2031): NeuroVerify Partners with TSA to Offer "Thought-Pattern Pre-clearance" at Major Airports — Critics Call It the End of Private Thought
HEADLINE B (2031): Chicago Announces 100% Real-Time Facial Recognition Coverage of All Public Spaces — Mayor Cites 40% Drop in Violent Crime
HEADLINE C (2031): Municipal Coalition of 200 Cities Bans Predictive Policing, Mandates Community Oversight of All AI Surveillance — "The Algorithm Is Not the Police"
These three headlines represent three possible futures for surveillance. One is dystopian in the familiar sense: technology deployed without meaningful consent, at a scale that eliminates the possibility of unmonitored public life. One is authoritarian but plausibly appealing: surveillance that produces measurable safety benefits at the cost of privacy. One is democratic and regulated: a society that decided, collectively, to set limits on the surveillance it would permit.
Chapter 38 is about how we get from here to there — or to one of those "theres." It is about the trajectories currently underway, the technologies under development, and the choices — technical, political, and moral — that will determine which future we inhabit.
Section 1: The Trajectory — From Reactive to Predictive Surveillance
1.1 The Logic of Prediction
Traditional surveillance is reactive: a crime occurs, surveillance footage is reviewed, a suspect is identified and prosecuted. This model assumes that surveillance functions primarily as documentation — a record of what happened, useful after the fact. The logic of predictive surveillance is different: rather than documenting events, it seeks to forecast them, enabling intervention before harm occurs.
The aspiration of predictive surveillance is not new. Chapter 7 examined how national security agencies have long sought to identify threats before they materialize. Chapter 36 analyzed how predictive policing systems extend that logic to municipal law enforcement. What is new is the scale, the speed, and the sophistication of prediction — and, crucially, the lowering of the threshold for intervention. As prediction capabilities improve, the question shifts from "when should we surveil someone suspected of planning a crime?" to "when should we act on the algorithmic prediction that someone might commit a crime in the future?"
This shift toward preemptive intervention has a cultural touchstone: Philip K. Dick's 1956 story "The Minority Report" — adapted as a 2002 Steven Spielberg film. In Dick's world, three "precogs" can foresee murders before they occur, enabling a "Precrime" unit to arrest people for crimes they have not yet committed. The story's central horror is not that the system is wrong — it is largely accurate — but that it raises the question: is a person morally and legally culpable for something they would have done but did not do?
💡 Intuition Check: The "Minority Report" frame is often deployed to suggest that predictive surveillance is pure science fiction — that predicting individual criminal behavior with accuracy is impossible. This is partly true and partly comforting misdirection. Accurately predicting individual behavior for a specific crime at a specific time may be beyond current technical capabilities. But predicting aggregate behavior patterns at group and geographic levels is well within current capabilities — and raises exactly the pre-crime ethical questions that Dick's story explores, applied to groups rather than individuals.
1.2 The Chicago "Heat List" in Retrospect and Prospect
We examined the Chicago Strategic Subject List in Chapter 36 from a racial equity perspective. Here we return to it from a predictive surveillance architecture perspective, because it illustrates the fundamental structure of predictive risk systems.
The Heat List encoded a theory: that past behavior (arrests, victimization, associations) predicts future behavior, and that acting on that prediction (through police visits, intervention program assignment, or enhanced monitoring) can reduce that future behavior. This theory has several empirically contestable elements: whether the predictor variables actually predict future violence; whether the interventions actually reduce predicted behavior; whether the costs (false positives, chilling effects, racial disparities) are justified by the benefits.
The RAND Corporation's evaluation found the Chicago program had no statistically significant effect on violent crime. The city discontinued the program in 2020. But the architecture — the idea of assigning risk scores to individuals based on algorithmic analysis of their behavioral profiles — has not disappeared. It has migrated into more sophisticated forms, backed by more data, running on more powerful models, deployed in more jurisdictions.
Section 2: AI Surveillance Systems — Capabilities, Limitations, Bias Amplification
2.1 What AI Can and Cannot Do
Artificial intelligence surveillance systems are powerful in several specific ways and limited in several others. Understanding both is necessary for assessing the risks of AI-powered surveillance deployment.
AI systems excel at:
- Pattern recognition at scale: Identifying patterns in large datasets (images, audio, text, behavioral data) faster and more consistently than human analysts
- Anomaly detection: Flagging deviations from statistical norms
- Cross-referencing: Correlating data across multiple sources simultaneously
- Automation of routine classification: Applying learned categories (face match, behavioral flag, risk score) to new data without human review
AI systems struggle with:
- Context: Understanding the social, cultural, and situational context that gives behavior its meaning
- Rare events: Training on data that underrepresents rare events (like actual terrorist attacks) produces models that generalize poorly to those events
- Adversarial inputs: Being fooled by deliberate attempts to exploit model weaknesses
- Explanatory transparency: Many high-performing AI models (deep neural networks) are functionally opaque — they produce outputs without generating human-readable explanations
The combination of these capabilities and limitations creates specific risks in surveillance contexts. An AI system that excels at pattern recognition applied to historical data will find patterns — including spurious correlations. An AI system deployed in a high-stakes context (law enforcement, employment screening, border control) that lacks contextual understanding and explanatory transparency is an accountability void: it produces consequential outputs that no one can fully explain, challenge, or correct.
🎓 Advanced Note: The computer science concept of algorithmic fairness has become an active research area in response to documented bias in AI surveillance systems. Researchers have proposed multiple formal definitions of fairness (equality of accuracy across groups, equality of false positive rates, equality of false negative rates) and have proven that several of these definitions are mathematically incompatible — it is impossible to satisfy all of them simultaneously. This "impossibility theorem" result implies that designing an "unbiased" AI system requires making explicit value choices about which type of fairness to prioritize, which in turn requires acknowledging that no technical solution is politically neutral.
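A minimal numerical sketch can make the incompatibility concrete. The figures below are purely illustrative (none come from the chapter): two groups are scored by the same perfectly calibrated rule, meaning a score of s corresponds to a true reoffense probability of exactly s, yet because the groups have different base rates, their false positive rates diverge.

```python
def group_rates(p_high, s_high=0.6, s_low=0.2):
    """False-positive rate for a calibrated two-score population.

    p_high: fraction of the group assigned the high risk score.
    Calibration means a score of s corresponds to a true positive
    probability of exactly s, so the scoring rule itself is identical
    and "fair" in the calibration sense for both groups.
    """
    flagged_negatives = p_high * (1 - s_high)          # scored high, did not reoffend
    unflagged_negatives = (1 - p_high) * (1 - s_low)   # scored low, did not reoffend
    fpr = flagged_negatives / (flagged_negatives + unflagged_negatives)
    base_rate = p_high * s_high + (1 - p_high) * s_low  # group's true positive rate
    return base_rate, fpr

# Same calibrated scoring rule, different base rates:
base_a, fpr_a = group_rates(p_high=0.3)  # lower-base-rate group
base_b, fpr_b = group_rates(p_high=0.6)  # higher-base-rate group
print(f"Group A: base rate {base_a:.2f}, FPR {fpr_a:.2f}")
print(f"Group B: base rate {base_b:.2f}, FPR {fpr_b:.2f}")
```

Under these illustrative numbers, Group B's false positive rate is more than double Group A's even though both groups are scored by the same calibrated rule. Equalizing the false positive rates would require abandoning calibration, which is the value choice the impossibility results force into the open.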
2.2 Bias Amplification
A critically important feature of AI systems deployed in surveillance contexts is that they do not merely inherit the biases present in their training data — they can amplify those biases. This happens through the feedback mechanisms we discussed in Chapter 36: the system's biased outputs shape real-world decisions, which generate new biased data, which train the next iteration of the system.
The amplification dynamic is particularly concerning in AI systems designed for continuous learning — systems that update their models based on new data as it comes in. A continuously learning facial recognition system deployed in a context where operators initially scrutinize darker-skinned faces more carefully (producing more flagging data for darker-skinned faces) will learn, over time, to flag darker-skinned faces more readily. The bias present in deployment practices is absorbed into the model and becomes structurally embedded.
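The feedback mechanism can be sketched in a few lines. This is a deliberately simplified toy model, not any deployed system: two districts with identical underlying crime, a small initial recording bias, and a greedy allocation rule that always patrols the district with the most recorded arrests, with only patrolled districts generating new records.

```python
# Hypothetical two-district simulation (all numbers illustrative):
# the predictive system sends patrols wherever recorded arrests are highest,
# and only patrolled districts generate new arrest records.
true_crime = {"north": 10, "south": 10}  # identical underlying crime rates
recorded = {"north": 12, "south": 10}    # small initial recording bias

for _ in range(20):
    target = max(recorded, key=recorded.get)  # allocate by historical records
    recorded[target] += true_crime[target]    # only patrolled crime gets recorded

print(recorded)  # the initial 2-arrest gap has grown to a 202-arrest gap
```

The greedy allocation rule is an exaggeration for clarity, but the qualitative result matches the "runaway feedback loop" analyses of predictive policing: the system never collects the data that could correct its initial bias, because it only looks where it already expects to find something.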
The same dynamic applies to natural language processing systems trained on historical text: they absorb the racial, gender, and other biases present in that text and apply them to new contexts. AI systems trained on historical hiring data learn to prefer candidates similar to historical hires — reflecting whatever biases shaped historical hiring decisions. AI systems trained on historical criminal sentencing data learn to recommend sentences that reflect the racial disparities of historical sentencing.
2.3 Deepfakes and Synthetic Media
The development of generative AI systems capable of producing realistic synthetic media — photographs, video, audio — creates a new dimension in the surveillance landscape. Deepfakes (synthetic videos created using deep learning techniques) were initially developed for entertainment applications but have proliferated rapidly in potentially harmful ones.
From a surveillance perspective, deepfakes create a specific threat to evidentiary integrity: if synthetic video can be created that shows a person doing something they did not do, then video evidence — the primary product of surveillance systems — becomes contestable in ways it previously was not. This cuts in multiple directions:
- Deepfakes can be used to create false evidence of crimes (framing an innocent person)
- Deepfakes can be used to create plausible deniability for real crimes ("that video of me is a deepfake")
- Deepfakes can be used as counter-surveillance tools (creating confusion about surveillance-captured identities)
- Deepfakes of political figures can be used to spread disinformation with the apparent authority of video documentation
The proliferation of synthetic media does not merely create a misinformation problem. It creates an epistemic problem for surveillance: if video surveillance evidence becomes presumptively contestable, the entire evidentiary architecture built on surveillance footage requires renegotiation.
🔗 Connection to Theme — Consent as Fiction: Deepfakes are the ultimate expression of consent-as-fiction in a new register: they create representations of people doing things they did not consent to do, expressed in a medium (video) that carries presumptive authority. The consent a person gives to appear on camera does not extend to synthetic appearances they cannot anticipate. The surveillance infrastructure that captures real images can be exploited to generate synthetic ones.
Section 3: Expanding the Biometric Frontier
3.1 Beyond the Face — Gait, Voice, Vein, Heartbeat
Facial recognition is the most publicly visible biometric surveillance technology, but it is only one of several biometric modalities that are currently being developed or deployed at scale.
Gait recognition: Human gait — the pattern of movement characteristic of how a person walks — is distinctive enough to support identification, and gait recognition systems do not require the person being identified to face the camera or be in close proximity to it. Chinese researchers and commercial systems have developed gait recognition capable of identifying people at distances of up to 50 meters, from footage captured by standard surveillance cameras, with claimed accuracy rates above 90 percent. Gait recognition is currently deployed in Chinese cities and is being evaluated by law enforcement agencies in other countries.
Voice recognition: Voice biometrics — identifying people by the acoustic characteristics of their voice — is deployed commercially in customer service contexts (many banks use voice recognition to identify callers) and has been adopted by law enforcement agencies for identifying people from recorded communications. Voice biometrics create a passive surveillance modality: in a world of ubiquitous ambient audio recording, voice recognition systems can identify people from their participation in ordinary conversations.
Vascular biometrics: The pattern of veins in a person's hand or wrist is distinctive and can be captured by near-infrared cameras. Vascular biometrics are used in some high-security access control contexts and are being developed for retail payment applications.
Remote physiological monitoring: Emerging research demonstrates the possibility of identifying individuals by their cardiac rhythm (captured remotely by infrared sensors), their breathing patterns, and other physiological characteristics. Pentagon research has developed a system that can identify individuals by their cardiac signature at distances of up to 200 meters, through clothing.
The expansion of biometric modalities beyond the face represents a qualitative change in the surveillance landscape. Facial recognition is evadable — people can wear masks, glasses, hats, face paint. Gait recognition defeats those standard countermeasures. Remote physiological monitoring is essentially unevadable: you cannot stop having a heartbeat. The progression of biometric surveillance moves toward modalities that are less and less subject to individual counter-surveillance strategies.
⚠️ Common Pitfall: It is tempting to evaluate biometric surveillance modalities independently: perhaps facial recognition is too inaccurate to rely on, but gait recognition might be better, or voice recognition more appropriate for a particular use case. This modality-by-modality evaluation misses a crucial systemic point: when multiple biometric modalities are deployed simultaneously, they combine to create a surveillance system far more powerful than any individual component. Wearing a mask defeats facial recognition but not gait recognition. Changing your gait defeats gait recognition but not voice recognition. Staying silent defeats voice recognition but not cardiac signature. Combined biometric surveillance creates a nearly inescapable identification system.
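A back-of-envelope calculation shows why combining modalities matters. The evasion rates below are entirely hypothetical, and the calculation assumes the modalities fail independently, which is itself a simplifying assumption, but the multiplicative structure is the point.

```python
# Hypothetical per-modality evasion probabilities (illustrative only):
# the chance each individual system fails to identify a given person.
evasion = {"face": 0.90, "gait": 0.40, "voice": 0.50, "cardiac": 0.05}

# Assuming independent failures, evading the combined system requires
# evading every modality at once, so the probabilities multiply.
p_unidentified = 1.0
for p in evasion.values():
    p_unidentified *= p

print(f"{p_unidentified:.3f}")  # under 1%, even with a 90% face-evasion rate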
3.2 DNA as Surveillance Infrastructure
The expansion of DNA databases represents a parallel biometric frontier that has received less attention than facial recognition but is arguably more consequential for long-term privacy.
Law enforcement DNA databases — CODIS in the United States — were initially populated by the profiles of people convicted of serious crimes. They have expanded, through legislative changes, to include profiles of people arrested but not convicted, people on probation, immigration detainees, and, in some states, people convicted of minor offenses. The expansion of who is included has consistently tracked the racial and class composition of the criminal justice system: CODIS disproportionately represents Black, Latino, and Indigenous Americans.
Consumer genetic services — 23andMe, AncestryDNA, and others — have created a parallel genetic database that, as of the early 2020s, included the genetic profiles of tens of millions of Americans. These databases are not public law enforcement resources, but they have been accessed by law enforcement through legal process. More significantly, they enable familial searching: using a partial DNA match to identify not the person whose DNA was found, but their relatives, who may then be investigated. The person whose DNA was collected by AncestryDNA for genealogy purposes may not know that they have, through that submission, created a partial genetic record accessible to law enforcement investigation of their entire family.
📊 Real-World Application: The arrest of the Golden State Killer, Joseph James DeAngelo, in 2018 was made possible by uploading crime scene DNA to a genealogy database, identifying distant relatives, and working backward through family trees to identify DeAngelo. The case was celebrated as a remarkable investigative achievement. It was also a demonstration that consumer genetic databases are, effectively, partial genetic profiles of the entire American population — accessible to sufficiently motivated investigators through means that bypass the Fourth Amendment protections that govern direct law enforcement DNA collection.
Section 4: The Social Credit Model — Not Just China
4.1 China's Social Credit System — What It Actually Is
The Chinese social credit system has achieved near-mythological status in Western surveillance discourse — frequently described as a single, omniscient scoring system that rates every Chinese citizen and determines their access to education, employment, transportation, and social participation based on a unified score. This description is significantly inaccurate.
What actually exists in China is a collection of overlapping systems at national, provincial, and municipal levels, with different scoring mechanisms, different data sources, different consequence structures, and different implementation statuses. Some components are well-developed and actively enforced: the corporate credit system, which rates businesses for regulatory compliance, is operational. The "dishonest persons" blacklist, which restricts the ability of people who have defaulted on court-ordered financial obligations to purchase airline and rail tickets, is operational and has been applied to tens of millions of people.
The unified citizen scoring system with behavioral ratings across all life domains — the popular image of the Chinese social credit system — is less developed than commonly assumed, varies enormously by locality, and is not yet a single national system. China has the aspiration and the infrastructure capacity to build such a system; it has not yet fully built it.
This distinction matters because it shifts the analysis. The real-world Chinese social credit system raises genuine civil liberties concerns — it is used to suppress political dissent, punish religious practice, and target ethnic minorities, particularly Uyghurs in Xinjiang — but it is a more complex and contested phenomenon than the Western mythologized version. Understanding what it actually is, as opposed to what we fear it represents, enables more precise analysis of what elements of the model are spreading globally.
4.2 Social Credit Logics Beyond China
The logic of social credit — using comprehensive behavioral data to assign scores that determine access to social goods — is not uniquely Chinese. Several analogues are present in Western societies:
- Credit scores: The FICO score and its equivalents are explicit social credit systems, assigning numerical scores based on financial behavioral history that determine access to loans, housing, and increasingly employment
- Insurance behavioral scoring: Auto insurers in the United States have adopted "usage-based insurance" that scores driving behavior through telemetry, determining premiums. Home insurers use credit scores as proxy behavioral ratings
- Corporate ESG scoring: Environmental, social, and governance ratings for corporations represent a form of institutional social credit
- Social media reputation systems: Systems of likes, followers, ratings, and reputational signals on social media platforms function as social credit in informal but consequential ways
- Algorithmic employment screening: Systems that score job applicants based on psychometric, behavioral, and social media data are in use at major employers
The argument is not that these Western analogues are identical to the Chinese system. The argument is that the logic of social credit — comprehensive behavioral scoring for purposes of access determination — is not alien to Western liberal societies. It is already deeply embedded in them, operating through commercial rather than state mechanisms, which provides somewhat different accountability structures but similar functional effects on individual behavior.
Section 5: Neural Surveillance — The Ultimate Frontier
5.1 Brain-Computer Interfaces and Neural Data
Brain-computer interfaces (BCIs) — systems that establish a direct communication link between the brain and a computer — represent the most far-reaching surveillance frontier currently being actively developed. Neuralink, the company founded by Elon Musk, received FDA approval to begin human trials in 2023. BrainGate and other academic research programs have demonstrated that BCIs can enable paralyzed patients to control digital interfaces through thought. Kernel and other companies are developing non-invasive BCI systems for consumer applications.
The surveillance implications of BCIs at scale are potentially unprecedented. Neural data — information about the electrical activity of the brain — contains information that no other data source currently accessible to surveillance systems includes:
- The content of thoughts before they are expressed in words or actions
- Emotional states (anxiety, fear, attraction, desire) as they occur
- Attention patterns and cognitive engagement
- Pre-decisional mental states — the brain's preparation for an action before it occurs
No current surveillance system accesses this information. Even the most comprehensive behavioral surveillance systems discussed in this book observe outputs of mental processes: words spoken or written, movements made, purchases executed. BCI-based surveillance would access the processes themselves.
The threshold between behavioral and neural surveillance is qualitatively significant. Every framework for thinking about privacy — from the "nothing to hide" framework to contextual integrity to the reasonable expectation of privacy — has been developed in a world where thoughts were private by default, because there was no technology to access them. Neural surveillance would break that default in a fundamental way.
🎓 Advanced Note: Neuroscientists emphasize that the relationship between neural activity and the content of thought is far more complex than popular accounts suggest. Current BCIs do not "read minds" in the sense of extracting propositional content from neural signals; they read movement intention patterns that can be translated into cursor movements or text. But the trajectory of improvement in neural decoding — demonstrated by research labs that have reconstructed speech and images from fMRI brain data — suggests that more content-rich neural surveillance is a realistic medium-term prospect. The question is not whether it is possible but when it becomes feasible at scale.
5.2 The Behavioral Surplus of the Mind
Shoshana Zuboff's concept of behavioral surplus — the excess data produced by human activity that surveillance capitalism converts into predictive products — applies to neural data with disturbing precision. If neural monitoring devices become consumer products (as Musk and others have suggested they will), the data they generate will be processed by the companies that build them. That processing will generate behavioral surplus in the form of neural patterns that, aggregated across users, reveal correlations between neural states and subsequent behaviors.
The advertising applications are obvious: if a neural monitoring device can detect the moment when a user experiences heightened emotional engagement with an advertisement, that information is enormously valuable to the advertiser. More troubling applications include: detecting neural signatures associated with political beliefs, sexual orientation, religious faith, or vulnerability to addiction — information that is, in many legal systems, protected against collection in other forms but that neural monitoring might make readily accessible.
This is not primarily a future concern. It is a present architectural decision. The neural monitoring devices being designed today will, if they achieve commercial scale, produce neural data infrastructure that operates according to the same logic as every other behavioral surveillance system examined in this book. The time to design privacy into neural monitoring technology is before it exists at scale, not after.
Section 6: Ambient Surveillance and the End of Off
6.1 When Everything Is Always Recording
The trajectory of surveillance technology points toward what researchers call ambient surveillance — a condition in which recording and monitoring are continuous, ubiquitous, and passive. In an ambient surveillance world, surveillance is not something that happens to you in particular contexts (you enter a store with cameras, you use a monitored device) but something that characterizes all contexts. The camera is everywhere. The microphone is always on. The sensors are always sensing.
We are approaching but have not reached this condition. Smart speakers (Amazon Echo, Google Home) are always-listening audio devices in tens of millions of homes. Smart televisions monitor viewing patterns and, in some implementations, ambient audio. Fitness trackers capture continuous physiological data. Connected vehicles record location, speed, and increasingly the content of in-car conversations. Smart city infrastructure — traffic cameras, air quality sensors, noise monitors, pedestrian counters — creates a sensing network across urban public space.
The normalization of each individual ambient surveillance technology makes the cumulative condition easier to reach. Each device is assessed individually ("I'm comfortable with my smart speaker because I chose to have it") rather than as a component of an interconnected surveillance infrastructure. The architecture of ambient surveillance is assembled piece by piece, by individual consumer choices, with no single moment at which the totality becomes visible.
🔗 Connection to Theme — Normalization: The normalization dynamic in ambient surveillance is the same as the normalization dynamic we have traced throughout this book. The always-listening smart speaker is normalized first as a convenience device; then as a standard feature of the connected home; then as an assumed presence in homes without one. The normalization of the individual device normalizes the ambient surveillance condition that devices collectively produce.
Section 7: Three Scenarios for Surveillance in 2050
The following three scenarios are not predictions. They are scenarios — internally consistent extrapolations of different choices about surveillance governance, technology deployment, and political economy. They are designed to be used as analytical tools: thinking through the full implications of different paths from the present.
Scenario A: The Libertarian Path
In this future, government regulation of surveillance technology has remained minimal, deference to market competition has prevailed, and the primary governance mechanism for surveillance is individual consumer choice and contract.
The surveillance landscape of 2050 A.D. (Alternate Dystopia) is comprehensive, sophisticated, and privately owned. Citizens carry multiple ambient monitoring devices; their homes, vehicles, and workplaces are saturated with sensors; their behavioral data is aggregated across hundreds of platforms and processed by a small number of data brokers who sell predictive products to employers, insurers, lenders, government agencies, and advertisers. Neural monitoring is available for consumer purchase; the leading brands are owned by the same companies that own the dominant social media platforms.
Surveillance in this scenario is not primarily government surveillance. The state uses commercial data infrastructures through a combination of purchase, legal process, and formal and informal arrangements with technology companies. The surveillance state is a subscriber to the surveillance economy rather than its primary architect.
Individual privacy rights are robust on paper and nearly meaningless in practice. Contract law governs the relationship between individuals and their surveillance environments; the contracts are agreed to through processes that provide no meaningful opportunity for informed consent.
Scenario B: The Authoritarian Path
In this future, some combination of crisis conditions (pandemic, terrorism, climate disruption, civil unrest) and political movement toward concentrated executive power has produced a surveillance state in the more traditional sense: government-controlled surveillance infrastructure deployed for political control.
The surveillance landscape of 2050 B.D. (Alternate Dystopia) includes real-time facial recognition in all public spaces, integrated social credit scoring, AI-powered monitoring of digital communications, and neural monitoring requirements for certain populations (released prisoners, political dissidents, government employees). The infrastructure was built in stages over twenty years, each stage justified by a specific security rationale, each stage normalizing the next.
This scenario is not uniquely Chinese. The history of democratic erosion and surveillance expansion in Hungary, Turkey, India, and — in more limited ways — in the United States during periods of crisis demonstrates that democratic societies are not immune to the authoritarian path. The infrastructure of surveillance built in democratic contexts — commercial databases, AI systems, biometric registries — can be repurposed by authoritarian governments that acquire power through democratic means.
Scenario C: The Democratic-Regulated Path
In this future, a combination of regulatory frameworks, community advocacy, international coordination, and technological design choices has produced a surveillance environment that is regulated, accountable, and subject to meaningful democratic oversight.
The surveillance landscape of 2050 C.D. (Alternate Democracy) includes restrictions on facial recognition in public spaces (use permitted only with judicial authorization); comprehensive privacy legislation with meaningful enforcement; algorithmic impact assessments required before deployment of AI surveillance systems; community oversight boards with real authority over local surveillance decisions; and international treaties governing cross-border data flows and surveillance cooperation.
This future is not a surveillance-free utopia. The technology still exists. Crime and terrorism still create pressures for surveillance. Commercial data collection still occurs. The difference is that surveillance operates within a framework of democratic governance: it requires authorization, produces transparency, and is subject to challenge. The watcher is also watched.
This scenario is achievable. It requires political will, sustained advocacy, and international coordination at a scale that has not yet materialized. But the regulatory frameworks that would produce it — modeled on the EU's GDPR and AI Act, on municipal surveillance ordinances, on community control legislation — already exist in embryonic form.
Section 8: Jordan Writes About 2050
Dr. Osei has assigned each student in the surveillance seminar a 500-word piece: a reflection on what they expect surveillance to look like in 2050, grounded in the analysis of the course.
Jordan writes:
When I try to imagine surveillance in 2050, I keep returning to a question I've been wrestling with all semester: is the history of surveillance the history of technology, or the history of power?
If it's technology, then 2050 looks radically different from today. Neural interfaces. Ambient sensors in every surface. AI systems that predict behavior before the person is aware of the impulse. Gait recognition that knows who you are from across the street. A biometric registry so comprehensive that the only way to not be identifiable is to not exist.
If it's power, then 2050 looks remarkably familiar. The watcher and the watched. The authority that sees without being seen. The population that must prove its authorization to exist in public space. The body that is made legible so that it can be managed. This has been true for four hundred years of the technologies I've studied this semester. The lantern laws were primitive surveillance. The slave pass system was primitive biometrics. The difference between them and facial recognition is computation speed and storage capacity, not fundamental architecture.
I think it's both, but the power story matters more.
What I believe about 2050 is this: we will have the technology to build a surveillance system that is essentially totalizing — one that knows everything about everyone, all the time. Whether we build that system is a choice. Not an inevitable technological unfolding, not a market outcome, not a neutral reflection of security requirements. A choice. Made by governments, corporations, communities, and individual people in thousands of small decisions every day.
The reason I don't know what 2050 looks like is that I don't know what choices we'll make. I know that every previous generation has been offered a version of the same choice, and that many of them chose surveillance — because it felt safer, or because they were afraid, or because the people making the choice were not the people who would be most watched. I know that this asymmetry — between those who decide about surveillance and those who bear it — is the most persistent feature of surveillance history.
I also know that people have resisted. That communities have pushed back. That regulations have been enacted. That some of the worst versions of surveillance have been contained or dismantled.
So my prediction for 2050 is not a number or a technology. It's a question: Who, in 2050, will be watching whom? If the answer is "roughly the same people who have always watched roughly the same other people," then we have failed to learn anything from the history we've been taught. If the answer is "it depends — on the choices made by your generation, mine, and the ones between" — then we still have work to do. But at least we know what the work is.
Chapter Summary
This chapter has traced the surveillance trajectories currently underway and the futures they could produce. We moved from the reactive-to-predictive shift in policing (from documentation to forecast to preemptive intervention), through the capabilities and limitations of AI surveillance systems (powerful at pattern recognition, limited in context, prone to bias amplification), to the expanding biometric frontier (gait, voice, vein, cardiac signature, DNA), to the global spread of social credit logics, to the unprecedented implications of brain-computer interfaces and neural surveillance, to the condition of ambient surveillance that individual devices collectively produce.
We examined the Minority Report frame and found it simultaneously useful and misleading: useful as a reminder that predicting individual behavior at the level of specific crimes remains difficult; misleading as a suggestion that predictive surveillance is therefore a distant fiction, when aggregate behavioral prediction is a present reality.
We proposed three scenarios for 2050: the libertarian path (comprehensive private surveillance with formal but meaningless individual rights), the authoritarian path (state-controlled surveillance deployed for political control), and the democratic-regulated path (surveillance within a framework of democratic accountability and meaningful rights). These scenarios are not prophecies. They are tools for thinking about the choices that will determine which future we inhabit.
Jordan's 500-word piece offers the chapter's most important synthesis: what 2050 looks like depends not on technology but on power — and whether the distribution of surveillance power, so consistently asymmetric throughout history, can be made more just.
Key Terms
- Predictive surveillance: Surveillance oriented toward forecasting future behavior rather than documenting past events
- Bias amplification: The tendency of AI systems to amplify, rather than merely inherit, the biases present in their training data
- Gait recognition: Biometric identification based on individual patterns of movement; defeats countermeasures, such as masks, that are effective against facial recognition
- Social credit system: Comprehensive behavioral scoring used to determine access to social goods; the Chinese system is the most developed but analogues exist in Western commercial contexts
- Ambient surveillance: The condition in which recording and monitoring are continuous and ubiquitous across all environments
- Neural surveillance: Surveillance accessing the content or patterns of brain activity through brain-computer interfaces or other neural monitoring technologies
- Behavioral surplus (Zuboff): Excess data produced by human activity that surveillance systems convert into predictive products; applicable to neural data as to behavioral data
- Familial searching: Using a partial DNA match to identify relatives of a person whose DNA was collected, enabling investigation of people who never submitted their own DNA
- Algorithmic fairness impossibility theorem: Mathematical result showing that several intuitively desirable definitions of algorithmic fairness (such as calibration and equal error rates across groups) cannot all be satisfied simultaneously when groups differ in base rates
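The impossibility result in the last key term can be seen with a few lines of arithmetic. The sketch below uses illustrative numbers (not drawn from the chapter): a single classifier with identical error rates is applied to two groups whose base rates differ, and the positive predictive value — the chance that a flagged person is actually a true positive — comes out unequal, which is the conflict the theorem formalizes.

```python
# Illustrative sketch of the fairness impossibility result.
# Numbers are hypothetical, chosen only to make the arithmetic visible.

def ppv(tpr: float, fpr: float, base_rate: float) -> float:
    """Positive predictive value (precision) of a classifier with the
    given true/false positive rates, applied to a population in which
    `base_rate` is the fraction of actual positives."""
    true_pos = tpr * base_rate
    false_pos = fpr * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# The SAME classifier -- equal error rates for both groups:
TPR, FPR = 0.9, 0.1

ppv_a = ppv(TPR, FPR, base_rate=0.5)   # group A: 50% base rate
ppv_b = ppv(TPR, FPR, base_rate=0.2)   # group B: 20% base rate

print(f"PPV, group A: {ppv_a:.3f}")    # 0.900
print(f"PPV, group B: {ppv_b:.3f}")    # ~0.692
```

A flagged member of group B is far more likely to be a false positive than a flagged member of group A, even though the classifier treats both groups identically at the level of error rates. Equalizing PPV instead would force the error rates apart — the trade-off that makes the fairness criteria jointly unsatisfiable whenever base rates differ.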
Discussion Questions
- The chapter describes three scenarios for surveillance in 2050. Which do you find most likely, and why? What would have to happen — politically, technologically, socially — to redirect from your predicted scenario to the democratic-regulated scenario?
- Jordan's essay argues that the history of surveillance is primarily a history of power rather than technology. Do you agree? What evidence from this chapter and earlier chapters supports or challenges this framing?
- The chapter describes the "ambient surveillance assembly problem": each individual device seems acceptable, but the devices collectively produce a surveillance condition that no one has chosen. Who, if anyone, is responsible for this outcome? What regulatory response, if any, is adequate to it?
- Neural surveillance is described as qualitatively different from behavioral surveillance because it accesses mental processes rather than their outputs. Is this a morally significant distinction? What privacy frameworks are adequate to the challenge of neural surveillance?
- The social credit logic, the chapter argues, is already present in Western commercial societies (credit scores, insurance behavioral scoring, algorithmic hiring). What is the relevant distinction, if any, between these Western forms and the Chinese social credit system? Does this distinction hold under pressure?