Part 5: Privacy and Security — The Surveillance Dimension of AI Ethics
Introduction
Artificial intelligence did not create the surveillance society, but it has dramatically expanded its reach, reduced its cost, and concentrated its power in ways that are qualitatively different from what came before. Before AI, surveillance at scale required significant human labor — analysts who watched footage, read files, and matched records. That labor cost constrained the scope of surveillance to what institutions could afford to pay attention to. AI removes that constraint. Facial recognition systems can scan millions of faces automatically. Behavioral analytics can profile billions of user interactions continuously. Predictive systems can score the risk profiles of entire populations without anyone reviewing individual cases. The architecture of surveillance has changed, and with it the ethical stakes of privacy.
Part 5 examines privacy and security through the specific lens that AI creates: not merely as abstract rights or corporate compliance obligations, but as a structural condition shaped by who has the capacity to collect information, who controls how it is used, and who bears the consequences of its misuse. The surveillance dimension of AI ethics is about asymmetry — the profound imbalance of information and power between the entities that build and deploy AI systems and the individuals whose data fuels them and whose behavior is shaped by them.
Privacy, in this framing, is both a fundamental right and a business risk. As a fundamental right, it protects the conditions for individual autonomy, dignity, and freedom — the ability to form opinions, maintain relationships, and pursue projects without being permanently monitored and categorized by institutions whose interests may not align with your own. As a business risk, it encompasses regulatory exposure under an increasingly demanding global privacy regime, reputational vulnerability when data practices are exposed, and the liability that follows from inadequate security. Part 5 develops both dimensions, because business professionals need to understand both and because the two are more deeply connected than they sometimes appear.
The Asymmetry of Surveillance
The organizing concept of this part is asymmetry. Surveillance systems are inherently asymmetric: one party knows about the other, and the other does not know what is known, how it is being used, or what consequences flow from it. This asymmetry is not a neutral technical feature; it is a power relation. The party with information has leverage over the party without it. They can make decisions that affect the surveilled party without their knowledge or consent. They can use behavioral predictions to shape choices in ways the subject cannot detect or resist. They can sell or share information in ways the subject has no meaningful ability to prevent.
AI amplifies this asymmetry in at least three ways. First, it dramatically increases the quantity of information that can be collected and processed at any given cost. Second, it enables inferences about sensitive attributes (health status, political views, sexual orientation, financial vulnerability) from data that appears innocuous — purchase histories, location patterns, social connections, browsing behavior. Third, it makes the surveillance less visible: AI-driven behavioral profiling leaves no obvious trace, whereas traditional surveillance — a camera, a security guard, an investigator — is often apparent to the person being watched.
Understanding this amplified asymmetry is essential for every chapter in Part 5.
Chapter Previews
Chapter 23: Data Privacy Fundamentals
This chapter establishes the foundational concepts of data privacy as they apply to AI systems: the nature of personal data, the principles of purpose limitation, data minimization, and consent, and the major regulatory frameworks — GDPR, CCPA, and their equivalents — that govern data collection and processing. It also introduces the concept of contextual integrity, which holds that privacy is violated not merely when data is collected without consent but when it flows in ways that violate the norms of the context in which it was originally shared. This framework is particularly important for AI systems that combine data from multiple contexts to generate inferences their subjects would not anticipate.
Chapter 24: Surveillance Capitalism and Behavioral Manipulation
This chapter examines the economic model that makes large-scale AI-driven surveillance commercially rational: the extraction of behavioral data as a raw material, processed into predictions about behavior and sold to advertisers and others who seek to influence it. Drawing on the foundational scholarship in this area, it examines how surveillance capitalism creates incentives for the continuous expansion of data collection, the colonization of previously private spaces, and the development of AI systems specifically designed to modify behavior in commercially useful directions. The chapter asks what this economic model means for democratic societies and for individuals' capacity for autonomous choice.
Chapter 25: Cybersecurity and AI
AI creates cybersecurity challenges in two directions: it is both a target of attacks and a tool for conducting them. This chapter examines the cybersecurity risks specific to AI systems — adversarial attacks that manipulate model inputs to produce incorrect outputs, model inversion attacks that extract training data from deployed models, and the particular vulnerabilities of AI systems trained on sensitive personal data. It also examines how AI is being used by both defenders and attackers in cybersecurity, and what this two-sided dynamic means for organizations responsible for protecting personal data and critical systems.
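To make the idea of an adversarial attack concrete before Chapter 25 develops it, consider a minimal sketch: a small, bounded change to an input flips the output of a classifier. Everything below — the toy linear model, its weights, and the perturbation budget — is invented purely for illustration, not drawn from any real system.

```python
import numpy as np

# Illustrative sketch only: a toy evasion attack on a linear classifier.
# The weights, input, and epsilon below are invented for illustration.

w = np.array([1.0, -2.0])   # toy model weights
b = 0.0

def predict(x):
    """Classify as 1 if the linear score w.x + b is positive."""
    return int(w @ x + b > 0)

x = np.array([2.0, 0.5])    # a legitimate input, correctly classified
epsilon = 0.6               # per-feature perturbation budget

# Fast-gradient-style step: move each feature against the sign of the
# score's gradient. For a linear model, that gradient is simply w.
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # 1
print(predict(x_adv))  # 0: a small, bounded change flips the output
```

The point of the sketch is the asymmetry it reveals: the perturbation is invisible at the level of any single feature, yet decisive at the level of the model's decision.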
Chapter 26: Biometric Data and Facial Recognition
Biometric data — fingerprints, facial geometry, iris patterns, voice prints, gait characteristics — occupies a special place in privacy law and ethics because it is inherently identifying, cannot be changed if compromised, and can be collected without the subject's awareness or consent. This chapter examines the legal and ethical status of biometric data collection, the documented harms from facial recognition errors (particularly their concentration in communities of color), the regulatory responses in multiple jurisdictions, and the conditions under which biometric AI systems can be deployed responsibly. It takes seriously both the legitimate uses of biometrics and the documented abuses.
Chapter 27: Privacy-Preserving AI Techniques
The good news in Part 5 is that privacy and useful AI are not irreconcilable. A growing body of technical research has produced methods for building AI systems that generate value from data without exposing the individuals whose data is used. This chapter examines the most important privacy-preserving techniques — federated learning, differential privacy, secure multi-party computation, and synthetic data generation — explaining how they work, what they protect against, and where their limitations lie. It also addresses the organizational and economic conditions under which these techniques are actually adopted, which turns out to be as much a governance question as a technical one.
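Of the techniques Chapter 27 covers, differential privacy is the easiest to preview in a few lines. The sketch below shows the Laplace mechanism applied to a counting query; the parameter values and variable names are illustrative assumptions, not a production recipe.

```python
import numpy as np

# Illustrative sketch of the Laplace mechanism, one building block of
# differential privacy. All parameter values are invented for illustration.

rng = np.random.default_rng(42)

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing any one
    person changes the answer by at most 1, so Laplace noise with scale
    sensitivity/epsilon statistically masks each individual's presence.
    """
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 1000                       # e.g. users matching some query
noisy = dp_count(true_count, epsilon=0.5)
print(noisy)  # close to 1000, but no single person's data is exposed
```

The design choice embodied here is the one the chapter returns to: smaller epsilon means more noise and stronger privacy, so choosing epsilon is a values decision, not merely a technical one.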
Key Questions This Part Addresses
- What makes AI-enabled surveillance qualitatively different from pre-AI forms of monitoring, and why does that difference matter ethically?
- What are individuals' legal rights with respect to data collection, processing, and AI-driven inference under current regulatory frameworks?
- How does surveillance capitalism create structural incentives for privacy violation, and what would meaningful reform require?
- What cybersecurity risks are specific to AI systems, and how should organizations managing AI deployments approach security?
- When, if ever, is the deployment of facial recognition and other biometric AI systems ethically justified, and what conditions must be satisfied?
- What privacy-preserving AI techniques are available, and what do they realistically protect?
The Five Recurring Themes in Part 5
Power distribution is the master theme of this part. Surveillance is a form of power: the power to know, to categorize, to predict, and to influence. AI-driven surveillance concentrates this power in the hands of large technology companies, governments, and data-rich organizations in ways that are qualitatively new. Chapter 24 in particular examines how this concentration of surveillance power maps onto broader questions of democratic governance and corporate accountability.
Who bears harms and who captures benefits is starkly asymmetric in the surveillance context. The organizations that collect and process personal data capture the benefits (revenue from targeted advertising, efficiency gains from behavioral prediction, competitive advantage from proprietary behavioral models). The individuals whose data is collected bear the risks (privacy violations, manipulation, discrimination, security breaches). This distributional structure is part of what makes the business model of surveillance capitalism ethically contested.
Technical systems and human values connects directly to Chapter 27's treatment of privacy-preserving AI. These techniques represent one of the most promising areas of AI ethics in which technical innovation and ethical values reinforce rather than trade off against each other. They also illustrate the book's broader argument that technical and ethical analysis must proceed together: the decision to invest in privacy-preserving AI is a governance and values decision that technical tools alone cannot make.
Governance under uncertainty appears throughout the regulatory chapters (23 and 26 in particular), where organizations must navigate a global patchwork of privacy regulations that is actively evolving, with significant variation across jurisdictions and real uncertainty about how existing standards will be applied to new AI capabilities.
The innovation versus precaution tension is acute in the biometrics chapter (26), where the technology is advancing faster than regulatory and ethical frameworks. The potential benefits of biometric AI — in security, healthcare, and accessibility — are real, and so are the documented harms. Calibrating precaution in this space requires the kind of rigorous analysis this part develops.
Cross-References Within Part 5
Chapter 23 (Data Privacy Fundamentals) is the regulatory and conceptual foundation for the rest of the part and should be read first. The privacy principles it introduces — purpose limitation, data minimization, contextual integrity — are the evaluative standards against which the practices described in Chapters 24, 25, and 26 are assessed.
Chapter 24 (Surveillance Capitalism) connects backward to Chapter 16 (Transparency in Marketing) in Part 3. The commercial AI transparency questions raised there cannot be fully understood without the economic analysis of Chapter 24, which explains why organizations have structural incentives to resist transparency about their behavioral profiling systems. Together, these chapters provide a comprehensive picture of AI ethics in the commercial data economy.
Chapter 26 (Biometrics) connects forward to Chapter 30 (Criminal Justice) in Part 6, where facial recognition's use by law enforcement is examined in the context of its documented racial disparities and its potential for chilling effects on civil liberties. Readers interested in the criminal justice applications should read Chapter 26 first.
Chapter 27 (Privacy-Preserving AI) connects back to the bias and fairness chapters in Part 2. Some privacy-preserving techniques, particularly differential privacy, interact with fairness metrics in ways that practitioners need to understand: adding privacy noise can differentially affect model performance across demographic groups, potentially worsening fairness outcomes even as it improves privacy protections.
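The privacy–fairness interaction just described can be seen in a few lines: adding the same calibrated noise to group-level counts produces much larger relative error for smaller groups, which is one route by which differential privacy can degrade measured outcomes for minority populations. The group names and sizes below are hypothetical, chosen only to make the effect visible.

```python
import numpy as np

# Toy illustration of the interaction described above: identical Laplace
# noise yields far larger *relative* error for a small group than for a
# large one. Group names and sizes are hypothetical.

rng = np.random.default_rng(7)
epsilon = 0.5
scale = 1.0 / epsilon                 # counting queries have sensitivity 1

group_counts = {"larger_group": 9000, "smaller_group": 300}
rel_errors = {}
for name, count in group_counts.items():
    noisy = count + rng.laplace(0.0, scale, size=10_000)
    rel_errors[name] = np.mean(np.abs(noisy - count)) / count
    print(f"{name}: mean relative error {rel_errors[name]:.3%}")
```

Both groups receive noise of identical absolute magnitude; it is the denominator that differs, which is why uniform privacy protection can translate into non-uniform utility loss.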