Appendix F: Frequently Asked Questions

Addressing Misconceptions and Common Student Questions


How to Use This Appendix

The questions in this appendix fall into two categories: common misconceptions that are worth engaging analytically, and practical questions about how surveillance works in everyday life. For the misconceptions, the goal is not to dismiss the underlying intuition but to complicate it — to show where the argument breaks down and why more careful analysis leads to different conclusions. For the practical questions, the answers are as concrete and current as this textbook can make them, though you should verify the most technical details against current expert sources.

The "nothing to hide" argument gets the first and longest treatment because Jordan Ellis — and you — will encounter it constantly. Getting good at responding to it is a core skill of surveillance literacy.


Part I: Misconceptions and Their Responses

"If I have nothing to hide, I have nothing to fear."

This is the single most common objection to privacy protection, and it deserves a thorough response. Daniel Solove has documented fifteen distinct responses; here are the most important.

The argument rests on a false conception of what privacy protects. The nothing-to-hide argument assumes that privacy is only valuable for hiding wrongdoing — that the only reason anyone would want their activities concealed is embarrassment or guilt. But privacy serves many purposes that have nothing to do with hiding: it protects autonomy (the freedom to develop your identity, explore ideas, and make choices without external scrutiny); it protects relationships (which require zones of intimacy to function); it protects dignity (the freedom from being exposed, humiliated, or reduced to a data profile); and it protects freedom (because the awareness of being watched changes behavior, even among people doing nothing wrong). You close the bathroom door not because you're hiding crimes; you close it because some experiences require privacy to be experienced as they should be.

The chilling effect is real harm. Even when surveillance reveals nothing incriminating, it changes behavior. Researchers have documented that Google searches for sensitive topics (health symptoms, political information, legal questions) declined significantly following the Snowden revelations in 2013. People stopped looking up things they were curious about because they were afraid of what their searches might look like in a surveillance record. This is harm — the impoverishment of intellectual life, the shrinkage of curiosity — even when nothing illegal is happening.

The privacy-security tradeoff is a false binary. The argument frames privacy and security as necessarily opposed: you give up one to get the other. But this framing is empirically wrong. Surveillance programs have frequently failed to prevent the attacks they were designed to prevent, while successfully chilling legitimate activity. Mass surveillance generates too much data to analyze effectively; targeted surveillance based on individualized suspicion is more effective. The question is not "how much privacy are you willing to give up for security?" but "what specific surveillance measures demonstrably provide security benefits that outweigh their privacy costs?" That is a much harder question that requires evidence, not slogans.

The argument assumes that "nothing to hide" is a stable category. What is legal, normal, or acceptable changes over time and across legal regimes. In 1950s America, being gay was criminal in most states; in some countries today, it still is. Interracial marriage was prohibited in many US states within living memory. Abortion has recently had its legal status dramatically altered. The same applies to political beliefs, religious practice, and associational activity. When you say "I have nothing to hide now," you are implicitly trusting that the current legal and political regime will be the relevant one forever, that no future government will have different views about what you are hiding, and that your data cannot be repurposed against you. None of these assumptions is reliable. The history of surveillance — particularly the history covered in Chapter 6 — documents repeatedly how information collected in one context is weaponized in another.

The "nothing" in "nothing to hide" is almost never true. Most people have things they would prefer to keep private that are perfectly legal: medical conditions, mental health struggles, financial difficulties, relationship problems, sexual behavior, political views, religious doubts, family conflicts, embarrassing moments. These are all things "to hide" in the relevant sense — things people would prefer not to have in a database accessible to employers, insurance companies, government agencies, or hackers — without being wrongful in any sense.

Power asymmetry enables abuse. Surveillance creates a power imbalance: some actors have detailed information about others who have little information about them. This asymmetry enables blackmail, targeting, manipulation, and discrimination, even when no "hidden wrongdoing" is involved. J. Edgar Hoover used surveillance not to expose crimes but to accumulate leverage — information that could be used to coerce compliance or destroy people who crossed him. This is not hypothetical history; it is documented fact.


"Surveillance makes us safer."

This claim deserves a nuanced response, because it contains some truth alongside significant exaggeration.

Where surveillance contributes to safety: Certain narrow surveillance applications have demonstrated genuine safety benefits. Closed-circuit cameras in high-theft locations (car parks, specific retail environments) have shown documented effects on vehicle crime in controlled studies. DNA databases have contributed to solving serious crimes, including exonerating the wrongly convicted. Epidemiological surveillance systems have enabled rapid identification and containment of disease outbreaks. Contact tracing during COVID-19, however imperfect, contributed to outbreak management. These are real benefits.

Where the evidence is weaker: The evidence for broader surveillance safety claims is much less robust. CCTV in open public spaces shows mixed to absent crime reduction effects in most studies, with documented displacement effects (crime moves to unmonitored areas rather than decreasing). According to the Privacy and Civil Liberties Oversight Board's analysis, the bulk phone metadata program did not produce a single case in which bulk collection was essential to preventing a terrorist attack; targeted surveillance would have been sufficient.

The wrong comparisons: Surveillance safety arguments often compare surveillance to nothing, rather than to alternative safety measures. Compared with mass surveillance, alternatives such as targeted surveillance of identified suspects, violence interruption programs, community-based safety initiatives, and addressing root causes of crime (poverty, housing instability) may produce better safety outcomes at lower cost to civil liberties.

Safer for whom: Surveillance does not make everyone safer equally. Communities that are disproportionately surveilled — Black communities, immigrant communities, Muslim communities — experience surveillance as a source of insecurity rather than safety. When your community is the one being watched by the state, "surveillance makes us safer" describes someone else's experience.


"I already gave up my privacy, so what's the point?"

This is what surveillance scholars call the "resignation" response — a fatalistic conclusion that surveillance is so pervasive that protective action is futile. It is psychologically understandable, but analytically wrong.

Privacy is a spectrum, not a binary. The claim that privacy is "already gone" treats privacy as a switch — either you have it or you don't. But privacy protection is a matter of degree. Every additional layer of data collected about you is an additional marginal risk; every layer of protection reduces risk. You cannot achieve perfect privacy, but you can meaningfully reduce the scope of your surveillance exposure through choices about applications, services, settings, and behavior. The question is not "private or not private?" but "how much privacy, in which domains, at what cost?"

Collective action problems require individual action. The resignation argument, if widely adopted, is self-fulfilling: if no one takes protective action because they believe everyone else has already given up, then surveillance expands unopposed. Individual action matters not just for individual protection but as part of collective norm-setting. When individuals refuse to accept privacy-invasive defaults, they create demand signals that affect market and policy choices.

Rights can be reclaimed. Surveillance capitalism has expanded dramatically in the last two decades; its current form is not inevitable or permanent. European GDPR enforcement has curtailed some practices that were normalized in the US. Illinois BIPA has restricted biometric surveillance in that state. Regulation and legislation can alter what is and is not practiced. Resignation forecloses this possibility.


"Companies just use my data to show me ads — that's harmless."

Advertising is not the only use. Behavioral data collected for advertising is also used for: insurance pricing decisions (there are documented cases of companies using data broker information in health and life insurance pricing); employment screening; credit decisions; and sale to data brokers whose customers include law enforcement agencies. The FTC and multiple academic researchers have documented that law enforcement regularly purchases commercially collected location data specifically to avoid warrant requirements.

Targeted advertising is itself not harmless. Political microtargeting — the use of detailed behavioral profiles to deliver bespoke political messages — is a demonstrated mechanism for manipulating democratic processes. The Cambridge Analytica scandal involved the use of Facebook behavioral data to build psychographic profiles used in political targeting. This is not "harmless advertising" in any ordinary sense.

The long-term behavioral modification dimension. As Zuboff documents, the most advanced advertising systems are not merely showing ads — they are testing and refining behavioral interventions designed to produce specific outcomes. The goal is not just to match an ad to an interest but to modify the probability that you will make a specific choice.

Data breaches. Advertising data, once collected, is vulnerable to breach. Health advertising data (derived from browsing patterns on health sites, searches, app usage) that was collected to target ads becomes, in a breach, a sensitive health dossier accessible to anyone. The 2023 breaches of major health data companies exposed advertising-derived health inferences for tens of millions of people.


"The government would need a warrant to read my messages."

For some messages, yes — but the picture is more complicated. The content of messages sent over end-to-end encrypted services is technically inaccessible to the provider, so even a valid warrant cannot compel the provider to produce what it cannot read. For other messages, the warrant requirement depends on: the type of message; who holds it; and the legal authority invoked.

The third-party doctrine creates exceptions. As discussed in Chapter 9 and Appendix E, the third-party doctrine historically meant that information shared with third parties — including email stored on a server you don't control — might lack Fourth Amendment warrant protection. Carpenter modified this somewhat but did not resolve the full scope of digital third-party doctrine.

FISA bypasses ordinary warrant requirements. The Foreign Intelligence Surveillance Act creates a separate legal regime for intelligence-related surveillance. If you are in contact with a foreign national who is designated as a foreign intelligence target, your communications may be collected under FISA Section 702 without individualized suspicion or a traditional warrant — even if you are a US person and even if you are not a target yourself.

Metadata has weak protection. Even when content is protected, metadata — who you contacted, when, how often, from where — has much weaker legal protection. Metadata enables highly revealing inference, as Chapter 9 documents.
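To make the inference point concrete, here is a minimal illustrative sketch in Python. The call log is invented and the contact labels are hypothetical; the point is only that patterns in who was contacted and when can support sensitive inferences without any access to content.

```python
from collections import Counter
from datetime import datetime

# Hypothetical call metadata: (caller, callee, timestamp). No content at all.
call_log = [
    ("alice", "oncology-clinic", datetime(2024, 3, 4, 9, 15)),
    ("alice", "oncology-clinic", datetime(2024, 3, 6, 9, 40)),
    ("alice", "insurance-agent", datetime(2024, 3, 6, 11, 5)),
    ("alice", "crisis-hotline",  datetime(2024, 3, 7, 1, 30)),
    ("alice", "crisis-hotline",  datetime(2024, 3, 9, 2, 10)),
]

# Count repeated contacts per callee and flag late-night calls.
contacts = Counter(callee for _, callee, _ in call_log)
late_night = [(callee, ts) for _, callee, ts in call_log if ts.hour < 5]

print("Repeated contacts:", contacts.most_common())
print("Late-night calls:", late_night)
# Repeated calls to a clinic, an insurer, and a crisis line in the early
# hours suggest a health crisis -- an inference made without reading a word.
```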


"VPNs make you anonymous."

They do not. VPNs shift some surveillance risk but create new risks and leave many others intact:

  • Your VPN provider can see all your traffic. If the VPN provider maintains logs, is compelled by law enforcement, or is itself a surveillance company, your privacy is compromised.
  • Websites that use login accounts, cookies, or browser fingerprinting can identify you regardless of your IP address. If you use a VPN and then log into Google, Google knows it's you.
  • VPNs do not protect against malware, device-level surveillance, or adversaries with access to your device.
  • Not all VPN configurations route DNS queries through the encrypted tunnel; this kind of "DNS leak" can reveal which sites you are visiting to your ISP even with the VPN active.
  • "No-log" VPN claims have in some cases proven false; VPN providers have been compelled to provide records they claimed not to keep.

VPNs are a useful tool for specific threats (ISP traffic monitoring, local network surveillance, IP-based geolocation). They are not a comprehensive privacy solution.


"Incognito mode is private."

Incognito mode (or "private browsing") is significantly more limited than most users assume. What incognito mode does: prevents your local browser from storing your browsing history, cookies, and form data after the session ends. What it does not do:

  • It does not prevent websites from knowing who you are (they can still use server-side tracking, your IP address, browser fingerprinting, and any account you log into).
  • It does not prevent your ISP from seeing which websites you visit.
  • It does not prevent network operators (your employer's network, your university's WiFi) from monitoring traffic.
  • It does not prevent your device's operating system or installed software from logging activity.

In summary, incognito mode prevents someone who uses your physical device after you from seeing your browsing history. It does not protect against any networked surveillance actor.
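To see why neither incognito mode nor a VPN defeats server-side identification, consider this simplified sketch of browser fingerprinting. The attributes and values shown are hypothetical, and real fingerprinting systems use many more signals, but the principle is the same: a hash over characteristics that stay constant across sessions identifies the browser even when cookies are cleared and the IP address changes.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Hash a set of browser attributes into a stable identifier."""
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# These attributes do not change when you open an incognito window or
# switch on a VPN; only the IP address (not included here) changes.
browser = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080x24",
    "timezone": "America/Chicago",
    "language": "en-US",
    "fonts": "Arial,Calibri,Georgia,Verdana",
    "canvas_hash": "a91f3c02",  # rendering quirks of this GPU/driver combo
}

print(fingerprint(browser))  # same identifier, session after session
```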


"Deleting an app means they stop collecting data."

Deleting an app stops future data collection through that app, but it does not:

  • Delete data already collected and stored by the company's servers.
  • Remove your profile from their database or partners' databases.
  • Prevent data brokers who purchased your data from continuing to hold it.
  • Prevent the company from receiving data about you from third parties (if your contacts still use the app, your social graph data is still accessible).

What deletion requests actually do: Under GDPR, CCPA, and similar laws, you may have a right to request deletion of your personal data from a company's servers. These requests have legal force in relevant jurisdictions, but they must be actively submitted — deleting the app is not the same as submitting a deletion request. Companies must comply with properly submitted deletion requests within specified timeframes, subject to certain exceptions.


"China's surveillance state couldn't happen here."

This requires careful unpacking, because both the alarm and the complacency are partially wrong.

What is genuinely different about China's system: the deliberate government integration of commercial and state surveillance infrastructure; explicit political compliance as a scoring criterion; unified political direction of a surveillance regime across party, government, and commercial actors; and the absence of meaningful independent legal challenge.

What is less different than commonly assumed: The United States and other democracies have extensive surveillance infrastructure including: bulk collection programs revealed by Snowden; data broker industries that sell personal profiles to government agencies without warrants; predictive policing and algorithmic management systems that impose differential outcomes on surveilled populations; surveillance of immigrant communities, Muslim communities, and political activists with significant parallels to China's minority surveillance programs; and platforms with extensive behavioral data that are legally compelled to cooperate with government requests.

The appropriate response is neither "it already happened here" (which overstates the case) nor "it couldn't happen here" (which ignores documented practices and democratic vulnerabilities). The differences — judicial independence, civil society, press freedom, political pluralism — are real and important constraints. They are also contingent: they have been and can be eroded. The relevant question for surveillance studies is not "could this happen here?" but "which elements are already present, and what institutional conditions would need to change for the system to become more comprehensive and less constrained?"


"Facial recognition is just like a cop looking at a photo."

Scale changes everything. A police officer showing a witness a photo array is a surveillance act of limited scope: one witness, a handful of photos, a specific investigation. Facial recognition at scale — cameras scanning crowds, matching every face against a database of millions — is categorically different in capability:

  • It applies retrospectively to video archives, enabling reconstruction of anyone's movements at any time for which footage exists.
  • It can monitor an entire population in real time.
  • It can identify individuals in contexts where they had a reasonable expectation of anonymity (attending a protest, walking to an addiction treatment clinic, visiting a political organization's office).
  • Its error rates — particularly for darker-skinned individuals, as documented in Gender Shades — mean that false identifications are generated automatically and at scale, affecting large numbers of people.
  • It eliminates the practical obscurity that historically existed in public spaces — the fact that you might be seen in public but not reliably identified.

The cop-looking-at-a-photo analogy fails because it doesn't account for these scale effects. What is qualitatively different about mass biometric surveillance is not that it recognizes faces but that it does so continuously, comprehensively, retrospectively, and automatically — creating a capability that has no analogue in the history of law enforcement.


Part II: Practical Questions

"What can I actually do to protect my privacy today?"

Start with what matters most for your situation. A threat model helps: consider who might want access to your data, for what purposes, and with what capabilities.

For general everyday privacy from commercial tracking: Install uBlock Origin on your browser. Use Signal for sensitive communications. Review app permissions on your phone and revoke unnecessary ones. Use Firefox or Brave as your primary browser. Set your social media accounts to the most restrictive privacy settings. Opt out of data broker profiles (see Appendix D).

For greater privacy from ISP and network surveillance: Add a reputable VPN to the above.

For protection against the most serious adversaries (journalists, activists, political organizers): Consult the EFF's Surveillance Self-Defense guide for your specific threat model. Consider Tor Browser for sensitive research. Use encrypted email for sensitive communications.

The most important single step is probably Signal. It takes two minutes to install, is free, and provides end-to-end encrypted messaging that is qualitatively more private than standard SMS or most other messaging applications.


"Do cookie consent banners actually stop tracking?"

Partially, imperfectly, and only for some types of tracking. Under GDPR, cookie consent banners from EU-facing websites are supposed to obtain meaningful consent for tracking cookies. If you decline consent, the website is required to not set tracking cookies (beyond technically necessary ones).

The limitations: Cookie banner designs are frequently manipulative — making "accept all" the prominent option and making "reject all" or "manage preferences" difficult to find. The IAB's Transparency and Consent Framework has been ruled non-compliant with GDPR by multiple data protection authorities. Even when you decline cookies, other tracking mechanisms (browser fingerprinting, tracking pixels in images, server-side tracking, network-level monitoring) may continue.

Outside GDPR territory: In the United States, cookie banners are largely voluntary marketing exercises rather than genuine consent mechanisms. CCPA opt-out rights for California residents have some effect, but enforcement is limited. Cookie banners in the US are primarily designed to provide a paper trail of "notice" rather than meaningful consent.


"Can my employer see my personal texts on work WiFi?"

Generally, no — but with important nuances. Personal text messages (SMS) are transmitted through your cellular carrier, not your employer's WiFi. Your employer cannot see your SMS texts just because you're on their WiFi. However:

  • If your texts are transmitted as data (through a messaging app) over the employer's WiFi, the employer can potentially see connection metadata (that a connection occurred, when, and to which server) — but not message content if the messages are end-to-end encrypted.
  • If your personal device is enrolled in your employer's mobile device management (MDM) system, the employer may have much broader access to your device, potentially including message logs.
  • If you are logged into personal accounts on an employer-owned device, the employer can potentially see browser history and app usage through device management software.

Practical guidance: Keep personal communication on personal devices and personal cellular data connections. Do not log into personal accounts on employer-owned devices if you want to be confident those accounts are not monitored.


"Can my school read my email?"

It depends on what email system and what kind of school. At most universities, students use university-provided email accounts subject to university policy. University policies typically permit the institution to access account contents for security, legal compliance, and investigation of policy violations. This is legal — you are using the institution's email system, and institutional access is typically disclosed in acceptable use policies.

FERPA provides some protection: student education records cannot be disclosed without consent, and email communications that constitute education records may have FERPA protection. But FERPA protects against external disclosure, not necessarily against institutional review.

Practical guidance: If you need privacy from your institution, use a personal, non-institution email account for sensitive communications.


"How do I know if someone installed stalkerware on my phone?"

Warning signs: Unusual battery drain; data usage significantly higher than normal; phone running slower than usual; strange sounds during calls; phone taking longer to shut down; unfamiliar apps or settings changes.

How to check: Review installed apps carefully, including in settings menus where apps may be hidden. Check data usage by app in your phone's settings. Android devices: check for apps with device administrator privileges in security settings. Look for apps with broad permissions (location, microphone, camera, contacts, message access) that you don't recognize or didn't install.
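For readers comfortable with a command line, the sketch below shows one way to enumerate user-installed packages on an Android phone using the Android Debug Bridge (adb). It assumes adb is installed, USB debugging is enabled, and the phone is connected and authorized; it only reads the package list and removes nothing.

```python
import subprocess

# List third-party (user-installed) packages via adb. Assumes adb is on the
# PATH, USB debugging is enabled, and the phone is connected and authorized.
result = subprocess.run(
    ["adb", "shell", "pm", "list", "packages", "-3"],
    capture_output=True, text=True, check=True,
)

packages = sorted(
    line.split(":", 1)[1]
    for line in result.stdout.splitlines()
    if line.startswith("package:")
)
for name in packages:
    print(name)
# Compare this list against apps you recognize; unfamiliar entries with
# names that imitate system components deserve closer scrutiny.
```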

Limitations of self-check: Stalkerware is specifically designed to be invisible. Some commercial stalkerware products successfully evade casual detection. If you have strong reason to believe your device is compromised (particularly in a domestic violence context), contact the National Network to End Domestic Violence's Safety Net project or a local domestic violence organization — they have trained technology safety advocates who can help.

What not to do: If you discover suspected stalkerware and are in an abusive situation, do not delete it without a safety plan. Deleting stalkerware may alert the abuser, who can use this information in dangerous ways. Safety planning with an advocate should precede device action.


"Can I sue if my data is breached?"

This is genuinely complicated. The short answer is: possibly, but it is difficult and the law is evolving.

Challenges to data breach litigation: Courts have historically required plaintiffs to demonstrate concrete "injury in fact" to establish standing to sue. Exposure of data in a breach, without demonstrated harm from that exposure, has often been held insufficient. Spokeo v. Robins (2016) established that a "bare procedural violation" of a privacy statute is not automatically concrete injury.

Where suits succeed: Class actions have been most successful when: (1) plaintiffs can demonstrate actual harm (fraudulent accounts opened, credit used); (2) a specific statute with a private right of action applies (Illinois BIPA has been particularly successful); or (3) the breach was a result of clearly inadequate security that a court finds falls below reasonable standards.

State law and specific statutes: Some state privacy laws (including California's) create private rights of action for data breaches resulting from inadequate security. FTC enforcement actions can provide remedies but not individual compensation.

Practical guidance: If you are a victim of data breach, monitor your credit, consider a credit freeze, document any actual harm you experience, and consult with an attorney if you believe you have suffered concrete losses.


"What is a geofence warrant and should I be worried?"

What it is: A geofence warrant (also called a "reverse location search" warrant) compels a technology company — typically Google through its Sensorvault database — to identify all devices present within a defined geographic area during a specified time period. Unlike ordinary warrants that target a specific suspect, geofence warrants are issued before suspects are identified — they ask "who was here?" rather than "what did this person do?"
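To make the mechanics concrete, the toy sketch below (invented records, no real database or API) shows the kind of query a geofence warrant compels: return every device whose stored location falls inside a bounding box during a time window.

```python
from datetime import datetime

# Hypothetical stored location records: (device_id, lat, lon, timestamp).
records = [
    ("device-A", 41.8790, -87.6359, datetime(2024, 5, 1, 21, 12)),
    ("device-B", 41.8796, -87.6354, datetime(2024, 5, 1, 21, 47)),
    ("device-C", 41.9020, -87.6200, datetime(2024, 5, 1, 21, 30)),  # outside the box
]

# The "geofence": a bounding box around a crime scene plus a time window.
lat_min, lat_max = 41.8785, 41.8800
lon_min, lon_max = -87.6365, -87.6350
t_start, t_end = datetime(2024, 5, 1, 21, 0), datetime(2024, 5, 1, 22, 0)

matches = [
    device for device, lat, lon, ts in records
    if lat_min <= lat <= lat_max
    and lon_min <= lon <= lon_max
    and t_start <= ts <= t_end
]
print(matches)  # ['device-A', 'device-B'] -- everyone present, suspect or not
```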

Why it's controversial: Because geofence warrants are inherently broad, they sweep up data from everyone in the defined area — most of whom have no connection to the crime being investigated. Someone who happened to be near the scene of a crime at the relevant time, for entirely innocent reasons, may find themselves in a law enforcement database and potentially subjected to further investigation.

What "should I be worried" means: If you use an Android device with Google location services or many Google apps on any device, your location data may be in Google's Sensorvault. If you are ever in the wrong place at the wrong time — near a crime scene, at a protest that turns violent, anywhere that triggers a geofence warrant — your data may be disclosed to law enforcement.

Legal status: Courts are actively debating the constitutionality of geofence warrants. Some courts have found them unconstitutional; others have allowed them with limitations. After Carpenter, there are strong arguments that geofence warrants require a warrant supported by probable cause, but this is not yet settled law.

What you can do: Disable Google location history. Choose "never" or "while using" for location permissions rather than "always." Use an iPhone with location services restricted. Understand that even with these precautions, your cellular carrier has location records through cell tower data.


"What is the single most important thing from this textbook that I should remember?"

Surveillance is not a feature of technology — it is a feature of power relations. Technologies enable surveillance, but which technologies are deployed, at whom, for whose benefit, with what accountability, and with what consequences are social and political choices. The question is never only "can this be monitored?" but "who is deciding to monitor whom, on what authority, with what justification, and with what oversight?" Those are political questions, and they require political answers.

Jordan Ellis's journey from naïve acceptance to critical awareness and action is not just a pedagogical device. It describes the actual intellectual and moral movement that understanding surveillance requires: from seeing surveillance as background noise to recognizing it as contested territory where power is exercised, rights are at stake, and choices have consequences.


This FAQ will be updated in future editions as law, technology, and social practices change. Questions not answered here may be directed to the course instructor or to the organizations listed in Appendix D.