Case Study 32-1: Edward Snowden and the Architecture of Secure Communication
How the NSA Revelations Were Made Possible by Privacy Technology
Background
In May 2013, a 29-year-old NSA contractor named Edward Snowden contacted journalist Glenn Greenwald, filmmaker Laura Poitras, and reporter Ewen MacAskill. Over the following weeks, working from a hotel room in Hong Kong, Snowden provided them with thousands of classified documents revealing the scope of NSA surveillance programs, including PRISM (collection from major tech companies), XKeyscore (broad internet search and analysis), MUSCULAR (interception of traffic on the private links between Google and Yahoo data centers), and the bulk collection of metadata on hundreds of millions of phone calls under Section 215 of the PATRIOT Act.
The result was one of the largest intelligence disclosures in American history, generating legal reform (the USA FREEDOM Act of 2015, which curtailed the bulk phone records program), policy debate, and an ongoing public conversation about the balance between national security surveillance and civil liberties.
What is less often examined is the role that privacy technology played in making the disclosure possible — and the near-misses that almost exposed Snowden before the story could be published.
The Technical Architecture of the Disclosure
The Snowden-Greenwald-Poitras communication used a sophisticated stack of privacy and security tools:
PGP (Pretty Good Privacy) encryption: Snowden identified PGP-encrypted email as essential from his first approach, but Greenwald, who was not a technologist, found the tool complex and delayed installing it, and the early contact nearly failed as a result. (Poitras, who had already been subjected to repeated detentions and searches at border crossings and had reason to be cautious about communications security, had been using PGP for years.)
Tails OS: Snowden specified that source communications should be conducted using Tails. He provided Poitras with a USB drive containing Tails and instructions. The amnesic operating system meant that communications left no forensic trace on the devices used.
End-to-end encrypted messaging: The journalists used encrypted channels throughout, at the time chiefly PGP email and OTR-encrypted chat (Signal did not yet exist), recognizing that if their source was communicating from within NSA systems, any communication over standard channels could be detected.
Compartmentalization: Information about who the source was, what the documents contained, and when publication would occur was kept strictly separated. Greenwald, Poitras, and MacAskill knew different pieces; no single communication channel held the complete picture.
The Guardian's legal team operated on different channels. Coordination with the New York Times and Washington Post on specific stories occurred on secure channels.
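The all-or-nothing character of this compartmentalization can be illustrated with a toy one-time-pad secret-splitting sketch (an illustration of the principle only; nothing in the record suggests the journalists used such a scheme). Each share on its own is indistinguishable from random noise; only the combination of every share reconstructs the secret, just as no single channel held the complete picture:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int = 3) -> list[bytes]:
    """Split `secret` into n shares; ALL n are required to reconstruct.

    n-1 shares are uniformly random pads; the last share is the secret
    XORed with all pads, so any subset of n-1 shares reveals nothing.
    """
    pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    final = reduce(xor_bytes, pads, secret)
    return pads + [final]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the secret."""
    return reduce(xor_bytes, shares)

shares = split(b"source identity", 3)
assert reconstruct(shares) == b"source identity"
```

Threshold schemes such as Shamir's secret sharing generalize this so that any k of n shares suffice; the XOR version is the degenerate k = n case, matching the "no single security failure is fatal" property discussed above.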
The Near-Misses: Where Security Culture Was Tested
The disclosure was not technically perfect. Several near-misses illustrate the limits of even sophisticated counter-surveillance:
The Greenwald delay: Snowden's first approach to Greenwald failed because Greenwald didn't install PGP. Snowden had to approach Poitras first, who then brought Greenwald in. This delay could, in theory, have exposed Snowden if NSA monitoring of his unusual document access patterns had triggered an investigation before the journalists were ready to publish.
Hotel registration: Snowden stayed in the Mira Hotel in Hong Kong under his real name. This was a conscious choice — he wanted to be able to speak publicly after the story broke. But it meant that from the moment the story published, his location was known. The journalists had to complete their work knowing that Snowden's position was not fully protected.
Document handling: Some of the documents given to Greenwald were eventually shared with other journalists and made public without fully removing metadata, such as embedded document properties (author fields, revision history, timestamps) that can serve as forensic markers. For printed material there is the related "printer dots" problem: many color laser printers embed near-invisible yellow-dot patterns encoding the printer's serial number and the time of printing, a persistent concern in document disclosure.
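To make the metadata-hygiene point concrete: modern Office documents are ZIP archives whose docProps/core.xml part records author and revision metadata. The stdlib sketch below builds a toy .docx-like archive and strips the properties parts. This is a simplified illustration only; a file produced this way would not be a valid Word document, and real redaction workflows use dedicated tools such as mat2 or exiftool:

```python
import io
import zipfile

# Build a toy .docx-like archive. A real .docx is a ZIP whose
# docProps/core.xml part holds creator, lastModifiedBy, timestamps, etc.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<w:document>...</w:document>")
    z.writestr(
        "docProps/core.xml",
        "<cp:coreProperties>"
        "<dc:creator>real.name</dc:creator>"
        "<cp:lastModifiedBy>real.name</cp:lastModifiedBy>"
        "</cp:coreProperties>",
    )

def strip_doc_metadata(docx_bytes: bytes) -> bytes:
    """Copy the archive, dropping the document-properties parts.

    Note: each ZIP entry also carries its own modification timestamp,
    which a thorough tool would normalize as well.
    """
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as src, \
         zipfile.ZipFile(out, "w") as dst:
        for item in src.infolist():
            if item.filename.startswith("docProps/"):
                continue  # skip core.xml / app.xml metadata parts
            dst.writestr(item.filename, src.read(item.filename))
    return out.getvalue()

clean = strip_doc_metadata(buf.getvalue())
with zipfile.ZipFile(io.BytesIO(clean)) as z:
    assert "docProps/core.xml" not in z.namelist()
```

The broader lesson is that metadata lives in places authors never see, which is why checklists and purpose-built scrubbing tools, rather than manual inspection, are standard practice in disclosure workflows.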
The NSA's own monitoring: Subsequent official reviews and reporting indicated that some of Snowden's unusual access patterns were flagged by internal systems, but the agency was slow to act on those flags, not because the systems failed outright, but because human follow-through was insufficient. This illustrates that good opsec depends not only on technical tools but also on the adversary's internal functioning.
The Consequences of Technical Success and Failure
The disclosure succeeded, in the sense that the documents reached the public and generated substantial policy debate. The technical security measures — particularly Tails and PGP — were central to that success.
The disclosure failed, in another sense, to protect Snowden's physical safety. He has lived in Russia since 2013, unable to return to the United States without facing Espionage Act charges (he was granted permanent Russian residency in 2020 and Russian citizenship in 2022). This outcome reflects the limits of technical counter-surveillance: encryption protects the content of communications; it does not protect the communicator from geopolitical consequences.
The Snowden case also prompted significant changes in the tools journalists use. The Freedom of the Press Foundation, co-founded by Greenwald, Poitras, and Daniel Ellsberg among others, took over maintenance of SecureDrop, an open-source whistleblower submission system originally developed by Aaron Swartz and Kevin Poulsen under the name DeadDrop. SecureDrop is now used by the New York Times, the Washington Post, the BBC, and dozens of other news organizations. It uses Tor and other anonymization tools to allow sources to submit documents without exposing their identity even to the news organization receiving them.
Lessons for Counter-Surveillance Practice
The Snowden case distills several principles:
1. The weakest link is human. The most sophisticated technical security fails when humans don't use it consistently. Greenwald's resistance to PGP nearly prevented the disclosure. The challenge of usability — making privacy tools accessible enough that non-technical users adopt them — is as important as the technical strength of the tools themselves.
2. Threat modeling must be specific. Snowden knew his specific adversary (NSA surveillance capabilities), their specific techniques (metadata analysis, communications monitoring), and what he needed to protect (his identity as source). His counter-surveillance choices flowed from that specific model. Generic privacy advice is less useful than adversary-specific planning.
3. Compartmentalization works. By ensuring that no single channel or person held all the relevant information, Snowden limited the damage any single security failure could cause.
4. Technical protection has political limits. Even perfect operational security could not have protected Snowden from geopolitical consequences of exposure. Counter-surveillance addresses technical surveillance; it cannot address the broader systems of law, politics, and power that generate those consequences.
5. Tools evolve from use cases. SecureDrop, Signal's sealed sender feature, the Freedom of the Press Foundation's training programs — these tools and institutions emerged directly from the practical lessons of the Snowden disclosure. Counter-surveillance practice and tool development are co-evolving.
Analysis Questions
1. Greenwald resisted using PGP because he found it too complex. If the Snowden disclosure had failed because of Greenwald's technical resistance, who would bear moral responsibility? What does this suggest about the relationship between privacy tool designers and their users?
2. Snowden chose to stay in Hong Kong under his real name, rather than disappearing anonymously. What does this choice tell us about his conception of what he was doing? Is civil disobedience — consciously breaking a law and accepting consequences — different from covert whistleblowing?
3. The Snowden revelations led to legal reform (USA FREEDOM Act) and significant policy debate. They did not stop NSA surveillance programs or fundamentally change the architecture of mass surveillance. Does the partial success of the disclosure support optimism about the impact of counter-surveillance and whistleblowing, or pessimism?
4. SecureDrop routes submissions through Tor and removes identifying metadata from documents. But if a news organization publishes a document and the adversary can identify the specific document from a small pool of people who had access to it, Tor and metadata cleaning don't help. What does this limitation reveal about the relationship between technical security and the inherent risks of disclosure?
5. The Freedom of the Press Foundation provides security training for journalists. The Electronic Frontier Foundation provides Surveillance Self-Defense guides. These organizations treat counter-surveillance as a service that can be distributed to non-technical users. Is this the right approach? What are the limits of making counter-surveillance "user-friendly"?
This case study should be read alongside Chapter 32 Sections 32.2 (encryption), 32.11 (metadata hygiene), and 32.12 (limits of individual counter-surveillance). It connects backward to Chapter 9 (NSA mass interception programs) and forward to Chapter 33 (activism and press freedom advocacy).