Chapter 13: Social Media as Observation Tower

Opening: The Invitation to Be Watched

There is a moment in every social media user's early experience that, in retrospect, is remarkable for what it asks. You are invited to tell the platform your name, your birthday, your relationship status, your employment history, your education, your interests. You are invited to upload photographs of yourself and others. You are invited to declare your political views, your religious beliefs, your location. You are invited to make friends — to map your social relationships in a format the platform can read and analyze.

You accept this invitation without much thought, because that is what everyone does. The invitation is phrased warmly: "Tell us about yourself." "Connect with friends and family." "Share what's on your mind." The language of connection, community, and self-expression obscures what is simultaneously occurring: you are constructing, with your own hands, a detailed behavioral and relational profile that the platform will retain, analyze, and monetize.

The social media platform is, from one perspective, a communication tool — a way to stay connected with people you care about, to discover information, to participate in communities. From another perspective, it is an observation tower — a structure designed to attract participants to a central, visible space, to collect everything they do there, and to use that collection for purposes that have nothing to do with the participants' intentions. Both perspectives are accurate. Understanding social media surveillance requires holding them simultaneously.


13.1 Participatory Surveillance: The Platform You Build Yourself

In 2010, media scholar Mark Andrejevic introduced a concept that has become central to critical platform studies: participatory surveillance. The term captures something genuinely novel about social media compared to prior surveillance systems: it is surveillance that users actively participate in constructing. Unlike the panopticon — where the watched have no role in building or operating the watching architecture — social media is a surveillance system built primarily by its subjects.

When you post a photograph, you are providing a behavioral trace. When you tag someone in that photograph, you are providing relational data. When you like a post, you are providing preference data. When you share an article, you are providing interest data. When you argue in a comment thread, you are providing data about your emotional responses, your communication style, your social relationships, and your views. None of this data was forced from you. You participated — actively, enthusiastically, in many cases — in generating it.

Andrejevic distinguishes participatory surveillance from coercive surveillance through the concept of the digital enclosure: social media creates an environment in which participation requires disclosure. To engage with the platform — to communicate with friends, to access content, to participate in communities — you must generate the behavioral traces the platform monetizes. Participation and surveillance become structurally inseparable.

This is different from the panopticon's coercive dynamic, where the watched have no choice about being watched but do not participate in the watching structure. It resembles more closely what Foucault might have recognized as productive power: surveillance that works not by prohibiting or punishing but by channeling, enabling, and rewarding self-disclosure. The platform does not force you to share. It creates an environment in which sharing is the medium of participation, and participation is the condition of social connection.

💡 Intuition: Imagine a town square where the only way to meet your friends is to walk through a turnstile that photographs you, records your movements, and notes whom you spoke with. The square is genuinely a public social space — people form real relationships there, have real conversations, organize real events. But the operator of the turnstile knows everyone who attended, for how long, with whom, and what they said. Participation in the social space and participation in the surveillance apparatus are inseparable. Social media is that square.

🔗 Connection: Chapter 2 introduced the synopticon — Mathiesen's concept of the many watching the few — as a complement to Bentham's panopticon (the few watching the many). Social media is, in one dimension, synoptic: many users watch and scrutinize a small number of celebrities and public figures. But simultaneously, the platform watches all users from above. Social media collapses the panopticon/synopticon distinction, operating as both simultaneously: users watch each other (synopticon), while the platform watches everyone (panopticon).


13.2 What Platforms Collect: Beyond the Post

Users typically understand that social media platforms collect what they post: the photographs, the status updates, the comments. But the collection that matters most extends far beyond this explicit content.

Behavioral Signals

Time spent: Platforms measure not just whether you viewed a post but how long. Extended viewing time signals interest even if you don't react or comment. This data is used to calibrate recommendation algorithms and to sell advertising: a user who spent 45 seconds looking at a piece of content related to diabetes care is a different advertising target than a user who scrolled past it.

Scroll behavior: The speed at which you scroll through a feed, and the moments when you slow or stop, provides behavioral signals about content preferences that are independent of any explicit reaction.

Hover time: On desktop interfaces, where your cursor rests and for how long provides additional implicit preference data. Hovering over a profile photograph, a political headline, or an advertisement generates different signals.

Cursor movement and click patterns: The spatial path your cursor takes through a feed — which elements it approaches, which it avoids — provides data about attention that explicit engagement metrics miss.

App session patterns: When you open the app, how long you stay, what time of day, how many sessions per day, whether your usage pattern has changed over time — all of these behavioral patterns are logged and analyzed.

Notification response: Whether you open notifications, how quickly, and whether you engage after opening them provides data about your responsiveness and your relationship to specific content and content types.

Search within platform: What you search for on social media — which profiles, which topics, which keywords — reveals interests and intentions you may not want to disclose through public posting.
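
Taken together, these signals amount to a continuous event stream. The sketch below shows, in schematic form, what such client-side behavioral logging might look like; the event names, fields, and the 30-second threshold are hypothetical illustrations, not any platform's actual telemetry schema.

```python
import time
from dataclasses import dataclass, field

@dataclass
class BehavioralEvent:
    """One implicit-signal record. Fields are illustrative, not a real telemetry schema."""
    user_id: str
    event_type: str    # e.g. "dwell", "scroll", "hover", "session_start"
    target_id: str     # the post, ad, or profile the event refers to
    duration_ms: int   # how long the behavior lasted
    timestamp: float = field(default_factory=time.time)

# A few seconds of passive browsing yields several records, none of which
# required the user to click, like, or comment.
events = [
    BehavioralEvent("u123", "dwell", "post_884", duration_ms=45_000),  # paused on a post
    BehavioralEvent("u123", "scroll", "feed", duration_ms=300),        # fast scroll past
    BehavioralEvent("u123", "hover", "ad_17", duration_ms=2_100),      # cursor rested on an ad
]

# Downstream, long dwell time can be treated as implicit interest even though
# the user never explicitly engaged (hypothetical 30-second threshold).
interested = [e.target_id for e in events
              if e.event_type == "dwell" and e.duration_ms > 30_000]
print(interested)  # ['post_884']
```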

Network Data

Social network data — the graph of relationships among users — is among the most analytically powerful data that platforms collect. Your network of connections, followers, and mutual friends encodes:

  • Your social status and social capital (how many connections you have, of what type)
  • Your community membership (which groups of people you are embedded in)
  • Your influence potential (whether you are a hub connecting many others, or a peripheral node)
  • Your likely demographic characteristics (your network's demographics predict your own through homophily — the tendency to associate with similar others)
  • Your information exposure (what information flows through your network to you and from you)

Social network analysis (SNA) allows platforms to use graph data to infer characteristics that users never disclosed. Research has shown that a user's political affiliation can be inferred from their network with 85%+ accuracy without any explicitly political content from the user. Sexual orientation can be inferred from network characteristics with similar accuracy. Mental health status, economic anxiety, and consumer purchasing patterns are all detectable in network data.
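
To see why homophily makes networks so predictive, consider a minimal sketch in which an undisclosed attribute is guessed by majority vote over a user's friends. The graph, labels, and attribute here are invented; production SNA pipelines are far more elaborate, but the underlying logic is the same.

```python
from collections import Counter

# Hypothetical friendship graph: user -> set of friends.
graph = {
    "alice": {"bob", "carol", "dan"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dan"},
    "dan":   {"alice", "carol"},
}

# Attribute some users disclosed (e.g., a political affiliation); None = never disclosed.
disclosed = {"bob": "party_A", "carol": "party_A", "dan": "party_B", "alice": None}

def infer_from_neighbors(user):
    """Majority vote over the disclosed labels of a user's friends (the homophily assumption)."""
    labels = [disclosed[f] for f in graph[user] if disclosed.get(f)]
    return Counter(labels).most_common(1)[0][0] if labels else None

# Alice disclosed nothing, yet her network speaks for her.
print(infer_from_neighbors("alice"))  # party_A
```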

Importantly, graph data is also data about people who never agreed to provide it. If you are part of someone's social network, data is collected about you — your network position, your behavioral patterns, your relationship strength — through your relationship with the platform user, not through your own direct interaction with the platform.

📊 Real-World Application: A 2013 study by Michal Kosinski, David Stillwell, and Thore Graepel published in PNAS demonstrated that Facebook Likes could predict, with high accuracy, users' political views, sexual orientation, religious affiliation, intelligence, and personality traits (using the "Big Five" personality model). Crucially, the study found that these predictions could be made from Likes alone — without any other behavioral data — and that users whose own Likes were limited could be predicted from the Likes of their network connections. The research became foundational for Cambridge Analytica's subsequent work (discussed in Chapter 14) and remains one of the most cited studies in surveillance studies.
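
The paper's pipeline can be sketched in a few lines: represent users as rows of a user-Like matrix, compress it with dimensionality reduction (the study used SVD), and fit a regression from the compressed representation to a trait disclosed by a training set of users. The data below is randomly generated for illustration; only the overall structure of the pipeline follows the paper.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy user-Like matrix: rows = users, columns = pages; 1 = user Liked the page.
# Fabricated data; real matrices span millions of users and many thousands of pages.
likes = rng.integers(0, 2, size=(200, 50))
trait = (likes[:, :5].sum(axis=1) > 2).astype(int)  # a trait correlated with a few pages

# Step 1: compress the Like matrix into a handful of components
# (the 2013 study reduced its matrix with SVD before regression).
components = TruncatedSVD(n_components=10, random_state=0).fit_transform(likes)

# Step 2: fit a regression from components to the trait disclosed by training users.
model = LogisticRegression().fit(components[:150], trait[:150])

# Step 3: predict the trait for users who never disclosed it.
print(model.score(components[150:], trait[150:]))  # held-out accuracy
```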


13.3 The Shadow Profile: Data on Non-Users

One of the most ethically striking aspects of social media surveillance is that it extends to people who have never created an account on the platform — and, in some cases, have explicitly refused to do so.

The shadow profile (sometimes called a "contact shadow profile") is a behavioral record that a platform maintains on an individual who is not a registered user. Shadow profiles are built from three primary sources:

Contact import data: Many social media platforms encourage users to upload their phone's contact lists or email contact books to find friends on the platform. When a user imports their contacts, the platform receives not just the names and contact information of current users — it receives the same information for non-users. The non-user never provided this information to the platform; someone who had their contact information provided it without the non-user's knowledge or consent.

On-platform references: When platform users tag, mention, or photograph non-users, they generate data that references those non-users. A photograph shared on social media of a person who has never joined the platform creates a data point: this person exists, was at this location on this date, was with these other people. The person has no account through which to manage, access, or delete this data.

Cross-platform tracking: As discussed in Chapter 12, third-party tracking pixels and cookies on websites across the web generate behavioral data that can be linked to individuals regardless of whether they have social media accounts. If a person without a Facebook account visits any of the millions of websites with Facebook's tracking pixel ("the Facebook Pixel"), Facebook receives data about that person's browsing behavior and stores it.
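
A minimal sketch of how these sources could converge on a single non-user record, keyed by contact identifiers, appears below. Every field, record, and function name is hypothetical, and the sketch assumes identity resolution has already linked pixel hits to a contact identifier; it illustrates the merging logic, not any platform's internal system.

```python
# Hypothetical shadow-profile store, keyed by normalized phone or email identifiers.
shadow_profiles = {}

def merge_contact_import(uploader_id, contacts):
    """Fold one user's uploaded address book into records about the people it names."""
    for c in contacts:
        key = c.get("phone") or c.get("email")
        profile = shadow_profiles.setdefault(key, {"sources": [], "page_visits": []})
        profile.update({k: v for k, v in c.items() if v})
        profile["sources"].append("contact_import:" + uploader_id)

def record_pixel_hit(key, url):
    """Attach an off-platform page visit (seen via an embedded tracking pixel) to a record.
    Assumes identity resolution already linked the browser to this contact identifier."""
    profile = shadow_profiles.setdefault(key, {"sources": [], "page_visits": []})
    profile["page_visits"].append(url)

# A non-user acquires a record through a friend's upload plus pixel hits.
merge_contact_import("user_marcus", [
    {"name": "Jordan", "phone": "+15550100", "email": "j@example.com"},
])
record_pixel_hit("+15550100", "https://news.example.com/some-article")
record_pixel_hit("+15550100", "https://clinic.example.org/condition-info")
print(shadow_profiles["+15550100"])
```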

In 2018, Facebook confirmed to the U.S. Senate Commerce Committee, following the Cambridge Analytica scandal, that it maintained data on non-users for security and integrity purposes and for advertising purposes. The company did not proactively disclose the scope or depth of non-user data collection. Privacy advocates noted that users were informed, through terms of service, about what Facebook collected about them. Non-users had been informed of nothing — because they had no account through which information could be conveyed.

⚠️ Common Pitfall: Students sometimes reason that opting out of social media fully solves the social media surveillance problem. The shadow profile complicates this reasoning. If your friends, family members, and colleagues are on social media, your data flows to those platforms through their participation even in the absence of your own account. The network effects of social media surveillance mean that individual opt-out is less effective when your social network is largely on-platform. You can opt out of your own participation; you cannot opt out of your friends' participation on your behalf.


13.4 Emotional Manipulation: The Facebook Contagion Experiment

In 2014, an academic paper was published in the Proceedings of the National Academy of Sciences with the following abstract: "We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness."

The paper, titled "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks," described an experiment conducted by Facebook data scientists in partnership with academic researchers. For one week in January 2012, Facebook modified the News Feeds of nearly 700,000 users — without their knowledge or specific consent — to show them either more positive emotional content or more negative emotional content, then measured whether users' subsequent posts showed corresponding emotional valence.

The result was affirmative: exposure to more negative content produced more negative emotional expression in users' posts; exposure to more positive content produced more positive expression. Emotional states were contagious — transmissible through the content of the social media feed in ways users did not notice and could not resist.
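
The outcome measure was straightforward word counting: the published study scored posts with the LIWC positive- and negative-emotion word lists. A toy version of that approach, with tiny placeholder word lists standing in for LIWC, looks like this:

```python
# Toy valence scorer in the spirit of the study's word-count approach.
# The study used the LIWC dictionaries; these two small lists are placeholders.
POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def valence(post):
    """Positive-word count minus negative-word count for one post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# The experiment's outcome measure was the shift in users' mean post valence
# after their feeds were tilted toward positive or negative content.
posts = ["What a wonderful day, I love this", "Feeling sad and terrible today"]
print(sum(valence(p) for p in posts) / len(posts))  # mean valence across posts
```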

The paper produced an immediate and significant public backlash. The controversy had several dimensions:

The Consent Problem

The 700,000 users whose feeds were modified for the study had not consented to being research subjects. Facebook's defense was that its Terms of Service included agreement to "data analysis, testing, research, and service improvement" — and that, therefore, users had effectively consented to this kind of manipulation as part of their agreement to use the platform.

This defense was widely criticized. Under standard research ethics requirements, subjects of psychological research must be informed of the general nature of the study (though not always its specific hypotheses), must be told they are participating in research, and must be able to withdraw. The Facebook study met none of these requirements. The claim that clicking "I agree" on a terms of service document constitutes consent to emotional manipulation experiments was, to most privacy researchers and ethicists, an extreme reading of what such agreement could encompass.

The Manipulation Problem

Beyond consent, the study raised a more fundamental question: was Facebook conducting ongoing manipulation of users' emotional states as a normal commercial practice, distinct from the academic research framing of the paper? If emotional contagion was a measurable effect of feed manipulation, and if Facebook was continuously adjusting feeds for commercial purposes (to maximize engagement), were users being continuously exposed to emotional manipulation without knowing it?

Facebook's internal research, later disclosed through the 2021 whistleblower revelations from Frances Haugen, confirmed that the platform's recommendation algorithms did amplify emotionally activating content — particularly outrage and anxiety — because such content generated higher engagement. The academic experiment was, in a sense, a controlled version of what the commercial system was doing all the time.

The emotional contagion study forced a reconsideration of what informed consent means in the context of social media. The prevailing view — that clicking "I agree" on a terms of service is functional consent to data use — was challenged by the study's implications: if users could be systematically manipulated without knowing it, and if such manipulation was covered by terms of service, then "consent" was not providing meaningful protection against the most significant harms social media could cause.

🎓 Advanced: The emotional contagion study is often discussed in terms of individual harm — the distress that specific users might have experienced from having their feeds made more negative. But the study's deeper significance may be systemic. If platforms can measurably shift the emotional tone of millions of users simultaneously — and if they routinely do so for commercial purposes — the implications for public discourse, political mobilization, and collective emotional health are potentially far more significant than any individual's distress. This systemic dimension connects to scholarship on surveillance and democratic governance examined in Part 7.


13.5 Platform Surveillance of Private Messages

A common assumption among social media users is that private messages — the equivalent of sealed letters, as opposed to public posts — are genuinely private. This assumption is not warranted.

Major social media platforms' privacy policies typically disclose that private messages may be accessed by the platform for multiple purposes: to enforce terms of service (detecting spam, harassment, CSAM, or other prohibited content), to improve machine learning models, to provide advertising targeting, and — in response to legal process — to share with law enforcement.

Content scanning: Facebook, Instagram, and other platforms scan private message content for prohibited material. This is, in some cases, a legal requirement (platforms are legally required to report CSAM) and in others a policy choice. The scanning means that message content is processed by platform systems, which has implications for what other uses that processing enables.

Metadata collection: Even without reading content, platforms collect extensive metadata about private messages: who messages whom, when, how frequently, the length of messages, response times, and whether messages include attachments. This metadata can be used to infer relationship strength, emotional state, and behavioral patterns.
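
A minimal sketch, with invented records and an invented scoring rule, of how message frequency and reply latency alone could be converted into a relationship-strength estimate:

```python
from dataclasses import dataclass

@dataclass
class MessageMeta:
    """Metadata only, no content: who messaged whom, and how fast the reply came."""
    sender: str
    recipient: str
    reply_latency_s: float  # seconds until the recipient responded

# Invented records; a real log would cover months of traffic.
log = [
    MessageMeta("u1", "u2", 40.0),
    MessageMeta("u1", "u2", 55.0),
    MessageMeta("u1", "u3", 86_400.0),  # replied a day later
]

def tie_strength(a, b):
    """Toy score: message count weighted by how quickly replies arrive."""
    pair = [m for m in log if {m.sender, m.recipient} == {a, b}]
    if not pair:
        return 0.0
    avg_latency = sum(m.reply_latency_s for m in pair) / len(pair)
    return len(pair) / (1 + avg_latency / 3600)  # more messages, faster replies: stronger tie

print(tie_strength("u1", "u2") > tie_strength("u1", "u3"))  # True
```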

End-to-end encryption: Some platforms offer end-to-end encryption (E2EE) for messages — a cryptographic scheme in which only the sender and recipient can read the content. WhatsApp (owned by Meta) and Signal use E2EE by default. Facebook Messenger has introduced an opt-in E2EE option. However, E2EE does not protect metadata — the platform still knows who is communicating with whom, when, and how often, even if it cannot read the content.

Law enforcement access: In response to valid legal process — subpoenas, search warrants, or national security orders — platforms can and do provide both message content and message metadata to law enforcement. The availability of this data to law enforcement has expanded as private communication has migrated from telephone networks (governed by specific legal requirements) to social media platforms (governed by different, sometimes less protective standards).


13.6 Social Media in Law Enforcement: Warrants, Subpoenas, and Geofence Orders

Social media data has become a significant resource for law enforcement investigations, creating a new dimension of visibility asymmetry that operates beyond the commercial context. Platforms receive and respond to tens of thousands of government data requests annually.

Subpoenas and Warrants

A subpoena allows law enforcement to compel a platform to disclose specific user data — typically account information, message metadata, IP addresses, and other records — under a lower legal threshold than a warrant requires. A search warrant, supported by probable cause and subject to judicial oversight, is generally required to compel disclosure of message content.

Meta reported receiving approximately 60,000 government data requests from U.S. authorities in 2022 alone, producing at least some data in over 70% of cases. Google received approximately 65,000 requests. This volume reflects the normalization of social media data as an evidentiary resource in criminal investigation, civil litigation, immigration enforcement, and national security matters.

Geofence Warrants

Among the most surveillance-expansive law enforcement tools enabled by social media and location data is the geofence warrant (also called a "reverse location warrant"). Rather than specifying a target and obtaining data about them, a geofence warrant specifies a location and time, and demands data about every device — and therefore every person — in that location during that window.

The geofence warrant exploits the fact that major tech companies (particularly Google, through its Location History feature) maintain detailed records of where every device has been. By obtaining a geofence warrant for a crime scene at the relevant time, law enforcement can identify everyone who was present — including potential witnesses, bystanders, and suspects — without any individualized suspicion.
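
Conceptually, executing a geofence warrant is a bounding-box and time-window filter over a provider's location-history table. The records, coordinates, and schema below are invented; the point is that the query is defined by place and time rather than by a named suspect.

```python
from dataclasses import dataclass

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix epoch seconds

# Invented location-history records standing in for a provider's database.
history = [
    LocationPing("device_A", 40.7128, -74.0060, 1_600_000_100),
    LocationPing("device_B", 40.7130, -74.0055, 1_600_000_200),
    LocationPing("device_C", 34.0522, -118.2437, 1_600_000_150),  # different city
]

def geofence(lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return every device seen inside the box during the window; no named target required."""
    return {
        p.device_id for p in history
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    }

# One warrant sweeps in everyone present: suspects, witnesses, bystanders alike.
print(geofence(40.71, 40.72, -74.01, -74.00, 1_600_000_000, 1_600_000_300))
# {'device_A', 'device_B'}
```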

Google received approximately 11,554 geofence warrants from U.S. law enforcement in 2020, up from 982 in 2018. Each warrant could implicate dozens, hundreds, or thousands of individuals. A geofence warrant for a protest site, a political rally, a medical clinic, or a house of worship would identify everyone present — a surveillance capability that implicates First and Fourth Amendment concerns that courts have only begun to grapple with.

🌍 Global Perspective: Social media's role in law enforcement varies dramatically across jurisdictions. In the United States, the Stored Communications Act (18 U.S.C. § 2701) governs platform disclosures to government — a law written in 1986 that predates social media and provides legal protection structures designed for email services. In China, social media platforms are required to provide user data to government authorities upon request without judicial oversight. In the European Union, GDPR creates data minimization requirements that constrain some forms of data sharing, but law enforcement exceptions are significant. The variation reveals how the same technology creates different surveillance implications depending on the legal architecture around it.

Social Media Monitoring for Immigration and Protest

Beyond traditional criminal investigation, social media data has been used in contexts that raise distinct civil liberties concerns:

Immigration enforcement: U.S. Customs and Border Protection and Immigration and Customs Enforcement have used social media monitoring as part of visa vetting and immigration enforcement. Programs have included monitoring social media posts of visa applicants and monitoring accounts associated with individuals under investigation for immigration violations.

Protest monitoring: Law enforcement agencies — including the FBI, Department of Homeland Security, and local police departments — have monitored social media to track political protest activity. Investigative journalism has documented cases of protesters being identified through their social media posts, publicly stated locations, or information volunteered by social media companies to law enforcement.

"Social media threat assessments": Several law enforcement and intelligence agencies have purchased access to commercial social media monitoring services — companies that aggregate, search, and analyze publicly available social media data at scale — to conduct ongoing monitoring of political, community, and organizational activity.

🔗 Connection: Social media law enforcement use connects directly to the surveillance themes of Part 2 (State Surveillance) and anticipates Chapter 36's examination of racial surveillance via social media. Research has documented that social media monitoring programs have disproportionately targeted communities of color, Muslim communities, political activists, and journalists — a pattern consistent with the discriminatory deployment of surveillance technology discussed in Chapter 36.


13.7 The Surveillance Asymmetry in Social Media

Social media's defining surveillance characteristic — the feature that distinguishes it from prior media — is the way it creates a performance/observation asymmetry. Users perform on social media: they post, comment, react, share. The platform observes that performance: it records, analyzes, models, and monetizes it.

This asymmetry has a paradoxical quality. The performance is (mostly) public — visible to other users, chosen by the performer, shaped by the performer's intentions and self-presentation strategy. The observation is invisible — users know, in the abstract, that the platform observes them, but have almost no access to what the observation produces, how it is used, or what conclusions have been drawn.

Put differently: you know what you posted. You do not know what your posting means to the platform — what inference it triggers, what audience segment it places you in, what behavioral model it updates, what advertising category it signals. You performed; the platform interpreted. You generated the data; the platform owns the analysis.

This asymmetry is not accidental. It is the economic structure of the attention economy: the performance is valuable only insofar as it attracts audiences; the observation is valuable because it enables targeting. The platform monetizes the observation, not the performance.

📊 Real-World Application: Facebook's internal research documents — disclosed through the 2021 Frances Haugen whistleblower releases and subsequent Congressional testimony — revealed that the company's researchers had documented that Instagram (owned by Meta) made adolescent body image issues worse, particularly for teenage girls. The company's internal findings showed that 13.5% of teen girls who reported suicidal thoughts attributed the issue to Instagram. The company did not disclose this research publicly, did not change the relevant algorithm design, and continued to design the platform for engagement maximization. This is the observation/performance asymmetry in its starkest form: the platform observed harm it was producing; the users experiencing the harm were unaware of it; the platform kept both the observation and the harm private.


13.8 Jordan's Scenario: Data After Deletion

Jordan had deleted their Facebook account two years ago. Not a deactivation — an actual deletion, which, Facebook helpfully informed them, would take up to 90 days to complete. Jordan had checked the boxes and clicked the confirmation buttons and counted the 90 days. The account was gone.

So when Dr. Osei assigned a class exercise in requesting data from platforms, Jordan felt a flicker of satisfaction. They had nothing to request. They were already free.

Then Jordan learned about shadow profiles.

Following Yara's guidance, Jordan used a third-party tool to see what Facebook-pixel tracking data had been collected from sites Jordan had visited since the account deletion. The results were unexpected: Facebook had data from Jordan's subsequent browsing of news sites, a health information site, a political candidate's website, and several e-commerce pages — all of which had embedded Facebook's tracking pixel. Facebook knew Jordan's approximate location (derived from IP address), their general browsing patterns, and the categories of sites they had visited, all in the two years since they had deleted their account.

Beyond that: Jordan's phone contacts, imported by a mutual friend years ago, had given Facebook Jordan's phone number. Jordan appeared in the shadow profile system as a known entity with a phone number, an email address (provided by a different contact), and a behavioral profile built from pixel tracking — all without any account.

"I deleted my account," Jordan said, presenting this to Yara. "I did the thing. I followed the steps."

"The account is gone," Yara agreed. "The data isn't. The account was yours. The data is theirs."

Marcus, who still had his Facebook account, was uncomfortable with this in a way he didn't usually feel uncomfortable with tech privacy discussions. "I didn't know I was giving Facebook data about Jordan when I uploaded my contacts."

"That's the thing," Jordan said. "I didn't either. We're both the product and neither of us knew it."

Dr. Osei framed it for the class: "The shadow profile is the clearest demonstration that consent, in the social media context, is not an individual matter. Your data is partly determined by the choices of your social network — people you are connected to, who have connections to the platform you chose not to have. You cannot meaningfully opt out of a network that includes people who have opted in on your behalf, with your data."

💡 Intuition: Think of social media surveillance as a network effect rather than a bilateral transaction. Each user who joins a platform brings not just their own data but partial data about everyone in their social network — including people who have not joined and would not consent to join. The surveillance network expands not just through direct enrollment but through relational capture of the non-enrolled.


13.9 Platform Accountability and the Limits of "It's Free"

Social media companies have routinely defended their data practices on two grounds: first, that users consented to them by agreeing to terms of service; and second, that the service is provided "free" in exchange for data, and that users who don't like this can leave.

Both defenses are weaker than they appear.

The consent defense depends on the assumption that clicking through a terms of service represents meaningful, informed consent. As Chapter 12 established, this is a difficult claim to sustain when the documents being "agreed to" are tens of thousands of words long, change without notification, and are written in language that behavioral economists have shown most users cannot accurately summarize. A consent that no reasonable person could actually process is, in any meaningful sense, not consent.

The consent defense is further undermined by the shadow profile problem: non-users who have not agreed to any terms are nonetheless subject to the platform's data collection. Their data is collected without any agreement at all.

The "you can leave" defense is structurally problematic because social media is, by design, a network good: its value depends on the participation of others. If your friends, family, colleagues, and professional network are on a platform, leaving the platform means losing access to those connections in the contexts where they exist digitally. This is not a free choice in any ordinary sense — it is a choice between social participation and privacy, structured by network effects that no individual can alter through personal decision-making.

✅ Best Practice: When evaluating claims that social media data practices are consensual, ask two questions: (1) Did users have genuine understanding of what they were agreeing to? (2) Did they have a genuine alternative — a meaningful ability to decline without losing access to social connectivity? If both answers are no, the "consent" framing serves a legal and rhetorical function but does not reflect the substance of voluntary agreement.


13.10 Designing for Disclosure: The Platform's Interest in Your Data

It is a design principle, not an accident, that social media platforms make disclosure easy, automatic, and rewarding. The social and emotional mechanics that drive user engagement — the notification that someone liked your post, the dopamine reward of an encouraging comment, the social validation of a growing follower count — simultaneously drive the disclosure of behavioral data that the platform monetizes.

The platform has a direct financial interest in maximizing disclosure. More posts, more reactions, more shares, more time spent — all generate more behavioral data, which generates more targeting capability, which generates more advertising revenue. This means the platform's design incentives align precisely with maximizing surveillance. A platform that makes disclosure less emotionally rewarding would collect less data. Therefore, platforms make disclosure highly emotionally rewarding.

This is not a conspiracy theory about deliberate psychological manipulation (though some design choices do appear deliberately exploitative). It is a structural observation about the alignment of incentives: the platform's revenue model rewards exactly the behaviors that maximize data collection, and the platform's design should therefore be expected to maximize those behaviors.

Former employees of major social media platforms have confirmed this structural dynamic. Tristan Harris, a former Google design ethicist who became a prominent critic of the attention economy, has described how platform design teams explicitly optimize for "engagement" metrics — time on platform, notification response rates, share frequency — because these metrics directly correlate with advertising revenue. The emotional reward mechanisms are designed to be maximally compelling because maximal engagement is maximal revenue.


Summary: The Willing Observatory

Social media represents the most sophisticated voluntary surveillance infrastructure in history. Its genius, from a surveillance perspective, is that users are not passive subjects of surveillance but active participants in its construction. The behavioral residue they generate — the posts, reactions, messages, network connections, scroll patterns, and hover times — is produced through the act of sociality: of connecting with others, expressing themselves, engaging with information.

This participatory quality means that social media surveillance achieves a depth and intimacy that no coercive system could approach. The platform knows what you say publicly, what you say "privately," what you look at without reacting, who you are connected to, how strong those connections are, when you use the platform and how your usage correlates with your mood. It knows these things because you told it — through the design of a system that made telling feel like participating.

The shadow profile extends this surveillance to those who refuse participation, capturing them through the relational data generated by their connected friends and family. The law enforcement pipeline transforms the platform's commercial database into an investigative resource. And the emotional contagion experiment reveals the platform's capacity to not merely observe behavior but to shape it — not just an observation tower but a behavioral modification system.

Jordan's experience of finding Facebook's data after deleting their account crystallizes the chapter's central insight: in social media surveillance, "opting out" is available to individuals but unavailable to networks. Your data lives in the choices of everyone you know.


Key Terms

Contact shadow profile — A behavioral record maintained by a platform on an individual who has never created an account, built from data provided by users who have that individual's contact information.

Digital enclosure — Andrejevic's concept describing an environment in which participation requires disclosure — engagement with the social space requires generating the surveillance data the platform monetizes.

Geofence warrant — A legal instrument compelling a platform to disclose data about all devices (and persons) located within a specified geographic area during a specified time period.

Graph data — Data about the relationships among users in a social network, which can be analyzed to reveal community structure, influence, and personal characteristics.

Homophily — The social tendency to associate with similar others, which makes network characteristics predictive of individual characteristics.

Participatory surveillance — Mark Andrejevic's term for surveillance systems in which users actively participate in generating the data that surveillance collects, as in social media platforms.

Social network analysis (SNA) — Quantitative methods for studying the structure and properties of social networks.

Synopticon — Thomas Mathiesen's concept of the many watching the few (as in media audiences watching celebrities), contrasted with the panopticon's few watching the many.


Discussion Questions

  1. Mark Andrejevic argues that participatory surveillance is qualitatively different from coercive surveillance because users actively construct the surveillance apparatus. Is this distinction morally significant? Does voluntary participation in one's own surveillance reduce the ethical weight of the harm it might cause?

  2. The Facebook emotional contagion experiment manipulated nearly 700,000 users' feeds without their knowledge. Facebook defended this as covered by the terms of service. Evaluate this defense. What would genuine consent to this kind of research require?

  3. The shadow profile problem means that a person's social media data is partly determined by the choices of their social network — people who upload contacts, tag photographs, and share data about non-users. Who bears moral responsibility for this form of non-consensual data collection? The users who upload contacts? The platform? Both? Neither?

  4. Social media law enforcement data requests numbered in the tens of thousands per year in the United States. How should we evaluate the relationship between platform data collection for commercial purposes and the subsequent availability of that data for law enforcement? Does the commercial collection of data change its character when accessed by government agencies?

  5. The chapter argues that the "you can leave" defense fails because social media is a network good whose value depends on others' participation. Does this network structure create special obligations for platform operators? What might those obligations look like?


Chapter 13 of 40 | Part 3: Commercial Surveillance
Backward references: Chapter 2 (Panopticon/Synopticon), Chapter 11 (Data Economy), Chapter 12 (Tracking)
Forward references: Chapter 34 (Surveillance Capitalism), Chapter 36 (Racial Surveillance via Social Media)