In This Chapter
- Opening: A Different Kind of Watched
- Section 1: Why Children Represent a Special Surveillance Problem
- Section 2: School Surveillance Infrastructure
- Section 3: Digital Monitoring in Schools
- Section 4: Legal Frameworks — What FERPA Protects (and Doesn't)
- Section 5: The School-to-Prison Pipeline and Surveillance
- Section 6: Jordan's Reflection — A History of Being Watched
- Section 7: Toward a Framework for Children's Surveillance
- Chapter Summary
- Key Terms
- Discussion Questions
Chapter 37: Children Under the Gaze: Schools, Apps, and Protective vs. Controlling Surveillance
Opening: A Different Kind of Watched
Imagine a student — call her Daniela, sixteen years old — sitting in class on a Tuesday morning. Her school-issued Chromebook is open. GoGuardian, the district's monitoring software, is running silently in the background, tracking every website she visits, every search she conducts, every document she opens. When she opens a Google Doc to draft an essay, GoGuardian logs the activity. When, midway through class, she opens a browser tab and searches for information about depression — not because she is in crisis, but because she's writing about mental health for her health class — GoGuardian flags the search for review by a school administrator.
The administrator receives an alert. They review the search and the surrounding context. They determine, correctly, that it is for a class assignment. No intervention occurs. Daniela never learns that her search was flagged.
But here is what Daniela does know: she knows, somewhere, that her searches are monitored. She learned this at the beginning of the school year, in the acceptable use policy she clicked through on the first day. She does not know exactly what is monitored, or by whom, or for how long. She does not know whether the flag for her mental health search is in a database somewhere. She navigates the school's digital environment with a background awareness — a low-level wariness — that shapes what she searches for and what she does not.
Daniela is not unusual. She is a typical American high school student in 2024, living in what has become a comprehensively monitored educational environment. School surveillance infrastructure now includes physical cameras, digital monitoring software, biometric cafeteria payment systems, behavioral analytics, AI-powered email screening, GPS-enabled school buses, and location-tracking student ID systems. She grew up in this environment; she does not know a different one. That is precisely the problem this chapter addresses.
Section 1: Why Children Represent a Special Surveillance Problem
1.1 The Autonomy Problem
Children occupy a distinctive position in the architecture of surveillance. Unlike adults, they have legally limited autonomy: their parents or guardians hold decision-making authority over many aspects of their lives; their educational participation is legally compelled (through age-based compulsory schooling laws); and they have reduced legal capacity to consent to, challenge, or opt out of surveillance systems that affect them.
This limited autonomy creates what we might call the surveillance double bind for children: they are among the most extensively surveilled populations in any society, yet they have the least capacity to contest that surveillance. A worker who finds their employer's monitoring excessive can quit (at significant cost). A consumer who objects to data collection can delete an app (at some inconvenience). A child in a compulsory school setting monitored by state-mandated surveillance technology has, effectively, no exit option. They must attend; they must use the issued devices; they must submit to the monitoring conditions of attendance.
This is not an argument that children should be free from all monitoring. There are genuine safety rationales for some forms of school surveillance: cameras in school buildings may deter violence; software that detects self-harm ideation in student communications may save lives. But the lack of exit, voice, and contestability that characterizes children's position in surveillance systems means that the stakes of getting surveillance design wrong are particularly high.
💡 Intuition Check: When you think about surveillance of children in schools, your initial reaction is probably shaped by whether you frame the surveillance as protective (keeping kids safe) or controlling (restricting autonomy and development). By the end of this chapter, consider whether that binary is adequate — or whether the same systems can function as both protective and controlling simultaneously, for different children, in different contexts.
1.2 The Developmental Argument
Beyond the autonomy question, there is a developmental argument for treating the surveillance of children as especially consequential. Adolescence and young adulthood are periods of identity formation, experimentation, and the development of an autonomous self. Developmental psychologists from Erik Erikson through James Marcia to contemporary researchers have documented that adolescent identity development requires opportunities for:
- Privacy: the ability to think, experiment, and make mistakes without constant observation
- Risk-taking: the ability to try things — identities, behaviors, ideas — without permanent record
- Autonomy: the experience of making choices and bearing their consequences
Comprehensive surveillance environments constrain all three. If every digital communication is potentially monitored, if every search is logged, if every behavioral deviation from school norms is flagged and recorded, the space for authentic experimentation that development requires is compressed. Students learn to perform compliance rather than to develop judgment. They learn to self-censor rather than to think freely. They learn that authority has complete visibility into their private thoughts and communications — a lesson that shapes their relationship to authority, to privacy, and to their own inner life.
The scholar Shoshana Zuboff, whose concept of behavioral surplus we examined in Chapter 34, has written about surveillance's "behavioral modification" effects — the ways that the awareness of being watched changes behavior. In children, who are still forming the habits of mind that will characterize their adult lives, these modification effects may be especially persistent.
🎓 Advanced Note: Developmental psychologists distinguish between internalized self-regulation — the capacity to monitor and guide one's own behavior according to internalized values — and externalized compliance — conforming to external demands under the pressure of observation. Comprehensive surveillance environments may produce excellent externalized compliance while stunting the development of internalized self-regulation: students who behave well because they are always watched but who have not developed the internal resources to behave well when no one is watching. This is precisely the opposite of education's stated developmental goals.
Section 2: School Surveillance Infrastructure
2.1 Cameras and Physical Monitoring
Physical security surveillance in American schools has expanded dramatically since the Columbine High School shooting in 1999, and again after Sandy Hook in 2012. According to the National Center for Education Statistics, approximately 91 percent of U.S. public schools had security cameras in their hallways by 2018, up from 61 percent in 2003. Many schools have extended camera coverage to classrooms, cafeterias, gymnasiums, and the corridors outside bathrooms (interior bathroom surveillance is generally prohibited).
School cameras share the surveillance functions we examined in Chapter 8 for public space CCTV: deterrence (the visibility of cameras is intended to deter behavior), documentation (footage can be reviewed after incidents), and identification (cameras can be used to identify students involved in incidents). In school contexts, cameras also contribute to the disciplinary infrastructure: they enable teachers and administrators to verify accounts of incidents, document behavioral violations, and — in some districts — support school resource officer activity.
📊 Real-World Application: Some school districts have deployed "smart" cameras with AI-powered analytics, including systems that claim to detect weapons, aggressive behavior, or "suspicious" movement patterns. Notably, these systems have the same documented bias problems as commercial facial recognition: they perform less accurately on darker-skinned faces, raising the same racial equity concerns examined in Chapter 36. A "smart" camera that more readily flags Black students as behaving aggressively is not a safety tool; it is a racializing surveillance technology.
2.2 Student ID Tracking Systems
Some school districts have implemented student ID systems with embedded RFID (radio frequency identification) chips that enable the school to track student location within the building. The Northside Independent School District in San Antonio, Texas, piloted such a system in 2012, requiring students to wear ID badges that reported their location to readers throughout the school building. The stated rationale was improving attendance tracking and, therefore, the district's per-pupil funding (which was tied to average daily attendance).
The program became a landmark case in school surveillance debates when student Andrea Hernandez refused to wear the tracking badge on religious grounds (she and her family believed the technology was the "mark of the beast" referenced in Revelation). The district's response — first offering a badge with the chip removed, then moving to transfer her from her magnet school to her home campus when she refused to wear any badge at all — illustrated how coercive school surveillance systems become when attendance itself is compulsory and the surveillance is tied to that attendance.
The RFID tracking case raised several of the issues that characterize children's surveillance more generally: the conflation of safety and administrative rationales; the absence of meaningful consent from students (or even their parents) in the design of the system; and the punitive consequences for anyone who refused to participate.
2.3 Cafeteria Biometrics and Payment Systems
Many school districts have implemented biometric identification systems in cafeterias, using fingerprint scanners or palm readers to identify students and charge their lunch accounts. These systems are presented as convenience tools — students do not need to carry cards or remember account numbers — but they introduce biometric data collection into an institution that children are legally compelled to attend.
The consent dynamics here are particularly troubling. Children whose families qualify for free or reduced-price lunch have no meaningful choice about whether to use the cafeteria system. The alternative — skipping school lunch — is nutritionally serious for children from low-income families, for whom it may be the day's primary meal. Their participation in a biometric data collection system is effectively compelled by their economic circumstances and by state compulsory attendance laws.
🔗 Connection to Chapter 17: In Chapter 17, we examined how domestic monitoring technologies extend surveillance into the spaces traditionally associated with privacy and refuge. Schools occupy an analogous position in children's lives: they are not homes, but they are the dominant environment of childhood, the places where children spend the most time outside family settings. Comprehensive surveillance of schools effectively eliminates the last relatively unmonitored domain of daily life for many children.
Section 3: Digital Monitoring in Schools
3.1 Student Monitoring Software — GoGuardian, Securly, Bark
The proliferation of school-issued devices during the 2010s — Chromebooks, iPads, laptops — accelerated the deployment of software-based student monitoring. Schools that issued devices found themselves with both the technical capability and the institutional pressure to monitor how those devices were used.
GoGuardian, the most widely deployed school monitoring platform, claims to serve over 6,000 schools and 27 million students. On school-issued devices, GoGuardian captures a continuous stream of browsing activity, search queries, and document interactions. It includes a "Beacon" feature that uses AI to identify students who may be at risk of self-harm based on their search patterns and browsing behavior, sending alerts to designated school counselors or administrators.
Securly performs similar functions with an emphasis on content filtering. Bark takes a different approach, positioning itself as a monitoring tool that parents can use on personal devices — scanning email, text messages, and social media for "concerning content" using AI analysis, then sending alerts to parents when it flags something.
These platforms exist on a spectrum from content filtering (blocking access to certain categories of content) to behavioral monitoring (analyzing content for indicators of concerning behavior) to surveillance (continuously logging activity for review). Different platforms sit at different points on this spectrum, and different districts use the same platform in different ways, making generalizations about "what school monitoring software does" difficult.
What can be said is that these platforms create, for the first time in educational history, a permanent, searchable, analyzable record of students' digital cognitive activity — every curiosity explored, every topic researched, every question asked through a search engine. This record exists in institutional systems, is governed by policies that students and their families rarely fully understand, and is subject to law enforcement access through standard legal process.
⚠️ Common Pitfall: When evaluating school monitoring software, it is tempting to assess it purely by its stated safety rationale. The Beacon feature's proponents note that early identification of students at risk of self-harm genuinely saves lives — and there is evidence to support this. But evaluating a surveillance system only by its best cases (the student at risk who was correctly identified and helped) ignores its typical cases (the thousands of students whose searches for mental health information, LGBTQ+ identity resources, or political content were logged and potentially flagged without any safety benefit) and its worst cases (students whose private communications were shared with parents who used that information abusively, or whose mental health searches were used in ways the student had not anticipated).
3.2 Gaggle — AI That Reads Student Email
Gaggle represents a distinct and particularly invasive category of student monitoring: AI-powered screening of student emails and other communications. Deployed in over 1,500 school districts, Gaggle uses natural language processing algorithms to analyze the content of student emails, Google Drive documents, and other materials created in district-provided accounts, flagging content that its algorithms identify as potentially indicating self-harm, suicidal ideation, violence, or sexually explicit material.
Flagged content is reviewed by Gaggle employees before being escalated to school officials. The company reports that it has "helped prevent thousands of suicides" — a claim that is difficult to verify independently but that speaks to the genuine safety rationale for the product.
The civil liberties concerns about Gaggle are substantial. Student emails on school systems are not private, as courts have confirmed in multiple decisions. But students may not fully understand this, writing in email with a subjective expectation of privacy that does not correspond to their legal reality. A student writing to a friend about their mental health struggles, their questioning of their sexual orientation, their political views, or their personal conflicts may believe they are communicating privately. They are, in effect, communicating in a monitored channel that is being read by AI and potentially reviewed by human employees and school administrators.
The developmental stakes are direct: the spaces in which adolescents process identity, struggle with difficult emotions, and form relationships are precisely the spaces that Gaggle monitors. A system that reads student email does not merely capture data; it colonizes the communication channels through which adolescent social and emotional development occurs.
3.3 The Proctorio Controversy — Remote Surveillance During COVID
When the COVID-19 pandemic forced schools and universities to move to remote instruction in 2020, the demand for remote proctoring solutions — software that could monitor students taking online examinations from home — accelerated dramatically. Proctorio, one of the major providers, deployed a suite of surveillance tools including:
- Webcam monitoring of the student's face throughout the exam
- Screen capture to record everything displayed on the student's screen
- AI analysis of eye movement, head position, and facial expressions to detect "suspicious" behavior
- Browser lockdown to prevent access to external websites
- Recording of the student's entire physical environment through the webcam
Several dimensions of Proctorio's approach generated criticism. The AI systems used to detect "suspicious" behavior had documented racial bias: algorithms trained to flag non-normative behavior patterns flagged students of color at higher rates than white students, because the "normative" behavior patterns were derived from datasets that over-represented white faces. Students with anxiety disorders, physical disabilities affecting eye movement, or conditions affecting their facial expressions were flagged as suspicious by systems that had no capacity to account for these variations.
Hartwell University itself was among the institutions that deployed Proctorio during the pandemic years. Students' protests about invasion of privacy were real; so was the institutional pressure to maintain academic integrity in a remote environment. The Proctorio episode illustrated how crisis conditions can normalize surveillance that would, in ordinary circumstances, be considered excessive — and how difficult it is to un-normalize surveillance once it has been introduced.
🔗 Connection to Chapter 19: In Chapter 19, we examined parental monitoring technologies and the tension between protective oversight and controlling surveillance in domestic settings. The Proctorio debate extends this tension into educational settings: when does monitoring designed to ensure integrity become surveillance that violates dignity and produces racially disparate outcomes?
Section 4: Legal Frameworks — What FERPA Protects (and Doesn't)
4.1 FERPA — The Family Educational Rights and Privacy Act
FERPA, enacted in 1974, governs the privacy of student educational records held by educational institutions that receive federal funding — which is to say, essentially all American public schools and most private colleges. FERPA's core provisions give parents (and students over 18) the right to:
- Access their educational records
- Request correction of inaccurate records
- Control the disclosure of those records to third parties
FERPA has become, through a combination of lobbying by educational technology companies and regulatory interpretation that has not kept pace with technological change, significantly weaker than its text would suggest.
The "school official" exception to FERPA's restrictions on disclosure allows educational institutions to share student records with "school officials" who have a "legitimate educational interest." This exception has been expansively interpreted to permit sharing with third-party vendors who provide services to the school. In practice, this means that GoGuardian, Gaggle, Bark, Proctorio, and other monitoring vendors may have access to substantial student data under the school official exception.
There is no comprehensive federal requirement that educational technology vendors that receive student data under the FERPA school official exception maintain data minimization practices, retain data only as long as necessary, or allow students to access or correct data held by the vendor rather than the school. The result is a legal framework that was designed for paper records in 1974, applied imperfectly to digital surveillance systems in the 2020s.
📝 Note: A consortium of state attorneys general and the Federal Trade Commission have taken increasingly aggressive enforcement positions on student data privacy in the 2020s. Several states — including California, Illinois, and Colorado — have enacted student data privacy laws that go beyond FERPA in important respects. But the landscape remains fragmented and inadequate relative to the sophistication of the surveillance systems deployed in schools.
4.2 COPPA and Children's Online Privacy
The Children's Online Privacy Protection Act (COPPA), enacted in 1998 and strengthened by rule in 2013, requires operators of websites and online services directed to children under 13 to obtain verifiable parental consent before collecting personal information. COPPA also requires these services to maintain data security, provide parents with access to their children's data, and allow parents to have that data deleted.
COPPA's limitations are well-documented. Age verification mechanisms are easily circumvented — children simply lie about their age. Platforms like YouTube, Instagram, and TikTok have faced COPPA enforcement actions for collecting data from users they knew to be under 13, but the enforcement has been retrospective and the civil penalties, while substantial by regulatory standards, are modest relative to the data collection value.
In 2019, Google and its subsidiary YouTube were fined $170 million for COPPA violations: the FTC found that YouTube's main platform had collected personal information from children under 13 who were viewing child-directed ("made for kids") content, without obtaining verifiable parental consent. The fine was a regulatory record at the time, but amounted to well under a single day of Alphabet's revenue.
4.3 The Hartwell University Case Study
Hartwell University occupies a surveillance position different from K-12 schools: its students are predominantly adults, and the legal frameworks that apply to children's data do not apply to university students. But Hartwell's surveillance infrastructure illustrates the campus extension of surveillance normalization.
Jordan has noticed, over their three years at Hartwell, that the university's digital infrastructure captures far more data than students typically realize:
- The campus card (used for building access, dining, printing, and library services) generates a timestamped location record for every use
- The university WiFi network logs connection data including device identifiers and the times and locations of connections
- The learning management system (Hartwell uses a version of Canvas) logs student login times, time spent on each course page, quiz attempts and their timing, and communication patterns
- Library records are protected under state library confidentiality laws, but database access logs are retained by vendors
- Campus cameras, which Jordan had never particularly noticed, cover most of the public spaces of campus
None of this is secret. It is disclosed, somewhere, in policies that students are expected to read. That Jordan had not read them carefully until this course is not unusual; that the disclosure is structured to minimize awareness rather than maximize it is, by now, a familiar pattern in consent architecture.
The campus card record, Jordan realizes one afternoon, would reveal a reasonably accurate map of their entire three years at Hartwell: when they woke up (first dining hall scan), where they spent time (building access records), when they were in the library (door access), how frequently they used the gym (access scan). It is not surveillance in the dramatic sense. It is the quiet accumulation of behavioral data that, in aggregate, constitutes a detailed portrait of a life.
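The aggregation Jordan imagines requires no sophisticated analytics. A minimal sketch (the swipe data, reader names, and function names are entirely hypothetical) shows how a plain timestamped access log becomes a reconstructed daily routine:

```python
from collections import defaultdict

# Hypothetical campus-card swipe log: (timestamp, reader location).
# Each entry is mundane on its own; the aggregate sketches a life.
swipes = [
    ("2024-03-04 07:42", "North Dining Hall"),
    ("2024-03-04 09:01", "Humanities Building"),
    ("2024-03-04 13:15", "North Dining Hall"),
    ("2024-03-04 14:03", "Library"),
    ("2024-03-04 18:30", "Gym"),
    ("2024-03-05 07:51", "North Dining Hall"),
    ("2024-03-05 09:05", "Humanities Building"),
]

def daily_timeline(log):
    """Group swipes by date, ordered by time: a reconstructed day."""
    days = defaultdict(list)
    for stamp, place in log:
        date, time = stamp.split(" ")
        days[date].append((time, place))
    return {d: sorted(events) for d, events in days.items()}

def first_scan(log):
    """Earliest swipe per day -- a proxy for when the cardholder woke up."""
    return {d: events[0] for d, events in daily_timeline(log).items()}

print(first_scan(swipes))
# e.g. {'2024-03-04': ('07:42', 'North Dining Hall'), ...}
```

The point of the sketch is how little inference is involved: no camera, no content monitoring, just routine administrative records sorted by date.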
Section 5: The School-to-Prison Pipeline and Surveillance
5.1 School Resource Officers and the Criminalization of Youth
School Resource Officers (SROs) — law enforcement officers stationed in schools — are another dimension of school surveillance infrastructure. The SRO program expanded significantly in the 1990s and 2000s, often funded through federal community policing grants. By 2018, approximately 58 percent of U.S. students attended schools with at least one SRO present.
The relationship between SRO presence and student outcomes is contested. Advocates argue that SROs build community-police relationships and provide safety. Critics argue that the primary documented effect of SRO presence is an increase in school-based arrests for incidents that, in previous generations, would have been handled through school discipline rather than criminal justice involvement.
The racial dimension is striking. Research published in the Journal of Criminal Law and Criminology found that Black students were 3.6 times more likely to be arrested at school than white students. Latino students were 2.7 times more likely. These disparities were not explained by behavioral differences; they persisted after controlling for school demographics and were partly attributable to racialized perception of student behavior.
The connection to surveillance is direct: SROs are surveillance actors in schools. Their presence means that student behavior in school is subject not merely to educational discipline but to law enforcement observation and potential criminal prosecution. When the surveillance apparatus of the school is integrated with the criminal justice system through SRO presence, the school becomes a node in the surveillance-to-prison pipeline.
📊 Real-World Application: The school-to-prison pipeline is not merely a metaphor. Young people who are arrested at school face dramatically elevated risks of dropping out, subsequent adult criminal justice involvement, and long-term economic precarity — outcomes that are, in the United States, profoundly racialized. School surveillance that generates disproportionate discipline and arrest of Black and Latino students is, through this pipeline, contributing to long-term racial inequality in economic outcomes.
Section 6: Jordan's Reflection — A History of Being Watched
6.1 Mapping the Surveillance Arc
Jordan sits with Dr. Osei after class one Thursday afternoon, working through the implications of this chapter for their own analysis. Dr. Osei has asked each student in the seminar to map the surveillance timeline of their own education — to identify, looking back, the surveillance systems that had been present in their schools from elementary through higher education.
Jordan's list is longer than they had expected.
Elementary school (ages 5–11): Physical security cameras in all common areas. Name tags on backpacks (required for pickup authorization). Report cards with behavioral ratings. Parent-teacher communication through school email that, they now realize, was a monitored school account. The assistant principal who seemed to appear whenever a group of kids congregated in a particular corner of the playground.
Middle school (ages 11–14): Physical cameras in hallways and cafeteria. A swipe card system for building entry. First-ever school-issued iPad, with filtering software that blocked certain search terms. School email account monitored — they had not known this explicitly, but had somehow understood it. An incident in seventh grade when a note about a classmate, passed by email, had been read by a teacher. The social media incident in eighth grade, when a post by another student had been shared with school administrators who then interviewed Jordan and several others.
High school (ages 14–18): Chromebooks with GoGuardian. Cameras throughout the building. An SRO whose presence they had found intimidating though he had never personally done anything to Jordan. Drug searches with dogs twice per year. Mandatory signing of an "acceptable use policy" that, reading it carefully now, gave the school extensive rights to monitor digital activity. The guidance counselor who had mentioned, once, that Jordan had been "flagged" by the monitoring system for searching mental health topics — flagged, reviewed, and cleared, but flagged nonetheless.
Hartwell University (ages 18–22): Campus card data. WiFi logs. LMS tracking. Cameras. Proctorio during pandemic years. The consent forms for research studies that had captured detailed behavioral and psychological data. The university's relationship with third-party data brokers Jordan had never known about until a news article mentioned it.
What strikes Jordan, doing this exercise, is not that any individual surveillance system seems outrageous in isolation. The cameras in the school hallway seem obviously appropriate. The monitoring software seems defensible given the legitimate concern about student safety online. The campus card seems merely convenient. But the cumulative picture — the unbroken chain of surveillance environments from early childhood through emerging adulthood — reveals something different: they had never lived in an unwatched space. They had grown up inside an architecture of monitoring so complete and so naturalized that they had not registered it as surveillance at all.
6.2 What Growing Up Watched Teaches
The most significant thing Jordan notices, mapping this history, is the effect on self-perception. You do not know who you are when you have never not been watched. The identity you develop in monitored spaces is already filtered, already calibrated to anticipated observation. The question of who you would be in genuine privacy — what you would search, what you would write, who you would talk to about what — is, for Jordan's generation, largely theoretical.
This is not nostalgia for a fictional unmonitored past — Jordan's parents were surveilled by different means (school records, peer networks, parental oversight). But the comprehensiveness, the digitization, and the persistence of contemporary childhood surveillance are qualitatively different. A seventh-grade note passed on paper disappears. A seventh-grade email flagged by monitoring software is in a database.
The developmental stakes of this persistence are what make the chapter's argument most urgent. Not just that children are surveilled — they always have been, in various ways — but that the contemporary surveillance of children creates permanent records of the most private dimensions of development, in systems governed by policies designed for administrative convenience rather than human dignity.
Section 7: Toward a Framework for Children's Surveillance
7.1 The Protective vs. Controlling Distinction
The chapter's framing — protective vs. controlling surveillance — is useful but insufficient as a binary. The same system can function as both. The GoGuardian Beacon feature may genuinely identify a student at risk of self-harm (protective) while simultaneously chilling the search behavior of thousands of students with no safety concern (controlling). The school camera system may deter violence (protective) while subjecting Black students to disproportionate scrutiny (controlling through racially biased application).
A more useful framework asks, for any proposed surveillance system in a children's educational context:
- What is the specific safety benefit, and is it documented? Not the theoretical benefit, but the evidence base for the claim.
- What is the scope of data collection, and is it limited to what is necessary for that specific benefit? (Data minimization)
- What are the disparate impacts on different student populations? Particularly race, disability status, gender identity, and immigration status.
- Who has access to the collected data, for how long, and under what conditions? What happens when students turn 18? When they graduate? When a law enforcement request arrives?
- What are the mechanisms for students and families to understand, contest, and correct data about them?
- Is there a sunset provision? Systems introduced as emergency measures (like pandemic-era remote proctoring) should have explicit criteria for discontinuation.
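The six questions above can be read as a pre-deployment checklist: a proposed system should clear every criterion before adoption, not merely most of them. As a minimal sketch, the checklist might be encoded like this — the criterion names and the `evaluate_proposal` helper are illustrative inventions for this example, not a standardized assessment instrument:

```python
# Hypothetical sketch: the chapter's framework questions encoded as a
# pre-deployment checklist. All names here are illustrative assumptions.

FRAMEWORK_CRITERIA = [
    "documented_safety_benefit",    # evidence base, not a theoretical claim
    "data_minimization",            # collection limited to the stated benefit
    "disparate_impact_reviewed",    # race, disability, gender identity, immigration status
    "access_and_retention_rules",   # who sees data, for how long, after graduation
    "contest_and_correct_process",  # mechanisms for students and families
    "sunset_provision",             # explicit criteria for discontinuation
]

def evaluate_proposal(answers: dict) -> tuple[bool, list]:
    """Return (passes, unmet): passes is True only if every criterion
    is affirmatively satisfied -- a single gap fails the proposal."""
    unmet = [c for c in FRAMEWORK_CRITERIA if not answers.get(c, False)]
    return (len(unmet) == 0, unmet)

# Example: a proposal that satisfies everything except a sunset provision.
proposal = {c: True for c in FRAMEWORK_CRITERIA}
proposal["sunset_provision"] = False
passes, unmet = evaluate_proposal(proposal)
# passes is False; unmet lists only "sunset_provision"
```

The conjunctive structure is the substantive design choice: under this framework, a documented safety benefit cannot offset missing retention rules or an absent sunset clause.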
✅ Best Practice: School districts adopting monitoring software should conduct privacy impact assessments before deployment, publish those assessments publicly, provide meaningful notice to students and families in accessible language (not legal boilerplate), and establish community oversight processes that include student voice. Several school districts — including Oakland Unified and Los Angeles Unified in California — have adopted formal community oversight ordinances for school surveillance technology. These represent best practice in a landscape that more typically involves no advance community input at all.
7.2 FERPA Reform and Children's Data Rights
The inadequacy of FERPA for the contemporary school surveillance landscape has prompted calls for legislative reform. Proposed elements of a strengthened student data privacy framework include:
- Extending FERPA's protections to vendor-held data, not merely school-held data
- Requiring data minimization — vendors can only retain data necessary for their stated educational function
- Giving students over 13 independent (not merely parental) access and correction rights
- Requiring transparency reporting on law enforcement requests for student data
- Establishing meaningful enforcement penalties for violations (FERPA currently lacks a private right of action)
- Requiring automatic deletion of student data upon graduation or departure from the institution
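The last reform — automatic deletion upon graduation or departure — is mechanically simple, which is part of the argument for it. A minimal sketch of such a retention rule follows; the 30-day grace period is an illustrative assumption, not a figure drawn from any proposed legislation:

```python
from datetime import date, timedelta

# Hypothetical sketch of an automatic-deletion rule: a student's
# monitoring records become deletable once a grace period after
# graduation or departure has elapsed. The grace period is an
# illustrative assumption.

GRACE_PERIOD = timedelta(days=30)

def should_delete(departure_date: date, today: date) -> bool:
    """True once the record has outlived the post-departure grace period."""
    return today >= departure_date + GRACE_PERIOD

# A student who graduated June 1 still has records in mid-June,
# but not by mid-July.
```

The point of the sketch is that deletion can be a default enforced by the system itself, rather than something a former student must know to request.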
These reforms would not resolve all tensions in school surveillance — the safety vs. privacy balance is genuinely difficult — but they would establish a legal infrastructure more adequate to the surveillance capabilities currently deployed against children.
Chapter Summary
Children occupy a distinctive position in the architecture of surveillance: they are among the most comprehensively monitored populations in contemporary society, yet they have the least capacity to contest or opt out of the surveillance systems that affect them. The argument of this chapter has not been that all surveillance of children is wrong — there are genuine protective rationales for some monitoring in school settings. The argument is that the current landscape of school and children's technology surveillance is far more extensive than any protective rationale requires, is inadequately governed by legal frameworks designed for a different era, produces racially disparate outcomes that connect to the school-to-prison pipeline, and has developmental consequences that have not been adequately weighed against its safety benefits.
We traced the surveillance infrastructure of contemporary schools: physical cameras and ID tracking, digital monitoring software (GoGuardian, Gaggle, Bark), cafeteria biometrics, and the remote proctoring turn of the pandemic era. We examined the legal frameworks — FERPA and COPPA — that partially but inadequately govern these systems. We connected school surveillance to the school-to-prison pipeline through the racialized deployment of school resource officers. And through Jordan's exercise of mapping their own surveillance history, we illustrated what it means to grow up inside an architecture of monitoring so complete and naturalized that it escapes conscious recognition.
The normalization of children's surveillance is a particular expression of the broader normalization that this book has tracked throughout: surveillance becomes invisible when it is ubiquitous, becomes natural when it is all you have ever known, and becomes the baseline against which future intrusions are measured. Jordan's generation is the first to have grown up entirely inside the digital surveillance architecture; the next generation will have grown up with smart speakers in their nurseries, GPS tracking in their shoes, and AI monitoring of their emotional expressions in preschool classrooms. The architecture builds on itself.
Key Terms
- Compulsory attendance: Legal requirement that children of certain ages attend school, eliminating exit options from school surveillance environments
- GoGuardian: Widely deployed school monitoring software tracking student browsing, searches, and document activity; includes the Beacon feature for self-harm detection
- Gaggle: AI-powered platform that screens student emails and digital communications for concerning content
- FERPA (Family Educational Rights and Privacy Act, 1974): Federal law governing privacy of student educational records; increasingly inadequate for contemporary digital surveillance
- COPPA (Children's Online Privacy Protection Act, 1998): Federal law requiring parental consent for collection of personal data from children under 13; enforcement is retrospective and penalties are modest relative to data value
- School-to-prison pipeline: Pattern by which school disciplinary practices — including surveillance-enabled discipline — disproportionately funnel Black and Latino students toward criminal justice involvement
- School Resource Officer (SRO): Law enforcement officer stationed in a school; presence correlated with increased school-based arrests, particularly for Black and Latino students
- Remote proctoring: Software that monitors students taking online examinations, using webcams, screen recording, and AI behavioral analysis; documented racial bias in behavioral flagging
- Privacy impact assessment: Systematic evaluation of a system's privacy implications before deployment; best practice for school technology adoption
Discussion Questions
- The chapter argues that children face a "surveillance double bind" — they are among the most surveilled populations with the least capacity to contest surveillance. Do you find this framing persuasive? What mechanisms, if any, exist for children to contest surveillance in school settings?
- The Gaggle platform reads student emails to identify students at risk of self-harm. Evaluate the trade-off between this safety benefit and the developmental harm of colonizing students' communication channels. Does your evaluation change if the system has a documented history of preventing suicides?
- Jordan's exercise — mapping their own surveillance history — revealed how normalized childhood surveillance had become. Conduct a brief version of this exercise for your own educational history. What do you find?
- The school-to-prison pipeline connects school surveillance to racial inequality in criminal justice and economic outcomes. Does this connection make school surveillance a racial justice issue? If so, what follows from that?
- If you were advising a school district considering the deployment of AI-powered monitoring software, what conditions would you require as prerequisites for deployment?