Case Study: Neuralink and Neural Data Governance
"The question is not whether we will have brain-computer interfaces. The question is whether we will have governance for them before or after they change what it means to be human." — Adapted from Marcello Ienca and Roberto Andorno, "Towards New Human Rights in the Age of Neuroscience and Neurotechnology" (2017)
Overview
In January 2024, Neuralink — the neurotechnology company founded by Elon Musk — implanted its first brain-computer interface (BCI) device in a human patient. The implant, which Musk has said the company will market under the product name "Telepathy," consists of 1,024 electrodes embedded in the brain's motor cortex, connected to a small processor sealed in the skull. The patient, Noland Arbaugh, a 29-year-old quadriplegic, used the device to control a computer cursor with his thoughts — playing chess, browsing the internet, and navigating apps without any physical movement.
The moment was hailed as a breakthrough for assistive technology. But it also marked the beginning of a governance challenge that no existing framework is equipped to address: the collection, storage, and use of data generated by the human brain.
This case study examines Neuralink's technology and business model, the governance gaps surrounding neural data, the emerging field of "neurorights," and the implications for the data governance frameworks studied throughout this course.
Skills Applied:

- Analyzing an emerging technology through a data governance lens before governance frameworks exist
- Evaluating the adequacy of existing regulatory frameworks for a novel data category
- Applying the Collingridge dilemma to a real-world case
- Connecting neurotechnology to broader themes of power, consent, and accountability
The Technology
How Neuralink Works
Neuralink's N1 implant is a coin-sized device surgically implanted in the skull. Ultra-thin threads — thinner than a human hair — penetrate the brain's cortex, with 1,024 electrodes recording the electrical activity of nearby neurons. The device processes neural signals on-chip, compresses them, and transmits them wirelessly to an external device (a computer or smartphone) via Bluetooth.
The initial application is narrow: translating motor cortex activity into cursor movement for people with paralysis. But the data the device collects — continuous electrical recordings from over a thousand brain sites — is far richer than what is needed for cursor control. The neural signals contain information about:
- Motor intention — what movements the user intends to make (the primary use case)
- Cognitive state — attention levels, mental effort, fatigue
- Emotional indicators — neural correlates of stress, frustration, excitement, sadness
- Potentially, thought patterns — as resolution and decoding improve, neural activity may reveal intentions, preferences, and mental imagery with increasing specificity
The Data Architecture
Neuralink's data architecture raises immediate governance questions:
Volume. The device records from 1,024 channels continuously. Even with on-device compression, the volume of neural data generated per patient per day is substantial.
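A back-of-envelope calculation makes the scale concrete. The channel count (1,024) is from the device specification above; the sampling rate (~20 kHz per channel) and sample depth (10 bits) are assumptions based on figures Neuralink has presented publicly, not confirmed specifications:

```python
# Rough estimate of raw (pre-compression) neural data volume.
# CHANNELS is from the case study; the sampling rate and bit depth
# are assumptions based on publicly presented Neuralink figures.

CHANNELS = 1_024
SAMPLE_RATE_HZ = 20_000   # assumption: ~20 kHz per channel
BITS_PER_SAMPLE = 10      # assumption: 10-bit ADC resolution
SECONDS_PER_DAY = 86_400

bits_per_second = CHANNELS * SAMPLE_RATE_HZ * BITS_PER_SAMPLE
bytes_per_day = bits_per_second * SECONDS_PER_DAY / 8

print(f"Raw data rate: {bits_per_second / 1e6:.0f} Mbit/s")
print(f"Raw volume per patient per day: {bytes_per_day / 1e12:.1f} TB")
```

Under these assumptions the raw stream is on the order of 200 Mbit/s — roughly 2 TB per patient per day before compression — which is why on-device compression and selective transmission are unavoidable, and why the question of what is discarded, kept, and uploaded is itself a governance decision.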
Storage. Where is the neural data stored? On the device? On the patient's paired device? On Neuralink's cloud servers? For how long? The company's current disclosures do not provide comprehensive answers.
Secondary use. The data collected for motor control also contains information about cognitive and emotional states. What prevents the company from analyzing this secondary information? What governance mechanisms ensure purpose limitation?
Model training. Improving BCI performance requires training machine learning models on neural data from multiple users. If Neuralink aggregates neural data across patients to improve its algorithms, who governs this aggregation? Do patients consent to their brain data being used for collective model training?
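One technical answer to the purpose-limitation and secondary-use questions above is to bind consented purposes to the data itself and gate every processing request against them. The sketch below is a hypothetical illustration of that pattern — none of these names reflect Neuralink's actual architecture:

```python
# Minimal sketch of a purpose-limitation gate for neural data records.
# Hypothetical design: each record carries the purposes the patient
# consented to, and processing is refused for any other purpose.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class NeuralRecord:
    patient_id: str
    channel_data: bytes  # compressed neural signal payload
    consented_purposes: frozenset = field(
        default=frozenset({"motor_decoding"}))


def may_process(record: NeuralRecord, purpose: str) -> bool:
    """Return True only if the requested purpose was consented to."""
    return purpose in record.consented_purposes


record = NeuralRecord("patient-001", b"\x00" * 16)
assert may_process(record, "motor_decoding")        # primary use: allowed
assert not may_process(record, "emotion_analysis")  # secondary use: refused
assert not may_process(record, "model_training")    # aggregation: refused
```

The point of the sketch is that purpose limitation can be enforced mechanically rather than left to policy documents — but only if purposes are recorded at collection time, which is exactly what current disclosure practices do not guarantee.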
The Governance Gap
Existing Frameworks Are Inadequate
No existing data governance framework was designed for neural data:
HIPAA (US). HIPAA protects "protected health information" (PHI), but only when it is held by covered entities (healthcare providers, health plans, clearinghouses) and their business associates. Neuralink's initial use case (a medical device trial for paralysis) brings some of the data handling within HIPAA's reach, but a device manufacturer receiving data directly from patients may fall outside it entirely. Even where HIPAA applies, its protections are limited: it permits use of data for "treatment, payment, and healthcare operations" without specific consent, and its definition of PHI does not contemplate the continuous, involuntary generation of brain data that contains information extending far beyond the medical condition being treated.
GDPR (EU). The GDPR classifies biometric data and health data as "special categories" requiring explicit consent. Neural data would likely fall under these categories. But the GDPR's consent framework assumes that data subjects can meaningfully identify and consent to specific data collection events — an assumption that breaks down when the brain generates data continuously and involuntarily.
FDA (US). As a medical device, Neuralink's implant is subject to FDA regulatory oversight. The FDA evaluates safety and efficacy but does not comprehensively address data governance — it does not regulate what happens to the data the device collects after it has been demonstrated to be safe and effective for its primary medical purpose.
No neurorights framework (US). The United States has no legislation specifically addressing neural data, neurorights, or cognitive liberty. The constitutional right to privacy (as interpreted by the Supreme Court) does not explicitly extend to mental privacy or protection from neural data collection.
Chile's Neurorights Initiative
In 2021, Chile became the first country to amend its constitution to protect neurorights, adding a provision that states: "Scientific and technological development will be at the service of people and will be carried out with respect for life and physical and mental integrity. The law shall especially protect brain activity, as well as the information from it."
Chile also passed implementing legislation (the Neuroprotection Act) that:

- Defines neural data as personal data requiring protection
- Establishes the right to mental privacy
- Prohibits the use of neurotechnology to discriminate
- Requires consent for the collection and processing of neural data
- Establishes the right to mental integrity — protection from unauthorized modification of neural activity
Chile's initiative is a rare example of anticipatory governance — regulating before the technology is widely deployed. But its effectiveness depends on enforcement capacity, and it applies only within Chilean jurisdiction.
The Business Model Question
From Medical Device to Consumer Product
Neuralink's stated mission is not limited to medical applications. Musk has described a future in which BCIs are used by healthy individuals to enhance cognitive capability, interface with AI systems, and communicate telepathically. The company's investor presentations envision a consumer BCI market worth hundreds of billions of dollars.
This trajectory — from medical device to consumer product — follows a pattern familiar from the health technology sector: Fitbit began as a medical-adjacent wellness device and became a consumer data platform acquired by Google. The data governance implications of each stage are different:
Medical stage: Small number of patients, clinical oversight, HIPAA and FDA governance, strong justification for data collection (medical necessity).
Consumer stage: Millions of users, no clinical oversight, standard terms-of-service consent, business model incentivized to maximize data collection and find commercial applications for neural data.
The transition from medical to consumer use is where governance must intervene — and where the Collingridge dilemma is most acute. At the medical stage, governance is possible but the consumer use case is uncertain. At the consumer stage, governance is urgently needed but the technology and its economic infrastructure will be entrenched.
Mira noted the VitraMed parallel directly: "VitraMed started as a clinical tool for EHR optimization. Every governance challenge we studied — bias, consent, function creep, ethical debt — emerged during the scaling process. Neuralink is at the beginning of the same arc. The question is whether we build governance now or study the failures later."
Neural Data as a Category
Why Neural Data Is Different
This case study argues that neural data constitutes a distinct category requiring governance protections beyond those applied to any existing data type:
Involuntary generation. Unlike a social media post or a search query, neural data is generated by involuntary brain activity. The user cannot choose which neural signals are emitted.
Intimate content. Neural data can contain indicators of mental states — emotions, intentions, cognitive effort, attention — that are more intimate than any information currently collected by digital systems.
Identity-constitutive. Neural patterns are unique to individuals and may constitute the most fundamental form of biometric data — the signature of the mind itself.
Irreversible collection. Like other biometric data, neural data cannot be changed if compromised. But neural data is more dynamic than fingerprints or iris patterns — it evolves over time, potentially allowing longitudinal tracking of cognitive change.
Predictive power. As decoding improves, neural data may predict behavior, preferences, health outcomes, and mental states with a specificity that surpasses any other data source.
Discussion Questions
1. If Neuralink transitions from medical device to consumer product, what governance framework should apply to consumer neural data? Should it be governed as health data, biometric data, or a new category entirely?

2. The consent fiction is already severe for standard digital services. How does it apply to neural data collection? Can anyone meaningfully consent to the continuous collection of brain activity data they cannot perceive or control?

3. Chile's neurorights legislation is an example of anticipatory governance. Evaluate its approach: Is constitutional amendment the right governance mechanism? What are the advantages and disadvantages compared to statutory regulation, regulatory guidance, or industry self-regulation?

4. Eli argued: "They're going to do to brain data what they did to neighborhood data — collect it for one purpose, use it for another, and by the time people realize what happened, it'll be too late to undo." Is this prediction justified? What governance mechanisms could prevent it?

5. Should there be an absolute prohibition on certain uses of neural data (e.g., employment screening, insurance underwriting, law enforcement investigation, advertising targeting)? If so, who should define the prohibited uses, and how should the prohibition be enforced?
Further Investigation
- Read Marcello Ienca and Roberto Andorno, "Towards New Human Rights in the Age of Neuroscience and Neurotechnology," Life Sciences, Society and Policy 13 (2017).
- Research the NeuroRights Foundation (founded by Rafael Yuste, Columbia University) and its proposal for five neurorights.
- Compare Chile's neurorights legislation with the EU's proposed approach to neural data under the AI Act. Which framework provides stronger protection?