
Why EU policy must catch up to the neurotechnology boom
Consumer neurotechnologies are outpacing medical ones, raising urgent questions about oversight. Virginia Mahieu argues that Europe must step up to close regulatory gaps and protect mental privacy and democratic values.
Imagine this: you’re hard at work when your headphones detect that your brain is reaching its limit and suggest a break. During that break, you scroll through social media and see an advertisement that catches you at your most mentally vulnerable. You make an impulse purchase. Your headphones then sense your refreshed state and switch to a “focus mode” playlist.
This isn’t science fiction. Every element of this scenario is theoretically possible with existing neurotechnologies – devices that can read and interpret your brain activity – that are rapidly entering the mainstream market with virtually no regulatory oversight.
In May 2025, the European Commission’s Joint Research Centre published a report highlighting how neurotechnology is “rapidly advancing and likely to have a profound impact on various aspects of society.” Yet the report’s focus on future applications overlooks a critical present reality: consumer brain-sensing devices are already here, under-regulated, and proliferating rapidly.
After conducting the first comprehensive analysis of nearly 300 neurotechnology companies worldwide, we discovered something surprising: dedicated consumer neurotech firms now outnumber medical ones, making up 60% of the global neurotechnology landscape. And they’re proliferating at an unprecedented rate – more than quadrupling in the past decade, compared with the previous 25 years.
While medical neurotechnology undergoes rigorous clinical trials and regulatory scrutiny, consumer devices face minimal barriers to market. They’re being embedded into everyday wearables like headphones, earbuds, and glasses, often marketed as “smartwatches for your brain” that can enhance productivity, improve sleep, or reduce stress.
The technology at the heart of this revolution – electroencephalography (EEG) – has been around since the 1920s. It’s crude and can’t read individual thoughts, but it can detect patterns of brain activity related to focus, fatigue, and even emotional states. And when coupled with artificial intelligence and other personal data – like location, buying behaviours, biometrics – these patterns can reveal far more about us than we might imagine.
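To make this concrete, here is a minimal sketch of the kind of feature extraction such devices perform. It assumes synthetic single-channel data, illustrative frequency bands, and a simplified beta/(alpha+theta) “engagement” ratio – a common proxy in the research literature, not any vendor’s actual algorithm:

```python
# Illustrative sketch: how a consumer device might turn raw EEG into a
# coarse "focus" score. Sampling rate, bands, and the engagement ratio
# are simplified assumptions, not a real product's pipeline.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz) of a consumer EEG headband

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
    """Estimate average spectral power per frequency band (Welch's method)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].mean()  # average power within the band
    return powers

def engagement_index(eeg: np.ndarray) -> float:
    """One simple attention proxy from the literature: beta / (alpha + theta)."""
    p = band_powers(eeg)
    return p["beta"] / (p["alpha"] + p["theta"])

# Demo on synthetic data: one minute of single-channel "EEG"
rng = np.random.default_rng(0)
signal = rng.normal(size=FS * 60)
print(f"engagement index: {engagement_index(signal):.2f}")
```

The point is not the specific formula – it’s that a few dozen lines of standard signal processing turn raw brainwaves into a continuous behavioural signal that can be logged, stored, and combined with everything else a device knows about you.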
I’ve witnessed this firsthand. At conferences, I invite volunteers to record brief snippets of brain data – just a minute of calm breathing followed by imagining something painful. When that data is fed into an AI system like ChatGPT, with no additional training, the model can identify their mental state, pinpoint when their attention shifted, and even guess what drew their attention. All from a simple consumer neurotech headband purchased online.
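For readers curious how little machinery such a demo needs, here is a hypothetical sketch. It assumes per-window band-power features like those computed above and the OpenAI Python client; the model name, prompt, and function names are illustrative, not the exact setup I use on stage:

```python
# Hypothetical sketch of the kind of demo described above: summarise a
# minute of headband EEG as text and ask a general-purpose LLM, zero-shot,
# to label the wearer's state. Prompt and model choice are assumptions.
from openai import OpenAI

def describe_segments(features: list[dict]) -> str:
    """Render per-10-second band-power features as plain text for the prompt."""
    return "\n".join(
        f"{i * 10}-{(i + 1) * 10}s: theta={f['theta']:.2f}, "
        f"alpha={f['alpha']:.2f}, beta={f['beta']:.2f}"
        for i, f in enumerate(features)
    )

def infer_mental_state(features: list[dict]) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "Below are EEG band powers from a consumer headband, in 10-second "
        "windows. Describe the wearer's likely mental state over time and "
        "note where their attention appears to shift.\n\n"
        + describe_segments(features)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

No model fine-tuning, no neuroscience expertise – just off-the-shelf hardware, a spectral decomposition, and a prompt.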
This marks a critical moment for European policymakers. With increasing miniaturisation and integration into everyday products – from earbuds to glasses – these technologies could reshape how people work, rest, socialise, and interact with digital systems, raising urgent questions around data privacy, consent, and autonomy.
As this technology accelerates, the potential for misuse becomes profound. Imagine pre-election advertising that adapts its messaging based on your emotional reaction. Imagine disinformation campaigns tailored to your subconscious fears, measured directly from your brain. Imagine authoritarian governments monitoring emotional responses to propaganda, searching for dissent in citizens’ brainwaves.
Some jurisdictions are taking action. Chile amended its constitution to enshrine the right to mental integrity. California and Colorado have amended their privacy legislation to protect brain data specifically. But Europe risks falling behind.
The European Union, despite its progressive stance on digital rights through GDPR and the AI Act, has significant blind spots regarding brain data protection. These regulations weren’t designed with neurotechnology in mind, and it’s unclear how their provisions against profiling, manipulation, and subliminal messaging will be enforced in view of the unique challenges of brain data – particularly outside a clinic.
Our research reveals that Europe accounts for 38% of consumer neurotech firms worldwide, second only to North America’s 48%. As these technologies become more integrated into mainstream devices, the window for establishing proper governance is rapidly closing.
While international frameworks like the OECD’s Recommendation on Responsible Innovation in Neurotechnology and the European Charter offer high-level principles, they remain voluntary. Meanwhile, the industry argues it’s democratising access to brain health, even as critics question whether consumer neurotech meaningfully contributes to human flourishing compared with medical applications.
Existing regulations – GDPR, the AI Act, and the Medical Devices Regulation – must be stress-tested to ensure they’re equipped to manage the blurred boundaries and unique risks posed by consumer neurotechnologies. With its strong industrial base, Europe has the opportunity to lead with both innovation and regulation that is anticipatory, proportionate, and grounded in rights protection – before public safeguards are outpaced by technological momentum.
Europe needs enforceable standards that prioritise ethical innovation, particularly for genuine medical needs; establish clear and cohesive rules specific to brain data; and foster broad public discourse on the risks and benefits of these technologies before they become ubiquitous in our daily lives.
Mental privacy is a fundamental human right. Brain data cannot become just another commodity in the digital economy. The time to act is now – before our most private thoughts become the next frontier of surveillance capitalism.
This article was originally published at Tech Policy Press.