Who is the bully? How platform power magnifies online harm
The European Commission has just published its EU action plan against cyberbullying[1] on Safer Internet Day, at a time of rising concern about youth mental health and online harms. As that concern grows, parents are angry, schoolteachers are exhausted, and policymakers are flirting with a dramatic fix: ban social media for children entirely. Australia has led the way, and Brussels has just announced its panel of experts to explore EU-wide restrictions.
But first: who is the bully? Research defines cyberbullying as aggressive behaviour that is intentional, repeated over time, and involves a power imbalance that makes it hard for victims to defend themselves. That definition reveals something Brussels debates often miss. When you focus on the structure of the relationship rather than the content of individual posts, the bully isn’t always another user. Sometimes it’s the platform itself.
Power imbalance is structural
Platforms map users in such high resolution that recommender systems know them better than they know themselves. Every click, pause, and late-night session feeds algorithms optimised to predict and influence behaviour. Research shows TikTok can recommend suicide-related content within minutes of a user pausing on mental health videos[2]. Unlike a physical bully, this psychological surveillance operates 24/7, with no safe space to escape to.
Repetition is by design
Infinite scrolling, autoplay, and variable reward schedules hijack attention and reward systems to maximise engagement. With every scroll, users are pulled deeper into rabbit holes. These algorithmic loops distort what users see as normal and real, making self-harm, eating disorders, and radicalisation appear far more common than they are. And when it comes to self-harm content, repeated exposure increases the risk of copycat behaviour among vulnerable users — the “Werther effect”[3].
Intentional aggression is the business model
Platforms don’t wake up intending to harm a particular teenager. But when a platform is “free”, user attention and data are the product. In the past, Meta has been accused of identifying moments when teens felt “worthless” or “insecure” and targeting them with weight-loss and beauty ads[4]. When harm is a predictable outcome of engagement optimisation, it’s hard to keep calling it incidental.
Generative AI next in the “enshittification” line (à la Doctorow)
The online environment is evolving fast. Generative AI is making system-to-user influence more intimate and user-to-user abuse easier to manufacture. Several teen suicide cases now in court[5][6] involve chatbots designed to keep users talking, including vulnerable kids in crisis. OpenAI’s recent mental-health safety upgrades and dedicated “ChatGPT Health” experience signal that the industry knows this is no fringe edge case[7]. Yet it is also bringing ads to ChatGPT’s free and low-cost subscription plans[8]. Meanwhile, the Grok AI scandal shows how quickly platform-integrated AI can accelerate non-consensual sexual deepfakes[9].
The obvious pushback: platforms merely reflect a complicated world. People are anxious, society is polarised, kids can be brutal, and algorithms mostly surface what users post or react to. There’s truth in that. But these aren’t passive mirrors — they’re mirrors with knobs, tuned to maximise engagement. Platforms control what spreads and what gets suppressed, what goes viral and what gets shadow-banned. They can turn a rumour into mass reality and a vulnerable moment into an engagement loop.
The real bully
So who is the bully? Sometimes it’s a schoolmate or stranger behind a screen. More often, it’s the system that tilts the playing field. There are encouraging signs that Europe is beginning to name it. In February 2026, the Commission issued preliminary findings that TikTok’s infinite scroll, autoplay, and hyperpersonalised recommender system breach the Digital Services Act by fostering compulsive use, shifting users’ brains into “autopilot mode”[10]. Days later, formal proceedings were opened against Shein for the same addictive design logic[11]. These are meaningful steps, and they matter precisely because they target platform architecture and design.
The Commission’s cyberbullying action plan is a welcome step, but its focus on reporting tools is a reminder that online harm is still too often treated as a matter of etiquette, solvable through better apps and quicker takedowns. Europe should protect minors urgently, including through serious debate on age thresholds, and through digital literacy and awareness that help families recognise risks early on. But the underlying vulnerability doesn’t end at 14[12], 15[13], 16[14], or 18, and neither do the platform dynamics that exploit it.
If Europe wants real impact, it must confront the design and business incentives that fuel the attention economy—and now, increasingly, the attachment economy. That’s the bully worth naming, and the only one powerful enough to change the game. The Commission has already shown it’s possible. Now it needs to become the rule, not the exception.
A different version of this op-ed was originally published on Encompass.
[1] European Commission (2026, February 10). Action plan against cyberbullying – protecting children online. Shaping Europe’s digital future. Retrieved February 24, 2026, from https://digital-strategy.ec.europa.eu/en/policies/cyberbullying
[2] Center for Countering Digital Hate. (2022). Response to Ofcom’s call for evidence: Second phase of online safety regulation (Consultation response). Ofcom. Retrieved January 26, 2026, from https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/call-for-evidence-second-phase-of-online-safety-regulation/responses/center-for-countering-digital-hate?v=202775
[3] Calvo, S., Carrasco, J. P., Conde-Pumpido, C., Esteve, J., & Aguilar, E. J. (2024). Does suicide contagion (Werther effect) take place in response to social media? A systematic review. Spanish Journal of Psychiatry and Mental Health. Advance online publication. Retrieved January 26, 2026, from https://pubmed.ncbi.nlm.nih.gov/38848950/
[4] U.S. Senate Committee on the Judiciary, Subcommittee on Crime and Counterterrorism. (2025, April 16). Questions for the record for Sarah Wynn-Williams (Submitted April 16, 2025). Retrieved January 26, 2026, from https://www.judiciary.senate.gov/imo/media/doc/2025-04-09_qfr_responses_wynn-williams.pdf
[5] Yousif, N. (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. Retrieved January 26, 2026, from https://www.bbc.com/news/articles/cgerwp7rdlvo
[6] Tabachnick, C. (2026, January 7). AI company, Google settle lawsuit over Florida teen’s suicide linked to Character.AI chatbot. CBS News. Retrieved January 26, 2026, from https://www.cbsnews.com/news/google-settle-lawsuit-florida-teens-suicide-character-ai-chatbot
[7] OpenAI. (2026, January 7). Introducing ChatGPT Health. Retrieved January 26, 2026, from https://openai.com/index/introducing-chatgpt-health
[8] OpenAI. (2026, January 16). Our approach to advertising and expanding access to ChatGPT. Retrieved January 26, 2026, from https://openai.com/index/our-approach-to-advertising-and-expanding-access
[9] Booth, R. (2026, January 22). Grok AI generated about 3m sexualised images in 11 days, study finds. The Guardian. Retrieved January 26, 2026, from https://www.theguardian.com/technology/2026/jan/22/grok-ai-generated-millions-sexualised-images-in-month-research-says
[10] European Commission. (2026, February 6). Commission preliminarily finds TikTok’s addictive design in breach of the Digital Services Act [Press release]. Retrieved February 24, 2026, from https://ec.europa.eu/commission/presscorner/detail/en/ip_26_312
[11] European Commission. (2026, February 17). Commission launches investigation into Shein under the Digital Services Act [Press release]. Retrieved February 24, 2026, from https://ec.europa.eu/commission/presscorner/detail/en/ip_26_420
[12] Gkritsi, E., & Wälde, M. (2026, February 18). Germany eyes social media ban for kids. POLITICO. Retrieved February 24, 2026, from https://www.politico.eu/article/germany-social-media-ban-children/
[13] Chrisafis, A. (2026, January 27). France social media ban under-15s. The Guardian. Retrieved February 24, 2026, from https://www.theguardian.com/world/2026/jan/27/france-social-media-ban-under-15s
[14] European Parliament. (2025, November 26). Children should be at least 16 to access social media, say MEPs [Press release]. European Parliament. Retrieved February 24, 2026, from https://www.europarl.europa.eu/news/en/press-room/20251120IPR31496/children-should-be-at-least-16-to-access-social-media-say-meps
