
Enforcement spotlight - Spring 2026

Tracking the EU's digital rulebook in practice

This article is one of our “Enforcement Spotlights”, a regular series analysing how the EU’s digital laws are being applied, tested, and contested.


AI is testing the EU’s digital rulebook in four places at once. Some of our frameworks are adapting; others are showing cracks that were there all along. Here is where things stand.

  • AI Act: Leaves military and national security AI largely unregulated
  • GDPR: EU–US data transfers rest on a weakening rule-of-law assumption
  • DSA: Captures AI harms on platforms, but misses standalone chatbots
  • DMA: Emerging as a backdoor AI governance tool through competition law

We will dive into these one at a time, starting with the one generating the most heat right now.

The AI Act’s double blind spot

The AI Act’s military exemption, Article 2(3), was a design choice. But the exemption bundles two very different governance problems into one carve-out. On national security (surveillance, intelligence, domestic use), member states led by France pushed to keep the EU out, a political choice to treat these as sovereign matters. On military AI in armed conflict (targeting, lethal force, kill chains), the Act deferred to international humanitarian law and member state rules. Those frameworks are still catching up. The result: both sides of the exemption lack AI Act safeguards, and a platform like Palantir’s can operate across both without meaningful EU oversight.

The trigger: AI in the US-Iran war

On 28 February 2026, US forces struck the Shajareh Tayyebeh school in Minab, Iran, killing at least 175 people, most of them schoolgirls. Investigations[1] point to human failure, not AI, but the Pentagon confirmed[2] it uses “a variety of advanced AI tools” in Iran, and over 120 members of Congress demanded answers[3] about AI’s role in target selection.

The Anthropic-Pentagon standoff exposed[4] how military AI governance actually works: through contract negotiations, executive directives and court injunctions. Anthropic insisted on two red lines (no autonomous weapons, no mass domestic surveillance). The Pentagon demanded unrestricted access. When Anthropic refused, it was labelled a “supply chain risk,” a designation previously reserved for firms linked to hostile foreign governments. A federal judge has since blocked[5] this as “classic First Amendment retaliation.”

The most important AI safety decisions in the military domain are being settled in US contract clauses and US courtrooms, not legislatures. No coherent regulatory framework fills the gap. The governance-by-contract model reflects a broader pattern: weakening institutional checks in the US are reshaping how AI is governed across domains.

Can the AI Act offer a model? Not really.

While the US settles military AI and national security governance in contract disputes and courtrooms (Anthropic versus the Pentagon on military targeting, Palantir’s work with US Immigration and Customs Enforcement, ICE), can the EU offer something better? Not under the current framework. Article 2(3) contains two gaps that together form a double blind spot.

  • On national security: Palantir’s Gotham platform is already embedded[6] in Europol, French intelligence, German state police forces and Frontex. When these agencies invoke national security, the AI Act does not apply. The scope of this exemption was a political choice. During negotiations, France, supported by Italy and Sweden, led[7] efforts to widen it so that activities like AI-powered surveillance in public spaces would remain outside the regulation’s scope when national security is invoked.
  • On military AI in armed conflict: the AI Act deliberately exempted military use, deferring to international humanitarian law and member state rules (Recital 24 explicitly cites “the use of lethal force” as belonging to public international law, not EU regulation). The problem is that those frameworks have not kept pace. NATO[8] acquired a version of Palantir’s Maven system in 2025, built on the same platform at the centre of the Iran targeting controversy. EU members are now using it, with no AI Act safeguards and no international rules specifically addressing AI-compressed targeting or accountability when things go wrong. International humanitarian law applies in principle, but lacks[9] tailored provisions for algorithmic decision-making in combat.

From exemption to blind spot

On military AI, there is a political appetite for rules at the European level. The European Parliament has twice called for a global prohibition on lethal autonomous weapons without meaningful human control (in 2018[10] and 2021[11]), and the European Defence Agency[12] is developing a certification and validation framework for military AI. These remain non-binding, and neither initiative has been translated into legislation.

Yet in military AI, the case for national-only governance is weakening. EU members already collaborate on defence AI through the European Defence Fund, Permanent Structured Cooperation (PESCO), and joint NATO procurement. When they jointly acquire and deploy systems like Palantir’s Maven, treating oversight as a purely sovereign matter becomes harder to sustain.

At the international level, momentum is also building. Lethal force falls under international humanitarian law, and events like the Minab school strike can turn into war crimes investigations. The third Responsible AI in the Military Domain (REAIM) Summit[13] (A Coruña, February 2026) produced a more action-oriented outcome document, but one endorsed by only 35 of the 85 attending states, a sharp decline from the roughly 60 that backed the Seoul Blueprint in 2024, and notably without US or Chinese support. A UN General Assembly resolution[14] and International Committee of the Red Cross (ICRC) recommendations[15] on military AI compliance point in the same direction, but none of this yet amounts to binding rules.

On national security, the outlook is less promising. The exemption is already in the AI Act, and the Digital Omnibus is reshaping the regulation before enforcement is fully running. The political direction is towards simplification and less regulatory burden, not more. Proposed amendments to Annex I[16] would even reduce the AI Act’s reach over sectors it already covers. In that climate, extending coverage to defence or national security AI is unlikely.

As EU members deploy American national security tools, and as the civilian/military line continues to blur, Article 2(3) looks less like a carve-out and more like a blind spot. The EU’s claim to global AI standard-setting stops short of the areas where governance matters most.

GDPR: when the democracy premise cracks

EU-US data transfers have been legally fragile for a decade. What is new is not the legal conflict itself, but how explicitly courts are now confronting its political foundation. A German court recently had to justify why the United States still qualifies as a democracy in order to uphold a data transfer. The system is visibly under strain.

The long-running problem

EU-US data transfers have been contested since Schrems I[17] struck down Safe Harbour in 2015 and Schrems II[18] invalidated Privacy Shield in 2020. In both cases, the Court of Justice of the European Union (CJEU) found that US surveillance law, particularly Section 702 of the Foreign Intelligence Surveillance Act (FISA), failed to meet EU fundamental rights standards. The Data Privacy Framework (DPF), adopted in July 2023, was meant to settle it. It has not.

The DPF survived its first legal challenge. In September 2025, the EU General Court dismissed an annulment action brought by French MP Philippe Latombe[19] (a member of France’s data protection authority, CNIL, acting in his personal capacity) and upheld the adequacy decision. But the court limited its analysis to the facts as of July 2023, when the decision was adopted. It did not consider what has happened since: the Trump administration dismissed Privacy and Civil Liberties Oversight Board (PCLOB) members in January 2025[20], leaving the oversight body non-functional, and Congress broadened the scope of FISA 702 surveillance in 2024.

The Commission, which owns the adequacy decision politically, has not publicly revisited its assessment since these developments. And Latombe has appealed to the CJEU, with noyb, the Vienna-based digital rights organisation, signalling a broader challenge focused on Trump-era developments. A potential Schrems III is no longer hypothetical but procedurally and politically taking shape.

The Bonn ruling: when rights stop at the border

While Latombe challenges the framework from above, through the EU’s institutional channels, the Regional Court of Bonn (judgment of 3 June 2025) introduced a new layer to the problem[21]. An individual asked a service provider whether US agencies had accessed their personal data. Under EU law (Art. 15 GDPR), they had a right to know. But the service provider refused, arguing it was legally forbidden from answering under US secrecy obligations tied to Section 702 FISA. The court accepted this defence: the company was genuinely stuck between two legal systems and could not comply with both at once. In practice, this means a core GDPR right, the right to know what happens to your data, stops at the US border.

This is not entirely new. What is new is how the court justified it. The duty-conflict defence only works if the conflicting legal system is recognised as legitimate. The court therefore had to address a prior question: does the United States still qualify as a rule-of-law democracy? It answered yes, but not without hesitation. The judgment explicitly referenced anti-democratic, autocratic and even fascist tendencies in the US, while ultimately maintaining its classification as a constitutional democracy[22]. That classification is doing significant legal work, and if it fails, the reasoning collapses with it.

From isolated judgment to systemic trajectory

Since the Bonn ruling, the broader trajectory has reinforced its underlying logic. The court’s classification is increasingly contested. The V-Dem Institute’s 2026 report reclassified the US as an “electoral democracy”, stripping it of liberal-democracy status for the first time in over 50 years[23]. The Economist Intelligence Unit has rated it a “flawed democracy” since 2016[24]. Rule of law, the very thing the Bonn court relied on, is among the indicators showing the steepest decline.

The paradox is structural. The same US surveillance regime that strengthens the duty-conflict defence (by imposing stricter secrecy obligations on service providers) simultaneously weakens the legal foundation for EU-US data transfers (which depend on the US qualifying as a rule-of-law democracy). If courts increasingly question whether US surveillance obligations are compatible with EU fundamental rights, the DPF faces the same structural vulnerability that brought down its predecessors.

What this means

A German court had to set out, in writing, why the US still qualifies as a democracy before it could uphold a data transfer. That it had to do so is the clearest sign yet that the structural tension is reaching a breaking point.

The political direction in the US is not towards stronger privacy safeguards. It is towards less oversight, broader surveillance powers, and weaker institutional checks. The democracy premise that holds the entire transfer architecture together is an increasingly difficult argument to make, and courts are now saying so openly.

DSA: regulating AI by feature, not by label

A recurring argument in industry and parts of the AI technical community is that LLMs are fundamentally new and do not fit frameworks built for platforms and search engines. Similar claims surface in copyright (whether existing rules apply to AI training data), data protection (whether the GDPR covers model training) and content moderation (whether platform rules apply to generated outputs). This argument is sometimes used to delay or avoid regulation altogether. The Grok enforcement[25] and the assessment of ChatGPT as a very large online search engine (VLOSE)[26] suggest otherwise: existing rules apply just fine when an LLM does something those rules already cover. New technology does not earn a free ride. It may require a fresh look, but it does not make everything that came before irrelevant.

The DSA and AI Act divide the work along these lines. The DSA regulates distribution: how content reaches users through platforms and search engines. The AI Act regulates the model: transparency obligations (including labelling AI-generated content) and prohibitions on certain practices. When an LLM generates harmful content and distributes it through a platform, both frameworks are relevant, but they cover different things.

The Grok trigger

Grok’s deepfake scandal is the clearest example of the feature-based approach in action. In January 2026, the Commission opened a formal investigation into X under the DSA, examining whether the company properly assessed and mitigated risks from deploying Grok’s functionalities on its platform[27]. These include risks related to illegal content (such as manipulated sexually explicit images, potentially including child sexual abuse material), gender-based violence, and harms to physical and mental well-being. The Commission also extended its existing investigation into X’s recommender systems. The DSA caught this not because Grok is an LLM, but because X is a designated very large online platform (VLOP) and the harmful content was generated and distributed through its platform.

The generation side is a different problem. The AI Act introduces transparency obligations for synthetic content, but it does not explicitly prohibit generative systems capable of producing non-consensual intimate imagery. The Parliament is now moving to close this gap: on 26 March 2026, MEPs adopted their position on the Digital Omnibus, which includes a ban on AI “nudifier” systems that generate sexually explicit images resembling an identifiable real person without consent[28]. Trilogue negotiations with the Council are next.

The ChatGPT designation

The Grok case shows the DSA reaching an LLM through a platform. The ChatGPT designation would go a step further: designating an LLM as a VLOSE in its own right, based on what its search feature does.

OpenAI self-reported 120.4 million monthly EU users for ChatGPT’s search feature, nearly three times the 45-million-user VLOP/VLOSE threshold[29]. The Commission has been assessing designation since October 2025. A decision was expected in Q1 2026 but has yet to be made; it is now expected mid-2026. The timing is politically loaded: the US House Judiciary Committee has subpoenaed major tech companies for encrypted communications with EU officials involved in DSA enforcement, framing European content moderation as censorship of American speech[30]. Scholars have argued that designation would be “legally sound, economically justified, and geopolitically coherent”[31]. One to watch.

The legal logic is again feature-based. The question is not whether ChatGPT is an LLM. It is whether its search function behaves like a search engine under Article 3(j) DSA, which defines a search engine as a service that allows users to input queries and returns results “in any format”[32]. According to scholars, ChatGPT is a hybrid: a search engine when it retrieves live information from the web, but also offering features like custom GPTs and persistent conversations that do not fit neatly into existing DSA categories. The Commission’s assessment so far has focused on the search feature specifically.

If designated, ChatGPT would face the DSA’s most demanding obligations: systemic risk assessments, independent audits, data access for researchers, and transparency on recommender systems. This would be a first for a standalone chatbot.

The coverage gap

The feature-based approach works where LLM capabilities map onto existing regulatory categories. But some use cases do not fit. AI companions, creative generation tools, or one-to-one chatbot interactions do not involve platform-style distribution or search-style retrieval, and sit outside the DSA’s reach[33].

This matters because the DSA contains the EU’s most developed tools for tackling content harms: systemic risk assessments, notice-and-action mechanisms, trusted flaggers, protections for minors. These tools were built for platforms. But the harms they address (disinformation, illegal content, self-harm encouragement) can now be generated in a closed chatbot conversation that never touches a platform. The Grok case was catchable because the content was distributed on X, a designated VLOP. If the same content had been generated in a standalone chatbot, the DSA would not have had a hook.

The AI Act covers some of this ground. General-purpose AI (GPAI) providers have transparency obligations, high-risk classifications apply to certain use cases, and providers of models with systemic risk face additional requirements including evaluations and adversarial testing. But these obligations focus on the model’s capabilities and safety. They do not address how the service affects users at scale: how content reaches people, what recommender logic shapes interactions, or how specific harms like disinformation or self-harm encouragement manifest in user-facing products. That is what the DSA’s systemic risk framework was built for, and it does not extend to standalone chatbots.

The question is where to fill this gap. The DSA was built for services that host or distribute other people’s content. Chatbots generate their own. Stretching the DSA to cover them would mean redefining what an intermediary service is, with unpredictable consequences for the whole framework. And in a political climate focused on “simplification”, reopening the DSA is not on the table. The AI Act may be the more natural home for service-level obligations on high-reach AI applications. Whether that holds as LLM use cases multiply is another question.

DMA: competition as AI governance

The DMA review is shaping up as a key venue for governing AI market power. The debate is no longer whether to act, but how: enforce existing core platform service (CPS) categories against AI features already embedded in gatekeeper services, or designate AI as a new CPS altogether. The Commission is not waiting for the review to begin acting.

The review verdict

The Commission published its first statutory DMA review on 28 April 2026, concluding that the regulation remains fit for purpose and requires no legislative revision[34]. On the central AI question, the review stopped short of introducing a standalone AI CPS category, instead identifying AI and cloud computing as priority enforcement areas going forward. The debate is not closed; it is deferred. Consultation responses had revealed two camps[35]. One argues that existing CPS categories (search engines, virtual assistants, operating systems) already capture AI functionalities embedded in gatekeeper ecosystems. The other calls for a new standalone CPS category covering generative AI services. The review’s verdict favours the first camp for now: work within the existing framework, with sharper focus on AI and cloud, rather than expand the regulatory perimeter.

Gatekeepers themselves warn against premature expansion, arguing that AI markets remain dynamic and early intervention risks stifling innovation. The pressure is also diplomatic: US Ambassador to the EU Andrew Puzder warned that over-regulating American tech companies could exclude the EU from the AI economy[36]. But gatekeepers are already embedding AI into designated CPSs: Google’s AI Overviews in Search[37], Apple Intelligence integrated into iOS[38]. This allows them to use existing data advantages and ecosystem control to consolidate market power before regulatory responses catch up.

Enforcement is moving, but not fast enough

The Commission has already begun testing how the DMA applies to AI features in gatekeeper ecosystems[39]. In January 2026, it launched proceedings against Google to answer two practical questions. First: if Google gives its own AI assistant Gemini deep access to Android features, must it give rival AI services the same access? Second: should Google’s search data be shared with competitors, including AI chatbot providers, so they can build genuine alternatives? Preliminary findings are expected in April 2026, with a conclusion by July.

But enforcement pace remains a sore point. A coalition of industry stakeholders across aviation, hospitality and rail recently noted that more than three years after the DMA entered into force, and two years after the Commission opened proceedings against Google on self-preferencing in search (Article 6(5)), meaningful compliance is still lacking[40]. The DMA was designed to restore fairness quickly through ex ante rules. Whether it can deliver on that promise in AI markets, where positions consolidate fast, is the test.

The review could go further. If generative AI becomes a standalone core platform service category, AI companies that meet the DMA’s gatekeeper thresholds could themselves be designated, the DMA equivalent of the DSA’s VLOSE question for ChatGPT. The Commission is also looking at the infrastructure layer: it has opened three market investigations into cloud computing, including whether Amazon and Microsoft should be designated as gatekeepers. Since cloud is the compute backbone of AI services, these investigations could shape competitive conditions for AI development itself.

What this means for AI governance

The DMA is approaching AI governance on two levels. It targets how gatekeepers use AI to strengthen their existing platforms (self-preferencing, ecosystem foreclosure, data hoarding). And through the CPS review and cloud investigations, it is starting to ask who controls the AI infrastructure itself. If this trajectory continues, the DMA may become one of the EU’s most consequential tools for governing AI, not because it regulates AI systems, but because it shapes the competitive conditions under which they develop and scale. The Google proceedings are the first test of whether that approach works in practice.

Also on our radar

  • Minors & addictive design: Emerging as a key enforcement battleground. In February, the Commission flagged TikTok’s design under the DSA[41] and found major porn platforms non-compliant with age verification rules[42]. In April, it extended the addictive design front to Instagram and Facebook, flagging both for violating Article 35’s obligations, a sign that the Commission is building out a pattern of enforcement. In parallel, US courts are moving through litigation[43] (Meta, YouTube), setting up a real-time test: EU ex ante regulation vs US case-by-case accountability.
  • AI Act priorities under scrutiny: Reports claim a strong influence of effective altruism within the Commission’s AI Office[44], raising concerns that safety efforts may prioritise existential risks while fundamental rights safeguards remain more limited. As enforcement begins, scrutiny is likely to focus on whether immediate harms, such as discrimination, surveillance, and electoral interference, receive sufficient attention.
  • National regulators scaling up: Enforcement is increasingly driven at national level. DSA cases (X/Grok, Snapchat, porn platforms[45]) and GDPR fines (CNIL’s €486.8M in 2025[46]) show regulators acting as early detectors and coordinating across borders, suggesting a more resilient multi-level enforcement model than often assumed.

[1] Baker, K.T., ‘AI got the blame for the Iran school bombing. The truth is far more worrying’, The Guardian, 26 March 2026, https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying.

[2] TRT World, ‘US confirms use of “advanced AI tools” amid debate if AI error led to deadly attack on Iran school’, 12 March 2026, https://www.trtworld.com/article/ce171f6a1aa5.

[3] Kube, C., ‘Democrats ask Pentagon about Iran school strike and role of AI’, NBC News, 12 March 2026, https://www.nbcnews.com/politics/national-security/democrats-ask-pentagon-iran-school-strike-role-ai-rcna263083.

[4] Hays, K., Jamali, L., ‘Trump orders government to stop using Anthropic in battle over AI use’, BBC News, 28 February 2026, https://www.bbc.com/news/articles/cn48jj3y8ezo.

[5] Hays, K., ‘Judge rejects Pentagon’s attempt to “cripple” Anthropic’, BBC News, 27 March 2026, https://www.bbc.com/news/articles/cvg4p02lvd0o.

[6] Pugnet, A., Strohmaier, B. and Henning, M., ‘Palantir is well on its way to conquering Europe’, Euractiv, 8 August 2025, https://www.euractiv.com/news/palantir-is-well-on-its-way-to-conquering-europe/.

[7] Amnesty International, ‘France: Allowing mass surveillance at Olympics undermines EU efforts to regulate AI’, 24 March 2023, https://www.amnesty.ie/france-olympics-ai/.

[8] NATO Joint Warfare Centre, ‘JWC integrate Maven’, NATO JWC, 25 August 2025, https://www.jwc.nato.int/article/jwc-integrate-maven/.

[9] Pearson, S., ‘Military AI and the limits of multilateralism’, ECDPM, 4 March 2026, https://ecdpm.org/work/military-ai-and-limits-multilateralism.

[10] European Parliament, ‘European Parliament speaks out against killer robots’, press release, 12 September 2018, https://www.europarl.europa.eu/news/en/press-room/20180906IPR12123/european-parliament-speaks-out-against-killer-robots.

[11] European Parliament, ‘Guidelines for military and non-military use of Artificial Intelligence’, press release, 20 January 2021, https://www.europarl.europa.eu/news/en/press-room/20210114IPR95627/guidelines-for-military-and-non-military-use-of-artificial-intelligence.

[12] European Defence Agency, ‘TAID White Paper’, 9 May 2025, https://eda.europa.eu/docs/default-source/brochures/taid-white-paper-final-09052025.pdf.

[13] Third Summit on Responsible AI in the Military Domain (REAIM), analysis published in Just Security, 2 February 2026, https://www.justsecurity.org/129936/third-reaim-summit/.

[14] United Nations General Assembly, Resolution on autonomous weapons and artificial intelligence in the military domain, 24 December 2024, UN Digital Library Record 4071348, https://digitallibrary.un.org/record/4071348?v=pdf.

[15] International Committee of the Red Cross (ICRC), ‘Artificial intelligence in the military domain: ICRC submits recommendations to UN Secretary-General’, 17 April 2024, https://www.icrc.org/en/article/artificial-intelligence-military-domain-icrc-submits-recommendations-un-secretary-general.

[16] Joint Statement, ‘Preserving the Integrity of the AI Act: Why Annex I Must Remain Unchanged’, TIC Council, 10 March 2026, https://www.tic-council.org/application/files/3817/7322/5543/Joint_Statement_-_Preserving_the_Integrity_of_the_AI_Act_Why_Annex_I_Must_Remain_Unchanged.pdf.

[17] Electronic Privacy Information Center (EPIC), ‘Max Schrems v. Data Protection Commissioner (CJEU – Safe Harbor)’, https://epic.org/max-schrems-v-data-protection-commissioner-cjeu-safe-harbor/.

[18] European Parliamentary Research Service, ‘The CJEU judgment in the Schrems II case’, September 2020, https://www.europarl.europa.eu/RegData/etudes/ATAG/2020/652073/EPRS_ATA(2020)652073_EN.pdf.

[19] Bissacco, F., ‘Case T-553/23 Latombe: Overturning Schrems… and Plaumann?’, European Law Blog, 19 November 2025, https://www.europeanlawblog.eu/pub/20d2hhrr/release/1.

[20] Nojeim, G., Perez, S.L., ‘Trump’s sacking of PCLOB members threatens data privacy’, Lawfare, 31 January 2025, https://www.lawfaremedia.org/article/trump-s-sacking-of-pclob-members-threatens-data-privacy.

[21] Landgericht Bonn (Regional Court of Bonn), judgment of 3 June 2025, case no. 13 O 156/24, https://nrwe.justiz.nrw.de/lgs/bonn/lg_bonn/j2025/13_O_156_24_Urteil_20250603.html.

[23] V-Dem Institute, Democracy Report 2026, University of Gothenburg, 2026, https://v-dem.net/publications/democracy-reports/.

[24] Economist Intelligence Unit, ‘Democracy Index’, Our World in Data, https://ourworldindata.org/grapher/democracy-index-eiu. [Data visualisation; continuously updated.]

[25] European Commission, ‘Commission investigates Grok and X’s recommender systems under the Digital Services Act’, press release IP/26/203, 26 January 2026, https://ec.europa.eu/commission/presscorner/detail/en/ip_26_203.

[26] Jahangir, R., ‘EU weighs regulating OpenAI’s ChatGPT under the DSA. What does that mean?’, Tech Policy Press, 29 October 2025, https://www.techpolicy.press/eu-weighs-regulating-openais-chatgpt-under-the-dsa-what-does-that-mean/.

[27] European Commission, ‘Commission investigates Grok and X’s recommender systems under the Digital Services Act’, press release IP/26/203, 26 January 2026, https://ec.europa.eu/commission/presscorner/detail/en/ip_26_203.

[28] European Parliament, ‘Artificial Intelligence Act: delayed application, ban on nudifier apps’, press release, 26 March 2026, https://www.europarl.europa.eu/news/en/press-room/20260323IPR38829/artificial-intelligence-act-delayed-application-ban-on-nudifier-apps.

[29] OpenAI, ‘EU Digital Services Act (DSA)’, OpenAI Help Center, 2026, https://help.openai.com/en/articles/8959649-eu-digital-services-act-dsa.

[30] Gkritsi, E., ‘US committee demands Big Tech share private comms with EU officials’, House Judiciary Committee Republicans, 17 March 2026, https://judiciary.house.gov/media/in-the-news/us-committee-demands-big-tech-share-private-comms-eu-officials.

[31] Schaal, J., Lenner, M. and Akinyemi, T., ‘Searching for Answers: Why the EU Commission Should Designate Chatbots as Search Engines under the DSA’, Verfassungsblog, 20 February 2026, DOI: 10.59704/0387022df3095d87, https://verfassungsblog.de/searching-for-answers/.

[32] Digital Services Act, Article 3: Definitions, consolidated text, https://www.eu-digital-services-act.com/Digital_Services_Act_Article_3.html.

[33] Governing AI Companions, with Dr. Raffaele Ciriello and Dr. Jessica Szczuka, YouTube, https://www.youtube.com/watch?v=eBzxhnpbvdg.

[34] European Commission, ‘Review highlights Digital Markets Act remains fit for purpose and has positive impact’, 2026, https://ec.europa.eu/commission/presscorner/detail/en/ip_26_914.

[36] Nicol-Schwarz, K., ‘Stop fining Big Tech, says U.S. ambassador to EU Andrew Puzder’, CNBC, 27 March 2026, https://www.cnbc.com/2026/03/27/big-tech-eu-fines-ai-data-centers-us-ambassador-puzder.html.

[37] Google, ‘AI Overviews and more: how AI helps you find answers on Google Search’, The Keyword (Google Blog), May 2024, https://blog.google/products-and-platforms/products/search/generative-ai-google-search-may-2024/.

[38] Apple, ‘Use Apple Intelligence on your iPhone’, https://support.apple.com/guide/iphone/intro-to-apple-intelligence-iphc28624b81/ios.

[39] European Commission, ‘Commission opens proceedings to assist Google in complying with interoperability and online search data sharing obligations under the Digital Markets Act’, 27 January 2026, https://digital-markets-act.ec.europa.eu/commission-opens-proceedings-assist-google-complying-interoperability-and-online-search-data-sharing-2026-01-27_en.

[40] ERAA et al., ‘Joint Industry Statement: Art. 6(5) DMA — No decision’, 25 March 2026, https://www.eraa.org/wp-content/uploads/2026/03/Joint-Industry-Statement-Art.-65-DMA-No-decision.pdf.

[41] European Commission, ‘Commission preliminarily finds TikTok’s addictive design in breach of the Digital Services Act’, press release IP/26/312, 6 February 2026, https://ec.europa.eu/commission/presscorner/detail/en/ip_26_312.

[42] Datta, A., ‘Pornhub and three other major porn sites fail EU age check rules’, Euractiv, 26 March 2026, https://www.euractiv.com/news/pornhub-and-three-other-major-porn-sites-fail-eu-age-check-rules/.

[43] Hays, K., Saad, N., Morris, R., ‘Campaigners welcome Meta and YouTube’s defeat in landmark social media addiction trial’, BBC News, 27 March 2026, https://www.bbc.com/news/articles/c747x7gz249o.

[44] MLex, ‘Is effective altruism’s catastrophist risk agenda shaping EU AI enforcement?’, MLex, 2026, https://www.mlex.com/articles/2453810/is-effective-altruism-s-catastrophist-risk-agenda-shaping-eu-ai-enforcement-.

[45] MLex, ‘Europe’s national digital regulators stepping up enforcement of big tech’, MLex, 2026, https://www.mlex.com/mlex/articles/2458729/europe-s-national-digital-regulators-stepping-up-enforcement-of-big-tech.

[46] CNIL, ‘Sanctions and corrective measures: CNIL’s actions in 2025’, 9 February 2026, https://www.cnil.fr/en/sanctions-and-corrective-measures-cnils-actions-2025.
