
Enforcement spotlight – Spring 2025

Where tech policy meets reality

Winter 2024–25 has been a defining moment for tech governance in Europe. Amid rising transatlantic tensions, botched elections and a flailing economy, the EU’s tech agenda has become central to Europe’s democracy and sovereignty at home as well as its competitiveness in the world.

This second edition of the Enforcement Spotlight takes stock of how EU digital and data laws are put into practice – with a special focus on the Digital Services Act (DSA), which turned one year old in February – and assesses their enforcement across the four dimensions of politics, players, policies and procedures. We explore the latest developments in detail, examining key trends, lessons learned and what the future holds for tech governance in Europe.

Happy birthday, DSA!

February marked one year since the DSA became fully applicable, covering digital service providers of all sizes, including 25 very large online platforms (VLOPs) and two very large online search engines (VLOSEs). The act has become a pivotal piece of the EU’s digital rulebook, with a total of 86 enforcement actions against large online platforms and search engines since the first VLOPs and VLOSEs were designated in April 2023 (see figure 1)—far more than the 20 actions under the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA) and the Artificial Intelligence (AI) Act combined. In its first year, the DSA drove over 80% of the EU’s tech enforcement efforts, focusing on aligning big tech with EU values. Naturally, this has made the DSA a target for its detractors – but we’ll get into that further down.

Figure 1: DSA enforcement actions by company since June 2023

The law set out to protect EU citizens and their rights and freedoms online by setting clear and proportionate rules, from mandatory systemic-risk assessments to obligatory content-moderation policies for digital services like online platforms and search engines.

Under the DSA, violating companies can be fined up to 6% of their annual global turnover. For scale, this means fines could amount to €162 million ($175 million) for X, formerly known as Twitter, based on estimates of the company’s 2024 revenue, and over €9 billion ($9.7 billion) for Meta, based on its 2024 global revenue.
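To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The 6% ceiling is the DSA’s statutory maximum; the turnover inputs are assumptions inferred from the dollar figures above (X is privately held, so its revenue is an estimate), not official filings.

```python
# Back-of-the-envelope DSA fine ceilings: up to 6% of annual worldwide
# turnover. The turnover inputs below are illustrative assumptions
# inferred from the dollar figures cited in this piece.

DSA_MAX_FINE_RATE = 0.06  # statutory ceiling: 6% of annual global turnover

turnover_usd = {
    "X": 2.9e9,       # assumption: implied by the $175m ceiling ($175m / 0.06)
    "Meta": 164.5e9,  # Meta's reported 2024 global revenue
}

for company, turnover in turnover_usd.items():
    ceiling = DSA_MAX_FINE_RATE * turnover
    print(f"{company}: maximum DSA fine ≈ ${ceiling / 1e9:.2f}bn")
```

Running this yields roughly $0.17bn for X and $9.87bn for Meta – consistent with the $175 million and ‘over €9 billion’ figures above, up to rounding and exchange rates.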

The first year of the DSA’s implementation was full of new challenges and opportunities for a wide range of enforcement actors. As implementing the act has fallen largely to national authorities, attention has been on paving national paths to enforcement. Key challenges reported by European civic groups include delays in passing national laws, regulators’ resource shortages, procedural and funding obstacles for civil society collaboration, and a lack of public awareness of the purpose and spirit of the DSA. All of these hurdles can result in weak public support for the law’s enforcement.

A good way to explore these enforcement dynamics is to look more closely at the situation in Ireland. The country’s digital services coordinator (DSC), Coimisiún na Meán, one of the most prominent national DSCs in EU tech policy enforcement, has been working its way through around 400 complaints since the DSA took full effect and is coordinating the allocation of tasks with the European Commission. Irish Digital Services Commissioner John Evans has emphasised that enforcement is still in its early stages, with significant activity happening behind the scenes despite a lack of high-profile investigations. His team of 45, which includes former tech industry workers, regulators and NGO employees, aims to grow to 80.

  • Another example concerns the DSA’s article 22, which focuses on so-called trusted flaggers – entities with specialised expertise in identifying and flagging illegal content online. In implementing article 22, national authorities and NGOs spent the first year collaborating to build a first-of-its-kind framework of trusted flaggers. In the process, civil society groups cited resource constraints and slow uptake, and even became targets of disinformation campaigns themselves. But this trusted-flagger framework, which institutionalises civil society in the enforcement ecosystem, has set a high bar for collaborative governance in other areas of tech policy.

Looking at the DSA’s first year, some of the law’s provisions have shown their teeth more than others:

  • The DSA’s article 34 bore plenty of fruit. This article requires VLOPs and VLOSEs to assess systemic risks, including those related to recommender systems, the protection of minors and illegal content. So far, 27% of all DSA actions against big tech – 17 requests for information filed and seven sets of proceedings opened – were in the name of article 34.
  • Article 43, which requires the Commission to charge VLOPs and VLOSEs an annual supervisory fee, has also been remarkable, albeit in a different way, as it sparked legal challenges from companies reluctant to pay supervisory fees, including Alphabet, ByteDance, Meta and Zalando.

While these enforcement actions underscore the DSA’s growing role as a standard-setter for platform accountability and regulatory oversight, major political events—such as rising transatlantic tensions and alleged disruptions to democratic processes in Europe—have demonstrated the DSA’s value in bolstering democratic resilience across the continent. 

Let’s now take a closer look at these events and more to uncover lessons learned, best practices, and key enforcement trends that are shaping the way technology impacts European societies and markets.

Enforcement developments, unpacked

Looking beyond headlines, we unpack how tech policy is enacted, where enforcement is working, and what to keep an eye on.

Politics: What’s driving – or stalling – enforcement?

Over the past few months, three forces have been driving and stalling tech policy enforcement: the merging of tech power and politics in the US and rising transatlantic tensions; major disruptions in democratic processes involving platforms; and shifts in platforms’ fact-checking policies and compliance strategies.

Transatlantic tension presents both a barrier and a driver to enforcement

As soon as US President Donald Trump took office in January, US politicians, lawmakers and tech industry leaders began framing the EU’s digital rules as protectionist and over-reaching. This put the digital regulation of the EU and other actors at the centre of a political battleground over trade, sovereignty and security – effectively sidelining the rules’ legal and technical merits.

Here are a few noteworthy attacks on EU digital regulation:

  • Meta CEO Mark Zuckerberg urged the Trump administration to defend US tech companies against EU fines and investigations, comparing these measures to ‘tariffs’ on American firms, and to adopt a tougher stance on European regulators.
  • Members of the US House of Representatives sent letters to European commissioners expressing concerns about the DSA and the DMA. One such letter, to European Commission Executive Vice-President Henna Virkkunen, requested details of DSA enforcement against American companies and raised concerns about the act’s impact on free speech in the US. Another, to European Commission Executive Vice-President Teresa Ribera, warned that the DMA might unfairly target US companies.
  • At the World Economic Forum in Davos, Trump criticised European authorities for what he described as ‘very unfair treatment’, accusing them of imposing ‘a form of taxation’ on tech giants like Apple, Google and Meta.
  • At the Paris AI Action Summit and the Munich Security Conference, US Vice President JD Vance attacked the DSA and the GDPR as over-regulatory, even likening EU enforcement to Soviet-era censorship.
  • American opposition to the EU’s tech rulebook has also manifested itself in executive actions by Trump. The White House issued a memo proposing retaliatory tariffs on countries that impose heavy fines on US tech firms. With the same memo, Trump resumed investigations – initiated during his first term – into European digital services taxes.

Rising transatlantic tension over EU digital regulation has had a double-edged impact on EU tech policy enforcement. On the one hand, it seems to have delayed enforcement. For instance, the Commission extended its DSA investigation against X, which had already been ongoing since December 2023. When questioned, a Commission official commented that enforcement is ‘slow but sure’ as the stakes are higher than ever, and that any mistakes could lead to lengthy and costly appeals. On the other hand, the high profile of these political tensions has fuelled support for enforcement, driving civic actors to stand up for effective enforcement of the entire digital rulebook, with reports, analyses and open letters all aiming to ensure that Europe holds its ground.

Democratic processes have become a critical battleground for DSA enforcement

The botched Romanian presidential election in November 2024 first put the DSA to the test. Authorities triggered the DSA’s transparency and content-moderation requirements to tackle election-related disinformation linked to ByteDance-owned TikTok. Although Romania’s constitutional court cancelled the result of the first round of the election, the DSA provided European authorities with the legal grounds to investigate TikTok’s role in the process. Rather than forcing an immediate political decision without evidence, the DSA allowed the authorities to assess the issue at stake – taking measures such as a retention order for deeper investigations, which can lead to accountability if evidence of wrongdoing is found. However, the DSA also exposed its limitations, such as time constraints, as acting quickly is crucial in politically charged situations.

Later, in Germany, the DSA was tested again. In the run-up to the country’s federal election in February 2025, Elon Musk came out strongly in support of the far-right Alternative for Germany (AfD) party. He posted on X that ‘only the AfD can save Germany’ and livestreamed an interview with AfD Co-Leader Alice Weidel. This intervention sparked legal action in Germany, and in response, four major groups in the European Parliament called for a plenary debate on Musk’s comments and sought clarification on whether they violated the DSA. Meanwhile, a coalition of 12 European affairs ministers urged the Commission to strengthen election-integrity measures, with a focus on foreign interference and algorithmic distortion.

This time, the Commission took a more holistic approach. In response, it not only extended its ongoing DSA investigation against X, issued a retention order for deeper investigations, and conducted a stress test in collaboration with the German digital services coordinator; it also strengthened the law by integrating key regulatory frameworks. This included incorporating the revised Code of Conduct on Countering Illegal Hate Speech and reclassifying the voluntary Code of Practice on Disinformation as a co-regulatory instrument under the DSA.

Shifts in platforms’ fact-checking policies

Despite recent serious cases of disinformation on the continent, US tech companies have begun moving away from traditional journalist-led fact-checking in favour of user-driven models. This shift has raised alarm about the firms’ willingness to comply with EU rules and fulfil broader regulatory obligations.

For example, Meta announced it would replace its US journalist-led fact-checking with community ratings – similar to X’s Community Notes – and even submitted a risk assessment of these changes to the Commission. Meanwhile, Zuckerberg escalated his criticism of EU regulations, urging the US to defend tech companies against what he called European ‘censorship’. Following this, Google signalled a shift away from fact-checking by announcing plans to withdraw its commitments under the 2022 Code of Practice on Disinformation, affecting both Google Search and YouTube. LinkedIn soon followed suit.

In response to these moves, top EU officials defended the DSA, arguing that it did not require platforms to remove lawful content. But some legal experts were not completely convinced. They pointed out that the DSA targets not only illegal content but also lawful content that poses systemic risks, highlighting the grey zone between free expression and platform governance.

Players: Are enforcers equipped to uphold these laws?

This winter was a pivotal period for institutional players who shape enforcement priorities and structures. New EU leadership is now in place, with confirmed European Commissioners and dedicated European Parliament working groups focused on implementing major tech legislation. Yet, the evidence presented below raises concerns about how prepared and unified this leadership is in navigating the challenges of enforcement.

During her confirmation hearing in November, Executive Vice-President Virkkunen notably sidestepped questions about the impact of EU-US relations under the new Trump administration. Meanwhile, the European Ombudsman published preliminary findings stating that the Commission’s secrecy about disclosures by X under the DSA breached EU transparency rules and constituted ‘maladministration’. Most recently, meetings between the Commission and various tech companies, including Apple, Meta and Google, have fuelled speculation about whether DMA enforcement priorities are being quietly recalibrated in response to political pressure.

The Parliament is not immune to similar concerns. While it has finalised its working groups to oversee the implementation of the AI Act, the DMA and the DSA, recent plenary discussions of DSA enforcement revealed a lack of political consensus on addressing enforcement challenges. Although the creation of working groups shows a commitment to tackling regulatory issues, internal debates – especially on politically sensitive topics, like free speech and disinformation – suggest that a unified approach to enforcement remains elusive.

In addition to these concerns, there are persistent calls for increased personnel at the Commission to manage the growing enforcement burden of major tech laws. On DMA enforcement, Executive Vice-President Ribera emphasised during her confirmation hearing that ensuring compliance from Apple, Google and others requires significantly more staff, as collaboration among national authorities alone is not enough. Similarly, concerns about AI Act enforcement are growing. In December, Member of the European Parliament (MEP) Axel Voss urged the Commission to raise its ambition for the AI Act to match the DSA, calling for over 200 staff in the Commission’s AI Office this year and warning that, without sufficient legal and operational resources, the EU’s AI rules might fall short in practice.

Policies: What is actually being enforced?

With time and implementation experience, especially under mounting political pressure, EU tech regulations are beginning to show their strengths and weaknesses.

Starting with the DSA, the eligibility thresholds for VLOP and VLOSE status are proving an unstable metric. As platforms’ user bases fluctuate, they move in and out of regulatory obligations, depending on whether they surpass or fall below the user threshold. For instance, Waze and WhatsApp recently surpassed the 45 million user threshold, triggering VLOP status and stricter compliance, while major adult-content platforms reported declining traffic, potentially releasing them from the stricter VLOP-tier obligations. These market shifts underscore the need for adaptable and responsive regulatory oversight.
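To illustrate the mechanics, here is a minimal toy sketch of the threshold logic. It is not an official methodology – designation is a formal Commission decision based on platforms’ self-reported average monthly active recipients in the EU – and the service names and user counts below are hypothetical.

```python
# Toy illustration of the DSA's 45 million user threshold. Designation is
# a formal Commission decision based on self-reported average monthly
# active recipients in the EU; this sketch only shows how obligations can
# flip as reported user numbers cross the bar.

VLOP_THRESHOLD = 45_000_000  # average monthly active recipients in the EU

def exceeds_threshold(avg_monthly_active_recipients: int) -> bool:
    """Return True if a service's reported EU user base crosses the bar."""
    return avg_monthly_active_recipients >= VLOP_THRESHOLD

# Hypothetical services drifting around the threshold
for service, users in [("Service A", 46_200_000), ("Service B", 41_500_000)]:
    status = "above" if exceeds_threshold(users) else "below"
    print(f"{service}: {users:,} reported users – {status} the threshold")
```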

Under EU antitrust laws and the GDPR, both the EU and member states issued significant fines this winter—a move that, while exposing existing issues, reinforces their policy commitments and may help allay concerns about the EU’s dedication to holding platforms accountable.

The Commission fined Meta €797.7 million ($862.3 million) for abusing its dominant market position to benefit its own Facebook Marketplace. For context, this fine is 0.53% of Meta’s $164.5 billion global turnover last year. In response, Meta opened Facebook Marketplace to third-party classified ads. This isn’t a closed case yet, though, as Meta appealed the Commission’s decision – watch this space!
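As a quick sanity check on that percentage – and for comparison, EU antitrust fines are capped at 10% of worldwide turnover, so this one sits far below the ceiling – the dollar figures above can be divided directly; small differences from the 0.53% cited come down to rounding and exchange-rate choices.

```python
# Sanity check: the Marketplace fine as a share of Meta's 2024 turnover,
# using the dollar amounts cited in this piece.

fine_usd = 862.3e6      # $862.3m (€797.7m)
turnover_usd = 164.5e9  # Meta's 2024 global turnover

print(f"Fine as share of turnover: {fine_usd / turnover_usd:.2%}")
# Prints ≈ 0.52%, matching the ~0.53% above up to rounding and FX choices.
```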

At the national level, major GDPR fines against Meta and OpenAI show the strength and importance of the law for compelling data protection practices among big-league players. In the first case, the Irish Data Protection Commission fined Meta €251 million ($271 million) for a 2018 data breach that affected 29 million users, including 3 million in Europe. In the other, Italy’s data protection authority, the Garante, fined OpenAI €15 million ($16 million) and mandated a six-month public information campaign over GDPR violations related to the company’s ChatGPT service.

Another interesting trend to note is that enforcement is not purely about fines. As Virkkunen and Ribera have rightly argued, the purpose of these laws is to ‘ensure compliance – not to issue fines’. One way the enforcement of these policies encourages compliance is through proactive measures, like publicly reviewing and deliberating compliance methods.

For instance, after DMA-designated gatekeepers—or large digital platforms that entrench control over a core platform service—submit their compliance reports, the Commission organises public workshops to allow stakeholders to seek clarifications and provide feedback on proposed compliance solutions. This winter, Booking took the stage to address perceived compliance concerns and emerged without incident. Despite political concerns about the Commission’s reassessment of cases, these efforts suggest that DMA enforcement remains on track.

Similarly, the European Data Protection Board (EDPB) made notable efforts to integrate GDPR enforcement into other legislative areas, particularly AI governance, by fostering regulatory coordination with the AI Office and the AI Board. The EDPB also reinforced its commitment to an integrated and cohesive regulatory framework through an opinion on AI models, highlighting the GDPR’s role in responsible AI development, and a paper on the intersection of data protection and competition law.

And finally, implementation of the AI Act saw both progress and challenges.

Since February, the act’s first provisions have applied. This means that prohibitions on AI practices posing unacceptable risk, such as social scoring, are in effect. Compliance preparations have begun for high-risk AI systems, which are set to face additional requirements from August 2026. In noteworthy progress, the AI Office released the third draft of the General-Purpose AI Code of Practice after a several-week delay and launched an interactive website alongside it.

Yet, implementation and enforcement challenges also remain for AI governance in Europe: 

  • Most concerning is the Commission’s decision to withdraw the AI Liability Directive, citing a lack of a foreseeable agreement. This has drawn criticism from experts who view the move as a concession to big tech at the expense of consumer protection and European small and medium-sized enterprises.
  • The policy’s definition of systemic risk remains unclear. This lack of clarity will make the policy difficult to enforce, especially as AI technologies evolve so quickly. One concern is whether AI-driven content amplification of politically sensitive topics – such as protests, government criticism, the Israeli-Palestinian conflict, LGBTQ+ rights or abortion – could be classified as a systemic risk if deemed to significantly impact public security or fundamental rights.

Another pressing issue is the lack of clear methods and robust criteria to identify systemic risks. As MEP Brando Benifei emphasised and journalists detailed, more precise guidelines are urgently needed.

Procedures: How is the enforcement ecosystem working in practice?

Beyond formal actions taken by the Commission, enforcement is a collective effort. In other words, regulators don’t carry the enforcement burden alone. The DSA, the DMA and the GDPR are being enforced via decentralised models, with monitoring and oversight responsibilities carried by private actors. This can help avoid bottlenecks and enhance effective application of the rules.

To complement public enforcement action, private enforcement is gaining momentum. Funders, activists, advocacy groups and litigation firms are leveraging strategic litigation and class actions to push for compliance in Europe. In a landmark case in February, a Berlin court ruled in favour of two German civil rights organisations and ordered X to provide data access to researchers for monitoring misinformation ahead of Germany’s election. As demonstrated by this case, private actors can play a vital role in Europe’s tech policy enforcement ecosystem. Their reach and agility enable them to complement public enforcement efforts, making strategic litigation an effective tool for ensuring accountability and regulatory compliance. 

Beyond private litigation, civic actors played a crucial role by engaging in monitoring and reporting, publishing analyses and conducting public advocacy. These efforts supported enforcement and helped shape a more collaborative tech regulation ecosystem.

  • To compel better compliance with the DMA, the European Consumer Organisation (BEUC) published a report urging companies to step up their compliance efforts. To support better enforcement of the DSA and the DMA, the European Digital Rights (EDRi) network issued an open letter calling on the Commission to strengthen enforcement, invest in decentralised social networks and resist ‘political pressure from Big Tech companies’.
  • EDRi also launched its Digital Services Coordinator Database, which provides a comprehensive overview of all enforcement authorities and the cases they have taken against online platforms under the DSA.
  • To advocate stronger GDPR enforcement, the European Centre for Digital Rights (NOYB) revealed that only 1.3% of cases before EU data protection authorities result in fines, highlighting long procedures that often end in settlements or dismissals. 
  • A collection of civil society groups criticised the ongoing negotiations on GDPR procedural rules in an open letter, warning that these discussions risk missing the opportunity to strengthen enforcement procedures and calling for shorter, more efficient and rights-respecting processes as a core part of EU digital law enforcement.
  • To support effective enforcement of the AI Act, the Future of Life Institute is tracking national implementation plans, including by providing an overview of the national authorities to be designated under the act.

Private actors are also strengthening compliance and reducing the enforcement burden through internal and external audits. For instance, the Commission deemed the first round of independent audits under the DSA for VLOPs largely successful, noting that many platforms had already begun self-correcting identified flaws. Meanwhile, the 19 VLOPs and VLOSEs designated in April 2023 have released their first risk assessments and audit reports. The Commission recently organised workshops where platforms presented their findings to regulators, civil society and other stakeholders, fostering accountability and collaboration.

Final word

Although Trump’s second term is only two months old, its impact on global digital governance is already evident. The EU’s digital rulebook – long praised as having a ‘Brussels effect’ on tech regulation worldwide – is under attack. The Commission’s forthcoming Digital Package, planned for the fourth quarter of 2025, will review cybersecurity laws, reporting obligations and other digital regulations. It will even potentially amend key legislation, such as the GDPR, the Data Governance Act, the Data Act, the Cybersecurity Act, the Cyber Resilience Act, the EU Chips Act and the AI Act.

Although the German election and the annulled Romanian vote have come and gone, retention orders were issued to preserve information and evidence in the event of further Commission investigations into platforms’ compliance with their DSA obligations. The results of such investigations are yet to be shared, and accountability for any interference in European democratic processes remains pending. Similarly, the Commission is still expected to decide whether shifts in third-party fact-checking policies align with the DSA.

The good news is that the EU has gained valuable experience in handling election interference with its new digital rulebook – experience that has lent EU policymakers confidence in the value and purpose of digital and data policy in the face of mounting transatlantic tension. Significant enforcement actions continue, with major cases ongoing under the DSA and the DMA.

But there is no denying that uncertainties abound in the digital enforcement space. Will calls for increased personnel be addressed as Europe shifts its policy focus towards competitiveness and defence? Will the White House follow through on the policy complaints of big tech? And are EU leaders prepared to defend the digital policy legacy of the past decade?

Predicting the future is famously a fool’s errand, but one thing is sure: European enforcement actors must brace themselves for a turbulent period and plan to push back against mounting challenges, from within and without.

Enforcement highlights by legislation

In this section, catch up on major enforcement highlights, sorted by legislation.

General Data Protection Regulation (GDPR)

  • The European Data Protection Board (EDPB) welcomed Meta’s announcement of a new option for less personalised advertising in the EU in response to regulatory demands, with EDPB Chair Anu Talus describing it as a positive step towards ‘less invasive ads’, though the solution still requires evaluation. (18 November)
  • The European Data Protection Supervisor (EDPS) found that the European Commission had breached the GDPR by processing special categories of personal data during social media ad targeting campaigns, following a 2023 complaint by the European Centre for Digital Rights (NOYB). The EDPS issued a reprimand, formally declaring the processing illegal and issuing a warning, but refrained from imposing fines as the Commission had already ceased the practice. (13 December)
  • NOYB filed GDPR complaints in five EU member states against AliExpress, Shein, Temu, TikTok, WeChat and Xiaomi over unlawful data transfers to China, citing concerns that these companies’ European addresses are mere mailboxes. (16 January)

Digital Services Act (DSA)

  • The public interest association e-Enfance became France’s first trusted flagger under the DSA. e-Enfance was recognised for its nearly 20 years of work protecting minors online and for meeting criteria such as expertise, independence and diligence in flagging illegal content. France’s Regulatory Authority for Audiovisual and Digital Communication (ARCOM) hailed the designation as a significant step in enhancing online audience protection. (6 November)
  • TikTok became the first platform to comply with decisions by User Rights, the first organisation certified as an out-of-court dispute-settlement body under the DSA in Germany. The platform reinstated content and accounts after recognising unjustified restrictions. While addressing systemic risks on big tech platforms is a priority, individual remedies remain essential for effective DSA enforcement, noted User Rights Co-Founder Niklas Eder. (18 December)
  • The Commission released a DSA elections toolkit for digital services coordinators, outlining best practices in stakeholder management, media literacy, incident response and risk monitoring to safeguard EU electoral integrity. (21 February)

Digital Markets Act (DMA)

  • In its response to a consultation on interoperability under the DMA, Apple criticised Meta’s 15 requests for ‘far-reaching access’, arguing that they could jeopardise user privacy by enabling actions such as reading messages and emails, tracking phone calls and app usage, and scanning photos. In a five-page document highlighting its privacy-focused approach, Apple emphasised the risks of opening its systems to ‘data-hungry companies’. (18 December)
  • Despite reports to the contrary, the Commission denied any pause in DMA investigations, stating that ongoing meetings with US representatives were focused on case maturity and resource allocation, not scaling back probes into Apple, Google and Meta. (15 January)
  • Olivier Guersent of the Commission’s Directorate General for Competition emphasised that EU law requires all investigations to be concluded with a formal decision and cannot be arbitrarily dropped. (27 January)

Artificial Intelligence (AI) Act

  • Member states did not advocate new legislation on AI and copyright, according to a summary by the Hungarian presidency of the EU Council. The report highlighted that most member states see no relevant case law to justify new legislation on AI and copyright, despite legislative actions in two countries. (11 December)
  • The Commission’s AI Office presented a draft template for AI model training data disclosure under the AI Act, detailing data set sources, processing methods and the top 10% of consulted sites. Final guidelines are expected between April and June 2025, before the act’s general-purpose AI obligations apply on 2 August. (20 January)
  • Big tech pushed back against the General-Purpose AI Code of Practice, with Meta’s Joel Kaplan calling it ‘unworkable’ and confirming the company would not sign, while Google’s Kent Walker criticised the code as exceeding the AI Act’s scope but deemed it ‘workable’, with copyright being a key concern. (7 February)
  • The Commission adopted an implementing act to establish a scientific panel of independent AI experts to advise the AI Office on identifying systemically risky AI and developing classification and assessment tools, though the panel’s composition remains unknown. (10 March)

Competition

  • French publishers summoned X to a Paris court to demand payment of neighbouring rights owed to French media, following a May 2024 ruling that required the platform to provide data for calculating these rights – a requirement X has yet to meet. Similar actions were filed against LinkedIn and Microsoft. (12 November)
  • Intel received $536 million in interest on a €1.1 billion ($1.2 billion) EU antitrust fine that was annulled by EU courts. (31 January)
  • Mediahuis sued X in Ireland over scam ads that mimic genuine news articles, alleging trademark and copyright infringement. The lawsuit claimed that these ads, which used the Irish Independent’s logo and journalist by-lines, were promoted by ‘verified’ blue-tick users, posing reputational risks. (19 February)
  • TikTok cut moderation staff in Europe, including Ireland, as part of a broader restructuring of its safety teams across multiple regions. The Commission stated that it was in close contact with TikTok to assess the implications of these changes. (21 February)

Authors

Mehmet Onur Cevik

Governance Analyst – Policy

Centre for Future Generations