Europe and the geopolitics of AGI

The need for a preparedness plan

This paper was co-authored with RAND Europe, MIT, the University of Oxford, the Max Planck Institutes, and the ELLIS Institute. It was published by RAND Europe.


Executive Summary

This report recommends that the European Commission President should commission an AGI Preparedness Report with the political authority, analytical breadth and urgency of the Draghi Report. This recommendation follows three broad assessments made in the report. Firstly, artificial intelligence (AI) systems matching or exceeding humans at most economically useful cognitive work – our working definition of artificial general intelligence (AGI) – could plausibly arrive between 2030 and 2040, or even earlier. Secondly, the emergence of AGI systems could fundamentally reshape the global distribution of power, with major implications for economic growth, military capabilities and international stability. Thirdly, Europe does not yet have the comprehensive strategic awareness, competitive positioning, or policy strategies to navigate a potential shift to a world with AGI. Without a preparedness plan that treats the prospect of AGI as a core strategic concern for its security and prosperity, Europe risks marginalisation in what may prove the world’s most consequential technological transition.

The near-term emergence of AGI is likely enough to warrant significant attention from European policy makers

AI capabilities have progressed quickly but unevenly in recent years. Six years ago, general-purpose AI systems struggled to write coherent text; today, they win gold medals at international mathematics olympiads and outperform top coders in competitions. Yet they remain brittle in important ways: they hallucinate facts, sometimes fail at simple visual reasoning, and cannot reliably perform physical tasks such as folding laundry. This uneven landscape is sometimes described as a ‘jagged frontier’, with systems excelling in domains with clear success criteria while still struggling with tasks that rely on tacit knowledge or long-horizon planning.

Rapid increases in compute, data and algorithmic efficiency are likely to continue over the next few years. Past progress in AI has been driven by rapid increases in training compute, training data and algorithmic efficiency. Training compute has grown by roughly a factor of 5 per year, training data by a factor of about 3.6 per year and algorithmic efficiency by a factor of about 3 per year. These rates could slow in the coming years, which would still mean continued progress, just at a more measured pace. There is also a real possibility of further acceleration, especially in algorithmic progress. At present, the evidence does not suggest a sharp slowdown before 2030.
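To illustrate how these growth rates compound, a rough back-of-the-envelope calculation (using only the approximate factors quoted above; the horizon of five years is an illustrative assumption, not a forecast from the report) treats effective training compute as physical compute multiplied by algorithmic efficiency:

```python
# Illustrative compounding of the approximate annual growth factors quoted
# above. The 5-year horizon is an assumption for illustration only.
compute_growth = 5.0   # physical training compute: ~5x per year
algo_growth = 3.0      # algorithmic efficiency: ~3x per year

# Effective compute combines both: ~15x per year under these assumptions.
effective_growth_per_year = compute_growth * algo_growth

years = 5
total_multiplier = effective_growth_per_year ** years
print(f"Effective compute multiplier over {years} years: ~{total_multiplier:,.0f}x")
```

Even if each individual rate slowed substantially, the multiplicative combination of these trends would still imply large capability gains over the decade.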

While future capabilities are hard to predict, AGI could plausibly emerge between 2030 and 2040 or even earlier. Definitions of AGI are contested, with authors using different milestones and some rejecting the concept outright. We adopt the working definition of ‘AI systems matching or exceeding humans at most economically useful cognitive work’. On this definition, empirical trends and expert judgement suggest that AI could plausibly reach AGI-level performance between 2030 and 2040 or even earlier. This range involves significant uncertainty and should be seen as one plausible scenario rather than a precise forecast. Even so, the near-term emergence of AGI is sufficiently plausible to warrant significant attention from European policy makers.

The emergence of AGI could expose Europe to major shifts in global power as well as strategic instability

Actors that successfully deploy AGI could gain major economic and military advantages. Countries could achieve rapid economic growth by automating human labour and accelerating science and research, allowing them to outproduce competitors and exert significant geopolitical influence. In the military realm, AGI could transform warfare through automated intelligence fusion, enhanced decision making and planning, large fleets of autonomous systems, sophisticated cyber operations and potentially the development of entirely new classes of weapons. Some of these effects are already visible with current AI systems, but AGI could amplify them by enabling much higher levels of competence and generality.

The emergence of AGI might lead to geopolitical instability. States and companies may race to develop AGI, acting aggressively to gain a competitive advantage. Governments may try to block rivals through export controls, cyberattacks or escalatory deterrence actions. Shocks to labour markets may lead to internal instability and unrest. AGI also poses risks beyond nation states: it could help terrorists build weapons of mass destruction, enable AI systems that pursue goals misaligned with human values or allow for dangerous concentration of power. Current AI capabilities have already enabled largely autonomous cyberattacks and have shown potential for misuse in biological weapon design.

Major powers already treat AI and the potential for AGI as a strategic priority. The US frames AI leadership as essential to national security and has imposed semiconductor export controls on China. Beijing aims to become the global AI leader by 2030 and is accelerating efforts to indigenise its AI supply chain. The United Kingdom (UK), the United Arab Emirates, Singapore and other middle powers are scrambling to position themselves to retain influence while keeping access to the AI frontier. Europe must orient itself to this new competitive reality and respond strategically in a way that fits its own asymmetric strengths and weaknesses.

Europe is not yet sufficiently prepared for a transition to a world shaped by AGI

Strategic awareness of frontier AI developments remains uneven across European governments and institutions. In the absence of leading domestic frontier AI firms, European policy makers often rely on external sources of information which can complicate the assessment of technical claims. The UK has begun to build substantial analytical capacity through its AI Security Institute, but similar capabilities on the continent are still emerging. Germany, for instance, does not participate in key international AI safety fora, and the EU AI Office, while comparatively well-staffed, currently operates with a focused mandate and relatively limited budget.

Europe trails on most measures of AI competition and its strategic levers provide limited geopolitical influence. European AI models lag behind US and Chinese models by six to twelve months. Europe hosts only about 5 per cent of global AI computing capacity, compared with roughly 75 per cent in the US. EU start-ups attract just 6 per cent of global AI venture funding. Energy costs in Europe are significantly higher than in the US, and Europe loses much of its top AI talent to better-capitalised US companies. In theory, the EU’s market power and chokepoints, such as Dutch firm ASML’s monopoly on EUV lithography tools, give Europe levers to negotiate access to foreign frontier AI and compute and to influence global AI governance beyond the AI Act. In practice, these levers are constrained by geopolitical sensitivities and by partners’ potential concerns around the diffusion of more transformative AI systems.

Europe’s existing AI strategies are fragmented across institutions and policy domains and often lack scale and integration. Current EU AI initiatives, including the AI Continent Action Plan, ApplyAI, InvestAI and AI Factories, are important steps but remain under-resourced and spread across multiple Commission Directorates-General and Member States, with limited coordination. The EU’s reliance on consensus-based decision making slows policy responses in geopolitical crises, while many AGI-relevant levers in defence, intelligence and critical infrastructure remain national competencies without clear mechanisms for joint action. Recent moves such as the Frontier AI Initiative announced by France, Germany and the European Commission signal growing ambition, but practical implementation paths remain uncertain.

Recommendation: The EU should commission an AGI Preparedness Report

The President of the European Commission should commission an AGI Preparedness Report. Getting AGI policy wrong, whether through premature investments or failure to secure access to critical capabilities, could be very costly. Europe therefore needs an integrated and rigorous preparedness plan that clarifies the big picture and addresses the many interlocking challenges it may face in a transition to AGI. An AGI Preparedness Report would identify the most important questions, map the main options, and recommend strategic directions grounded in shared assumptions across EU institutions and Member States.

The AGI Preparedness Report should take inspiration from the Draghi Report on EU Competitiveness. It should be led by a figure with exceptional political authority and technical credibility, supported by a world-class team with the freedom, access and resources to deliver a rigorous assessment within months rather than years. The Report should clearly delineate which actions require coordination at the EU level and which demand national leadership, and set out mechanisms to ensure that both these strands act in concert. Its authors should have the intellectual independence to reach evidence-based conclusions, which should be presented to the European Council at a dedicated event to enable a parallel diplomatic process.

The Report should address three core challenges relating to AGI: capturing economic benefits while staying sovereign, preparing societies for rapid change, and strengthening global governance. It must identify which capabilities can be procured from partners, which require European control, and how Europe should invest in domestic capabilities. It should set out how European societies can prepare for potential labour shocks and novel security threats, while building the state capacity to manage unforeseen challenges. Finally, it should articulate Europe’s vision for stable international AGI governance and position the EU as a trusted and influential actor committed to safe and ethical AI development.

Centre for Future Generations