Frontier AI initiative: Five promising signs and five key choices to get right for digital sovereignty
France, Germany, and the European Commission will launch the European Frontier AI Initiative, the world’s best funded non-profit AI initiative. The ambition and key design principles of the effort suggest that European actors are taking frontier AI development seriously. Now, the challenge is proper implementation ahead of the Initiative’s Q1 2026 launch. With a clear mission, effective resource-sharing structures, appropriate security measures, leadership that is both technically and politically savvy, and agile governance, the Initiative can deliver and scale breakthrough innovations towards more powerful and reliable frontier models.
The missing piece: sovereign frontier AI research
CFG endorses the need for a new Frontier AI Initiative focused on developing more reliable models. On the one hand, secure access to and control over this technology provides a source of strength. Reliable AI can deliver substantial economic value, help counter hybrid threats, and mitigate unintended social harms. On the other hand, creating dependency on currently leading American or Chinese providers risks data theft, service restrictions, and unfavourable terms of trade. Resolving this predicament is possible through international resource-pooling to match the scale required for frontier AI development.
To invent truly reliable models operating at the scale of frontier models, focused efforts are pivotal. The Initiative therefore complements the planned AI innovation framework of Europe and its key like-minded partners (assuming adequate and prompt implementation) by addressing a critical gap:
- The AI Gigafactories would provide the large-scale compute needed for AI development.
- The Data Union Strategy could facilitate high-quality untapped data for AI training.
- RAISE could help coordinate and better resource Europe’s existing knowledge ecosystem.
- The Frontier AI Initiative could build on these foundations to pursue research best done in-house: security-sensitive and dual-use work that requires careful institutional oversight.
Promising early plans
The Frontier AI Initiative has:
- the right high-level goal,
- the resources to deliver world-class progress on it,
- a fitting fundamental structure,
- the openness to key partners, including those outside Europe, and
- the correct timeline to act.
Aiming to “develop models that push the frontiers of AI” enables Europe and its like-minded partners to develop a cutting edge in AI that is not only “more capable [but also] more reliable”. Reliable models serve scientific, manufacturing, and myriad other use cases better, strengthening Europe’s leading industries. Moreover, the Initiative seeks to push the AI frontier to be “more aligned with our values, more complementary to humans’ skills,” addressing cultural and labour market worries.
Resourcing the organisation as the “best funded non-profit Frontier AI initiative in the world” gives it the necessary funds not only to innovate in frontier AI but also to scale up breakthroughs. According to the latest estimates by EpochAI, only organisations with billion-dollar budgets will have the resources to carry out frontier AI training starting in 2027. As the announcement rightly pointed out, “competitive salaries, scientific freedom, access to efficient computing and technology stack, and unique data sources” are key elements, yet they come at an increasingly steep cost.
Structuring the organisation as a public-private non-profit helps focus resources on high-risk, high-reward research. This non-profit setup enables experimentation with novel research paradigms, something that profit-oriented AI labs are less incentivised to do. Shielding the Initiative from the immense commercial and geopolitical pressures of US-China competition to “release or perish” (a dynamic all too common among leading AI labs) helps it prioritise groundbreaking research over near-term capabilities. Justifiably, the Initiative aims to hand over incubated champions once “private capital can take over.”
Partnering with “other Member States, European and international partners and investors and philanthropies” taps into funding, compute, data, and expertise from a diverse range of stakeholders. For instance, including Canadian, Swiss, and UK stakeholders could bring uniquely strong talent, monetary, and infrastructural assets to the table.
Acting urgently on frontier AI is the only way not to miss the boat. Frontier model development costs increase roughly 3.5-fold every year. Entry into frontier AI development will soon become prohibitively expensive unless the EU starts reaping, and then reinvesting, the economic gains of AI immediately.
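To illustrate how quickly this 3.5-fold annual growth compounds, the sketch below projects training costs over a few years. The $1 billion 2027 baseline is an assumption taken from the EpochAI estimate cited above; all figures are illustrative, not forecasts.

```python
# Illustrative compounding of frontier-training costs.
# Assumes a hypothetical $1B run in 2027 and the ~3.5x annual
# growth rate cited in the text; both figures are illustrative.
BASE_COST_B = 1.0  # assumed training cost in 2027, in billions of USD
GROWTH = 3.5       # assumed annual cost multiplier

for year in range(2027, 2031):
    cost = BASE_COST_B * GROWTH ** (year - 2027)
    print(f"{year}: ~${cost:,.1f}B")
```

Under these assumptions, a $1B training run in 2027 grows to over $40B by 2030, which is why barriers to entry rise so steeply for late entrants.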
The devil in the details
For the Initiative to deliver on its promise, it must:
- translate its high-level goal into a clear-eyed, granular mission,
- share resources effectively,
- secure its geopolitically sensitive research,
- choose a visionary yet politically savvy leader, and
- govern with agility on all levels.
Specifying the goal of inventing reliable and human-centric frontier AI could help the Initiative focus. For AI systems to serve Europe’s public administration and industries best, they need to be interpretable, technically robust, and supportive of human agency. The reproducibility of science, the reliability of defence systems, and the trustworthiness of government services hinge on AI systems that their deployers understand and can effectively control. Such trustworthy AI systems, in turn, see higher adoption rates (26%, according to Deloitte), leading to higher returns on investment.
Sharing resources effectively enables stable long-term collaboration. Researchers working at the Initiative must receive the resources they need for promising research projects, be it AI Gigafactory compute, data, or funding. Teams with mixed nationalities and affiliations could help ensure a smooth redistribution of resources to the highest expected value research projects. The physics research institute CERN offers a strong precedent for such a design, with 24 member countries and 100+ nationalities among its researchers. However, concerns around intellectual property, cross-border data flows, and other practicalities that emerge in shared projects with several contributing parties also require effective management. Moreover, different promising AI innovation projects prove themselves on different timelines, making redistribution a difficult, though no less crucial, decision.
Securing sensitive research is pivotal to best leverage the Initiative for digital sovereignty. The organisation’s research into reliability is likely to generate substantial value, making it a lucrative target for espionage and theft, as well as for geopolitical pressure. Adequate (cyber)security is vital.
Selecting leadership well substantially affects organisational performance across effectiveness, speed, and other metrics. An executive team with proven experience in rapidly building a world-class research institute, securing sufficient and stable political and scientific buy-in for the Initiative’s most promising work, and creating a culture of bold experimentation can help the organisation hit the ground running on its ambitious planned timeline. Matt Clifford’s founding leadership of the UK AI Safety Institute provides a partial template for establishing an AI-focused organisation with strong purpose, political capital, and agile structures.
Governing well extends beyond the executive team to the Board and lower-level decision-making committees. Including the right mix of technical and political stakeholders and moving away from consensus-based models are key components of agile governance, as demonstrated by the cautionary tales of earlier European ventures. CFG research into the transparent and accountable governance design of a public-private frontier AI lab further details a possible institutional setup.
Next steps
The Frontier AI Initiative represents Europe’s best opportunity to shape the future of AI innovation rather than merely react to it. Success requires the founding task force to make five critical decisions correctly before the Q1 2026 launch:
- defining a mission centered on interpretability, robustness, and human agency;
- creating resource-sharing mechanisms that truly integrate European assets;
- preparing appropriate security for geopolitically sensitive research;
- selecting leadership with both technical credibility and political acumen; and
- designing agile governance that enables rapid delivery on the Initiative’s ambitious goals.
CFG stands ready to support this work, drawing on our research into institutional design, complementary EU initiatives, and technical AI innovation. Our research into implementing the above five recommendations continues.


