Response to the call for evidence on a European strategy for AI in science
Executive summary
Europe faces a critical challenge in accelerating AI adoption in scientific activities to enhance research capabilities, maintain global competitiveness, and reduce technological dependency. As outlined in the present call for evidence[1] on a European Strategy for AI in Science, this challenge affects “the speed and quality of scientific output, reducing the impact of EU science globally,” with significant implications for Europe’s economic growth and strategic autonomy. This adoption challenge reflects a deeper issue: Europe’s lack of domestic trustworthy AI capabilities. The Centre for Future Generations (CFG) has researched this extensively through our “CERN for AI” proposal[2] – a blueprint for an ambitious institution to develop trustworthy general-purpose AI and catalyze European AI innovation.
The Resource for AI Science in Europe (RAISE), announced in the AI Continent Action Plan[3], represents a promising institutional approach through the creation of a European AI Research Council. If designed properly, RAISE could bring to life the research hub part of CERN for AI[4] and in turn:
- Boost adoption and development of AI tools for scientific applications
- Add scientific rigour to the field of trustworthy AI, ensuring AI models are reliable, robust and safe
- Help build a thriving European AI ecosystem
- Become one of Europe’s premier AI talent magnets
- Establish European AI sovereignty and ensure economic competitiveness
 
However, RAISE’s design must be as bold as its goals. In this submission, we start by outlining the challenges of the European AI landscape that RAISE must address, and then proceed to strategic design recommendations that would give RAISE the best chance of success in advancing both science in AI and AI in science.
Context: The European AI landscape faces three interconnected challenges that RAISE must address
First, European science (talent and infrastructure) remains strong but fragmented, as noted by Mario Draghi[5] and as evident in the scarcity of large European tech firms and AI start-ups. While European universities produce exceptional AI researchers, the fragmentation of resources and opportunities drives them to concentrated hubs in Silicon Valley, London, and Beijing. This fragmentation hampers progress because frontier AI research benefits from concentration[6] of computing resources and talent. The most successful AI companies—OpenAI, Anthropic, and Google DeepMind—have achieved breakthroughs precisely by combining massive computing infrastructure with dense clusters of elite researchers. The same pattern holds for catch-up actors like DeepSeek, whose much-publicized efficiency gains still required[7] billion-dollar infrastructure investments comparable to those of Western labs.
Second, current AI models lack trustworthiness, and the science of making them trustworthy is lagging[8]. While other industries invest 50-90% of their R&D budgets in safety research, the AI industry invests only about 2%[9]—a gap RAISE could help address. Closing it would not only make AI systems safer and more reliable, it would also boost adoption in Europe. Europe excels in many sectors where high reliability is crucial, like manufacturing and pharmaceuticals. Without AI systems that companies in such sectors can trust, adoption will remain limited. More trustworthy, European-made models would also be better aligned with existing regulations like the AI Act, further lowering adoption frictions.
Third, the popular sectoral approach to European AI innovation has left Europe vulnerable in general-purpose AI (GPAI). Yet progress in AI is increasingly dominated by GPAI models that work across many sectors with minimal customization. The trends point towards[10] these versatile systems rendering more and more sectoral approaches obsolete: general-purpose models already outcompete specialized counterparts in catching software bugs[11], translating languages[12], and diagnosing medical conditions[13]. Europe must develop its own general-purpose capabilities to remain globally competitive.
To address these challenges and develop trustworthy AI, RAISE should balance centralized and decentralized approaches. As described in CFG’s recent report, ‘Building CERN for AI’[14], the EU can draw inspiration from successful research models like Focused Research Organizations (FROs) and ARPA-style institutions. The ARPA model, endorsed by Mario Draghi[15], has driven transformative innovations like GPS, the internet, and early AI systems, with European adaptations now emerging through Germany’s SPRIN-D[16] and the UK’s ARIA[17].
Meanwhile, FROs continue the spirit of the original CERN by concentrating exceptional talent and computational resources in dedicated facilities to tackle ambitious scientific challenges, an approach that at CERN produced innovations like the World Wide Web[18] and attracted 70%[19] of the world’s leading particle physicists. Such a concentrated structure becomes particularly important as AI research advances toward potentially transformative breakthroughs with dual-use implications: it allows for stringent security practices and tiered access protocols that prevent misuse and create the stable, protected environment necessary for responsible scientific progress. By adopting a hybrid FRO/ARPA structure, RAISE would feature a centralized talent hub with concentrated computing resources and in-house research teams working on ambitious, high-risk projects like more trustworthy general-purpose AI models, while simultaneously coordinating distributed networks of researchers across Europe’s existing scientific ecosystem. Such a design would combine the concentration needed for breakthrough AI development with the breadth and diversity of Europe’s scientific strengths, while implementing lean governance that empowers project leaders, reduces bureaucracy, and enables rapid resource allocation—essential features for competing in the fast-moving AI landscape.
RAISE offers an opportunity to address Europe’s AI challenges through a strategic dual-track approach. In its ‘Science in AI’ pillar, RAISE could focus on developing sovereign European general-purpose models that are explicitly trustworthy, ethical, and aligned with European values and regulations. This would counter fragmentation by concentrating resources on foundational advances. Simultaneously, the ‘AI in Science’ pillar could apply AI to scientific domains by both tailoring these general-purpose models and creating specialized systems in high-impact areas like materials science, medicine, and climate modeling where Europe already has strengths. This balanced strategy would ensure RAISE develops future-proof trustworthy models while leveraging Europe’s domain expertise to deliver transformative scientific and economic benefits.
Strategic Design: Building RAISE as Europe’s AI powerhouse
With the EU’s AI Continent Action Plan launching a network of high-performance computing hubs—AI Gigafactories—Europe has laid a strong hardware foundation for AI innovation. To fully realize this potential, that hardware foundation must now be matched with world-class talent and cutting-edge expertise. RAISE offers a unique opportunity to create both a central talent hub and an integrated ecosystem of ambitious research programs, driving European leadership in AI in science and science in AI. To fulfil these ambitions, RAISE would also benefit from institutional independence rather than being embedded in the Commission structure. Such independence, perhaps in the legal form of a Joint Undertaking[20], would make it possible to offer competitive wages to attract elite talent, operate more flexibly in setting up partnerships, and cut through the bureaucratic constraints that have hampered other EU research initiatives.
To capitalise on this opportunity, RAISE could be designed as a dual-structure organization following proven, high-impact models endorsed by Mario Draghi[21]:
Centralized FRO-style talent hub
Under this proposal, part of RAISE would function as a Focused Research Organization (FRO), creating a dense cluster of excellence where elite AI researchers work together in a physical hub. This institutional concentration would combine top talent with access to significant computing resources, enabling the intensive collaborative work required for breakthrough advances in foundational AI models and the science of trustworthy AI. Such a hub would focus on long-term fundamental research that addresses challenges unlikely to be tackled by private industry or traditional academic settings.
Distributed ARPA-style research programs
Complementing the central hub, RAISE would coordinate a network of mission-driven projects across Europe. Following the ARPA model, these distributed teams would work on specific high-impact challenges with clear deliverables and timeframes—either scientific applications of AI in high-impact domains, or technical problems in trustworthy AI. Program Directors would have significant autonomy to rapidly assemble teams and allocate resources across the European ecosystem, with time-bound objectives and concrete milestones to ensure focus and prevent institutional inertia. The first five-year evaluation of Germany’s ARPA adaptation SPRIN-D concluded[22] that the agency managed to ‘attract lots of ideas from the research community and created a much faster, more dynamic working culture than traditional funders’, supporting the agency’s goal of developing radically new technologies that can create whole new industries—a crucial attribute for the still-nascent field of trustworthy AI.
As outlined in our report Building CERN for AI[23], this twin-engine design would create a dynamic system that strips out bureaucratic constraints, supports high-risk research portfolios, and fosters public-private collaboration to translate breakthroughs into real-world applications. It balances the concentration needed for frontier research with distributed expertise throughout Europe.
This vision for RAISE would complement existing European scientific infrastructure. Unlike Research Infrastructures (RI)[24] and Technology Infrastructures (TI)[25], which fit well into other ecosystems, RAISE would serve as an active talent hub with its own proactive research agenda. It would house dedicated research teams driven by clear missions to pursue moonshot projects with transformative potential, while leveraging existing European infrastructures to maximize their value. Such a mission-driven, proactive approach is well suited to the goal of developing trustworthy AI systems that are unlikely to be developed by American private actors.
The time for bold action is now
AI innovation is moving forward rapidly, and some important technological milestones may be just around the corner. Ursula von der Leyen recently acknowledged[26] that AI approaching human reasoning is expected as soon as next year, underlining a strong sense of urgency to act on AI.
The timing for establishing RAISE is particularly opportune. Traditional EU research structures are increasingly recognized as insufficient for the pace of AI development. Commissioner Zaharieva emphasized in her confirmation hearing[27] that excessive bureaucracy currently hampers innovation—a point echoed in Manuel Heitor’s comprehensive evaluation[28] of the EU’s Framework Programmes. Meanwhile, Enrico Letta has called for[29] ‘large-scale, cross-border AI projects’ that align perfectly with RAISE’s mission.
For RAISE to fulfil the ambition of developing European trustworthy AI science and sovereign foundational models, it must secure significant computing resources. The emerging Gigafactories and AI Factories present a perfect opportunity. Since these facilities will be co-funded by public and private actors, with the EU likely retaining control only over compute shares proportional to public investment, RAISE could become the primary and most strategically important user of the public compute shares[30] available through the network of AI (Giga)Factories.
Establishing RAISE should be treated with urgency. European AI talent is leaving for opportunities abroad[31] as other countries like the US and China actively recruit European experts. RAISE could serve as one of Europe’s premier AI talent magnets, offering researchers not just access to world-class computing infrastructure but also the autonomy and opportunity to work on big projects—conditions that have proven essential for institutions like the UK AI Security Institute in attracting top-tier talent from industry. Importantly, RAISE can become operational before the Gigafactories are. Research teams could begin their work in smaller AI factories and through renting cloud compute, testing ideas that could later scale when the Gigafactories become operational, ensuring no time is lost in Europe’s quest for digital sovereignty. As the US and China continue to advance their AI capabilities, delays risk deepening Europe’s reliance on foreign AI systems that are increasingly becoming the standard infrastructure for scientific research and innovation worldwide.
How RAISE could advance ‘science in AI’
Drawing inspiration from successful research structures like ARPA and FROs, RAISE would propel the science of AI forward through an approach that combines foundational breakthroughs with practical applications. Since a lack of trustworthiness poses a significant obstacle to adoption, RAISE could initially invest a substantial amount of its resources in tackling this challenge. Over time, as the trustworthiness challenge is addressed, the bulk of RAISE’s portfolio could shift towards building and promoting AI applications.
Foundational programmes would focus on the fundamental science behind general-purpose AI, working to improve the robustness, reliability, and transparency of AI systems. In selecting specific programmes, RAISE could be guided by two core principles: making transformative “big bets” where even a small fraction of successful projects would generate outsized impact, and targeting research areas unlikely to be pursued by academia and industry. Recent surveys by organizations like IAPS have mapped AI research directions[32] currently neglected by leading AI companies, such as ensuring that AI systems accurately communicate their beliefs and reasoning, or understanding and mitigating risks from interactions between AI systems. Furthermore, new frameworks[33] are emerging that can help RAISE differentially accelerate the science of AI trustworthiness.
RAISE could serve as the crucial bridge between promising research ideas and production-scale AI systems. By aggregating computing power from European Gigafactories and AI Factories, it could develop trustworthy foundation models tailored to European values and regulatory frameworks.
As part of supporting Science in AI, RAISE could also invest in research to improve AI hardware design. Developing hardware-enabled security mechanisms[34] for AI chips and infrastructure would strengthen European security and strategic autonomy. These physical safeguards—like location tracking and privacy-preserving usage monitoring built directly into processors—would also enable technical verification of compliance[35] with future international AI agreements, building trust between nations. Additionally, optimizing AI hardware for energy efficiency would align with Europe’s climate goals.
How RAISE could advance ‘AI in science’
RAISE could drive AI adoption across scientific disciplines by developing tailored systems to address critical societal challenges. These could be either adaptations of general-purpose models or newly created specialized systems for domains with transformative potential where general-purpose approaches are less effective. As foundational trustworthiness research progresses, investment in domain-specific applications would increase proportionally. These application-focused programmes would bring researchers from diverse fields together with AI experts to create solutions aimed at Europe’s most pressing challenges.
The application portfolio could be strengthened through public-private partnerships with interested companies to accelerate market translation. For example, pharmaceutical companies could partner with RAISE’s drug discovery program, providing proprietary molecular datasets and domain expertise while gaining early access to AI models that could accelerate their R&D pipeline by years[36].
Applied programmes would be strategically selected by RAISE’s board to focus on critical societal challenges across multiple domains. Based on our preliminary assessments, examples of such domains could include:
- In climate science, RAISE-developed models could optimize renewable energy integration across Europe’s diverse geographies, directly supporting carbon reduction targets while ensuring energy grid stability.[37]
- In healthcare, RAISE could enable the creation of cancer diagnostic tools trained on diverse European patient populations, while ensuring these systems are trained and used in a privacy-preserving manner.[38]
- In biotech, RAISE could transform the field by accelerating drug discovery through the analysis of rich genomic databases.[39]
- In security, RAISE could help develop advanced cyber-resilience solutions to protect critical national infrastructure.[40]
- In robotics, models developed through RAISE could design systems capable of executing intricate laboratory procedures with superhuman precision, from blood sample preparation to complex diagnostic tasks.[41]
 
By catalyzing AI adoption across scientific domains while ensuring these technologies remain aligned with European values, RAISE would create a virtuous cycle of innovation and trust. This dual focus—developing AI systems tailored to European scientific needs while conducting the foundational research to make them trustworthy—represents a strategic opportunity that no other institution is currently positioned to fulfill. As scientific advances accelerate and economic benefits materialize, RAISE would demonstrate that European leadership in ethical, trustworthy AI is not just a regulatory aspiration but a practical reality with tangible benefits for citizens, researchers, and industries across the continent.
For a detailed blueprint, see our full report
At the Centre for Future Generations, we have recently published an in-depth report[42] laying out how the research arm of a potential “CERN for AI” could be designed, including detailed recommendations on governance and legal structure, research focus areas, and membership policies. We believe RAISE is well-positioned to become the host of such an institution and invite you to read the full report for a comprehensive institutional blueprint.
Endnotes
[1] European Commission, ‘A European Strategy for AI in science: paving the way for a European AI research council’, European Commission, n.d., available at https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14547-A-European-Strategy-for-AI-in-science-paving-the-way-for-a-European-AI-research-council_en
[2] Juijn, D., Pataki, B., Petropoulos, A., and Reddel, M., ‘CERN for AI: the EU’s seat at the table’, Centre for Future Generations, 2024, https://cfg.eu/cern-for-ai-eu-report/
[3] European Commission, ‘The AI Continent Action Plan’, Directorate-General for Communications Networks, Content and Technology, 2025, available at https://digital-strategy.ec.europa.eu/en/library/ai-continent-action-plan, p.17
[4] Janku, D., Juijn, D., Pataki, B., Petropoulos, A., and Reddel, M., ‘Building CERN for AI – An institutional blueprint’, Centre for Future Generations, 2025, https://cfg.eu/building-cern-for-ai/
[5] European Commission, ‘The Draghi report on EU competitiveness: The future of European competitiveness: Report by Mario Draghi’, European Commission, 2024, available at https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en
[6] Cottier B, Rahman R, Fattorini L, Maslej N and Owen D, ‘The rising costs of training frontier AI models’, arXiv, 2024, available at https://arxiv.org/abs/2405.21015v1
[7] Wildeford, P., ‘Ten Takes on DeepSeek’, The Power Law, 2025, available at https://peterwildeford.substack.com/p/ten-takes-on-deepseek
[8] Janků, D., Reddel, M., Yampolskiy, R. and Hausenloy, J., ‘We Have No Science of Safe AI’, Centre for Future Generations, 2024, available at: https://cfg.eu/we-have-no-science-of-safe-ai
[9] Ibid., p.31
[10] Janků, D., Reddel, F., Graabak, J. and Reddel, M., ‘Beyond the AI hype: A critical assessment of AI’s transformative potential’, Centre for Future Generations, 2025, available at: https://cfg.eu/beyond-the-ai-hype/#chapter-6, section ‘Becoming generalist’
[11] Nouwou Mindom, P.S., Nikanjam, A. and Khomh, F., ‘Harnessing Pre-trained Generalist Agents for Software Engineering Tasks’, arXiv, 2023, available at https://arxiv.org/abs/2312.15536.
[12] Garcia, X., et al., ‘The unreasonable effectiveness of few-shot learning for machine translation’, arXiv, 2023, available at https://arxiv.org/abs/2302.01398
[13] Kanjee, Z., Crowe, B. and Rodman, A., ‘Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge’, JAMA, 2023, available at https://jamanetwork.com/journals/jama/fullarticle/2806457.
[14] Janku, D., Juijn, D., Pataki, B., Petropoulos, A., and Reddel, M., ‘Building CERN for AI – An institutional blueprint’, Centre for Future Generations, 2025, https://cfg.eu/building-cern-for-ai/
[15] European Commission, ‘The Draghi report on EU competitiveness: The future of European competitiveness: Report by Mario Draghi’, European Commission, 2024, available at https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en, p.247
[16] SPRIND, ‘About us’, Federal Agency for Disruptive Innovation, n.d., available at: https://www.sprind.org/en/we
[17] ARIA, ‘Home’, Advanced Research + Invention Agency, n.d., available at: https://www.aria.org.uk/
[18] CERN, ‘Key Achievements’, CERN, n.d., available at: https://home.cern/about/key-achievements
[19] Kohler, K., ‘CERN for AI: An overview’, Machinocene, 2024, available at: https://machinocene.substack.com/p/cern-for-ai-an-overview
[20] Janku, D., Juijn, D., Pataki, B., Petropoulos, A., and Reddel, M., ‘Building CERN for AI – An institutional blueprint’, Centre for Future Generations, 2025, https://cfg.eu/building-cern-for-ai/, section ‘Legal framework’, p.45
[21] European Commission, ‘The Draghi report on EU competitiveness: The future of European competitiveness: Report by Mario Draghi’, European Commission, 2024, available at https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en
[22] Matthews, D., ‘Germany’s Sprind innovation agency: what works, what does not’, Science|Business, 2025, available at: https://sciencebusiness.net/news/research-and-innovation-gap/germanys-sprind-innovation-agency-what-works-what-does-not
[23] Janku, D., Juijn, D., Pataki, B., Petropoulos, A., and Reddel, M., ‘Building CERN for AI – An institutional blueprint’, Centre for Future Generations, 2025, https://cfg.eu/building-cern-for-ai/
[24] European Commission, ‘European Research Infrastructures’, Directorate-General for Research and Innovation, n.d., available at https://research-and-innovation.ec.europa.eu/strategy/strategy-research-and-innovation/our-digital-future/european-research-infrastructures_en
[25] European Commission, ‘Technology Infrastructures’, Directorate-General for Research and Innovation, n.d., available at https://research-and-innovation.ec.europa.eu/research-area/industrial-research-and-innovation/technology-infrastructures_en
[26] von der Leyen, U., ‘Speech at the Annual EU Budget Conference 2025’, European Commission, 2025, available at: https://ec.europa.eu/commission/presscorner/detail/en/speech_25_1284
[27] European Parliament, ‘Hearing of Commissioner-designate Ekaterina Zaharieva’, European Parliament Press Service, 2024, available at https://www.europarl.europa.eu/news/en/press-room/20241029IPR25034/hearing-of-commissioner-designate-ekaterina-zaharieva
[28] Directorate-General for Research and Innovation (European Commission), ‘Align, Act, Accelerate: Research, Technology and Innovation to boost European Competitiveness’, Publications Office of the European Union, 2024, available at https://op.europa.eu/en/publication-detail/-/publication/2f9fc221-86bb-11ef-a67d-01aa75ed71a1/language-en
[29] Enrico Letta, ‘Much more than a market – Speed, Security, Solidarity: Empowering the Single Market to deliver a sustainable future and prosperity for all EU Citizens’, Council of the European Union, 2024, available at https://www.consilium.europa.eu/media/ny3j24sm/much-more-than-a-market-report-by-enrico-letta.pdf
[30] Daan Juijn and David Janků, ‘Delivering the EU’s AI Continent Action Plan’, Centre for Future Generations, 2025, available at https://cfg.eu/ai-continent-action-plan/
[31] Siddhi Pal, ‘Where is Europe’s AI workforce coming from? Immigration, Emigration & Transborder Movement of AI talent’, Interface, 2024, available at https://www.interface-eu.org/publications/where-is-europes-ai-workforce-coming-from
[32] Oscar Delaney, Oliver Guest and Zoe Williams, ‘Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis’, Institute for AI Policy and Strategy, 2024, available at https://www.iaps.ai/research/mapping-technical-safety-research-at-ai-companies
[33] Ren R, et al., ‘Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?’, arXiv, 2024, available at https://arxiv.org/abs/2407.21792
[34] Sastry G, et al., ‘Computing Power and the Governance of Artificial Intelligence’, arXiv, 2024, available at https://arxiv.org/pdf/2402.08797
[35] Wasil, A. R., Reed, T., Miller, J. W. and Barnett, P., ‘Verification methods for international AI agreements’, arXiv, 2024, available at https://arxiv.org/pdf/2408.16074v1
[36] A comprehensive analysis published in Drug Discovery Today (2024) found that AI-discovered drugs achieve 80-90% success rates in Phase I clinical trials compared to the historical industry average of 40-65% [Source: https://www.sciencedirect.com/science/article/pii/S135964462400134X], with Insilico Medicine’s INS018_055 becoming the first fully AI-discovered and designed drug to reach Phase II trials in under 30 months [Source: https://www.nature.com/articles/d41586-023-03172-6] and research funder Wellcome claiming that AI could bring “time and cost savings of at least 25–50%” in drug discovery up to the preclinical stage [Source: https://cms.wellcome.org/sites/default/files/2023-06/unlocking-the-potential-of-AI-in-drug-discovery_report.pdf]
[37] For example, Google DeepMind’s AI models increased wind farm energy value by 20% through improved 36-hour predictions, achievements now being commercialized across 700 MW of wind capacity. [Source: https://deepmind.google/discover/blog/machine-learning-can-boost-the-value-of-wind-energy/]
[38] For example, Google’s AI system for breast cancer screening achieved 94% accuracy in a study published in Nature, outperforming average radiologists by 11.5% while reducing false positives by 5.7% and false negatives by 9.4%, with real-world implementation in Germany showing a 17.6% increase in cancer detection rates across 463,094 women. [Source: https://www.nature.com/articles/s41586-019-1799-6]
[39] The UK Biobank’s completion of exome sequencing for 500,000 participants, published in Nature Genetics, created the world’s largest genomic dataset that enabled pharmaceutical companies to accelerate drug target identification by 2x, with GSK, AstraZeneca, and Pfizer forming a consortium to leverage this resource. [Source: https://www.nature.com/articles/s41588-021-00885-0]
[40] IBM’s 2024 Cost of a Data Breach Report revealed that organizations using AI and automation extensively in security prevention saved an average of $2.22 million per breach compared to those without these technologies, reducing the global average breach cost from $5.72 million to $3.84 million. [Source: https://www.ibm.com/reports/data-breach]
[41] A peer-reviewed study in PMC documented that clinical laboratory automation at Spedali Civili hospital in Italy achieved a 12.55% total cost reduction despite equipment investment, with emergency department turnaround times improving by 14.9 minutes and a 74.1% reduction in manual tube handling. [Source: https://pmc.ncbi.nlm.nih.gov/articles/PMC5477477/]
[42] Janku, D., Juijn, D., Pataki, B., Petropoulos, A., and Reddel, M., ‘Building CERN for AI – An institutional blueprint’, Centre for Future Generations, 2025, https://cfg.eu/building-cern-for-ai/