
The closing window for AI governance
This analysis is part of AI Possible futures, expanding on the key takeaways presented on the main publication page. The highlights and evidence featured in this analysis are not exhaustive, and each scenario contains additional context-specific, nuanced insights that may be particularly relevant for specific audiences. We encourage you to explore the full scenarios for more comprehensive context and implications.
Traditional governance is struggling to meet the AI moment. While policymakers debate frameworks designed for gradual technological change, across our scenarios we see rapid transformation ahead, regardless of specific AI capabilities assumed. This creates a fundamental mismatch: regulatory processes that take years to implement are trying to govern technologies that transform in months. This gap is particularly stark in the EU, where, as Mario Draghi acknowledged last year[1], national and regional institutional layers add complexity that slows the legislative process. The AI Act, for example, took over three years to pass while the technology advanced with increasing speed, and it will be difficult to amend.
The consequences of this governance gap are already visible. Current AI systems, even without further breakthroughs, contain sufficient capability to fundamentally change labour markets, democratic institutions, and global power structures. Yet most policy frameworks assume we have time to adapt gradually.
The EU took an important step with the AI Act and the creation of its AI Office. However, these efforts cover only part of the picture and risk being premature, potentially locking institutions into a regulatory trajectory that may not align with how AI actually evolves. Many of the risks and opportunities identified in our scenarios – including defence, labour market disruption, the environment, and power concentration – fall outside the scope of the Act. The EU must also grapple with the fact that most frontier AI is developed elsewhere, even though it still profoundly affects European societies. Meeting the full challenge of AI will require broader and more adaptive governance across domains, supported by three core capabilities that must be substantially strengthened.
- Governments need significantly deeper technical expertise embedded throughout their institutions.
- Policy processes must be redesigned for greater agility, with mechanisms to rapidly evaluate and adapt frameworks as capabilities evolve.
- Foresight capabilities must become more sophisticated, pairing strategic scenario exercises with specific, quantified projections that can directly inform decision-making.
As AI continues transforming society at an accelerating pace, building this capacity for forward-looking governance becomes not merely advantageous, but essential for maintaining democratic oversight of this technological transition.
After an extensive scenario planning project, the Centre for Future Generations has published 10 thought-provoking visions of how AI could impact our society – please find them in full here.
Below, we have summarised the five most interesting themes that emerged from this process.
1. AI may accelerate its own progress
We are already witnessing positive feedback loops in AI development that could become significantly more powerful. Leading AI labs are deploying advanced reasoning models to optimize their own R&D processes – debugging code, designing experiments, and fine-tuning models. This creates a cycle where AI improvements directly enable further AI advancements. As these systems gain greater capabilities in agentic software engineering and research taste (the ability to come up with promising new hypotheses and re-evaluate hunches based on experimental results), these feedback loops may intensify dramatically.
The scale of this trend is already substantial. According to Epoch AI estimates[2], training compute has been increasing by a factor of 4-5× every year since 2010, while algorithms are becoming approximately 3× more efficient per year[3]. Together, these trends compound to an estimated 12-15× effective capability gain annually – an exponential growth rate unseen in other fast-growing sectors. Meanwhile, early signs of AI models accelerating parts of the R&D process are beginning to emerge[4], with developers reporting automated tasks such as writing evaluation harnesses, generating training data, and suggesting architectural tweaks.
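To make the compounding arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies the two growth figures cited above and assumes, as a simplification, that the compute and algorithmic-efficiency trends compound multiplicatively and independently; the specific factors are the Epoch AI estimates quoted in the text, not new data.

```python
# Back-of-the-envelope compounding of the growth figures cited above:
# training compute ~4-5x per year (Epoch AI) and algorithmic efficiency
# ~3x per year. Assumes the two trends compound multiplicatively and
# independently - a simplification for illustration, not a forecast.

def effective_gain(compute_factor: float, algo_factor: float, years: float = 1.0) -> float:
    """Effective capability multiplier after `years` of compounding."""
    return (compute_factor * algo_factor) ** years

for compute_factor in (4.0, 5.0):
    one_year = effective_gain(compute_factor, 3.0)
    three_years = effective_gain(compute_factor, 3.0, years=3)
    print(f"{compute_factor:.0f}x compute, 3x algorithms -> "
          f"{one_year:.0f}x after 1 year, {three_years:,.0f}x after 3 years")
```

Even at the lower end of these estimates, three years of such compounding would imply more than a thousandfold increase in effective training capability – the scale of shift that the governance discussion above is grappling with.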
The implications of this acceleration are profound. As AI begins to augment or replace human researchers in increasingly sophisticated AI R&D tasks, we could witness years of expected progress compressed into mere months. This rapid evolution threatens to outpace our collective ability to develop appropriate governance structures, establish safety standards, and adapt societal institutions. Traditional regulatory approaches assume relatively predictable innovation timelines, but AI-accelerated progress could render governance frameworks obsolete within months of implementation.
Crucially, this dynamic could extend beyond AI. As automated R&D capabilities mature, they may spread to other scientific and technical domains, potentially triggering unprecedented progress in biotechnology, materials science, energy systems, and beyond. This could usher in a new era of scientific flourishing and prosperity, but it would also place enormous stress on existing institutions, suddenly confronted with a world transforming at a pace far beyond historical precedent. Educational systems, regulatory bodies, economic policies, and international agreements – all designed for relatively gradual change – may struggle to adapt to this compressed timeline, creating both institutional challenges and societal dislocations that demand urgent attention.
2. Non-disruptive AI scenarios are unlikely
Current AI systems, when widely deployed, will already significantly alter labour markets, reshape cultural and educational institutions, and transform how power is distributed throughout society. The question is no longer ‘if’, nor simply ‘when’, but ‘who’, ‘how’, and ‘where’.
Evidence of this transformation is already visible across multiple domains. In labour markets, companies are implementing explicit AI-first policies, with Duolingo requiring teams to prove AI cannot perform a role before hiring humans[5] and planning to “gradually stop using contractors to do work that AI can handle.” High-skill professional work is increasingly affected, with AI systems outperforming human doctors on medical diagnosis[6]. Cultural institutions face upheaval as creative workers strike over AI-generated content, with Hollywood writers and actors demanding protections[7] against synthetic scripts and digital replicas. Geopolitically, U.S. export controls on AI chips to China[8] have reshaped global supply chains and intensified technological competition, while civil rights groups organize demonstrations against AI surveillance systems. Educational systems grapple with fundamental questions about assessment and learning as AI capabilities reshape traditional notions of human-specific skills.
Many policymakers and business leaders underestimate both the magnitude and likelihood of these changes. Equally important, the potential for societal resistance to these shifts is consistently underexamined. As AI systems increasingly displace human workers in white-collar and creative sectors, public sentiment could rapidly shift from fascination to anxiety or worse, potentially triggering a backlash that impedes beneficial applications.
These transformative effects extend across multiple sectors – from AI encroaching on professional work previously thought to require human judgment to the reshaping of core cultural institutions. Education systems must reconsider their purpose when AI can perform many of the skills they were designed to teach. Media ecosystems face disruption as synthetic content becomes indistinguishable from human-created work, while democratic processes confront new challenges in increasingly manipulable information environments. These transformations will occur regardless of whether further technical breakthroughs materialize, as the capabilities already demonstrated by current systems contain sufficient disruptive potential to fundamentally reshape societal structures.
3. Technical guardrails face increasing pressure
Current safety approaches that rely primarily on human feedback and oversight face mounting challenges as AI systems become more capable and autonomous. These methods were designed for models with limited agency and scope, but as AI systems gain greater capabilities for strategic planning, scientific reasoning, and environmental manipulation, existing guardrails may prove inadequate.
Recent evidence supports these concerns. Evaluations of OpenAI’s o3 model by METR[9] found concrete instances of reward hacking behaviour, such as the model tampering with timing functions to falsely report faster performance. Meanwhile, the International Scientific Report on the Safety of Advanced AI[10] notes that AI systems can often produce unpredictable outcomes due to poor generalisation or unclear goals, with existing safety techniques like red-teaming and interpretability proving insufficient to reliably catch these failures in frontier systems.
The deeper issue is that AI development today outpaces the growth of any comparable safety discipline. There is, at present, no systematic science of safe AI[11]. Existing safety techniques remain brittle and often fail to generalise to novel, more capable systems. As frontier models become more autonomous and strategic, this lack of a robust safety foundation becomes an existential policy challenge.
This challenge is exacerbated by market pressures and geopolitical competition that incentivize rapid deployment over robust safety measures. Companies and nations competing for AI advantage could resort to superficial safety shortcuts that address symptoms rather than underlying vulnerabilities. Without scalable safety protocols that grow more sophisticated in tandem with AI capabilities, humanity risks developing systems that can evade controls and pursue objectives that diverge from their developers’ values.
International agreements could help establish common safety standards and manage dangerous competitive dynamics. However, meaningful international cooperation faces substantial obstacles in the current geopolitical climate. As AI increasingly becomes a strategic battleground between major powers – particularly the US and China – rising tensions and eroding trust undermine collaborative governance efforts. Private AI companies are forming deeper partnerships with defence establishments[12], further blurring the lines between commercial and military AI development.
A fundamental barrier to effective international coordination is the lack of robust verification mechanisms to ensure compliance with any potential agreements. Major data centres can be identified through satellite imagery, but unlike nuclear facilities, whose activities leave physical signatures, the critical aspects of AI development occur at the software level, hidden from external observation. What exactly transpires inside these facilities – whether a company is training a harmless language model or developing advanced capabilities with military applications – remains largely opaque to outside verification.
This verification challenge, combined with diminishing trust between major powers, suggests that meaningful international cooperation might unfortunately become feasible only after incidents serious enough to force political leaders to address emerging risks collectively.
4. AI may centralize power on unprecedented scales
As AI becomes essential infrastructure for economic growth and national security, control over key components of the AI supply chain – from model development and chip manufacturing to data centre operations – creates new power dynamics. This concentration threatens a world in which even fewer entities, whether a limited set of dominant powers or corporations, hold disproportionate influence over global economic and political systems[13]. Within states themselves, control over AI capabilities could further concentrate power among smaller groups of political and technical elites.
We are already witnessing early manifestations of this power concentration. Major technology companies have leveraged their capabilities to extract concessions from governments and reshape regulatory landscapes. Big Tech companies have repeatedly used the threat of withdrawing services from entire regions as a negotiating tactic, and AI companies may increasingly follow suit, with potentially large economic consequences for the regions affected.
Within governments themselves, the concern is that AI could concentrate power within state institutions. If AI systems can surpass or equal human experts in military strategy and cyber operations, small groups could gain outsized control over and within critical institutions. Unlike human institutions that naturally distribute power, AI workforces can be engineered for singular loyalty, potentially enabling coups even in established democracies.
The European Union faces particular vulnerability to these power shifts, caught between dominant powers and with limited control over critical AI infrastructure. But this challenge isn’t unique to the EU; it extends to most nations and communities worldwide, raising fundamental questions about the future of economic security, democratic governance, and technological sovereignty.
5. Building resilience alongside innovation
As AI capabilities advance and diffuse throughout society, maintaining the benefits of innovation and openness requires deliberate investment in resilience across multiple dimensions. The challenge is not simply managing specific risks, but ensuring that societal systems can adapt to and withstand the disruptive forces that advanced AI will unleash.
Beyond our own scenario analysis, there is growing foresight work that can strengthen resilience planning. The OECD AI governance scenarios[14] and “AI 2027”[15] explore challenges across different development pathways. These can help policymakers identify vulnerabilities and design adaptive responses before crises emerge.
This need for resilience spans several critical domains, including economic systems and labour markets, educational institutions, information ecosystems, and geopolitical stability. In cybersecurity specifically, open-weight models fuel decentralized innovation and expand access to cutting-edge capabilities[16], but as noted in the International Scientific Report on the Safety of Advanced AI, widespread access to powerful models could destabilize defensive capabilities if protective systems cannot keep pace. The solution involves building adaptive defences that can evolve alongside emerging threats, rather than just controlling access.
Perhaps most importantly, institutional resilience requires governance structures that can rapidly adapt to technological change. AI’s pace of development demands more agile institutions capable of continuous learning and adjustment, moving beyond traditional regulatory approaches that assume relatively stable technological landscapes.
Ironically, AI itself may prove to be an effective tool for building these defences, from automated threat detection[17] to personalized education systems. However, this requires deliberate coordination across public and private sectors, sustained investment in resilience infrastructure, and recognition that the goal is not to prevent change but to ensure society can thrive amid transformation.
Conclusion: A closing window of opportunity
The range of possible AI futures remains extraordinarily wide. Progress could accelerate dramatically, creating destabilizing effects across multiple domains, or it might proceed more gradually while still fundamentally transforming society. What’s clear is that the decisions made today will significantly shape which futures become possible or probable.
The stakes of these decisions are hard to overstate. Responsible development could unlock extraordinary benefits for humanity – from scientific breakthroughs and economic prosperity to enhanced human capabilities and institutional effectiveness. Conversely, misaligned development or deployment could create severe risks that threaten key values and institutions and may even lead to loss of control over AI systems.
The window for shaping these outcomes is rapidly narrowing. As AI capabilities advance and deployment accelerates, the opportunity to establish foundational governance frameworks, safety standards, and societal adaptations diminishes. This urgency demands immediate, coordinated action across public and private sectors.
Endnotes
[1] Draghi, M., ‘The future of European competitiveness. Part A: A competitiveness strategy for Europe’, Publications Office of the European Union, 2025.
[2] Epoch AI, “Machine Learning Trends”, Epoch AI, 11 April 2023, https://epochai.org/trends, accessed 27 June 2025.
[3] Ho, Anson, “Algorithmic Progress in Language Models”, Epoch AI, 12 March 2024, https://epochai.org/blog/algorithmic-progress-in-language-models, accessed 27 June 2025.
[4] Owen, David, “Interviewing AI Researchers on Automation of AI R&D”, Epoch AI, 27 August 2024, https://epoch.ai/blog/interviewing-ai-researchers-on-automation-of-ai-rnd, accessed 27 June 2025.
[5] Notopoulos, Katie, “Duolingo Drama Underscores the New Corporate Balancing Act on AI Hype”, Business Insider, 15 May 2025, https://www.businessinsider.com/ai-messaging-backlash-duolingo-shopify-controversy-2025-5?international=true&r=US&IR=T, accessed 27 June 2025.
[6] Goh, Ethan et al., “Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial”, JAMA Network Open, Vol. 7, No. 10, 28 October 2024, https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2825395, accessed 27 June 2025.
[7] Poniewozik, James, “TV’s War with the Robots Is Already Here”, The New York Times, 10 May 2023, https://www.nytimes.com/2023/05/10/arts/television/writers-strike-artificial-intelligence.html, accessed 27 June 2025.
[8] Reuters, “U.S. Updates Export Curbs on AI Chips and Tools to China”, Reuters, 30 March 2024, https://www.reuters.com/technology/us-commerce-updates-export-curbs-ai-chips-china-2024-03-29, accessed 27 June 2025.
[9] METR, “Details About METR’s Preliminary Evaluation of OpenAI’s O3 and O4”, METR’s Autonomy Evaluation Resources, 16 April 2025, https://metr.github.io/autonomy-evals-guide/openai-o3-report/#reward-hacking-examples, accessed 27 June 2025.
[10] UK Government, “International AI Safety Report 2025”, GOV.UK, 18 February 2025, https://www.gov.uk/government/publications/international-ai-safety-report-2025, accessed 27 June 2025.
[11] Janků, David; Reddel, Max; Yampolskiy, Roman; and Hausenloy, Jason, “We Have No Science of Safe AI”, cfg.eu, 13 June 2025, https://cfg.eu/we-have-no-science-of-safe-ai, accessed 4 July 2025.
[12] Bloomberg, “AI Protest at OpenAI HQ in San Francisco Focuses on Military Work”, Bloomberg Newsletter, 13 February 2024, https://www.bloomberg.com/news/newsletters/2024-02-13/ai-protest-at-openai-hq-in-san-francisco-focuses-on-military-work, accessed 27 June 2025.
[13] UK Government, “International AI Safety Report 2025, Section 2.3.3”, GOV.UK, https://www.gov.uk/government/publications/international-ai-safety-report-2025/international-ai-safety-report-2025#systemic-risks, accessed 25 June 2025.
[14] OECD, “2024 OECD Global Strategy Group”, OECD Events, 15 October 2024, https://www.oecd.org/en/events/2024/10/2024-oecd-global-strategy-group0.html, accessed 27 June 2025.
[15] AI 2027, “AI 2027: Scenarios for AI Development”, AI 2027, https://ai-2027.com, accessed 27 June 2025.
[16] UK Government, “International AI Safety Report 2025”, GOV.UK, 18 February 2025, https://www.gov.uk/government/publications/international-ai-safety-report-2025, accessed 27 June 2025.
[17] Nagel, Johannes, Lindegaard, Marius, and Graabak, Jakob, “Strengthening AI Trustworthiness”, Centre for Future Generations, 22 May 2025, https://cfg.eu/strengthening-ai-trustworthiness, accessed 27 June 2025.