
The AI Safety Institute Network: Who, What and How?

Key findings:

  1. Diverse Approaches: Countries have adopted varied strategies in establishing their AISIs, ranging from building new institutions (UK, US) to repurposing existing ones (EU, Singapore).
  2. Funding Disparities: Significant variations in funding levels may impact the relative influence and capabilities of different AISIs. The UK leads with £100 million secured until 2030, while others like the US face funding uncertainties.
  3. International Cooperation: While AISIs aim to foster global collaboration, tensions between national interests and international cooperation remain a challenge for AI governance. Efforts like the UK–US partnership on model evaluations highlight the potential for effective cross-border cooperation.
  4. Regulatory Approaches: There’s a spectrum from voluntary commitments (UK, US) to hard regulation (EU), with ongoing debates about the most effective approach for ensuring AI safety while fostering innovation.
  5. Focus Areas: Most AISIs are prioritising AI model evaluations, standard-setting, and international coordination. However, the specific risks and research areas vary among institutions.
  6. Future Uncertainties: The evolving nature of AI technology and relevant geopolitical factors create significant uncertainties for the future roles and impacts of AISIs. Adaptability will be key to their continued relevance and effectiveness.

This brief provides a comprehensive overview of the current state of the AISI Network, offering insights into the challenges and opportunities in global AI governance. It serves as a foundation for policymakers, researchers, and stakeholders to understand and engage with this crucial aspect of AI development and regulation.

¹ Australia, Canada, Germany, Italy and South Korea also joined the network agreement but are not included in this analysis because they haven’t announced substantial detail on their plans.

| Country / Region | Executive Leader Appointments | Roles² | What are they doing | Parent Department | Funding |
| --- | --- | --- | --- | --- | --- |
| UK (UK AISI) | As frontier model taskforce: 24 April 2023; 18 June 2023 | 1 – R&D; 2 – Standard setting; 3 – International coordination | Evaluations (pre- & post-deployment); fundamental AI safety research; organizing AI Safety Summits & the State of Science Report | DSIT (Department for Science, Innovation and Technology); UK AISI meant to function as a “startup within government” | £100 million, funding secured until 2030 |
| US (US AISI) | 2 Feb 2024 | 1 – R&D; 2 – Standard setting; 3 – International coordination | Evaluations (pre- & post-deployment); fundamental AI safety research (deepfake detection, model security, best practices, and standards); convening the AISI Network in San Francisco | NIST (National Institute of Standards and Technology) | $10 million for 2024/25 |
| EU (EU AI Office) | 29 May 2024 | 1 – Regulation; 2 – Standard setting; 3 – International coordination | Drawing up codes of practice and standards (and aligning them internationally); enforcing the EU AI Act; evaluations of GPAI models for “systemic risk” | DG Connect (Directorate-General for Communications Networks, Content and Technology) | €46.5 million, setup funding |
| Japan (Japan AISI) | 4 Feb 2024 | 1 – R&D | Evaluations (how to implement & design them); research into possible standards; international coordination | IPA (Information-technology Promotion Agency) | Unclear |
| Singapore (Digital Trust Centre [DTC]) | 1 Jun 2022 | 1 – R&D | Testing and evaluation; safe model design, development and deployment; content assurance (transparency about where and how AI content is generated) | IMDA (Infocomm Media Development Authority); AI Verify Foundation | US$37 million (S$50 million), setup funding in 2022 |
| France (AI Evaluation Center) | 1 Jul 2024 | 1 – R&D | Research and innovation focus; developing new evals and providing eval infrastructure; organizing a recurring evaluation campaign | Inria (National Institute for Research in Computer Science and Automation) and LNE (Laboratory for Metrology and Testing) | Unclear |


² Roles inspired by a forthcoming Institute for AI Policy and Strategy publication.

Why was the AISI Network Created?

According to the agreement made in Seoul, all countries and regions creating AI Safety Institutes in the network do so “to promote the safe, secure and trustworthy development of AI”. What that actually means, and how each institute should function, is left largely to each member’s interpretation.

Overall, the AISIs are working on evaluating frontier models, drawing up codes of practice and common standards, and leading international coordination on AI governance. Each country’s desire to shape international AI governance according to its own vision has led to positive competitive dynamics, with AISIs being set up across multiple countries. It should be noted, however, that speed isn’t everything: anecdotally, some labs told ICFG that they chose to work with the UK AISI because it did the best work, not simply because it was the first mover.

The AISIs of the UK, US and Japan, as well as the EU AI Office, have committed to evaluating advanced AI models and have expressed concern over severe risks that future advanced AI systems could pose, including the exacerbation of chemical and biological risks, cybercrime, and humanity’s potential loss of control over smarter-than-human AI systems.

In particular, at the Seoul AI Safety Summit, participating countries and AI companies agreed “to develop shared risk thresholds for frontier AI development and deployment, including agreeing when model capabilities could pose ‘severe risks’ without appropriate mitigations.” These thresholds are due to be confirmed at the French AI Action Summit in February 2025, so the clock is ticking on deciding how to design these thresholds and where to set them. This is an open technical problem that will require dedicated research, and it creates a common deadline that all AISIs now share.
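To make the design problem concrete, below is a minimal sketch of what a capability-threshold check might look like once such thresholds exist. The capability domains, scores, and ceiling values are entirely hypothetical illustrations, not any AISI’s actual methodology.

```python
# Hypothetical sketch of a "severe risk" threshold check for frontier
# model evaluations. Domain names, scores, and ceilings are invented
# for illustration -- choosing them well is the open problem above.

SEVERE_RISK_THRESHOLDS = {
    # capability domain -> maximum tolerated eval score without mitigations
    "bio_uplift": 0.20,
    "cyber_offense": 0.35,
    "autonomous_replication": 0.10,
}

def flags_severe_risk(eval_scores: dict[str, float]) -> list[str]:
    """Return the capability domains whose scores exceed their ceilings."""
    return [
        domain
        for domain, ceiling in SEVERE_RISK_THRESHOLDS.items()
        if eval_scores.get(domain, 0.0) > ceiling
    ]

# A model whose cyber-offense evals exceed the agreed ceiling would be
# designated "severe risk" and require mitigations before deployment.
scores = {"bio_uplift": 0.05, "cyber_offense": 0.41, "autonomous_replication": 0.02}
print(flags_severe_risk(scores))  # -> ['cyber_offense']
```

Even this toy version surfaces the hard questions the AISIs must answer: which domains to cover, what the scores actually measure, and where to set each ceiling.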


However, some of these institutions, like the EU AI Office and the Singaporean DTC, have a wider scope and therefore have to balance safety against competing interests, like driving innovation and AI adoption. Furthermore, the EU AI Office, unlike the national AISIs, is tasked with enforcing hard law, namely the EU AI Act, which contains regulation of advanced AI. Stated goals for some institutes may also differ from their visible actions.

Many of the AISIs are still in early setup stages. The existing public work is limited to public evaluations from the UK AISI, the drafting of codes of practice, and broad-stroke international coordination. Many questions remain about what the AISI network will look like in practice – until more work is done, it is too early to say with certainty. However, some general trends are emerging from the publicly available information.

How is the AISI Network taking shape?

Some AISIs, like the UK AISI, have been built from the ground up, taking modern approaches to how public institutions can be staffed and run in a flexible, startup-like manner, inspired by lessons from the COVID pandemic. Others, like the US and Japanese institutes, are also building from the ground up, albeit more slowly.

Other AISIs have been created by expanding existing institutions, or institutions with other remits, to also cover AI safety. This approach can be more comprehensive, but also more bureaucratic and resource-intensive. The EU AI Office is a prime example, evolving from existing EU mandates and institutions and inheriting a regulatory agenda on AI that started before generative AI entered the public eye. However, evolving out of existing institutions doesn’t necessarily mean a slow start: Singapore has taken a similar path to the EU, also starting from a model developed in 2019, yet in contrast to the EU it has already achieved outputs such as red-teaming a frontier model.

This difference in approaches mirrors two different AI governance strategies: a reactive one versus a proactive one.³

The first approach aims to keep pace with fast-moving and uncertain technological advancements. It responds to a high-uncertainty environment by narrowing the uncertainty: gathering as much information as possible on the direction and progress of the technology.

The second approach aims to preemptively and actively steer the development of the technology in a fixed direction. It responds to a high-uncertainty situation by instead fixing on one specific outcome ahead of time and laying out a plan to achieve it.

These methods come with tradeoffs. The reactive approach could fail to prepare for certain risks or to act in time, exposing businesses and the general public to otherwise avoidable harms. The proactive approach, meanwhile, risks wrongly anticipating the direction the technology will take or the risks that end up mattering most, or, crucially, pushing technology or solutions in suboptimal directions.

In an ideal world, regimes would start off in the reactive mode, but with the ability to rapidly shift to the proactive model once uncertainty was sufficiently low and more information was available. Importantly, starting in a reactive mode doesn’t mean that states shouldn’t be taking any action right now. Rather, it means that the type of action they should be taking is action that enables them to:

  1. gather more information so that they can shift from reactive to proactive sooner, and
  2. be able to quickly and effectively shift from reactive to proactive when that moment comes.

These factors are examples of building state capacity, and we can study the setting up of AISIs directly as an exercise in state-capacity building. Importantly, AISIs don’t have to exist as institutions that will themselves carry out proactive action. Rather, they can act as enablers of the transition between a more reactive regime and a proactive one.

Both reactive and proactive models of AI governance have the potential to be successful even if limited by institutional or bureaucratic constraints. Whichever model a regime chooses, embracing the factors that bring it closer to the ideal is what matters most. Reactive regimes benefit from building up the capacity to act quickly in the future, even if they don’t act now. And proactive regimes benefit from holding onto flexibility and the ability to adjust and adapt their schemes as new information becomes available.

³ Institutionalization, Leadership, and Regulative Policy Style: A France/Italy Comparison of Data Protection Authorities (Righettini, 2011). Link.

Methodology

Research primarily consisted of desk research, backed by a series of semi-structured interviews with staff at various AISIs or other researchers studying AISIs.

This brief provides a comprehensive analysis of the AI Safety Institute (AISI) Network by examining individual AISIs across several key dimensions. For each AISI, we explore two main areas:

Background:

  • Historical context and events leading to the establishment of the AISI
  • Key dates and milestones in the AISI’s development
  • Institutional setting and parent organisations
  • Initial funding and resource allocation

Vision:

  • Stated goals and objectives of the AISI
  • Core focus areas and priorities
  • Planned activities and initiatives
  • Approach to AI safety and governance
  • Key statements from leadership reflecting the AISI’s mission

This approach allows for a systematic comparison across different AISIs while acknowledging their unique contexts and approaches. By consistently applying this framework, we can identify common themes and divergent strategies.

The brief covers the AISIs or equivalent institutions of the UK, US, EU, Japan, Singapore, and France in detail, with brief mentions of other countries’ efforts where information is available. These AISIs were chosen because they are, at the time of writing, the most advanced and therefore provide the most useful evidence base for identifying best practices and key features of successful AI governance, offering valuable insights for governments that have not yet established their AI governance strategies or institutions.

Some questions left unanswered by this brief are why the AISIs have the goals they do, and what they will need to do to properly internationalize the AISI network. This could be future work done by ICFG, other organizations, or a collaboration of both. Also, while other research conducted on AISIs backs up many of these findings, the AISI network is still in its infancy, so assessing outcomes remains challenging.


Critical exogenous factors

As the world rushes to set up AISIs, it’s useful to reflect on the successful models so far and their overlapping features, namely talent, speed and flexibility. Embracing these principles will be important for AISIs to succeed in their missions. However, beyond these attributes there is room for more work on what a network of AISIs should look like and how it can become more than the sum of its parts. ICFG is considering future research and collaborations on this topic. We are also working on resources to help policymakers and the public better understand the fast-moving developments on AISIs, starting with this brief, to be followed by an interactive timeline of AISI-related announcements.

Beyond the importance of the network design, there are other critical factors at play for understanding the role of AISIs in AI governance. At the end of the day, the AISIs will only guide regulation, not make it. The trajectory of national AI regulations and individual AISIs will be determined by the broader political environment they exist within – and two particular factors are worth exploring in the early stages of the development of AISIs.

Elections

For several of the countries mentioned above, 2024 is an election year. Changes in leadership could shift the international AI governance landscape by changing national policy priorities and goals. Elections and the political support (or lack thereof) that follows them are a key input to an AISI’s speed and resources.

  • EU Parliamentary elections (6-9 June 2024) – It is uncertain how the EU’s approach to AI may change following this election. For example, the more right-wing post-election EU Parliament could lead to a stronger emphasis on the innovation or national security aspects of AI, rather than on consumer protections.
  • UK General election (4 July 2024) – Incumbent Rishi Sunak (Conservative) was replaced by opposition leader Keir Starmer (Labour). Sunak was a strong supporter of the UK AISI, which allowed it to operate at its expedited pace. In its manifesto, Labour committed to ensuring the safe development of AI through binding regulation of the companies making advanced AI models. This is a move away from the current UK model, which relies on voluntary commitments and on the UK AISI pursuing strong working relationships with AI companies. The timeline, scope and direction of new regulation are unknown, but it could reflect a shift in the strategy of the UK’s AI safety efforts and possibly weaken the UK AISI’s relationships with AI companies. At the same time, the measures could simply complement the existing work of the UK AISI and act as a check against companies backing out of voluntary commitments.
  • US elections (5 November 2024) – Polls place Democrats and Republicans at too-close-to-call margins for the presidency, and it is unknown how the US’s AI governance agenda would change under a Trump presidency. These factors combined make the future of US AI governance highly uncertain. Along with the presidential election, large parts of Congress are also up for election; with Congress controlling NIST funding, this is another uncertain factor that will determine the trajectory of US AI governance.

Funding

One final aspect to touch upon is how the funding of the AISIs will contribute to their effectiveness and relevance. Funding is a key input to talent.

  • The UK AISI is currently the most generously funded AISI, with over £100 million in initial investment, as well as commitments to maintain its yearly 2024/25 funding until 2030.
  • The US AISI has a comparatively weak funding position, receiving $10 million in funding for 2024/25 through NIST. These funds are allocated on a yearly basis and largely determined by the makeup of Congress, making the institute more dependent on short-term political developments. The bipartisan Senate AI Workgroup is trying to secure more substantial and stable funding for the US AISI and NIST.
  • The EU AI Office’s initial budget was €46.5 million, reallocated from existing budgets. Given the AI Office’s legislative backing, it is expected that its funding will be stable, but the scope and scale of continued funding is unknown.
  • The Canadian AISI has a comparatively modest, but stable funding situation for the next five years, with 50 million Canadian dollars being allocated to it as part of the 2024-2029 Federal Budget.
  • The Singapore AISI is being created out of the DTC, which had a S$50 million (US$37 million) setup grant in 2022. What remains of those setup funds is unknown.

Cross-cutting Observations in the AISI Network

While the AISI network is still in its early stages, several noteworthy patterns and potential implications have already emerged. This section presents cross-cutting observations based on the available information about various AISIs, highlighting key factors that may influence their development and effectiveness – trends that are particularly instructive for civil services in the planning process of launching a new AISI.

Structural Approaches and their Implications

New vs repurposed institutions

  • Observation: Most countries (e.g., UK, US) have built AISIs from scratch, while others (e.g., EU, Singapore) have repurposed existing institutions.
  • Potential Implication: New institutions may offer greater flexibility and speed, while repurposed ones may benefit from established processes and expertise. While Singapore has the newly established AI Verify Foundation to provide flexibility, the EU has only the AI Office.

Regulatory Integration

  • Observation: The EU AI Office serves as both an AISI and a regulatory body, unlike other AISIs that focus primarily on research and advisory roles.
  • Potential Implication: This dual role may present unique challenges in building trust with AI companies while also enforcing regulations.

Funding Disparities

  • Observation: There’s significant variation in funding, from the UK’s £100 million to the US’s initial $10 million.
  • Potential Implication: These disparities may impact the relative influence and capabilities of different AISIs in the global network.

Operational Factors

  1. Talent Acquisition Strategies
    • Observation: AISIs are employing various strategies to attract top talent, with some adopting more flexible, startup-like structures.
    • Potential Implication: Success in attracting talent from the private sector and academia may significantly influence an AISI’s effectiveness and impact.
  2. Operational Speed and Flexibility
    • Observation: AISIs built from scratch, like the UK’s, appear to be moving faster in operationalizing and producing outputs.
    • Potential Implication: Speed and flexibility may be crucial factors in an AISI’s ability to keep pace with rapid AI advancements and maintain relevance.

While it’s too early to draw definitive conclusions about the effectiveness of different AISI models, these observations provide a framework for understanding the current landscape and potential future developments. As the AISI network continues to evolve, ongoing monitoring and analysis of these factors will be crucial for assessing their impact on global AI governance and safety efforts.

 

The EU AI Office is not an AI Safety Institute

The analysis shows that the EU AI Office is an outlier among the network of AISIs being set up.

It is the only institution set up with regulatory powers, has the broadest scope of all the AISIs and is one of the few AISIs that has evolved out of an existing structure, rather than as a new institution set up from scratch.

This is possibly a symptom of the fact that the AI Office’s “Safety Institute duties” were handed to it by the Commission when the AISI network was announced, despite its original scope being focused on the EU AI Act. This after-the-fact designation of the EU AI Office as the EU’s AISI comes with limitations. While it makes sense to concentrate AI talent, the EU AI Office may be designed in a way that makes it challenging to pursue the work of an AISI.

These structural limitations may be hampering its ability to function successfully as an AISI. 81 days after it was founded, the UK AISI had released its first progress report, hired over 50 years’ worth of frontier-lab technical expertise, brought prominent AI researcher Yoshua Bengio onto its advisory board, and appointed researchers like David Krueger and Yarin Gal as research leads. 74 days into its setup, the US AISI had hired Paul Christiano as Head of AI Safety.

As of writing, more than 104 days have passed since the EU AI Office’s launch (24 May 2024), and it still hasn’t hired a Head of Unit for AI Safety or a chief scientist. These are essential roles if the AI Office is to act as the EU’s AISI.

These dates are already on the conservative end of the spectrum, as they start from when each institution announced its senior leadership. If we instead count from when each institution was funded and announced, the AI Office is a further 40 days behind the US and an additional 80 days behind the UK when it comes to announcing research leadership and heads of AI safety.

Using that metric, at this point in its setup the UK AISI had tripled its technical talent pool to 150 years’ worth of frontier research talent, with major hires like Jade Leung from OpenAI and Rumman Chowdhury.
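For transparency, the day counts in this section are simple calendar arithmetic between announcement dates. The sketch below reconstructs them; the start dates come from this brief, while the milestone dates are back-calculated from the stated day counts and should be treated as assumptions rather than a verified timeline.

```python
from datetime import date

# Reconstruction of the setup-speed comparison. Start dates are taken
# from this brief (UK taskforce leadership: 18 June 2023; US leadership:
# 2 Feb 2024; EU AI Office launch: 24 May 2024). Milestone dates are
# back-calculated from the day counts cited above -- assumptions, not
# verified records.
milestones = {
    "UK AISI": (date(2023, 6, 18), date(2023, 9, 7)),    # -> first progress report (81 days)
    "US AISI": (date(2024, 2, 2), date(2024, 4, 16)),    # -> Paul Christiano hired (74 days)
    "EU AI Office": (date(2024, 5, 24), date.today()),   # -> still no Head of Unit for AI Safety
}

for name, (start, milestone) in milestones.items():
    print(f"{name}: {(milestone - start).days} days")
```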

The EU AI Office’s outlier status raises serious questions about whether it can effectively fill its role as the EU’s AISI. It may be better for the AI Office to focus its efforts on the effective implementation and enforcement of regulations and to leave the risk assessment and safety work to a new institution. One vision for this could see a CERN for AI established, which, being science-focused rather than regulation-focused, would be better able to carry out the roles needed of the EU’s AISI.

Conclusion

The emergence of the AI Safety Institute (AISI) Network marks a significant step towards global cooperation in addressing the challenges posed by advanced AI systems. Our analysis reveals a landscape characterised by diverse approaches, varying resources, and shared challenges.

Despite political changes, the UK AISI is likely to remain the leader in the international AISI network, due to its early founding, significant funding, and effective implementation of key success factors. It has successfully attracted top talent, maintained operational speed through political support, and established a flexible structure with access to cutting-edge information on AI advances.

The US AISI, while facing funding uncertainties and potential political shifts, holds the potential to become a key player if it can secure long-term funding and bipartisan support. Its success could reshape the landscape of international AI governance. However, if it continues to struggle for consistent, substantial funding while facing budgetary constraints and political pressure from a future president whose stance on AI safety is still unclear, the US AISI could end up playing a smaller role internationally, with the UK leading the charge.

The EU has the potential to have a large impact on international AI governance as the regulatory first mover. However, it will need to seriously adjust its approach to mitigate the EU AI Office’s weaknesses as an AISI.

Japan’s focus on international standards, Singapore’s balanced governance model and Canada’s secure funding suggest that they will all be able to contribute to the international AI governance field.

As AI technology continues to advance at a rapid pace, the existing work being done by AISIs will only grow in importance. Their success will depend on their ability to attract and retain top talent, move quickly in response to new developments, and maintain the flexibility to adapt to an uncertain future. The AISI Network has the potential to play a crucial role in shaping the future of AI development and governance, ensuring that as AI systems become more powerful, they remain aligned with human values and interests.

The path forward will require continued cooperation, innovation, and commitment from all stakeholders – governments, industry, academia, and civil society. By learning from the successes and challenges of the current AISI landscape, we can work towards a future where AI safety is at the forefront of technological progress, enabling the benefits of AI while mitigating its risks.

Author

Alex Petropoulos

Advanced AI Analyst – Policy
