Strengthening AI trustworthiness – Strategic innovation for European security part 3
To remain sovereign, Europe must lead on safe AI. We outline three priorities: secure chips, trusted evaluations, and AI-powered cybersecurity.
This report explores the institutional design of a CERN for AI, covering its structure, legal basis, governance, and funding model.
Ahead of the AI Action Summit in Paris, we make policy recommendations drawing on key lessons from other safety-critical industries like civil nuclear technology and aviation.
Advanced AI holds immense potential to benefit humanity, but it also poses risks when applied in military contexts, such as weapons development or the enabling of cyberattacks. This brief provides an overview of the ongoing debate surrounding the dual-use nature of advanced AI systems.
To compete in AI, Europe needs big investments and an EU-wide initiative to pool resources, talent and ambition.
A moonshot project for Europe? Max Reddel and Bálint Pataki argue it is time for the European Commission to take a more ambitious stance on advanced AI.
European Commission President Ursula von der Leyen has proposed a 'CERN for AI' initiative to boost Europe's AI capabilities. Our report unpacks what such an institution could look like in practice.
Ahead of the AI Seoul Summit in South Korea in May 2024, we make five policy recommendations on AI safety.