About

The Mohammed Bin Rashid School of Government (MBRSG), with the support of the Future of Life Institute, is organizing a side event to the Paris AI Action Summit over the afternoon and evening of February 9th.

We are launching the results of the Global Risk and AI Safety Preparedness (GRASP) mapping. This mapping aims to create a comprehensive global reference framework of general-purpose AI safety tools and solutions.

The mapping effort is conducted in partnership with Project SAFE of the Global Partnership on AI (GPAI) and its Tokyo Centre of Expertise, and it will be published on the OECD's AI Observatory. It will take the form of an interactive user interface designed to assist policymakers, entrepreneurs, frontier labs, and investors in navigating these solutions.

We will also spotlight international efforts around AI Safety, centered on the US-China dialogue we aim to foster, and bring the Summit's focus back to AI Safety as frontier labs build AGI systems.

This invitation-only event brings together policymakers, AI Safety researchers, entrepreneurs, frontier labs, and investors, with capacity limited to 150 people.

Schedule

13:00–14:00

Registration

Guests will need to present ID.


14:00–14:20

Opening Remarks

Why AI Safety is important and why we need international collaboration on it

  • Max Tegmark, FLI

  • Fadi Salem, MBRSG

  • XIAO Qian, Vice Dean of I-AIIG, Tsinghua


14:20–14:35

Presentation of the Global Risk and AI Safety Preparedness mapping

  • Cyrus Hodes, MBRSG/OECD.ai/GPAI

  • Charbel-Raphael Segerie, CeSIA

  • Jonathan Claybrough, CeSIA


14:35–15:15

Roundtable on International Cooperation in AI Safety

  • Irakli Beridze, UNICRI

  • Yoshua Bengio, MILA

  • Stuart Russell, CHAI

  • Dean Xue Lan, Institute for AI International Governance, Tsinghua University

  • Dawn Song, UC Berkeley

Moderated by Karine Perset, OECD


15:20–16:00

Roundtable on AI Safety Institutes

  • Yi Zeng, Beijing AISI

  • Wan Sie Lee, Singapore AISI

  • Juha Heikkilä, European Commission

  • Agnes Delaborde, LNE

  • Abhishek Singh, Ministry of Electronics and IT, India

Moderated by Yuko Harayama, GPAI

16:00–16:55

Roundtable on Frontier Labs and AI Safety

Part 1: Are frontier labs ready for AGI? (40’)

  • Chris Meserole, Frontier Model Forum

  • Michael Sellitto, Anthropic

  • Katarina Slama, ex-OpenAI

  • Miles Brundage, former Head of Policy Research, OpenAI

  • Roman Yampolskiy, University of Louisville

Moderated by Nicholas Dirks, New York Academy of Sciences


17:10–17:40

Speed presentations of AI Safety ventures and solutions providers

Intro: Shameek Kundu, AI Verify Foundation 

  • Nicolas Miailhe, PRISM Eval

  • Kristian Rönn, Lucid Computing

  • April Chin and Oliver Salzmann, Resaro.ai

  • Gabriel Alfour, Conjecture

  • Matija Franklin, UCL (an Infinitio AI project)

17:40–18:20

Roundtable on Investing in AI Safety

  • Ben Cy, Temasek

  • Nick Fitz, Juniper Ventures/AI Assurance Market Report

  • Jaan Tallinn, Co-founder of Skype, Metaplanet

  • Brandon Goldman, Lionheart Ventures

Moderated by Seth Dobrin, 1infinity Ventures, former Chief AI Officer of IBM

18:30–20:00

Cocktail reception


20:00–22:00

Evening salon

Featuring live AI demonstrations across art, music, and technical capabilities, where researchers, thinkers and innovators showcase their work in an interactive format. 

Attendees are invited to explore various expositions, engage with demonstrations and authors, and continue the day's discussions in a more relaxed setting.

AI experts and artists will demonstrate the capabilities and dangers of AI systems:

  • GenAI music and art performance

  • Live deepfake demonstrations with CivAI

  • Live AI jailbreaking by PRISM Eval

  • AI Agents: Demonstrations of AI deception and AI pursuing unaligned goals

Think tanks, academics and AI visionaries will present their latest publications:

  • Nell Watson: Safer Agentic AI Guidelines

  • Roman Yampolskiy: AI: Unexplainable, Unpredictable, Uncontrollable

  • Max Tegmark: Life 3.0

  • Kristian Rönn: The Darwinian Trap 

  • Haydn Belfield, Centre for the Study of Existential Risk, Cambridge University: Computing Power and the Governance of AI 

  • Saurabh Mishra, Taiyō.AI: Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems 

  • Charbel Segerie, Centre pour la Sécurité de l'IA: AI Safety Atlas

  • Caroline Jeanmaire, The Future Society: Global Consultations for France’s 2025 AI Action Summit

  • Adam Shimi, Control AI: A Narrow Path

  • Eva Behrens, Conjecture: The Compendium