We are now the AI Security Institute

Recent work

HiBayES: Improving LLM Evaluation with Hierarchical Bayesian Modelling

Research

May 12, 2025

HiBayES: a flexible, robust statistical modelling framework that accounts for the nuances and hierarchical structure of advanced evaluations.

Research Agenda

Research

May 6, 2025

We outline our research priorities, our approach to developing technical solutions to the most pressing AI concerns, and the key risks that must be addressed as AI capabilities advance.

RepliBench: measuring autonomous replication capabilities in AI systems

Research

April 22, 2025

A comprehensive benchmark to detect emerging replication abilities in AI systems and provide a quantifiable understanding of potential risks.

Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

Empirical research

Monitoring the fast-moving landscape of AI development

Evaluating the risks AI poses to national security and public safety

Advancing the field of systemic safety to improve national resilience

Global impact

Working with AI developers to ensure responsible development

Informing policymakers about current and emerging risks from AI

Promoting global coordination on AI governance

Join us to shape the trajectory of AI

Our mission is ambitious and urgent, and we need top talent to deliver it. We have built a unique structure within government so we can operate like a startup. We have recruited over 50 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations, and an incredibly talented, close-knit and driven team.