We are now the AI Security Institute

Recent work

Why we're working on white box control

Research

July 10, 2025

An introduction to white box control, and an update on our research so far.

LLM judges on trial: A new statistical framework to assess autograders

Research

July 9, 2025

Our new framework can assess the reliability of LLM evaluators, while simultaneously answering a primary research question.

How will AI enable the crimes of the future?

Research

July 3, 2025

How we're working to track and mitigate criminal misuse of AI.

Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

Empirical research

Monitoring the fast-moving landscape of AI development

Evaluating the risks AI poses to national security and public safety

Advancing the field of systemic safety to improve national resilience

Global impact

Working with AI developers to ensure responsible development

Informing policymakers about current and emerging risks from AI

Promoting global coordination on AI governance

Join us to shape the trajectory of AI

For our ambitious and urgent mission, we need top talent. We have built a unique structure within the government so we can operate like a startup. We have recruited over 50 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.