We are now the AI Security Institute

Recent work

From bugs to bypasses: adapting vulnerability disclosure for AI safeguards

Research

September 2, 2025

Exploring how far cyber security approaches can help mitigate risks in generative AI systems, in collaboration with the National Cyber Security Centre (NCSC).

Managing risks from increasingly capable open-weight AI systems

Research

August 29, 2025

Current methods and open problems in open-weight model risk management.

The Inspect Sandboxing Toolkit: Scalable and secure AI agent evaluations

Research

August 7, 2025

A comprehensive toolkit for safely evaluating AI agents.

Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

Empirical research

Monitoring the fast-moving landscape of AI development

Evaluating the risks AI poses to national security and public safety

Advancing the field of systemic safety to improve national resilience

Global impact

Working with AI developers to ensure responsible development

Informing policymakers about current and emerging risks from AI

Promoting global coordination on AI governance

Join us to shape the trajectory of AI

Our mission is ambitious and urgent, and it demands top talent. We have built a unique structure within government that lets us operate like a startup. We have recruited over 50 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations, and an incredibly talented, close-knit and driven team.