
The AI Safety Institute is a directorate of the UK Department for Science, Innovation and Technology.

Rigorous AI research to enable advanced AI governance

Governments have a key role to play in ensuring advanced AI is safe and beneficial.

The AI Safety Institute is the first state-backed organisation dedicated to advancing this goal.

We are conducting research and building infrastructure to test the safety of advanced AI and to measure its impacts on people and society. We are also working with the wider research community, AI developers and other governments to inform how AI is developed and to shape global policymaking on this issue.

Recent work

Advanced AI evaluations at AISI: May update

Research

May 20, 2024

We tested leading AI models for cyber, chemical, biological and agent capabilities, and for the effectiveness of their safeguards. Our first technical blog post shares a snapshot of our methods and results.

Fourth progress report

Organisation

May 20, 2024

Since February, we have released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office, announced a partnership with the Canadian AI Safety Institute, grown our technical team to more than 30 researchers and appointed Jade Leung as our Chief Technology Officer.

Announcing our San Francisco office

Organisation

May 20, 2024

We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems.

Empirical research

Monitoring the fast-moving landscape of AI development

Evaluating the risks AI poses to national security and public welfare

Advancing the field of systemic safety to improve societal resilience

Global impact

Working with AI developers to ensure responsible development

Informing policymakers about current and emerging risks from AI

Promoting global coordination on AI governance

Join us to shape the trajectory of AI

Our mission is ambitious and urgent, and we need top talent to deliver it. We have built a unique structure within government so we can operate like a startup. We have recruited over 30 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.