Our work

Improving our understanding of advanced AI

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems.

Advanced AI evaluations at AISI: May update


May 20, 2024

We tested leading AI models for cyber, chemical, biological, and agentic capabilities, and for the effectiveness of their safeguards. Our first technical blog post shares a snapshot of our methods and results.

Fourth progress report


May 20, 2024

Since February, we have released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office, announced a partnership with the Canadian AI Safety Institute, grown our technical team to more than 30 researchers, and appointed Jade Leung as our Chief Technology Officer.

Announcing our San Francisco office


May 20, 2024

We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.

International Scientific Report on the Safety of Advanced AI: Interim Report


May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Open-sourcing our testing framework Inspect


April 21, 2024

We open-sourced our framework for large language model evaluation, which provides facilities for prompt engineering, tool usage, multi-turn dialogue, and model-graded evaluations.

Announcing the UK and US AISI partnership


April 2, 2024

The UK and US AI Safety Institutes signed a landmark agreement to jointly test advanced AI models, share research insights, share model access and enable expert talent transfers.

Announcing the UK and France AI Research Institutes’ collaboration


February 29, 2024

The UK AI Safety Institute and France's Inria (the National Institute for Research in Digital Science and Technology) are partnering to advance AI safety research.

Our approach to evaluations


February 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations, and our plans for future work.

Third progress report


February 5, 2024

Since October, we have recruited leaders from DeepMind and Oxford, onboarded 23 new researchers, published the principles behind the International Scientific Report on Advanced AI Safety, and begun pre-deployment testing of advanced AI systems.

First AI Safety Summit


November 2, 2023

At the first AI Safety Summit at Bletchley Park, world leaders and top companies agreed on the significance of advanced AI risks and the importance of testing.

Second progress report


October 30, 2023

Since September, we have recruited leaders from OpenAI and Humane Intelligence, tripled the capacity of our research team, announced 6 new research partnerships, and helped establish the UK’s fastest supercomputer.

First progress report


September 7, 2023

In our first 11 weeks, we recruited an advisory board of national security and ML leaders, including Yoshua Bengio, brought on top professors from Cambridge and Oxford, and announced 4 research partnerships.