
Our work

Improving our understanding of advanced AI

Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems.


Advancing the field of systemic AI safety: grants open

Organisation

October 15, 2024

Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.

Why I joined AISI by Geoffrey Irving

Organisation

October 3, 2024

Our Chief Scientist, Geoffrey Irving, on why he joined the UK AI Safety Institute and why he thinks other technical folk should too.

Should AI systems behave like people?

Research

September 25, 2024

We studied whether people want AI to be more human-like.

Early Insights from Developing Question-Answer Evaluations for Frontier AI

Research

September 23, 2024

A common technique for quickly assessing AI capabilities is prompting models to answer hundreds of questions, then automatically scoring the answers. We share insights from months of using this method.
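The loop described above can be sketched in a few lines. This is an illustrative simplification, not AISI's actual evaluation code: the function names, the exact-match scorer, and the stub model are all hypothetical.

```python
# Minimal sketch of a question-answer evaluation: pose each question to a
# model, then score the answers automatically against references.
# All names here are illustrative, not AISI's implementation.

def exact_match_scorer(answer: str, reference: str) -> bool:
    """Score an answer by normalised exact match against the reference."""
    return answer.strip().lower() == reference.strip().lower()

def run_qa_eval(model, questions: list[dict]) -> float:
    """Return the fraction of questions answered correctly.

    `model` is any callable mapping a prompt string to an answer string;
    each question is a dict with "prompt" and "reference" keys.
    """
    correct = sum(
        exact_match_scorer(model(q["prompt"]), q["reference"])
        for q in questions
    )
    return correct / len(questions)

# Stub "model" standing in for an LLM API call: it always answers "Paris".
stub_model = lambda prompt: "Paris"
dataset = [
    {"prompt": "What is the capital of France?", "reference": "Paris"},
    {"prompt": "What is the capital of Japan?", "reference": "Tokyo"},
]
score = run_qa_eval(stub_model, dataset)
print(score)  # 0.5
```

In practice the scorer is often the hard part; exact match is only viable when the question constrains the answer format tightly, which is one reason model-graded scoring is also used.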

Conference on frontier AI safety frameworks

Research

September 19, 2024

AISI is bringing together AI companies and researchers for an invite-only conference to accelerate the design and implementation of frontier AI safety frameworks. This post shares the call for submissions that we sent to conference attendees.

Cross-post: "Interviewing AI researchers on automation of AI R&D" by Epoch AI

Research

August 27, 2024

AISI funded Epoch AI to explore AI researchers’ differing predictions on the automation of AI research and development and their suggestions for how to evaluate relevant capabilities.

Safety cases at AISI

Research

August 23, 2024

As a complement to our empirical evaluations of frontier AI models, AISI is planning a series of collaborations and research projects sketching safety cases for more advanced models than exist today, focusing on risks from loss of control and autonomy. By a safety case, we mean a structured argument that an AI system is safe within a particular training or deployment context.

Advanced AI evaluations at AISI: May update

Research

May 20, 2024

We tested leading AI models for cyber, chemical, biological, and agent capabilities and safeguards effectiveness. Our first technical blog post shares a snapshot of our methods and results.

Fourth progress report

Organisation

May 20, 2024

Since February, we have released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office and a partnership with the Canadian AI Safety Institute, grown our technical team to more than 30 researchers, and appointed Jade Leung as our Chief Technology Officer.

Announcing our San Francisco office

Organisation

May 20, 2024

We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.

International Scientific Report on the Safety of Advanced AI: Interim Report

Research

May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Open-sourcing our testing framework Inspect

Research

April 21, 2024

We open-sourced our framework for large language model evaluation, which provides facilities for prompt engineering, tool usage, multi-turn dialogue, and model-graded evaluations.
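One of the capabilities mentioned, model-graded evaluation, can be illustrated in isolation: a second model is prompted to judge a candidate answer against a reference. This is a hedged conceptual sketch, not Inspect's actual API; the prompt template, function names, and stub grader are all hypothetical.

```python
# Illustrative sketch of model-graded evaluation: a grader model is asked
# to compare a candidate answer to a reference and emit a parseable verdict.
# Hypothetical names and template; see the Inspect repository for its real API.

GRADER_PROMPT = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Candidate answer: {candidate}\n"
    "Reply 'GRADE: C' if the candidate matches the reference, else 'GRADE: I'."
)

def model_graded_score(grader, question: str, reference: str,
                       candidate: str) -> bool:
    """Ask a grader model to judge a candidate answer and parse its verdict."""
    verdict = grader(GRADER_PROMPT.format(
        question=question, reference=reference, candidate=candidate
    ))
    return "GRADE: C" in verdict

# Stub grader standing in for an LLM: it string-matches instead of reasoning.
stub_grader = lambda prompt: (
    "GRADE: C"
    if "Reference answer: Paris" in prompt and "Candidate answer: Paris" in prompt
    else "GRADE: I"
)
print(model_graded_score(stub_grader, "Capital of France?", "Paris", "Paris"))
```

The parseable verdict token is the key design choice: it lets free-form grader output be scored programmatically, at the cost of the grader occasionally failing to follow the format.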

Announcing the UK and US AISI partnership

Governance

April 2, 2024

The UK and US AI Safety Institutes signed a landmark agreement to jointly test advanced AI models, share research insights and model access, and enable expert talent transfers.

Announcing the UK and France AI Research Institutes’ collaboration

Governance

February 29, 2024

The UK AI Safety Institute and France’s Inria (The National Institute for Research in Digital Science and Technology) are partnering to advance AI safety research.

Our approach to evaluations

Research

February 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations and some plans for our future work.

Third progress report

Organisation

February 5, 2024

Since October, we have recruited leaders from DeepMind and Oxford, onboarded 23 new researchers, published the principles behind the International Scientific Report on Advanced AI Safety, and begun pre-deployment testing of advanced AI systems.

First AI Safety Summit

Governance

November 2, 2023

At the first AI Safety Summit at Bletchley Park, world leaders and top companies agreed on the significance of advanced AI risks and the importance of testing.

Second progress report

Organisation

October 30, 2023

Since September, we have recruited leaders from OpenAI and Humane Intelligence, tripled the capacity of our research team, announced 6 new research partnerships, and helped establish the UK’s fastest supercomputer.

First Progress Report

Organisation

September 7, 2023

In our first 11 weeks, we recruited an advisory board of national security and ML leaders, including Yoshua Bengio, brought on top professors from Cambridge and Oxford, and announced 4 research partnerships.