We are now the AI Security Institute

AISI is a startup in government with world-leading talent and an urgent mission.

Our mission is to equip governments with an empirical understanding of advanced AI. We have recruited over 30 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.


UK AISI at NeurIPS 2025

Organisation

November 26, 2025

An overview of the research we’ll be presenting at this year’s NeurIPS conference.

International joint testing exercise: Agentic testing

Organisation

July 17, 2025

Advancing methodologies for agentic evaluations across domains, including leakage of sensitive information, fraud and cybersecurity threats.

Research Agenda

Organisation

May 6, 2025

We outline our research priorities, our approach to developing technical solutions to the most pressing AI concerns, and the key risks that must be addressed as AI capabilities advance.

Strengthening AI resilience

Organisation

April 3, 2025

20 Systemic Safety Grant awardees announced.

How we’re addressing the gap between AI capabilities and mitigations

Organisation

March 11, 2025

We outline our approach to technical solutions for misuse and loss of control.

Pre-deployment evaluation of OpenAI’s o1 model

Organisation

December 18, 2024

The UK Artificial Intelligence Safety Institute and the U.S. Artificial Intelligence Safety Institute conducted a joint pre-deployment evaluation of OpenAI's o1 model.

Pre-deployment evaluation of Anthropic’s upgraded Claude 3.5 Sonnet

Organisation

November 19, 2024

The UK Artificial Intelligence Safety Institute and the U.S. Artificial Intelligence Safety Institute conducted a joint pre-deployment evaluation of Anthropic’s latest model.

Our First Year

Organisation

November 13, 2024

The AI Safety Institute reflects on its first year.

Announcing Inspect Evals

Organisation

November 13, 2024

We’re open-sourcing dozens of LLM evaluations to advance safety research in the field.

Bounty programme for novel evaluations and agent scaffolding

Organisation

November 5, 2024

We are launching a bounty for novel evaluations and agent scaffolds to help assess dangerous capabilities in frontier AI systems.

Early lessons from evaluating frontier AI systems

Organisation

October 24, 2024

We look into the evolving role of third-party evaluators in assessing AI safety, and explore how to design robust, impactful testing frameworks.

Advancing the field of systemic AI safety: grants open

Organisation

October 15, 2024

Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.

Why I joined AISI by Geoffrey Irving

Organisation

October 3, 2024

Our Chief Scientist, Geoffrey Irving, on why he joined the UK AI Safety Institute and why he thinks other technical folk should too.

Conference on frontier AI safety frameworks

Organisation

September 19, 2024

AISI is bringing together AI companies and researchers for an invite-only conference to accelerate the design and implementation of frontier AI safety frameworks. This post shares the call for submissions that we sent to conference attendees.

Announcing our San Francisco office

Organisation

May 20, 2024

We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.

Fourth progress report

Organisation

May 20, 2024

Since February, we have released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office, announced a partnership with the Canadian AI Safety Institute, grown our technical team to more than 30 researchers and appointed Jade Leung as our Chief Technology Officer.

Advanced AI evaluations at AISI: May update

Organisation

May 20, 2024

We tested leading AI models for cyber, chemical, biological, and agent capabilities and safeguards effectiveness. Our first technical blog post shares a snapshot of our methods and results.

International Scientific Report on the Safety of Advanced AI: Interim Report

Organisation

May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Open-sourcing our testing framework Inspect

Organisation

April 21, 2024

We open-sourced our framework for large language model evaluation, which provides facilities for prompt engineering, tool usage, multi-turn dialogue, and model-graded evaluations.

Announcing the UK and US AISI partnership

Organisation

April 2, 2024

The UK and US AI Safety Institutes signed a landmark agreement to jointly test advanced AI models, share research insights, share model access and enable expert talent transfers.

Announcing the UK and France AI Research Institutes’ collaboration

Organisation

February 29, 2024

The UK AI Safety Institute and France’s Inria (The National Institute for Research in Digital Science and Technology) are partnering to advance AI safety research.

Our approach to evaluations

Organisation

February 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations and some plans for our future work.

Third progress report

Organisation

February 5, 2024

Since October, we have recruited leaders from DeepMind and Oxford, onboarded 23 new researchers, published the principles behind the International Scientific Report on Advanced AI Safety, and begun pre-deployment testing of advanced AI systems.

First AI Safety Summit

Organisation

November 2, 2023

At the first AI Safety Summit at Bletchley Park, world leaders and top companies agreed on the significance of advanced AI risks and the importance of testing.

Second progress report

Organisation

October 30, 2023

Since September, we have recruited leaders from OpenAI and Humane Intelligence, tripled the capacity of our research team, announced 6 new research partnerships, and helped establish the UK’s fastest supercomputer.

First Progress Report

Organisation

September 7, 2023

In our first 11 weeks, we have recruited an advisory board of national security and ML leaders, including Yoshua Bengio, recruited top professors from Cambridge and Oxford and announced 4 research partnerships.