AISI Research | Developing and conducting model evaluations. Advancing foundational safety and societal resilience research.

AISI develops and conducts model evaluations to assess risks from cyber, chemical, and biological misuse; autonomous capabilities; and the effectiveness of safeguards. We are also working to advance foundational safety and societal resilience research.

Advanced AI evaluations at AISI: May update


May 20, 2024

We tested leading AI models for cyber, chemical, biological, and agent capabilities, as well as the effectiveness of safeguards. Our first technical blog post shares a snapshot of our methods and results.

International Scientific Report on the Safety of Advanced AI: Interim Report


May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Open sourcing our testing framework Inspect


April 21, 2024

We open-sourced our framework for large language model evaluation, which provides facilities for prompt engineering, tool usage, multi-turn dialogue, and model-graded evaluations.

Our approach to evaluations


February 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations, and some plans for future work.