
AISI Research | Developing and conducting model evaluations. Advancing foundational safety and societal resilience research.

AISI develops and conducts model evaluations to assess risks from cyber, chemical, and biological misuse; autonomous capabilities; and the effectiveness of safeguards. We are also working to advance foundational safety and societal resilience research.


Should AI systems behave like people?

Research

September 25, 2024

We studied whether people want AI to be more human-like.

Early Insights from Developing Question-Answer Evaluations for Frontier AI

Research

September 23, 2024

A common technique for quickly assessing AI capabilities is prompting models to answer hundreds of questions, then automatically scoring the answers. We share insights from months of using this method.
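A minimal sketch of what such a question-answer evaluation loop can look like, under stated assumptions: `ask_model` is a hypothetical stand-in for a call to the model under test, and the exact-match grader shown here is only one of many possible automatic scoring rules.

```python
# Illustrative question-answer evaluation loop (not AISI's implementation).

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under test."""
    raise NotImplementedError("Replace with a call to the model being evaluated.")

def score_exact_match(answer: str, target: str) -> bool:
    """Simple automatic grader: normalised exact match against the target answer."""
    return answer.strip().lower() == target.strip().lower()

def run_qa_eval(questions: list[dict]) -> float:
    """Ask every question, auto-score each answer, and return overall accuracy."""
    correct = 0
    for item in questions:
        answer = ask_model(item["question"])
        if score_exact_match(answer, item["target"]):
            correct += 1
    return correct / len(questions)
```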

Conference on frontier AI safety frameworks

Research

September 19, 2024

AISI is bringing together AI companies and researchers for an invite-only conference to accelerate the design and implementation of frontier AI safety frameworks. This post shares the call for submissions that we sent to conference attendees.

Cross-post: "Interviewing AI researchers on automation of AI R&D" by Epoch AI

Research

August 27, 2024

AISI funded Epoch AI to explore AI researchers’ differing predictions on the automation of AI research and development and their suggestions for how to evaluate relevant capabilities.

Safety cases at AISI

Research

August 23, 2024

As a complement to our empirical evaluations of frontier AI models, AISI is planning a series of collaborations and research projects sketching safety cases for more advanced models than exist today, focusing on risks from loss of control and autonomy. By a safety case, we mean a structured argument that an AI system is safe within a particular training or deployment context.

Advanced AI evaluations at AISI: May update

Research

May 20, 2024

We tested leading AI models for cyber, chemical, biological, and agent capabilities, as well as the effectiveness of their safeguards. Our first technical blog post shares a snapshot of our methods and results.

International Scientific Report on the Safety of Advanced AI: Interim Report

Research

May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Open sourcing our testing framework Inspect 

Research

April 21, 2024

We open-sourced our framework for large language model evaluation, which provides facilities for prompt engineering, tool usage, multi-turn dialogue, and model-graded evaluations.
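A minimal sketch of an Inspect task, following the library's documented quick-start pattern; parameter names and available scorers vary by version of the `inspect-ai` package, so treat this as an assumption-laden example rather than a definitive recipe.

```python
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def capital_cities():
    return Task(
        # A tiny in-memory dataset; real evaluations load many more samples.
        dataset=[
            Sample(
                input="What is the capital of France? Answer with the city name only.",
                target="Paris",
            )
        ],
        # Solver: a single model generation; multi-turn and tool-using solvers
        # can also be composed here.
        solver=generate(),
        # Scorer: simple string matching; Inspect also provides model-graded
        # scorers (e.g. model_graded_fact) for free-form answers.
        scorer=match(),
    )
```

In recent versions, a task file like this can typically be run from the command line with `inspect eval`, pointed at whichever model provider is configured.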

Our approach to evaluations

Research

February 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations, and our plans for future work.