Our new papers show how safety cases can help AI developers turn plans in their safety frameworks into action
How to write part of a safety case showing that a system lacks offensive cyber capabilities
As a complement to our empirical evaluations of frontier AI models, AISI is planning a series of collaborations and research projects that sketch safety cases for models more advanced than any that exist today, focusing on risks from loss of control and autonomy. By a safety case, we mean a structured argument that an AI system is safe within a particular training or deployment context.