The AI Safety Institute is the first state-backed organisation dedicated to advancing the safe development of AI.
We are conducting research and building infrastructure to test the safety of advanced AI and to measure its impacts on people and society. We are also working with the wider research community, AI developers and other governments to influence how AI is developed and to shape global policymaking on AI safety.
Our Chief Scientist, Geoffrey Irving, on why he joined the UK AI Safety Institute and why he thinks other technical folk should too
We studied whether people want AI to be more human-like.
A common technique for quickly assessing AI capabilities is prompting models to answer hundreds of questions, then automatically scoring the answers. We share insights from months of using this method.
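To make the method concrete, here is a minimal sketch of such an evaluation loop. Everything in it is an assumption for illustration: `query_model` is a hypothetical stand-in for a real model API, and exact-match grading is just one simple way to score answers automatically.

```python
# Minimal sketch of an automated question-answering evaluation loop.
# `query_model` is a hypothetical placeholder, not a real library call;
# swap in whatever client returns a model's answer to a prompt.

def query_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real API client."""
    raise NotImplementedError

def evaluate(dataset: list[dict[str, str]]) -> float:
    """Return exact-match accuracy over question/answer pairs."""
    correct = 0
    for item in dataset:
        response = query_model(item["question"])
        # Naive automatic grading: normalise whitespace and case, then
        # compare strings. Real graders are often fuzzier (regex
        # matching, numeric tolerance, model-based judges).
        if response.strip().lower() == item["answer"].strip().lower():
            correct += 1
    return correct / len(dataset)
```

In practice the grader is usually the hard part: free-text answers rarely match a reference string exactly, which is one reason results from automated scoring need careful interpretation.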
Monitoring the fast-moving landscape of AI development
Evaluating the risks AI poses to national security and public welfare
Advancing the field of systemic safety to improve societal resilience
Working with AI developers to ensure responsible development
Informing policymakers about current and emerging risks from AI
Promoting global coordination on AI governance
To deliver on our ambitious and urgent mission, we need top talent. We have built a unique structure within government that lets us operate like a startup. We have recruited over 30 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.