We are now the AI Security Institute
About

Equipping governments with a scientific understanding of advanced AI risks.

We are a mission-driven research organisation in the heart of the UK government.

Organisation

Governments have a key role to play in AI's rapid progress.

AI systems are rapidly becoming more capable and widespread. Governments need to deeply understand the technology and act on emerging risks.

The AI Security Institute (AISI) is a research organisation within the UK government's Department for Science, Innovation and Technology working towards this goal: building the world's leading understanding of advanced AI risks and solutions, to inform governments so they can keep the public safe.

Our work includes:

  • Testing leading AI systems before they are released publicly and collaborating with top AI companies to improve their safety and security;
  • Informing policymakers across the UK and allied governments about emerging capabilities and risks;
  • Advancing solutions to these risks through in-house research and >£15 million in grant funding.

To deliver on our ambitious goals, we designed AISI like a startup within government, combining governmental authority with the expertise and agility of the private sector.

Leaders from across the public and private sectors  

  • Our Interim Director Adam Beaumont was formerly Chief AI Officer at GCHQ, the UK intelligence agency.
  • Our Chief Technology Officer Jade Leung is also the Prime Minister's AI Advisor, and she previously led the Governance team at OpenAI.  
  • Our Chief Scientist Geoffrey Irving and Research Director Chris Summerfield have collectively led teams at OpenAI, Google DeepMind and the University of Oxford.  
  • Our Chair Ian Hogarth brings experience as a leading tech investor and entrepreneur.  
  • Our advisory board comprises national security and machine learning leaders such as Yoshua Bengio.
  • Our 100+ technical staff bring experience from leading industry, academic and nonprofit labs.
  • Our policy, operations, and strategy teams bring experience from No. 10, the national security community, and many of the UK's best startups and large-scale companies.

Backed with the resources we need to move fast

  • £66m in funding per financial year and long-term resourcing commitments
  • Priority access to >£1.5 billion of compute through the UK’s AI Research Resource and exascale supercomputing programme
  • Pre-deployment access to leading AI models and close collaborations with AI companies that give us privileged access to their development approaches and enable us to support their safety-related decisions
  • The ability to mobilise >£15 million in grants and to draw on collaborations with external research teams
  • Backing from No. 10 and collaborations across the UK and other governments

Research

Evaluations & Solutions

To keep the public safe, governments need to understand advanced AI. This is why we rigorously test leading AI systems before and after they are launched. We also open-source many of our evaluation tools, such as Inspect, so the wider research community can use and build upon our work.
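The core pattern behind evaluation harnesses like Inspect can be illustrated in a few lines: each sample pairs a prompt with a target answer, a solver produces the system's response, and a scorer grades it. The sketch below is a hypothetical, simplified illustration of that pattern, not Inspect's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of the evaluation pattern: dataset -> solver -> scorer.
# Names and signatures here are hypothetical, not Inspect's real interface.

@dataclass
class Sample:
    input: str    # the prompt given to the system under test
    target: str   # the expected answer

def run_eval(samples: list[Sample],
             solver: Callable[[str], str],
             scorer: Callable[[str, str], bool]) -> float:
    """Return the fraction of samples the solver answers correctly."""
    correct = sum(scorer(solver(s.input), s.target) for s in samples)
    return correct / len(samples)

# Toy demonstration: a stand-in "model" that does arithmetic,
# graded with an exact-match scorer.
samples = [Sample("2+2", "4"), Sample("3+3", "6")]
solver = lambda prompt: str(eval(prompt))           # stand-in for a model call
scorer = lambda answer, target: answer.strip() == target
accuracy = run_eval(samples, solver, scorer)
```

In a real harness the solver wraps a call to the AI system under test, and scorers range from exact match to model-graded rubrics; the separation of dataset, solver, and scorer is what lets the same evaluation run against different systems.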

Governments also play a key role in advancing solutions to emerging risks, so we research solutions across risk domains and are mobilising >£15 million in grants to accelerate breakthroughs globally.

Examples of our research areas include:

  • Cyber Misuse: How much can AI systems assist with cyber attacks?
  • Safeguards: How effective are current safety and security features at preventing misuse?
  • Alignment: How reliably do AI systems behave as intended, and how can we improve this?
  • Control: How can we oversee and control AI systems' behaviour, even if they are somewhat misaligned?
  • Autonomy: How capable are AI systems at conducting their own AI research, autonomously making copies of themselves, manipulating humans, and otherwise evading attempts to control them?
  • Human Influence: How could AI systems be used to influence people's views and reduce individual autonomy?
  • Societal Resilience: How can we make society more resilient as AI is adopted widely across critical sectors?

See our most recent research agenda here, and all of our technical publications and blogs here.