Careers

We are an empowered team of technical experts and operators on a mission to advance research and infrastructure for AI governance.

If our work sounds interesting, we encourage you to apply. All applications are assessed on a rolling basis, so it is best to apply as soon as possible.

Mission

Advanced AI systems offer transformative opportunities to boost economic growth, human creativity, and public education, but they also pose significant risks. AISI was launched at the Bletchley Park AI Safety Summit in 2023 because taking responsible action on this extraordinary technology requires technical expertise in AI to be available at the heart of government. Originally the UK's Frontier AI Taskforce, we have evolved into a leading research institution within the UK's Department for Science, Innovation and Technology.

"AISI has built a wonderful group of technical, policy, and civil service experts focused on making AI and AGI go well, and I'm finding it extremely motivating to do technical work in this interdisciplinary context."

Geoffrey Irving

Research Director

"AISI is situated at the centre of the action where true impact can be made. I'm excited about the opportunities unfolding in front of us at such a rapid pace."

Professor Yarin Gal

Research Director

Working at AISI

  • We have built a unique structure within government to ensure technical staff have the resources to work quickly and solve complex problems.
  • Your work will have an unusually high impact. There is a lot of important work to do at AISI, and when AISI does something, the world watches.  
  • You will be part of a close-knit, driven team and work with field leaders like Geoffrey Irving, Professor Yarin Gal and Professor Chris Summerfield.
  • You will work in the heart of the UK government, which has made AI a top priority.
  • We enable hybrid work and offer a range of competitive benefits.

What we look for

  • We value collaboration, innovation and diversity. We need these qualities to drive forward the nascent science of understanding AI and mitigating its risks.
  • Our work is often fast-paced and has global impacts. We are looking for the talent, ambition and responsibility to deliver in this environment.  
  • Many of our staff previously worked at top industry and academic labs. For most technical roles, it is helpful to have substantial machine learning experience, especially large language model experience. That said, some of our most valuable hires will have more niche domain expertise, such as in cybersecurity.  
  • We encourage you to apply even if you are not based in the UK or are not a UK national. We may be able to explore other options, such as seconding you from your current employer or from a third-party organisation.

Resources:

  • £100m in initial funding for the organisation
  • Privileged access to top AI models from leading companies
  • Priority access to over £1.5 billion of compute in the UK’s AI Research Resource and exascale supercomputing programme
  • Over 20 partnerships with top research organisations
  • Opportunities to collaborate with AI leaders around the world

Open roles

Our typical interview process includes submitting a CV and short written statement, skills assessments such as a technical interview and a take-home coding test, and 2-4 interviews, including a conversation with a senior member of our team. We tailor this process as needed for each role and candidate.

Please note that if you're applying to our technical roles, this privacy policy applies.

Research Scientist / Research Engineer – Safeguards, Controls, & Mitigations

Evaluations

£65,000 - £135,000

Design and execute evaluations to assess the susceptibility of advanced AI systems to attack and undertake research projects related to safeguarding. Candidates should have conducted technical research in machine learning, computer system security, or information security.

Research Scientist – Autonomous Systems

Evaluations

£65,000 - £135,000

Research risks such as uncontrolled self-improvement, autonomous replication, manipulation and deception. Improve the science of model evaluations with approaches such as scaling laws for dangerous capabilities.

Research Scientist / Research Engineer – Narrow AI Tools for Bio-Chem

Evaluations

£65,000 - £135,000

Advance the evaluation of biology and chemistry specific AI models. Design experiments, test models, identify datasets, and present findings to policymakers. Candidates should have a PhD in a relevant field and experience with modern deep learning.

Research Scientist / Research Engineer – Cyber Misuse

Evaluations

£65,000 - £135,000

Build evaluations, design experiments, and create tools to measure the capabilities of AI systems against cyber security threat scenarios. Candidates should have relevant experience in machine learning, AI security, or computer security.

Cyber Security Researcher

Evaluations

£65,000 - £135,000

Help design our strategy for understanding AI-enabled cyber risks. Build benchmark challenges to evaluate AI system capabilities, coordinate red team exercises, and explore interactions between narrow cyber tools and general AI. Candidates should have expertise in areas like penetration testing, CTF design, vulnerability research, or cyber tool development.

Research Scientist – Safety Cases

Safety Cases

£65,000 - £135,000

Drive research to develop our understanding of how safety cases could be developed for advanced AI. You'll work closely with Geoffrey Irving to build out safety cases as a new pillar of AISI's work.

Software Engineer

Platforms

£65,000 - £135,000

Work with one of our evaluations workstreams or our cross-cutting platforms team to build interfaces and frontends, create fast and secure inference channels for external models, and drive ML ops projects to host and fine-tune our own models. Candidates should have experience with some of the following: frontend or tools development, backend/API design, dev/ML ops, privacy and security engineering, and/or engineering management.

Evals Technical Programme Manager & Strategy Lead

Evaluations

£65,000 - £135,000

Plan and drive the delivery of our advanced AI testing. Coordinate our evaluations workstreams, ensure reports and recommendations meet high scientific standards, and iterate to improve our strategy and evaluations over time. Candidates should have experience with complex project management, ML research, and working with technical teams.

Research Engineer – Autonomous Systems

Evaluations

£65,000 - £135,000

Build large-scale experiments to empirically evaluate risks such as uncontrolled self-improvement, autonomous replication, manipulation and deception. Collaborate with others to push forward the state of the science on model evaluations.

Manager – Autonomous Systems

Evaluations

£85,000 - £135,000

As a manager, you'll head up a multi-disciplinary team of scientists, engineers and domain experts in the risks we are investigating. These include autonomous replication, AI R&D, manipulation and deception.

Risk Modelling Researcher

Evaluations

£65,000 - £135,000

As a risk modelling researcher, your work will span the full space of risks from frontier AI systems. This includes breaking down that space (e.g. with risk trees, custom taxonomies, STAMP and other approaches).

Mechanistic Interpretability Researcher

£65,000 - £145,000

As a mechanistic interpretability researcher (either research scientist or research engineer), you'll focus on pushing forward the science of detecting scheming and of white-box evaluations.

Research Engineer – General Application

Evaluations

£65,000 - £135,000

This application is for candidates without a preference for a particular team; we prefer that you apply to the team-specific RE roles listed above. Design and build evaluations to assess the capabilities and safeguards of cutting-edge models and conduct research to better understand AI systems and mitigate their risks. Candidates should have relevant experience in ML, AI, AI security, or computer security.

Research Scientist – General Application

£65,000 - £135,000

This application is for candidates without a preference for a particular team; we prefer that you apply to the team-specific RS roles listed above. Design and build evaluations to assess the capabilities and safeguards of cutting-edge models and conduct research to better understand AI systems and mitigate their risks. Candidates should have relevant experience in ML, AI, AI security, or computer security.

We are excited by the amount our team has been able to accomplish — and we are just getting started.