Careers

We are an empowered team of technical experts and operators on a mission to advance research and infrastructure for AI governance.

If our work sounds interesting, we encourage you to apply. All applications are assessed on a rolling basis, so it is best to apply as soon as possible.

Mission

Advanced AI systems offer transformative opportunities to boost economic growth, human creativity, and public education, but they also pose significant risks. AISI was launched at the Bletchley Park AI Safety Summit in 2023 because taking responsible action on this extraordinary technology requires technical expertise about AI at the heart of government. Originally called the UK's Frontier AI Taskforce, we have evolved into a leading research institution within the UK's Department for Science, Innovation and Technology.

"AISI has built a wonderful group of technical, policy, and civil service experts focused on making AI and AGI go well, and I'm finding it extremely motivating to do technical work in this interdisciplinary context."

Geoffrey Irving

Chief Scientist

"AISI is situated at the centre of the action where true impact can be made. I'm excited about the opportunities unfolding in front of us at such a rapid pace."

Professor Yarin Gal

Research Director

Working at AISI

  • We have built a unique structure within the government to ensure technical staff have the resources to work quickly and solve complex problems.  
  • Your work will have an unusually high impact. There is a lot of important work to do at AISI, and when AISI does something, the world watches.  
  • You will be part of a close-knit, driven team and work with field leaders like Geoffrey Irving, Professor Yarin Gal and Professor Chris Summerfield.
  • You will work in the heart of the UK government, which has made AI a top priority.
  • We enable hybrid work and offer a range of competitive benefits.

What we look for

  • We value collaboration, innovation and diversity. We need these qualities to drive forward the nascent science of understanding AI and mitigating AI’s risks.  
  • Our work is often fast-paced and has global impacts. We are looking for the talent, ambition and responsibility to deliver in this environment.  
  • Many of our staff previously worked at top industry and academic labs. For most technical roles, it is helpful to have substantial machine learning experience, especially large language model experience. That said, some of our most valuable hires will have more niche domain expertise, such as in cybersecurity.  
  • We encourage you to apply even if you are not in or from the UK. We may be able to explore other options, such as seconding you in from either your current employer or a third-party organisation.

Resources:

  • £100m in initial funding for the organisation
  • Privileged access to top AI models from leading companies
  • Priority access to over £1.5 billion of compute in the UK’s AI Research Resource and exascale supercomputing programme
  • Over 20 partnerships with top research organisations
  • Opportunities to collaborate with AI leaders around the world

Open roles

Our typical interview process includes submitting a CV and short written statement, skills assessments such as a technical interview and a take-home coding test, and 2-4 interviews, including a conversation with a senior member of our team. We tailor this process as needed for each role and candidate.

Please note that if you're applying to our technical roles, this privacy policy applies.

Research Engineer – General Application

Evaluations

£65,000 - £135,000

This application is for candidates without a preference for a specific team; we prefer that you apply to the team-specific RE roles below. Design and build evaluations to assess the capabilities and safety of advanced AI systems. Candidates should have relevant experience in machine learning.

Research Scientist - General Application

£65,000 - £135,000

This application is for candidates without a preference for a specific team; we prefer that you apply to the team-specific RS roles below. Lead research projects to improve our ability to assess the capabilities and safety of advanced AI systems. Candidates should have relevant experience in machine learning.

Research Engineer – Cyber Misuse Team

Evaluations

£65,000 - £135,000

Design experiments and build evaluations to assess the cyber offensive capabilities of advanced AI systems. Candidates should have relevant experience in machine learning and cybersecurity.

Research Scientist/Research Engineer/Security Researcher – Safeguards, Controls, & Mitigations

Evaluations

£65,000 - £145,000

Drive projects to understand advanced AI systems' vulnerability to misuse. Candidates should bring experience in ML research, ML engineering, or security (e.g. red teaming in other domains).

Research Scientist – Cyber Misuse Team

Evaluations

£65,000 - £135,000

Lead research projects to improve our ability to assess the cyber offensive capabilities of advanced AI systems. Candidates should have relevant experience in machine learning and cybersecurity.

Manager - Autonomous Systems Team

Evaluations

£85,000 - £135,000

As a team manager, you'll head up a multidisciplinary team of scientists, engineers and domain experts working on the capabilities we are evaluating. These include autonomous replication, AI R&D, manipulation and deception.

Research Engineer - Autonomous Systems Team

Evaluations

£65,000 - £135,000

Build large-scale experiments to empirically evaluate risks such as uncontrolled self-improvement, autonomous replication, manipulation and deception. Collaborate with others to push forward the state of the science on model evaluations.

Research Scientist – Autonomous Systems Team

Evaluations

£65,000 - £135,000

Research risks such as uncontrolled self-improvement, autonomous replication, manipulation and deception. Improve the science of model evaluations with things like scaling laws for dangerous capabilities.

Risk Modelling Researcher – Autonomous Systems Team

Evaluations

£65,000 - £135,000

Advance our understanding of AI risk scenarios involving AI research & development, autonomous replication, deception, and resource acquisition. Candidates should have a deep understanding of machine learning.

Interpretability Researcher – Autonomous Systems Team

£65,000 - £135,000

As an interpretability research scientist or engineer, you'll lead early work to push forward the science on detecting scheming and white-box evaluations.

Research Scientist - Safety Cases Team

Safety Cases

£65,000 - £135,000

Drive research to develop our understanding of how safety cases could be developed for advanced AI. You'll work closely with Geoffrey Irving to build out safety cases as a new pillar of AISI's work.

Evaluations Technical Program Manager and Strategy Lead

£65,000 - £135,000

You will be a part of the Testing Team, which is responsible for our overall testing strategy, and the end-to-end preparation and delivery of individual testing exercises. You will collaborate closely with researchers and engineers from our evaluations workstreams, as well as policy and delivery teams. Your role will be broad and cross-cutting, involving project management, strategy, and scientific and policy communication.

Crime and Social Destabilisation Workstream Lead – Societal Impacts Team

Societal Impacts

£65,000 - £135,000

As workstream lead of a new team, you will build a team to evaluate and mitigate some of the most pressing societal-level risks that frontier AI systems may exacerbate, including radicalisation, misinformation, fraud, and social engineering.

Psychological and Social Risks Workstream Lead – Societal Impacts Team

Societal Impacts

£65,000 - £135,000

As workstream lead for this new team, you will build and lead a multidisciplinary team to evaluate and mitigate the behavioural and psychological risks that emerge from AI systems. Your team's work will address how human interaction with advanced AI can impact human users, with a focus on identifying and preventing negative outcomes.

Systemic Safety and Responsible Innovation Workstream Lead – Societal Impacts Team

Societal Impacts

£65,000 - £135,000

AISI is expanding our Systemic Safety team, which is focused on identifying and catalysing interventions that could advance the field of AI safety and strengthen the systems and infrastructure in which AI systems are deployed. As the workstream lead for this team, you will build and lead a multidisciplinary team focused on advancing systemic safety as an agenda and creating the global environment for responsible innovation.

Technical Advisor - Protocols Team

£65,000 - £135,000

We are starting a new team whose objective is to develop technically informed best practice for companies to develop frontier AI safely. This role may involve leading the technical advice behind best-practice guidance for companies operationalising their safety commitments in the form of AI safety frameworks, working closely with the Safety Cases team.

Research Unit Deputy Director (UK residents only)

Strategy & Operations

£76,000 - £117,800

We are advertising for two Deputy Director roles to lead the research unit. We envisage one role focused on national security relevant evaluations, and the other focused on our grant programmes, research into the societal impacts of AI and our academic collaborations. These Deputy Directors will first and foremost be leaders of the Research Unit, focused on project delivery, managing the team and delivering timely research.

Talent and Operations Deputy Director (UK residents only)

£76,000 - £117,800

This is a fast-paced role responsible for enabling AISI's continued growth and ensuring structures are in place as AISI begins planning for being placed onto a statutory footing. This Deputy Director will be at the heart of scaling this start-up within government, exercising vision and leadership in growing the organisation's core operations functions from zero to one. They will also lead an ambitious set of operational special projects, such as opening a new office in San Francisco.

Head of Information Security

Strategy & Operations

£110,000 - £125,000

As the Head of Information Security at the AI Safety Institute (AISI), you will lead on building a cyber-resilient AISI. This will include efforts to harden our systems and protect our people, information and technologies. You think big picture about organisational risk based on mission objectives and a calibrated understanding of existing and potential attacks. You want to combine meaningful security with creative solutions rather than being limited to the compliance playbook.

Recruitment Co-ordinator

£35,720 - £38,565

The primary goal of this function is to manage all active recruitment for those looking to join the Institute and to ensure a high standard of care for our hiring managers. Your aim will be to build the world's leading AI safety team, acting as a key pillar in the world's response to rapidly advancing technology. With direct lines into the highest seats of government, we can influence significant change and protection for the citizens this technology will affect.

Research Scientist - Science of Evaluations

Evaluations

£85,000 - £145,000

AISI’s Science of Evaluations team will conduct applied and foundational research focused on two areas at the core of our mission: (i) measuring existing frontier AI system capabilities and (ii) predicting the capabilities of a system before running an evaluation.

We are excited by the amount our team has been able to accomplish — and we are just getting started.