If our work sounds interesting, we encourage you to apply. All applications are assessed on a rolling basis, so it is best to apply as soon as possible.
"AISI has built a wonderful group of technical, policy, and civil service experts focused on making AI and AGI go well, and I'm finding it extremely motivating to do technical work in this interdisciplinary context."
Geoffrey Irving
Research Director
"AISI is situated at the centre of the action where true impact can be made. I'm excited about the opportunities unfolding in front of us at such a rapid pace."
Professor Yarin Gal
Research Director
Our typical interview process includes a CV and short written statement, skills assessments such as a technical interview and a take-home coding test, and 2-4 interviews, including a conversation with a senior member of our team. We tailor this process as needed for each role and candidate.
This application is for candidates who do not have a preference for a particular team; where possible, we prefer that you apply to the team-specific RS/RE roles below.

Design and build evaluations to assess the capabilities and safeguards of cutting-edge models, and conduct research to better understand AI systems and mitigate their risks. Candidates should have relevant experience in ML, AI, AI security, or computer security.
Plan and drive the delivery of our advanced AI testing. Coordinate our evaluations workstreams, ensure reports and recommendations meet high scientific standards, and iterate to improve our strategy and evaluations over time. Candidates should have experience with complex project management, ML research, and working with technical teams.
Work with one of our evaluations workstreams or our cross-cutting platforms team to build interfaces and frontends, create fast and secure inference channels for external models, and drive ML ops projects to host and fine-tune our own models. Candidates should have experience with some of the following: frontend or tools development, backend/API design, dev/ML ops, privacy and security engineering, and/or engineering management.
Elicit capabilities from advanced AI systems and improve our agent scaffolds. Candidates should have experience working with LLMs, an understanding of prompting techniques such as chain of thought and the ReAct loop (sketched below), and experience building practical AI applications.
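For context on the kind of scaffolding this role involves, here is a minimal, hypothetical sketch of a ReAct-style agent loop in Python. The tool registry, prompt format, and `call_model` callback are illustrative assumptions for this sketch, not a description of our actual stack:

```python
from typing import Callable

# Toy tool registry; a real scaffold would expose search, code execution, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr)),  # illustration only; never eval untrusted input
}

def react_loop(question: str, call_model: Callable[[str], str], max_steps: int = 5) -> str:
    """Run a ReAct-style loop: the model alternates Thought/Action steps,
    and the scaffold executes each Action and feeds back an Observation."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_model(transcript)
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        if "Action:" in reply:
            # Expect a line of the form "Action: tool_name[input]"
            name, _, arg = reply.split("Action:", 1)[1].strip().partition("[")
            observation = TOOLS[name.strip()](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"
    return "No final answer within the step budget."

# Demo with a canned "model" that uses the calculator once, then answers.
canned = iter([
    "Thought: I should compute this.\nAction: calculator[2 + 2]",
    "Thought: The observation gives the result.\nFinal Answer: 4",
])
print(react_loop("What is 2 + 2?", lambda _prompt: next(canned)))  # -> 4
```

In practice, much of the engineering work is in what this sketch omits: handling malformed actions, tool errors, and context-window limits, and eliciting the strongest performance a model can achieve.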
Help design our strategy for understanding AI-enabled cyber risks. Build benchmark challenges to evaluate AI system capabilities, coordinate red team exercises, and explore interactions between narrow cyber tools and general AI. Candidates should have expertise in areas like penetration testing, CTF design, vulnerability research, or cyber tool development.
Build evaluations, design experiments, and create tools to measure the capabilities of AI systems against cyber security threat scenarios. Candidates should have relevant experience in machine learning, AI security, or computer security.
Drive research into how safety cases can be developed for advanced AI. You'll work closely with Geoffrey Irving to establish safety cases as a new pillar of AISI's work.
Design and execute evaluations to assess the susceptibility of advanced AI systems to attack, and undertake research projects related to safeguards. Candidates should have conducted technical research in machine learning, computer system security, or information security.
Advance the evaluation of biology- and chemistry-specific AI models. Design experiments, test models, identify datasets, and present findings to policymakers. Candidates should have a PhD in a relevant field and experience with modern deep learning.
Build evaluations and research risks such as uncontrolled self-improvement, autonomous replication, and manipulation and deception. Candidates should have a strong ML engineering or research background.