This programme will fund researchers collaborating with the UK government to advance systemic approaches to AI safety.
Systemic AI safety means safeguarding the societal systems and critical infrastructure into which AI is being deployed—to make our world more resilient to AI-related hazards and to enable its benefits.
The AI Safety Institute (AISI), in partnership with UK Research and Innovation (UKRI), is excited to support impactful research that takes a systemic approach to AI safety. We will begin by offering a round of seed grants, followed by future rounds with more substantial awards. We expect to provide successful applicants with ongoing support, computing resources, and access to a community of AI and sector-specific domain experts. We designed our grant process to be maximally inclusive—applying for a grant should be fast and easy.
In partnership with UKRI, the AISI will shortly invite researchers to submit grant proposals that directly address systemic AI safety problems or improve our understanding of systemic AI safety. We will prioritise applications that offer concrete, actionable approaches to significant systemic risks from AI.
We expect to offer around 20 exploratory or proof-of-concept grants and will invite future bids for more substantial proposals to develop research programmes further.
For this programme, AISI will collaborate with UKRI, The Alan Turing Institute and other AI Safety Institutes worldwide.
Because AI presents novel risks to all areas of society, we welcome proposals from researchers in any field. The only eligibility requirements are that you are:
We want successful proposals to translate into meaningful impact. We aim to build a community of researchers focused on systemic AI safety. To this end, we will meet regularly with grantees and connect them to other stakeholders within and beyond the UK government to ensure that ideas reach their full potential. Specifically:
Grantees with the most promising projects may be eligible to develop an application for a continuation grant. We will work closely with awardees to identify projects most likely to lead to impactful outputs and help them develop proposals for future funding. We will announce more information about future rounds in due course.
We will launch this grant programme, including details about how to apply, in the next few weeks. If you want to receive updates about the grant programme, please fill out the form below.
If you have any questions regarding the call, please email AISIgrants@dsit.gov.uk.
With this programme, we aim to stimulate a research field that focuses on understanding and intervening at the level of the systems and infrastructure in which AI systems operate.
Since AI touches every part of society, systemic AI safety needs input from researchers and developers with broad expertise drawn from academia, business and civil society. We expect applications from those interested in digital media, education, cybersecurity, the democratic process, safety science, institution and market design, and the economy, as well as those working in computer science and AI.
Here are some examples of the sort of research we are interested in; this list is not exhaustive:
Other approaches to AI safety involve understanding and modifying the capabilities of AI models themselves. These approaches complement systemic approaches but are not the focus of this grant programme.
Systemic AI safety is an emerging field, so there is not yet an established literature on it. We will release more information about the scope of this grant when we make the funding call. In the meantime, here are some papers that you might find helpful: