Systemic AI Safety Fast Grants

This programme will fund researchers to collaborate with the UK government on advancing systemic approaches to AI safety.

Fast Grants Overview

Applications will open soon. Subscribe for updates below, and see our privacy notice for details.

Addressing AI's risks to people and society requires looking beyond the capabilities of AI models. AI risk management must also account for AI's impact on the systems and infrastructure in which models operate.

Systemic AI safety means safeguarding the societal systems and critical infrastructure into which AI is being deployed—to make our world more resilient to AI-related hazards and to enable its benefits.  

The AI Safety Institute (AISI), in partnership with UK Research and Innovation (UKRI), is excited to support impactful research that takes a systemic approach to AI safety. We will begin by offering a round of seed grants, followed by future rounds with more substantial awards. We expect to provide successful applicants with ongoing support, computing resources, and access to a community of AI and sector-specific domain experts. We designed our grant process to be maximally inclusive—applying for a grant should be fast and easy.  

What we are funding

In partnership with UKRI, AISI will shortly invite researchers to submit grant proposals that directly address systemic AI safety problems or improve our understanding of systemic AI safety. We will prioritise applications that offer concrete, actionable approaches to significant systemic risks from AI.

We expect to offer around 20 exploratory or proof-of-concept grants and will invite future bids for more substantial proposals to develop research programmes further.  

For this programme, AISI will collaborate with UKRI, The Alan Turing Institute, and other AI Safety Institutes worldwide.

Proposals

Eligibility  

Because AI presents novel risks to all areas of society, we welcome proposals from researchers in any field. The only eligibility requirements are that you are:  

  • Associated with a host organisation that can receive the awarded funds. This could be a university, a business, a civil society organisation, or part of government.  
  • Based in the UK (but teams may involve international collaborators).  

Strategic aims of the programme

We want successful proposals to translate into meaningful impact. We aim to build a community of researchers focused on systemic AI safety. To this end, we will meet regularly with grantees and connect them to other stakeholders within and beyond the UK government to ensure that ideas reach their full potential. Specifically:  

  • We will ask grantees to meet with us roughly every three months, at a half-day online or hybrid workshop, to discuss their research progress and network with other awardees. These meetings are intended to exchange ideas across projects and begin building a community around systemic AI safety.
  • We will work with each grantee team to find partners to implement or test their ideas where appropriate.  

Grantees with the most promising projects may be eligible to develop an application for a continuation grant. We will work closely with awardees to identify projects most likely to lead to impactful outputs and help them develop proposals for future funding. We will announce more information about future rounds in due course.  

How to apply  

We will launch this grant programme, including details about how to apply, in the next few weeks. To sign up for updates about the grant programme, please fill out the form below.

Contact  

If you have any questions regarding the call, please email AISIgrants@dsit.gov.uk.  

What is systemic AI safety?  

With this programme, we aim to stimulate a research field that focuses on understanding and intervening at the level of the systems and infrastructure in which AI systems operate.   

Since AI touches every part of society, systemic AI safety needs input from researchers and developers with broad expertise drawn from academia, business and civil society. We expect applications from those interested in digital media, education, cybersecurity, the democratic process, safety science, institution and market design, and the economy, as well as those working in computer science and AI.  

Here are some examples of the sort of research we are interested in; this list is not exhaustive:

  • A systems-informed approach to improving trust in authentic digital media and protecting against AI-generated misinformation
  • Targeted interventions that protect critical infrastructure, such as energy or healthcare systems, from AI-enabled cyberattacks
  • Projects measuring, modelling, or mitigating the potentially harmful secondary effects of AI systems that take autonomous actions on digital platforms

Other approaches to AI safety involve understanding and modifying the capabilities of AI models themselves. These approaches complement systemic approaches but are not the focus of this grant programme.  

Find out more about systemic AI safety

Systemic AI safety is an emerging field, so there is not yet an established literature on it. We will release more information about the scope of this grant when we launch the funding call. In the meantime, here are some papers you might find helpful: