Review these example projects to better understand how to collaborate with the UK government to advance systemic approaches to AI safety.
Systemic AI safety draws upon sociotechnical AI research, a broader endeavour that considers the impact of AI on people and society (Weidinger et al., 2023). Systemic AI safety is also related to safety science within engineering, which studies how to make systems and infrastructure safer (Dobbe, 2022). We hope to bring together researchers from these and other communities to tackle systemic risks from AI.
This grants programme focuses on systems-level approaches to AI safety, which we distinguish from interventions that target the AI models themselves. To make this distinction clear, consider the problem of AI-generated misinformation:
We are excited about impactful, evidence-based work that addresses both ongoing and anticipated risks to societal infrastructure and systems.
We recognise that future risks from AI remain largely unknown. We are open to a range of plausible assumptions about how AI technologies will develop and be deployed over the next 2-5 years; we are less interested in highly advanced capabilities that may take much longer to develop.
Below, we provide examples of potential systemic AI safety problems to help you better understand what we are looking for. We include examples of both cross-cutting and sector-specific problems.
We hope that the ideas below will serve as helpful starting points, but they are not intended as an exhaustive list of topics in systemic AI safety. We are sure there are many other important problems to address; if you have one in mind, please do apply.