
Our approach to tackling AI-generated child sexual abuse material

How we’re partnering with government and experts to prevent the creation and spread of AI‑generated CSAM

Today, AISI and child safety organisation Thorn are publishing a new safety protocol to help prevent the creation and spread of AI‑generated child sexual abuse material (CSAM). The protocol sets out practical steps that AI developers, model hosting services and others in the AI ecosystem can take to reduce the risk that their systems are misused to generate CSAM.

This comes alongside new legislation, which AISI has helped to shape, that will, under strict conditions, allow authorised organisations to test whether AI models can be used to generate CSAM, so that they can strengthen safeguards and reduce the risk of misuse. Together, the protocol and the legislative change form an important part of the UK’s wider approach to tackling the growing threat of AI‑generated CSAM.

The growing AI CSAM threat and the role of new legislation

Possessing and generating child sexual abuse material is already illegal under UK law, regardless of how it is created. That includes material made or altered using AI, image-editing software or any other editing tools, whether it depicts real children, uses their likenesses, or depicts entirely synthetic children. But rapid advances in AI image and video tools are changing the scale and nature of the problem.

AI is increasingly being used to generate realistic, synthetic child sexual abuse images, which are then disseminated. A recent report from the Internet Watch Foundation (IWF) found 3,512 AI‑generated CSAM images on a single dark‑web forum in one month. This material harms real children, whose likenesses are depicted in abuse scenarios, and adds to the burden on law enforcement, who, on top of protecting children and bringing offenders to justice, must process larger volumes of content and identify where images depict real children in real danger.

Because creating or possessing CSAM is currently illegal in all contexts, developers and child protection organisations cannot test AI models directly to see whether they can be misused to generate this material. As a result, most companies have had to rely on indirect and imperfect methods.

The measures introduced in November will change this. They will enable the Government to authorise a small number of organisations – such as AI developers and third sector bodies – to test, under strict conditions, whether AI models can be used to generate CSAM, helping to limit the production of AI-generated CSAM in the first place.

AISI will work alongside the Home Office to consult with experts from industry, child safety organisations, law enforcement and academia to design the conditions and operational safeguards that will apply to this newly permitted testing. These safeguards will help prevent sensitive content from leaking, set clear parameters for testing activity, and protect the wellbeing of researchers.

The new AI CSAM safety protocol

The safety protocol published today has been jointly produced by AISI and Thorn, informed by input from industry and child protection organisations. The publication of this protocol complements the UK’s new legislative powers by guiding how developers and platforms should act on what testing reveals, and how they should manage risks beyond formal testing environments.

The protocol is aimed at:

  • Developers of AI models and tools.
  • Model hosting platforms and other intermediaries that make models available to users.

It:

  • Sets out principles for safe‑by‑design AI development, including measures to reduce the risk that models can be prompted or adapted to generate CSAM.
  • Suggests routes for non‑developer organisations, such as hosting and deployment platforms, to identify and mitigate misuse, including monitoring, access controls and response processes.
  • Highlights unresolved technical challenges, such as reliably detecting CSAM in training data and in model outputs, to guide further research.

Our ambition is that this protocol helps establish a baseline for how organisations involved in the development, hosting and deployment of AI models should act to combat AI‑generated CSAM. It is designed to be practical and proportionate, reflecting what is technically feasible in a landscape where offenders adapt quickly and actively try to evade safeguards.

Beyond testing: strengthening defences across the full AI lifecycle

Empowering industry with stronger legal powers is a necessary step towards a safer AI ecosystem. But we know that offenders will continue to try to misuse models post-release – particularly open-weight, locally run models.

Even with stronger legal powers for model testing, pre‑deployment checks on base models alone cannot fully prevent misuse. Interventions are needed at multiple points in the lifecycle of AI systems, including how models are trained, distributed, and used in practice. AISI will focus on understanding and addressing downstream misuse, and on strengthening defences across the AI CSAM “supply chain”.

We are:

  • Conducting detailed threat modelling to map how AI‑generated CSAM is created, exchanged and used, and to identify the most effective technical and policy intervention points.
  • Translating these insights into guidance and protocols for developers, platforms and other actors in the ecosystem, including the protocol published today with Thorn.
  • Exploring what external research efforts we can support through our funding programmes, to address key gaps in our ability to mitigate AI-generated CSAM.

Taken together, our approach will complement the legislative changes and the protocol by helping to solve some of the open technical challenges they identify and by strengthening safeguards across the AI ecosystem.

Next steps

The combination of new legal powers for model testing, a practical safety protocol developed with Thorn, and a focused programme of research and threat modelling reflects the UK’s multi-faceted response to AI‑generated child sexual abuse material.

We will keep working with government, industry, child safety organisations and law enforcement to reduce the creation and spread of AI‑generated CSAM and to help protect children. We encourage organisations across the AI ecosystem to adopt the protocol and engage with AISI and Thorn as we refine and build on this work.