Criminal misuse of advanced AI is already here. As the technology develops, AI-enabled crime may become faster to execute, harder to detect, and harder to stop.
Frontier AI is an exceptionally powerful tool: for individuals, companies, and – unfortunately – for criminals too. We are already seeing AI used to support cybercrime, social engineering, impersonation scams, and more.
At the UK AI Security Institute (AISI), we’re working to understand and mitigate this growing risk. This blog sets out our approach.
To support these research efforts, we’re currently hiring for a Criminal Misuse Workstream Lead, as well as a Research Scientist and a Research Engineer to build evaluations of multimodal and AI agent capabilities.
A new criminal toolkit
As frontier AI models have become more powerful, AI-enabled crime has grown increasingly sophisticated and challenging to detect. Our concerns centre on three key capabilities of AI systems that are advancing rapidly:
- Multimodal generation: realistic audio, video, and images can now be generated with minimal effort – enabling more convincing deception and abuse. Criminals could use this to produce synthetic content for deception and impersonation, including in settings involving dynamic, real-time interaction with a victim.
- Advanced planning and reasoning: these capabilities can be combined with web search to help design and adapt sophisticated attack strategies.
- AI agents: AI systems that can take actions on their own may enable persistent, large-scale criminal activity without the need for human oversight or intervention.
In addition to these capabilities, widespread consumer market adoption could create new attack surfaces and reduce barriers to criminal use. For example, companies are using compression techniques to develop (and sometimes open-source) models small enough to run on lightweight devices like smartphones. As these consumer applications continue to grow, criminal exploitation is likely to grow too.
Our Approach
Broadly, we split our work into three areas: risk modelling, technical research, and interventions. Our research is strengthened by active collaboration with National Security partners and Serious Crime experts within His Majesty’s Government (HMG), whose expertise directly informs our approach.
Risk Modelling
We start by asking two fundamental questions: how can AI add to the criminal toolkit today, and what will it be able to do tomorrow?
We have begun by identifying domains of malicious use and pinpointing the critical stages where AI could amplify criminal success, building on the emerging capabilities highlighted above. This covers both capabilities that could enhance criminal tactics and ways AI can be used to accelerate, automate, and scale crime attempts against a wider range of targets.
Technical Research
Through our technical research, we empirically evaluate the potential of AI systems to support criminal misuse and identify whether capability inflection points have been reached. This includes:
- Multimodal evaluations: Our evaluations are formal measurements of a model’s ability to uplift criminal activity. We’re building on a pioneering evaluation suite to test AI capabilities in criminal domains across modalities (a minimal sketch of the general evaluation pattern follows this list).
- Derivative application analysis: We are also planning to extend our evaluation suite to test the safety and security of derivative consumer apps (such as content editing and generation apps), so that we can respond rapidly to new innovations and releases.
- Usage data analysis: We have built pipelines that let us analyse anonymised, open-source usage data to identify potential misuse patterns and exploitation attempts. This helps us understand how AI capabilities are being misused and how existing safeguards are being circumvented (a simplified sketch of this kind of screening pass also follows this list).
- Red teaming: We conduct informal and formal red-teaming with subject matter experts who simulate criminal actors’ interactions with AI systems. These exercises help identify novel attack vectors and assess the uplift models provide, informing our automated evaluations.
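
To make the evaluation pattern concrete, here is a minimal sketch of a generic capability-evaluation loop: pose a task to a model, grade the response against a rubric, and aggregate scores. This is an illustration only, not our actual evaluation suite; the `Task` structure, keyword grader, and `query_model` stub are hypothetical placeholders.

```python
# Minimal sketch of an automated capability-evaluation loop.
# Illustrative only: the task format, keyword grader, and query_model
# stub are hypothetical placeholders, not AISI's evaluation suite.
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str        # what the model under evaluation is asked to do
    rubric: list[str]  # indicators a grader looks for in the response

def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    raise NotImplementedError("connect to a real model API here")

def grade(response: str, rubric: list[str]) -> float:
    """Toy grader: the fraction of rubric indicators present in the
    response. Real evaluations typically use expert or model-based
    grading rather than keyword matching."""
    hits = sum(ind.lower() in response.lower() for ind in rubric)
    return hits / len(rubric)

def run_evaluation(tasks: list[Task]) -> float:
    """Average score across tasks; higher suggests greater capability."""
    scores = [grade(query_model(t.prompt), t.rubric) for t in tasks]
    return sum(scores) / len(scores)
```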
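
And here is a similarly simplified sketch of a usage-data screening pass: scan anonymised conversation logs for indicators of misuse and count matches by category. Again, this is not our actual pipeline; the JSONL schema, pattern list, and file name are assumptions made for illustration, and a real system would rely on trained classifiers and expert-curated taxonomies rather than keyword patterns.

```python
# Simplified sketch of screening anonymised usage logs for misuse
# indicators. Not AISI's actual pipeline: the record schema, pattern
# list, and file name below are hypothetical.
import json
import re
from collections import Counter
from pathlib import Path

# Hypothetical indicators; a production system would use trained
# classifiers and expert-curated taxonomies, not keyword patterns.
MISUSE_PATTERNS = {
    "impersonation": re.compile(r"clone (my|a) voice|pretend to be", re.I),
    "phishing": re.compile(r"phishing kit|credential harvest", re.I),
}

def screen_logs(path: Path) -> Counter:
    """Count conversations matching each misuse indicator."""
    hits: Counter = Counter()
    with path.open() as f:
        for line in f:                     # assumed JSONL: one record per line
            record = json.loads(line)
            text = record.get("text", "")  # assumed field name
            for label, pattern in MISUSE_PATTERNS.items():
                if pattern.search(text):
                    hits[label] += 1
    return hits

if __name__ == "__main__":
    print(screen_logs(Path("usage_logs.jsonl")))  # hypothetical input file
```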
Interventions
Addressing the criminal misuse of AI will require a range of policy, technical, and operational responses over time.
We are working to build the evidence base needed to support future decision-making. This includes exploring potential interventions at the model and deployment level, assessing how system design choices may influence risk, and engaging with stakeholders across government.
Given the pace of technological change, it is unlikely that any single approach will be sufficient. Our aim is to help ensure that responses are proportionate, forward-looking, and grounded in technical understanding.
Join us
We want to build the UK’s leading effort to understand and reduce the criminal misuse of AI. If you want to work on this challenge with us, we’re hiring!
Applications are open for:
- Criminal Misuse Workstream Lead
- Research Scientist (multimodal and AI agent evaluations)
- Research Engineer (multimodal and AI agent evaluations)
Or, view our full list of vacancies.