
Navigating the uncharted: Building societal resilience to frontier AI

We outline our approach to studying and addressing AI risks in real-world applications

Highly capable AI systems are one of the most significant technological advances of our time. However, integrating them safely and beneficially into society remains a major challenge. History shows that the greatest risks from new technologies rarely come from the technology itself; they emerge from how it is deployed and adopted. Yet we lack reliable data and evidence to understand how, why, and to what effect advanced AI systems are being used across different sectors.

Harms from powerful AI are already emerging. Today, individuals, businesses, and governments can easily access general-purpose AI systems capable of synthetic content creation, complex reasoning, and autonomous operation. These novel capabilities can pose serious risks when deployed in real-world settings. Criminals are already using AI systems with multimodal capabilities to create more convincing forms of fraud and to generate non-consensual intimate images with minimal effort. Widely available chatbots have influenced young people toward self-harm and dangerous behaviours. Schools are grappling with easily accessible AI tools that are undermining learning and overwhelming educators.

Other risks are just coming into view. As AI agents become more autonomous and interconnected, they could create instabilities and failures, particularly in sectors such as financial services, where adoption rates are rising. As advanced AI systems take on high-stakes decisions in contexts like critical national infrastructure, they create new vulnerabilities - from human operators relying on incorrect outputs, to cyberattacks exploiting AI weaknesses that cause serious failures.

AISI's Societal Resilience Team has been set up in response to this challenge. It is tasked with monitoring the deployment of frontier AI systems in different settings and providing evidence-based risk assessments to the UK Government.

We are developing concrete policy proposals and mitigation strategies that can be used by a wide range of actors beyond just frontier AI companies, including governments, regulators, researchers, and communities of users. We are building bespoke methodologies for risk analysis and measurement which borrow from the social sciences and technical AI research. These include threat modelling and capability elicitation, data scraping, and designing AI usage experiments to understand user experiences and behaviours in real-world settings.  

We also provide resources and support to leverage the collective effort of the wider research community, industry, and civil society. We have outlined a set of priority research questions that we are working on and funding via the Challenge Fund. Our team prioritises risks that are enabled by current and near-term frontier AI systems, that could cause serious harm, and that we believe we can feasibly track, monitor, and mitigate. Using these criteria, we have started with four priority research areas:

  • Critical overreliance: If we become overdependent on AI for critical decisions - managing energy grids, supply chain systems, or telecommunications networks - we could face catastrophic vulnerabilities such as accidents or widespread system failures.
  • Emotional dependence: AI systems designed with human-like features, including AI companions, can make users susceptible to manipulation and unhealthy emotional dependencies, particularly affecting young people and vulnerable populations.  
  • Fraud and crime: Advanced AI capabilities are already being used by criminals to create more convincing scams and financial fraud at unprecedented scale and speed.  
  • Financial system instability: As AI agents make more interconnected decisions in financial markets, they risk increasing volatility and instability, which could lead to crashes or liquidity crises.

Beyond these priority areas, we are also interested in monitoring a broader range of risks. These include AI’s impact on our information ecosystem and labour market, as well as how it is being used in critical sectors such as education, healthcare and justice.  

Through our team’s research and grant-making, we’re taking four concrete steps to address societal risks caused by frontier AI deployment:

  • Mapping real-world AI use: Identifying exactly where and how frontier AI is being deployed - which sectors, which tasks, which user groups. This includes tracking ‘unapproved’ uses and emerging applications that weren't anticipated by developers.  
  • Measuring vulnerability and impact: Studying which groups and systems are most at risk, what makes them vulnerable, and how individual harms could cascade into systemic crises.  
  • Building monitoring systems: Developing data collection methods, risk indicators, and analytical frameworks to track how these risks evolve as AI capabilities advance and adoption spreads.  
  • Testing and evaluating solutions: Stress-testing potential mitigations - from technical safeguards to policy interventions - to determine what works in practice.

Government has a unique role to play here, studying questions and developing mitigations that might not otherwise be pursued. We bring together AISI's world-leading expertise on frontier AI safety and security with key insights from across government. This enables us to design solutions that will be maximally impactful across the UK.

Our team is looking for researchers, technologists, and organisations working on these challenges. If your work addresses AI's societal risks, apply to our Challenge Fund.