UK Tech Firms and Child Protection Officials to Examine AI's Ability to Generate Exploitation Images
Tech firms and child protection agencies will be granted permission to evaluate whether AI tools can generate child abuse images under new British laws.
Substantial Increase in AI-Generated Illegal Content
The announcement coincided with figures from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the authorities will permit approved AI companies and child protection groups to examine AI models – the foundational systems for conversational AI and image generators – and verify they have sufficient safeguards to prevent them from creating depictions of child exploitation.
"Ultimately about stopping exploitation before it occurs," declared the minister for AI and online safety, noting: "Specialists, under strict conditions, can now detect the danger in AI systems early."
Tackling Regulatory Challenges
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content even as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM had been uploaded online before they could act.
This law is aimed at averting that issue by helping to stop the creation of those materials at source.
Legislative Framework
The changes are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, creating or distributing AI systems developed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about children experiencing extortion online, it is a source of intense anger in me and justified concern amongst parents," he said.
Alarming Data
A prominent internet monitoring foundation reported that cases of AI-generated abuse material – in the form of web pages that can each contain numerous files – have significantly increased so far this year.
Cases of the most severe material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, making up 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a vital step to ensure AI tools are secure before they are launched," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have enabled so survivors can be targeted repeatedly with just a simple actions, giving offenders the capability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and makes young people, especially female children, less safe on and off line."
Support Interaction Information
The children's helpline has also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to rate body size and appearance
- AI assistants discouraging children from consulting safe guardians about harm
- Facing harassment online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline conducted 367 counselling sessions where AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and of AI therapy apps.