UK Tech Companies and Child Protection Agencies to Test AI's Capability to Generate Exploitation Images
Technology companies and child protection agencies will be granted permission to assess whether AI systems can produce child exploitation images under new UK legislation.
Substantial Increase in AI-Generated Harmful Material
The announcement coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow designated AI developers and child safety organizations to inspect AI systems – the underlying technology for chatbots and visual AI tools – and ensure they have sufficient safeguards to stop them from creating depictions of child sexual abuse.
"Ultimately about preventing exploitation before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the risk in AI systems early."
Tackling Regulatory Challenges
The changes were needed because it is against the law to create or possess CSAM, which meant that AI developers and other parties could not generate such images even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
This legislation is aimed at preventing that problem by helping to halt the production of those materials at source.
Legislative Structure
The government is introducing the changes as amendments to criminal justice legislation, which also implements a ban on possessing, producing or sharing AI systems designed to create exploitative content.
Practical Impact
Recently, the minister visited the London headquarters of Childline and heard a simulated call to counsellors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I learn about young people experiencing blackmail online, it is a cause of extreme frustration in me and rightful anger amongst families," he said.
Concerning Statistics
A prominent online safety organization stated that instances of AI-generated exploitation content – such as online pages that may include numerous images – had more than doubled so far this year.
Cases of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of infants to toddlers rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a vital step to ensure AI products are safe before they are released," stated the chief executive of the online safety organization.
"AI tools have enabled so victims can be targeted repeatedly with just a simple actions, giving offenders the capability to create possibly endless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and renders children, particularly female children, more vulnerable both online and offline."
Counseling Interaction Data
The children's helpline also released details of support interactions where AI has been mentioned. AI-related risks discussed in the sessions include:
- Employing AI to evaluate weight, body and appearance
- AI assistants discouraging children from talking to safe guardians about abuse
- Facing harassment online with AI-generated content
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent timeframe last year.
Fifty percent of the AI references in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.