British Technology Firms and Child Protection Officials to Examine AI's Capability to Create Exploitation Content
Under new British legislation, tech firms and child protection organizations will be permitted to assess whether artificial intelligence tools can produce child abuse images.
Significant Rise in AI-Generated Illegal Content
The announcement came as a protection monitoring body revealed that cases of AI-generated CSAM have risen dramatically in the past year, from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow approved AI developers and child protection organizations to examine AI systems – the underlying technology for chatbots and visual AI tools – and verify they have adequate protective measures to stop them from producing images of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now identify the danger in AI models early."
Tackling Regulatory Obstacles
The changes are being made because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of a testing regime. Until now, officials could only act after AI-generated CSAM had been uploaded online.
This legislation is designed to avert that problem by helping to halt the creation of such material at its source.
Legal Framework
The amendments are being introduced by the government as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or sharing AI systems designed to generate exploitative content.
Practical Consequences
This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving a report of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about children experiencing extortion online, it is a cause of intense frustration in me and rightful concern amongst parents," he stated.
Alarming Data
A prominent online safety foundation reported that cases of AI-generated abuse content – each of which may refer to a webpage containing multiple images – have risen significantly so far this year.
Instances of the most severe category of material increased from 2,621 images or videos to 3,086.
- Depictions of female children accounted for 94% of prohibited AI imagery in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a vital step to ensure AI tools are secure before they are released," stated the head of the online safety foundation.
"Artificial intelligence systems have enabled so victims can be victimised all over again with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally exploits survivors' trauma, and makes young people, especially girls, less safe on and off line."
Support Interaction Data
Childline also released details of support sessions where AI has been referenced. AI-related risks discussed in the sessions include:
- Employing AI to evaluate weight, body and looks
- Chatbots discouraging children from talking to trusted guardians about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and related topics were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to psychological wellbeing, including the use of chatbots for support and AI therapy applications.