Technology companies and child safety organizations will receive permission to evaluate whether artificial intelligence tools can produce child exploitation images under new British laws.
The announcement coincided with findings from a protection watchdog showing that cases of AI-generated CSAM have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow approved AI developers and child safety organizations to examine AI models – the foundational systems for chatbots and image generators – and ensure they have adequate protective measures to stop them from producing depictions of child sexual abuse.
The measures are "fundamentally about stopping exploitation before it happens," stated Kanishka Narayan, who added: "Specialists, under rigorous conditions, can now detect the danger in AI systems promptly."
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.
This legislation is designed to avert that problem by enabling authorities to halt the creation of those materials at source.
The changes are being added by the government as modifications to the criminal justice legislation, which is also implementing a ban on owning, creating or distributing AI systems designed to create exploitative content.
This week, the official toured the London headquarters of Childline and listened to a mock-up call to advisors featuring an account of AI-based abuse. The interaction depicted an adolescent requesting help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children experiencing blackmail online, it is a source of intense anger in me and justified anger amongst families," he stated.
A prominent internet monitoring organization reported that cases of AI-generated abuse content – such as webpages that may contain numerous files – had more than doubled so far this year.
Cases of category A material – the most serious form of abuse – rose from 2,621 visual files to 3,086.
The legislative amendment could "constitute a vital step to ensure AI tools are safe before they are released," stated the head of the online safety organization.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving offenders the ability to make possibly limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which additionally commodifies victims' suffering, and renders children, particularly female children, more vulnerable on and off line."
Childline also published details of counselling sessions in which AI was mentioned, including conversations about AI-related risks.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.