Tech firms and child safety agencies will be granted authority to assess whether AI systems can generate child exploitation material under new UK legislation.
The announcement coincided with figures from a safety monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the amendments, designated AI companies and child protection groups will be permitted to examine AI systems – the underlying technology behind conversational AI and visual AI tools – and ensure they have sufficient protective measures to stop them from producing depictions of child exploitation.
"Ultimately about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of a testing regime. Until now, authorities could act only after AI-generated CSAM had been published online.
This legislation is designed to avert that problem by helping to stop the creation of such material at source.
The changes are being introduced as amendments to criminal justice legislation, which also bans possessing, creating or distributing AI models developed to generate exploitative content.
Recently, the minister toured the London headquarters of Childline and listened in on a mock call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about children facing blackmail online, it is a cause of extreme frustration in me and rightful anger amongst parents," he stated.
A prominent online safety organization said that cases of AI-generated abuse material – where a single case can refer to a webpage containing multiple images – had risen significantly so far this year.
Instances of the most severe material – the gravest category of abuse – rose from 2,621 images or videos to 3,086.
The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are released," stated the head of the online safety foundation.
"AI tools have enabled so victims can be targeted repeatedly with just a simple actions, providing criminals the ability to make potentially endless quantities of sophisticated, photorealistic exploitative content," she continued. "Material which further commodifies survivors' suffering, and makes young people, especially girls, more vulnerable both online and offline."
Childline also published details of counselling sessions in which AI was mentioned, including the AI-related harms discussed in those conversations.
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and associated topics were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.