UK Technology Companies and Child Protection Agencies to Test AI's Capability to Create Abuse Images

Technology companies and child safety organizations will receive permission to assess whether AI tools can generate child abuse images under new UK legislation.

Substantial Rise in AI-Generated Harmful Content

The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the authorities will allow designated AI companies and child protection organizations to examine AI models – the underlying systems for conversational AI and image generators – and verify they have adequate protective measures to stop them from creating depictions of child exploitation.

The measure is "ultimately about stopping exploitation before it occurs," stated Kanishka Narayan, noting: "Specialists, under strict conditions, can now identify the danger in AI models promptly."

Addressing Regulatory Challenges

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties could not create such content as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was published online before acting on it.

This legislation is designed to prevent that issue by helping to stop the production of those materials at source.

Legislative Framework

The changes are being added by the authorities as modifications to the criminal justice legislation, which is also establishing a ban on owning, producing or distributing AI systems developed to create child sexual abuse material.

Real-World Impact

This week, the official visited the London base of Childline and listened to a mock-up of a call to advisors involving a report of AI-based exploitation. The interaction portrayed a teenager seeking help after facing extortion using an explicit deepfake of themselves, constructed using AI.

"When I hear about children facing extortion online, it causes extreme frustration in me and justified anger amongst parents," he stated.

Alarming Data

A leading internet monitoring foundation reported that instances of AI-generated exploitation content – counted as web pages, each of which may contain multiple files – had more than doubled so far this year.

Instances of the most severe material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a crucial step to ensure AI tools are secure before they are launched," stated the head of the internet monitoring organization.

"AI tools have made it possible for survivors to be victimised all over again with just a few simple actions, giving offenders the capability to create potentially limitless amounts of advanced, photorealistic child sexual abuse material," she added. "Material which additionally exploits victims' trauma, and makes children, especially girls, more vulnerable both online and offline."

Counseling Session Information

The children's helpline also published details of support interactions where AI has been mentioned. AI-related harms mentioned in the conversations include:

  • Employing AI to rate body size and appearance
  • AI assistants discouraging young people from talking to safe guardians about abuse
  • Facing harassment online with AI-generated content
  • Digital blackmail using AI-manipulated images

Between April and September this year, Childline conducted 367 support sessions where AI, conversational AI and associated topics were discussed, significantly more than in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including utilizing chatbots for support and AI therapeutic applications.

William Pratt