UK Tech Firms and Child Protection Agencies to Test AI's Ability to Create Abuse Images

Technology companies and child safety agencies will receive authority to assess whether artificial intelligence systems can generate child abuse material under recently introduced British laws.

Substantial Rise in AI-Generated Harmful Material

The announcement came as a protection monitoring body published findings showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the authorities will allow approved AI developers and child safety organizations to inspect AI models – the underlying systems behind chatbots and image-generation tools – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"Ultimately about stopping abuse before it happens," stated Kanishka Narayan, noting: "Experts, under strict conditions, can now identify the risk in AI systems early."

Tackling Legal Challenges

The changes address a legal obstacle: because producing and possessing CSAM is illegal, AI developers and safety testers could not lawfully generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM had been published online before they could act.

The new law aims to close that gap by helping to stop the production of such images at source.

Legislative Mechanism

The authorities are introducing the changes as amendments to criminal justice legislation, which also bans possessing, creating or distributing AI systems designed to generate exploitative content.

Real-World Impact

Recently, the official toured the London base of a children's helpline and listened to a simulated call to advisers involving a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.

"When I hear about young people experiencing blackmail online, it is a cause of intense anger in me and rightful anger amongst families," he said.

Alarming Statistics

A leading online safety organization reported that instances of AI-generated abuse content – each instance can be a webpage containing numerous files – had more than doubled so far this year.

Instances of category A content – the most serious form of exploitation – rose from 2,621 image and video files to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to ensure AI tools are safe before they are released," commented the head of the online safety foundation.

"Artificial intelligence systems have enabled so survivors can be targeted all over again with just a few clicks, giving offenders the capability to create potentially endless amounts of advanced, lifelike child sexual abuse material," she continued. "Content which further commodifies survivors' trauma, and renders children, particularly girls, more vulnerable both online and offline."

Counselling Session Data

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:

  • Using AI to assess weight, body and appearance
  • Chatbots discouraging children from talking to safe adults about abuse
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Heather Reid

Award-winning journalist with a focus on Central European affairs and investigative reporting.