UK Technology Firms and Child Protection Agencies to Test AI's Ability to Generate Abuse Images

Technology companies and child protection agencies will be granted permission to assess whether AI tools can generate child exploitation material under new British legislation.

Significant Rise in AI-Generated Harmful Material

The declaration coincided with revelations from a protection watchdog showing that cases of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow designated AI developers and child protection groups to inspect AI systems – the foundational models underpinning conversational AI and image-generation tools – and verify that they have sufficient protective measures to prevent them from creating images of child exploitation.

"Fundamentally, this is about stopping abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now detect the danger in AI models promptly."

Addressing Legal Challenges

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI creators and others cannot generate such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is designed to prevent that issue by helping to stop the production of such images at their origin.

Legislative Framework

The amendments are being introduced by the government as revisions to the criminal justice legislation, which is also establishing a prohibition on possessing, creating or distributing AI systems developed to generate exploitative content.

Practical Impact

Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to advisers featuring an account of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I learn about children experiencing blackmail online, it provokes intense anger in me, and rightful anger amongst parents," he stated.

Concerning Data

A leading online safety organization reported that instances of AI-generated abuse material – such as online pages that may include multiple images – had more than doubled so far this year.

Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to ensure AI products are safe before they are released," commented the chief executive of the online safety organization.

"AI tools have made it possible for victims to be victimised again and again with just a few clicks, giving offenders the capability to produce potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Content which further exploits survivors' trauma, and makes young people, particularly girls, less safe online and offline."

Counseling Interaction Information

The children's helpline also released details of support sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Using AI to rate body size, physique and appearance
  • Chatbots dissuading young people from consulting safe guardians about abuse
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and related terms were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Amy Smith

A seasoned IT consultant with over a decade of experience in cybersecurity and cloud computing, passionate about sharing knowledge.