UK Tech Firms and Child Protection Officials to Examine AI's Capability to Create Exploitation Images

Tech firms and child safety agencies will receive permission to assess whether AI tools can generate child exploitation material under recently introduced British legislation.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will allow approved AI companies and child protection organizations to inspect AI models – the foundational systems behind conversational and visual AI tools – and verify that they have sufficient protective measures to stop them from producing depictions of child exploitation.

"This is ultimately about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect risks in AI systems promptly."

Tackling Legal Obstacles

The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot create such images as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This law is designed to prevent that issue by helping to halt the creation of those images at source.

Legal Framework

The changes are being introduced by the government as modifications to the criminal justice legislation, which is also implementing a ban on possessing, creating or distributing AI models designed to create child sexual abuse material.

Real-World Consequences

Recently, the official visited the London base of a children's helpline and heard a simulated conversation with counsellors involving a report of AI-based exploitation. The interaction depicted an adolescent seeking help after facing extortion using an explicit AI-generated image of himself.

"When I learn about children experiencing blackmail online, it causes intense frustration in me and rightful concern among parents," he stated.

Alarming Data

A leading online safety organization reported that instances of AI-generated abuse material – such as online pages that may contain numerous images – had significantly increased so far this year.

Cases of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to guarantee AI products are safe before they are launched," stated the head of the online safety foundation.

"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving offenders the capability to produce potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further exploits victims' suffering, and makes children, particularly girls, less safe both online and offline."

Support Session Data

The children's helpline also published data on support interactions where AI has been mentioned. AI-related risks raised in these conversations include:

  • Employing AI to evaluate weight, body and looks
  • AI assistants dissuading young people from talking to safe adults about abuse
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and related topics were discussed – significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 interactions were connected with mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.

Alan Alvarez

A tech enthusiast and lifestyle writer passionate about uncovering how innovation shapes our everyday world.