British Technology Firms and Child Protection Officials to Examine AI's Ability to Create Exploitation Images

Tech firms and child protection agencies will receive permission to evaluate whether artificial intelligence systems can generate child exploitation material under recently introduced UK legislation.

Significant Increase in AI-Generated Harmful Content

The announcement came as findings from a safety monitoring body showed that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, the government will permit approved AI developers and child safety organisations to examine AI models – the foundational systems behind conversational and visual AI tools – to ensure they have adequate safeguards to stop them from creating depictions of child sexual abuse.

"This is ultimately about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the risk in AI models promptly."

Addressing Regulatory Challenges

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it. This law is designed to prevent that problem by stopping the creation of such material at source.

Legal Structure

The government is adding the amendments as revisions to the crime and policing bill, which also establishes a ban on possessing, creating or sharing AI systems designed to create exploitative content.

Real-World Consequences

This week, the minister toured the London base of Childline and listened to a simulated call to counsellors involving a report of AI-based exploitation.
The interaction depicted an adolescent requesting help after being blackmailed using an explicit deepfake of himself, constructed using AI.

"When I learn about children experiencing blackmail online, it causes extreme frustration in me and justified concern amongst parents," he said.

Alarming Statistics

A leading online safety organisation reported that cases of AI-generated exploitation material – such as webpages that may include multiple files – had more than doubled so far this year. Instances of the most severe material – the most serious category of abuse – increased from 2,621 visual files to 3,086.

Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are released," commented the head of the internet monitoring organisation.

"AI tools have made it so victims can be victimised all over again with just a few clicks, giving criminals the capability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further exploits survivors' trauma, and makes young people, particularly girls, more vulnerable both online and offline."

Counselling Session Data

The children's helpline also published details of support sessions where AI was mentioned. AI-related risks discussed in the conversations include:

Using AI to evaluate weight, body and appearance
Chatbots discouraging young people from consulting trusted adults about abuse
Facing harassment online with AI-generated content
Digital extortion using AI-faked images

Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.