AI accelerates the spread of online abuse content as watchdogs struggle to respond

Analysts at the Internet Watch Foundation spent 2025 reviewing a rising tide of child sexual abuse material that did not depict real events. The images looked authentic. The videos moved like real footage. But they were generated by artificial intelligence, and the volume was growing faster than the organization’s team could process it.

The IWF’s 2026 annual assessment, published in March, confirmed what child safety researchers had feared: generative AI tools crossed a critical threshold in 2025. The material is no longer crude or easy to spot. It has become, in the foundation’s own words, significantly more realistic, and it has moved from still images to synthetic video that mimics real exploitation footage.

That shift from stills to video marks a turning point. Video content typically triggers more severe legal classifications, demands far more investigative time to assess, and inflicts deeper harm on children whose likenesses may have been scraped as training data. For the analysts, law enforcement officers, and regulators tasked with responding, the problem is not just bigger. It is fundamentally different.

A flood that outpaces existing tools

Traditional detection systems work by comparing uploaded files against hash databases of previously identified abuse images. When a known image reappears, the system flags it. But AI-generated content is new by definition. Each synthetic image or video has no prior fingerprint in any database, which means hash-matching tools are essentially blind to it.
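
The limitation is easy to see in a minimal sketch. The example below, written in Python purely for illustration, assumes a hypothetical set of previously catalogued fingerprints (`known_hashes`) and uses a plain cryptographic hash in place of the perceptual hashes, such as PhotoDNA, that production systems rely on. The point it demonstrates is the one described above: a file the database has never seen simply never matches.

```python
# Illustrative sketch of hash-based detection, not a production system.
# Real pipelines use perceptual hashes (e.g. PhotoDNA) that tolerate resizing
# and re-encoding; a plain cryptographic hash is used here to keep the idea simple.
import hashlib

# Hypothetical database of fingerprints for previously identified images.
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(file_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def is_known_image(file_bytes: bytes) -> bool:
    """Flag the file only if its fingerprint matches a previously catalogued one."""
    return fingerprint(file_bytes) in known_hashes

# A newly generated synthetic file has no prior fingerprint in any database,
# so the check returns False and the content passes through unflagged.
print(is_known_image(b"bytes of a brand-new AI-generated file"))  # False
```

Perceptual hashing narrows this gap for re-uploads and lightly edited copies of known material, but it cannot help with content that has never been catalogued, which is precisely the case for each new AI-generated image or video.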

The IWF’s year-over-year data showed not only growth in volume but also rapid gains in sophistication. The foundation called on governments and technology companies to take concrete steps, calling the current pace of action inadequate relative to the scale of the threat.

Generative AI has also lowered the barrier to entry. Producing convincing synthetic abuse content no longer requires underground networks or specialist technical skills. Consumer-grade software can fabricate entirely fictional children or manipulate real photographs of minors to depict acts that never occurred. Both scenarios are increasingly visible in the IWF’s caseload, and both carry serious risks for child protection.

Investigators stretched thin

The operational toll on law enforcement is mounting. Susie Hargreaves, chief executive of the IWF, told The Guardian in March 2026 that the foundation’s analysts are now encountering AI-generated abuse material “on a daily basis” and warned that the technology is “outpacing the response.” Her comments underscored a concern shared across the child safety sector: that the people tasked with reviewing this content are being overwhelmed by its volume and realism.

That pressure extends beyond the IWF. Frontline analysts who review flagged material to determine whether a real child is at risk face a compounding burden. Every synthetic image or video that enters a tip line must still be individually assessed, because dismissing it without review risks missing a case involving an actual victim. The result, as child safety researchers have documented, is a triage bottleneck that diverts time and attention from cases where identified children need immediate protection.

A troubling secondary effect compounds the problem: synthetic content can fuel demand for real abuse material and complicate prosecution, because offenders may claim no actual child was harmed. While these dynamics are well recognized in the child safety field, major policing agencies such as Interpol and the FBI have not yet published operational data quantifying the scale of the strain on their teams.

Australia’s eSafety Commissioner has been among the most vocal regulators on the issue, describing how generative AI enables new forms of image-based abuse and extortion targeting children. In a detailed public analysis, the Commissioner flagged a problem that sits at the heart of the crisis: the growing difficulty of separating synthetic material from authentic recordings of abuse.

That distinction carries real consequences. A synthetic image wrongly classified as real abuse could misdirect an investigation and consume limited forensic resources. Real abuse dismissed as AI-generated could leave a child unprotected and an offender free. Getting it wrong in either direction has costs that fall on the most vulnerable people in the system.

John Carr, a longtime child safety advocate and secretary of the UK Children’s Charities’ Coalition on Internet Safety, has described the situation in blunt terms. Speaking to media outlets in early 2026, Carr said the proliferation of AI-generated abuse imagery is “creating a haystack so large that finding the real needles becomes almost impossible.” His concern reflects a fear widely held among frontline organizations: that the sheer volume of synthetic material will effectively provide cover for offenders producing and distributing recordings of real abuse.

Laws written for a different era

Legal frameworks are struggling to keep up. Several jurisdictions already have statutes that cover computer-generated abuse imagery. In the United States, federal law under 18 U.S.C. § 1466A prohibits visual depictions of minors engaged in sexually explicit conduct, including virtual or digitally created images. The United Kingdom’s Coroners and Justice Act 2009 similarly covers prohibited images of children regardless of whether they depict a real person.

But the rapid improvement in AI-generated realism is testing these laws in ways legislators did not anticipate. Prosecutors face questions about evidentiary standards when the line between synthetic and authentic material blurs. Courts must determine how existing sentencing guidelines apply to content that is photorealistic but wholly fabricated. Several countries are now working to update or clarify their statutes, but the legislative process moves far slower than the technology.

The UK’s Online Safety Act, which began enforcement rollout in 2025, places new obligations on platforms to proactively address child sexual exploitation material, including AI-generated content. The EU’s AI Act, which classifies certain AI applications as high-risk, could also provide regulatory leverage. But neither framework has been fully tested against the specific challenge of synthetic abuse content at scale, and enforcement mechanisms remain in early stages.

The silence from AI companies

No major AI developer has issued a detailed public response to the IWF’s 2025 findings. Companies building the generative models most capable of producing realistic synthetic content have not disclosed what specific safeguards they added or strengthened during the period the IWF report covers.

This is not to say the industry has done nothing. Several leading AI firms participate in coalitions such as the Tech Coalition and have partnered with organizations like the National Center for Missing & Exploited Children. Some have published safety policies describing content filters and usage restrictions. But the gap between stated policy and measurable outcomes remains wide. There are no published figures from AI companies quantifying how often their models are used to generate abuse content, or how effective their safety filters are at preventing it.

That lack of transparency leaves a significant accountability gap. It is unclear whether the industry views the problem as solvable through technical guardrails or whether it regards enforcement as primarily a government responsibility. Without independent audits of model outputs and takedown pipelines, claims about the effectiveness of internal safety measures remain impossible to verify.

Partial data and a widening gap between threat and response

The IWF’s annual data, drawn from its own detection and reporting operations, provides the most direct measurement available of AI-generated abuse material circulating on the open and dark web. Because the foundation operates as the UK’s designated hotline and works with international partners, its figures reflect a substantial share of identified material. But they should be understood as a floor, not a ceiling. Content on encrypted messaging services, private cloud storage, and closed forums is by definition absent from these statistics.

It also remains unclear how much synthetic material is being created with general-purpose text-to-image or text-to-video tools versus smaller, custom models fine-tuned specifically for abuse. Without transparency from developers or forensic access to the tools used, researchers can only infer patterns from the visual characteristics of the content and from offender discussions observed in online spaces.

The core finding is supported by converging evidence from the IWF’s operational data, regulatory analysis from multiple jurisdictions, and the accounts of child safety professionals working on the front lines. AI-generated child sexual abuse material is rising rapidly, becoming more realistic, and straining the systems designed to stop it. But the true scale of the problem, and the effectiveness of current responses, remains only partly visible. As of spring 2026, the evidence amounts to a warning signal, not a complete picture, and the distance between the threat and the response continues to grow.

*This article was researched with the help of AI, with human editors creating the final content.