
Spain is moving aggressively to stamp out AI-fuelled image abuse, pushing through a tough new legal regime that treats deepfakes and non-consensual photo sharing as serious offences rather than online mischief. The package tightens consent rules for images and voices, hardens penalties for unlabeled synthetic content, and positions the country as a test case for how far democracies are willing to go to protect people’s likeness in the age of generative AI.

The stakes are high for victims whose faces and voices can be cloned in minutes, but also for platforms, creators and political actors who now face strict duties to label and police manipulated media. I see Spain’s move as an attempt to draw a bright legal line around human dignity in a digital ecosystem that has treated personal images as endlessly reusable raw material.

Spain’s new consent rules put victims at the center

At the heart of the reform is a simple idea: no one should wake up to find their face or voice repurposed by an algorithm without clear permission. Spain’s Cabinet, meeting in Madrid, has approved draft legislation that directly targets AI-generated deepfakes and tightens consent rules on images, reflecting a broader push within the European Union to criminalise non-consensual intimate content. The bill makes it clear that using someone’s likeness in synthetic media without their approval is not a grey area but a potential crime, especially when the material is sexualised or defamatory.

Spain’s government is also moving to close a long-exploited loophole: the assumption that anything posted online is fair game for reuse. New consent rules are designed to stop the misuse of images and voices that were originally shared in private or semi-public spaces, with a particular focus on the viral spread of intimate photos and AI-cloned audio. Civil society groups such as CADE, the Civil Society Alliances for Digital Empowerment, have highlighted how victims, often women and minors, see their lives upended when private selfies or casual clips are scraped and remixed into explicit deepfakes, and the government’s plan responds directly to that pattern of abuse.

Heavy penalties and labeling rules aim to deter deepfake abuse

Consent is only one pillar of the crackdown. The other is a set of financial and compliance incentives designed to make platforms and content producers think twice before pushing unlabeled synthetic media. Earlier efforts by the Spanish authorities to regulate AI content already introduced the prospect of fines of up to €35 million or 7% of a company’s global annual turnover for improper labeling of AI content, figures that instantly put deepfake abuse on the radar of major platforms and advertisers. By tying penalties to global turnover, the Spanish bill signals that compliance is not optional for multinational tech firms.

The new draft law builds on that foundation by requiring AI-generated images and videos to be clearly identified as synthetic, especially when they depict real people or could influence public debate. Spain’s Cabinet has framed this as part of a broader effort to ensure that manipulated content is visibly flagged so that voters, consumers and courts are not misled by fabricated footage. Earlier measures, approved in March as part of a separate AI content bill, already classified the failure to label such material as an offence and aligned Spain with wider efforts in Europe to standardise AI regulations.

A European test case for criminalising deepfakes

Spain is not acting in isolation. The European Union is stepping up efforts to regulate deepfakes, with new rules requiring member states to criminalise non-consensual intimate images and to ensure that AI-generated content is traceable and auditable. Spain’s Cabinet decision to approve the draft legislation on January 13 fits squarely into that push, and national officials have been explicit that they want to be among the first to translate those European obligations into detailed domestic law. Reporting on the move notes that the bill will now go through parliamentary scrutiny, but the direction of travel is clear: deepfake abuse is being treated as a matter of legislation, not just platform policy.

Spain’s approach is also being watched across Europe, where governments are grappling with how to protect citizens without stifling innovation. Online debate in Spain and across the continent has highlighted strong public support for criminalising the creation and distribution of intimate deepfakes without permission, with many users arguing that the bill simply updates long-standing privacy norms for the AI era. Others worry about overreach, or the risk that powerful actors could use deepfake laws to silence satire or dissent, a tension that is already visible in debates in Spain and beyond.
