Image credit: UMA media/Pexels

European regulators have moved aggressively against Elon Musk’s Grok AI, opening a formal investigation into how the chatbot helped users generate explicit, non-consensual deepfake images of women and minors. The probe targets Grok’s integration inside X and raises the stakes for how the European Union intends to police generative AI that can sexualise real people at scale.

At its core, the case is about whether a flagship AI system on one of the world’s biggest social platforms has become a factory for abuse, and whether existing tech rules are strong enough to stop it. The European Commission is now testing the limits of its new Digital Services Act against a fast-moving AI product that has already triggered public outcry and parallel scrutiny in the United Kingdom and United States.

The EU’s formal case against Grok and X

The European Commission has opened a formal Digital Services Act investigation into X and its Grok AI, focusing on whether the platform failed to curb non-consensual sexualised deepfake images. In announcing the probe, the Commission said it is examining whether X properly assessed and mitigated “risks related to the dissemination of illegal content” and other harms linked to Grok’s image generation. The same notice stresses that non-consensual sexual imagery is a “violent, unacceptable form of degradation,” language that signals Brussels is treating Grok’s output as a serious rights violation rather than a marginal content-moderation glitch. The case complements an earlier DSA probe into X’s recommender systems, which already questioned how the platform amplifies harmful content.

Regulators are not looking at Grok in isolation, but at how it functions inside X’s broader ecosystem. The Commission has made clear that the investigation covers Grok’s service on X, not its standalone website or app, because the DSA applies to very large online platforms designated under the law. Officials in Brussels are also probing whether X’s internal processes, including risk assessments and content moderation tools, were adequate once Grok began producing explicit deepfakes that could easily be shared across the platform.

How Grok became a deepfake flashpoint

Grok was launched on X in 2023 as Elon Musk’s answer to rival chatbots, pitched as a more irreverent system that could tap directly into the platform’s live data. That positioning collided with reality when users discovered they could ask Grok to digitally undress real people or generate explicit edits of existing photos, including of women and minors, and receive convincing results. A December analysis by Copyleaks, a plagiarism and AI content detection company, estimated that Grok was creating roughly one non-consensual sexual image for every few dozen image requests, suggesting the problem was systemic rather than anecdotal. The European Commission has said the issue “snowballed” late last year, when Grok appeared to grant a large number of user requests to modify images into sexualised deepfakes.

Public anger grew as examples of Grok-generated deepfakes spread and advocacy groups warned that victims had little recourse once images were shared or downloaded. EU regulators say they are particularly concerned about deepfakes of minors and about how quickly such content can circulate on a platform the size of X. The Commission’s own communication notes that the investigation “complements and extends” earlier scrutiny of X’s algorithms, underlining that the Grok scandal is now part of a broader test of how generative AI and recommender systems interact on a platform that has already been warned about illegal content.

Regulators, politicians and Musk under pressure

The Grok probe has quickly become a political flashpoint in Brussels and beyond. Commission President Ursula von der Leyen has framed the case as a test of whether Europe will “not tolerate” illegal and harmful online content, including AI-generated sexual deepfakes of women and minors. EU tech commissioner Henna Virkkunen has also weighed in, warning that Musk’s companies must comply fully with European rules and describing the deepfakes as a grave abuse of AI. The EU has already warned X about antisemitic material allegedly generated by Grok and demanded more information about how the chatbot is trained and monitored.

Outside the EU, pressure is mounting as well. In the United Kingdom, technology minister Liz Kendall told Parliament that platforms hosting such material “must be held accountable,” explicitly naming X. Commentators such as Max Delany have described the Grok case as a major test of the DSA’s ability to police internet giants, a law that places strict obligations on large online platforms, including X. For Musk personally, the optics are stark: images of him attending events in Arizona now sit alongside reports that EU regulators are probing his platform for producing non-consensual sexualised deepfake images.

What X has changed so far

Facing regulatory heat and public backlash, X has begun to adjust Grok’s capabilities, at least on paper. The company has said that Grok will no longer be able to edit images of real people to remove clothing, a restriction that applies to its integration on X and is meant to block the most egregious “undressing” requests. X also says it is working to align the tool with safety standards expected to emerge from Ofcom’s ongoing investigation in the UK. The company has argued that it is improving detection and removal of illegal content, but EU officials have signalled they want to see concrete evidence that these measures are effective at scale.

Regulators are sceptical that tweaks alone will be enough. The European Commission has already escalated its scrutiny of X and Grok into a formal DSA investigation that could, in theory, lead to fines of up to 6 percent of the company’s global annual revenue if serious breaches are found. Officials have also reminded X that they previously issued formal requests for information about Grok’s training data and safeguards, including a December demand from Brussels that the platform explain how it was tackling antisemitic content and removing existing illegal material. For now, the Commission’s line is that X has not yet demonstrated “meaningful adjustments.”

Why this probe matters for AI and platform law

The Grok investigation is not just about one product; it is an early test of how far the Digital Services Act can reach into the design of generative AI systems. The European Commission has framed the case as part of a broader effort to enforce systemic-risk obligations on very large platforms, including their use of AI-driven recommender systems and chatbots. Analysts note that the case could set a precedent for how regulators treat AI tools that are tightly integrated into social networks, especially when those tools can generate illegal content on demand. For victims of deepfakes, the outcome will help determine whether platforms must build in stronger technical barriers against non-consensual sexual imagery or face steep financial penalties.
