Smartphone screen displays AI app icons: ChatGPT, Grok, Meta AI, Gemini.

European regulators are sharpening their focus on Elon Musk’s social media platform X, turning a high-profile probe of its Grok chatbot into a test case for how strictly the bloc will enforce its new digital rulebook. At the center is Henna Virkkunen, the European Commission’s Commissioner for Tech Sovereignty, who has warned that X must respect the European Union’s standards or face serious consequences. The investigation into Grok’s role in generating sexualized deepfakes of women and minors now doubles as a broader message to United States platforms that EU law is not optional.

By tying the Grok case to the Digital Services Act, Virkkunen is signaling that the era of voluntary self-policing is over for very large online platforms. The question is no longer whether X and other global tech firms will be regulated in Europe, but how far Brussels is prepared to go when they fall short of their legal obligations.

The Grok probe puts X on a collision course with EU law

The European Commission has opened a formal investigation into whether X properly assessed and mitigated the risks posed by Grok before rolling it out to users in the European Union. Officials are examining if the company complied with its duties under the Digital Services Act to identify systemic dangers, including the spread of illegal content and the impact on fundamental rights, before deploying the chatbot at scale. A statement from the Commission made clear that the focus is on whether X carried out a robust risk assessment and put in place effective safeguards, not just on how it responded after problems surfaced.

The trigger for the probe was Grok’s apparent ability to generate non-consensual sexual deepfakes of women and children based on user prompts, a capability that cuts directly across EU rules on illegal content and the protection of minors. Regulators are scrutinizing whether X allowed Grok to be used to create sexualized images of minors and women without consent, and whether the platform had adequate tools to detect and prevent such abuse. Reports that Grok could be prompted to produce explicit deepfake material involving minors have intensified pressure on X, with one investigation noting the outcry that followed users’ discovery of these capabilities.

Virkkunen’s “clear obligations” warning to Musk’s platform

Henna Virkkunen has framed the Grok investigation as a straightforward enforcement of existing law rather than a political crusade against Elon Musk. As Commissioner for Tech Sovereignty, she has stressed that X, like any other very large online platform, has “clear obligations” under the Digital Services Act to manage systemic risks and protect users. In public comments, Virkkunen confirmed that the probe is specifically examining whether X complied with its DSA duties to assess and mitigate the dangers linked to Grok, including the spread of illegal content and the amplification of harmful material.

Her stance is backed by earlier warnings from the European Commission’s top tech officials, who have already told Elon Musk that X must “fix” Grok or face consequences. In Brussels, senior figures have signaled that the company will be expected to adjust the chatbot’s design, tighten its safeguards, and improve its content moderation systems if it wants to avoid penalties. One account of those internal warnings described how Brussels officials pressed X to act quickly, underlining that the DSA is not a theoretical framework but a binding set of rules that can be enforced by fines and other measures.

Deepfake porn, child safety, and the limits of AI “creativity”

The Grok case is forcing regulators and platforms alike to confront how far generative AI can be allowed to go in the name of user creativity. At the heart of the EU’s concern is the production of non-consensual sexual deepfakes, particularly those involving minors, which are treated as illegal content under European law. Investigators are looking at whether Grok’s design and training allowed it to generate sexualized images of women and children based on text prompts, and whether X took reasonable steps to prevent such outputs. One detailed account of the controversy noted that Grok was found to enable users to create deepfakes of women and children by simply asking the chatbot to do so, prompting outrage from child protection advocates and privacy campaigners.

For the EU, this is not just a content moderation problem but a systemic design failure that goes to the core of how AI tools are built and deployed. The Digital Services Act requires very large platforms to anticipate such risks and build in safeguards from the outset, rather than relying on users to report abuses after the fact. The formal investigation launched by the European Commission is therefore examining not only whether X removed illegal deepfake content when notified, but also whether its risk assessments, testing procedures, and technical controls for Grok were adequate before the feature was made widely available.

DSA enforcement and the message to US tech and the Trump administration

Henna Virkkunen has used the Grok controversy to send a broader signal to United States companies and policymakers that EU digital rules must be respected, regardless of political tensions. In a wide-ranging interview, she urged the US to recognize that European standards on content moderation, data protection, and platform accountability apply fully to foreign firms operating in the bloc. She emphasized that the dispute between Brussels and Elon Musk’s social media platform X over Grok is part of a larger pattern in which European regulators insist that their laws are enforced equally on American and European companies. In that context, she has framed the Grok probe as a test of whether the US will accept the EU’s regulatory sovereignty in the digital sphere.

The political backdrop is particularly sensitive because the newly announced investigation is likely to escalate a confrontation between European leaders and the Musk-aligned Trump administration in Washington. President Donald Trump has aligned himself closely with Elon Musk, and some in Brussels see the Grok case as an early flashpoint in a broader struggle over who sets the rules for global tech. One analysis described how the probe into Grok’s deepfake porn capabilities could deepen tensions with the Trump administration, which has been skeptical of European regulatory approaches and more sympathetic to Musk’s arguments about free expression and innovation.

A landmark test of the Digital Services Act’s reach

For Brussels, the Grok investigation is also a chance to demonstrate that the Digital Services Act is more than a symbolic gesture. The law gives the European Commission sweeping powers to investigate very large online platforms, demand detailed information about their algorithms and risk management systems, and impose fines of up to 6 percent of global turnover for serious violations. In the case of X, regulators are using those tools to examine whether the company carried out the kind of in-depth risk assessment that the DSA requires before launching a high-impact AI feature. A formal notice from the European Commission explained that the new investigation into X under the Digital Services Act and related rules extends an earlier DSA case into the platform’s handling of illegal content and systemic risks.

Henna Virkkunen has been explicit that the Grok probe is part of a broader push to ensure that the DSA’s obligations are taken seriously by all major platforms, not just X. She has argued that the law is designed to protect European citizens from harms ranging from disinformation to child sexual abuse material, and that it must be enforced consistently to maintain public trust. Her insistence that the same standards apply to American and European companies alike underscores that the Grok case is not an isolated dispute, but a landmark test of how far the EU is prepared to go in enforcing its digital sovereignty.
