
Britain has moved aggressively against Elon Musk’s Grok chatbot, opening a formal investigation into whether the system helped flood X with sexually explicit deepfakes of women and children. The probe, launched by the United Kingdom’s media regulator, turns a mounting global backlash into a direct regulatory test of how far governments will go to rein in generative AI that can be weaponized for abuse.
At stake is more than one company’s product roadmap. The case pits the UK’s new online safety regime against Musk’s preference for minimal content restrictions, and it will help define how democracies draw the line between free expression and industrial-scale “nudification” tools that strip away privacy with a few clicks.
The Ofcom probe that jolted X and Grok
Britain’s media regulator Ofcom has opened a formal investigation into whether X, owned by Elon Musk, complied with its legal duty to stop users sharing illegal content generated or amplified by Grok. The inquiry, announced in January, will examine whether the platform took “proportionate measures” to prevent sexually intimate deepfakes and child sexual abuse material from circulating, and whether it cooperated with police and other law enforcement as required. Ofcom, the UK’s Office of Communications, framed the move as a test of the new online safety regime, stressing that X must show it has effective systems to detect and remove illegal content, not just react after victims complain.
Regulators are zeroing in on Grok itself, which is integrated into X and marketed as a more irreverent alternative to other chatbots. U.K. regulators say the tool is designed with fewer guardrails than other mainstream AI systems, and they have highlighted a feature that lets users upload photos and generate sexualized or “undressed” versions, a capability that has produced thousands of abusive images. Ofcom’s inquiry will assess whether X and Musk’s xAI unit adequately limited access to that feature, monitored its outputs, and responded when victims or watchdogs flagged nonconsensual deepfakes.
How Grok’s “nudification” feature triggered a global backlash
The controversy around Grok did not start in London. Earlier this year, Musk’s AI chatbot began facing a global backlash after users discovered it could generate sexualized images of women and children, including so-called “nudified” versions of fully clothed photos, with reports describing how Grok, built by Musk’s xAI company, was being used to create adult content at scale. The system’s ability to strip clothing from images or fabricate explicit scenes without consent turned it into a ready-made tool for harassment and extortion, particularly against women and girls whose social media photos were scraped without permission.
That capability collided with a broader political push to treat nonconsensual deepfakes as a form of gender-based violence. In the UK, ministers have argued that tackling violence against women and girls must be “as important online as it is in the real world,” a point underscored when a government minister backed Ofcom’s right to take whatever action it sees fit against platforms that enable such abuse. That framing helps explain why Grok’s nudification feature has become a lightning rod rather than just another content moderation headache.
Starmer, bans, and the UK’s new hard line on deepfakes
The political response in the UK has been unusually sharp. Prime Minister Keir Starmer has signaled that X could face heavy fines or even a ban if it fails to curb Grok-enabled abuse, with ministers, including Liz Kendall, warning that the government is stepping up enforcement because reports of women and children being targeted on X have been “deeply concerning.” In parallel, the UK is bringing into force a law that will make it illegal to create non-consensual sexually explicit images using AI, and technology reporter Laura Cress has noted that under these new legal powers, authorities could ultimately move to restrict or block access to the site in the UK altogether if X refuses to comply.
The standoff has already been framed as a personal clash between Musk and Starmer. In a video segment titled The Latest, commentators asked whether the UK might actually ban X over Grok nudification if Ofcom decides to press ahead, casting the dispute as Musk versus Starmer and underscoring how the probe has become a proxy battle over who sets the rules for online speech and safety. For Musk, who has repeatedly championed a maximalist view of free expression on X, conceding to UK demands could set a precedent for other governments, while defiance risks losing a major market and inviting even tougher regulation.
A worldwide clampdown, from Brussels to Kuala Lumpur
London is not acting in isolation. The European Union has ordered X to preserve all documents related to Grok through the end of 2026, extending an existing data retention demand as part of a broader inquiry into how the chatbot has been used to target people with nonconsensual deepfake imagery. That step signals that Brussels is preparing its own enforcement actions under the Digital Services Act, which requires large platforms to assess and mitigate systemic risks, including harms from generative AI tools embedded in social networks.
Outside Europe, governments are moving even faster. Officials in Kuala Lumpur said Malaysia and Indonesia have become the first countries to block Grok outright over sexually explicit AI images, after concluding that the system was being used to share sexualized images of children and other illegal content. Those bans underscore how quickly national regulators can move to cut off access to AI services they deem dangerous, especially in jurisdictions where platform liability rules are stricter and political tolerance for explicit material is lower than in the United States.
Free speech, liability, and the future of AI “nudify” tools
The UK investigation is already reverberating across the Atlantic. Analysts have warned that a British finding that Grok-enabled deepfakes are illegal could ignite a free speech battle with the U.S., where legal protections for platforms are stronger and Musk has influential allies, as explored by Beatrice Nolan in an assessment of how the probe might reshape debates in Europe and America over AI and expression. UK officials, including Liz Kendall, have countered that the right to free speech does not extend to industrialized nonconsensual pornography, especially when victims are minors or public figures whose images are harvested without consent.
At the same time, the UK crackdown is part of a broader effort to outlaw AI “nudify” technology outright. Reporting has detailed how lawmakers are moving to criminalize the creation of nonconsensual images and to hold the developers of tools that make such abuse easy responsible, a shift that reflects growing recognition that the harm is baked into the design of such systems. For Musk’s companies, that means the question is no longer just how to moderate outputs, but whether certain high-risk features, like automated undressing of photos, can exist at all in regulated markets.