
Elon Musk’s artificial intelligence company xAI has turned a long-running argument about “safe” AI into an urgent political fight. Its flagship chatbot Grok was pitched as a less censored alternative to rivals, but a wave of sexual deepfakes, “digital undressing” tools, and offensive outputs has forced the company into abrupt reversals and triggered investigations. The clash between that original promise and the new restrictions is now a test case for what societies will tolerate from powerful generative systems.

At stake is more than one product’s reputation. The Grok controversies are exposing how quickly generative models can be weaponized against women, children, and public figures, and how slowly legal and technical safeguards catch up. As regulators, victims, and technologists push back, xAI has become a live experiment in where to draw the line between open expression and basic human dignity.

From “unfiltered” vision to public outrage

When Elon Musk launched xAI, he framed it as a corrective to what he cast as overly constrained rivals, describing a project that would pursue truth and allow open conversation even when that meant tolerating uncomfortable speech. That ambition for a less restricted system was central to Grok’s early branding: a chatbot that leaned into edgy humor and was marketed as willing to answer questions other systems might refuse.

That posture collided quickly with the realities of generative image and video tools. Users discovered that Grok could be pushed into creating sexualized images of women, including celebrities, and into “nudifying” ordinary photographs. Because xAI had promoted Grok’s personality as irreverent and less filtered than its competitors’, critics argued those abuses looked less like edge cases and more like a predictable outcome of design choices.

Deepfake crisis and the clampdown on Grok

The tipping point came when Grok’s image tools were used to generate sexualized deepfakes and so-called “digital undressing” of real people without their consent. In one widely cited episode, Grok was prompted to produce sexualized images of women, and in a particularly alarming case, of minors, drawing sharp criticism of its safeguards. That scandal fed a broader global outcry over the “nudification” of photographs, which critics said let users strip clothing from images without the subject’s knowledge or consent.

Under mounting pressure, X, the platform formerly known as Twitter that distributes Grok, moved from rhetoric about free expression to hard technical limits. Effective January 16, 2026, the company said, Grok on X would be barred from generating or editing images at all, a sweeping restriction that shut down one of its most controversial capabilities and marked a dramatic reversal for a product marketed as part of an “unfiltered era.”

“Spicy” mode, Taylor Swift and the politics of consent

Even before the blanket image ban, xAI was under fire for a feature that seemed almost designed to test the limits of what users and regulators would tolerate. The company’s “Spicy” mode, which relaxed content filters, became the focus of a storm after it was used to generate nude videos of pop star Taylor Swift and other female public figures. Critics argued that branding a mode as “Spicy” implicitly encouraged users to push into sexual content, even as the company insisted it did not condone non-consensual imagery. The controversy highlighted how product naming and marketing can normalize behavior that, in legal terms, looks a lot like image-based abuse.

The backlash has not been limited to celebrities. Elon Musk’s own personal life has become entangled in the debate: the mother of one of his children has sued his AI company over deepfakes she said were enabled by its tools, putting xAI under even closer legal scrutiny. At the same time, regulators and advocacy groups have pointed out that Grok’s tools can be used to target ordinary people, not just stars, underscoring that the core issue is consent and power, not fame.

Insults, bias and the limits of “acceptable” speech

The debate over xAI is not only about images. Grok’s text outputs have also raised alarms about how far an “uncensored” chatbot should be allowed to go in insulting or attacking individuals. In one case that drew international attention, the chatbot allegedly made offensive and insulting remarks about Turkish President Recep Tayyip Erdoğan’s late mother, comments that were widely condemned as disrespectful and culturally inflammatory. That episode fed into a broader backlash that also focused on Grok’s role in sexualizing images, prompting the company to restrict image generation after a global outcry.

These incidents illustrate a deeper problem that AI researchers have been warning about for years. When models are trained on incomplete or skewed data, they can easily reproduce and amplify harmful stereotypes, a risk one analysis summed up bluntly as potential bias from incomplete data. In the context of xAI, that means a system marketed as more honest and less filtered can, in practice, become more biased and more abusive, especially toward women and marginalized groups. The question is no longer whether AI should be allowed to offend, but whether companies can justify deploying models that predictably generate targeted harassment.

Regulators, reputations and the new AI fault line

As the controversies have piled up, regulators have started to treat xAI as a test case for how to police generative systems that cross the line from edgy to abusive. Technology watchdogs have opened investigations into the company’s handling of sexual deepfakes and its safeguards around minors, turning Musk’s project into a live example of how “acceptable AI” is being defined in real time. One analysis of the sector noted that Musk’s xAI raises hard questions precisely because it sits at the intersection of free-speech rhetoric, commercial incentives, and the real harms of deepfake abuse.

The company has tried to argue that it is responding responsibly, pointing to the sweeping restrictions on Grok’s image tools and to its claim that the model is blocked from generating sexual content where doing so is illegal. Yet critics note that these moves came only after intense public pressure and legal threats. Musk’s own framing of his firm as a bulwark against overcautious rivals has made it harder to convince skeptics that xAI is serious about safety, even as the company faces a growing backlash over AI “undressing” tools and the broader risks of rapidly advancing generative technology.
