
State attorneys general, European regulators, and victims of AI “undressing” tools have all moved at once to confront Grok, the chatbot built by Elon Musk’s xAI. What began as a niche controversy over sexualized deepfakes has hardened into a coordinated legal and political response that now surrounds the company on multiple fronts. The pressure is no longer confined to Musk and his engineers; it is rapidly shifting to Congress, which has so far failed to set clear rules for this kind of abuse.
The Grok crackdown is not only about one product or one billionaire. It is a test of whether existing law can handle generative AI that lets users strip clothes from images of women and children with a few clicks, and whether Washington will finally move beyond hearings and letters to pass binding protections. The stakes are personal for victims whose images have been weaponized, and structural for a tech industry that has treated “move fast and break things” as a business model rather than a warning.
State attorneys general move first
The most aggressive early response has come from state law enforcement, which has treated Grok’s “nudification” features as a consumer protection and child safety crisis rather than a distant tech policy debate. In California, Attorney General Rob Bonta announced that his office had opened an investigation into xAI and Grok over undressed, sexual AI images of women and children, saying the probe would focus on how the system enabled users to generate exploitative content and on what safeguards, if any, were in place to stop it, according to a detailed press release. The formal announcement makes clear that the target is not just the chatbot’s text output but its role in producing synthetic sexual imagery.
New York’s top law enforcement official has taken a similarly confrontational line. Attorney General Letitia James demanded that xAI “take all necessary measures” to ensure that Grok stops producing nonconsensual sexual images, and that the company explain how it will prevent the chatbot from generating child sexual abuse material or deepfakes of adults, according to a pointed letter. A companion press release from her office stresses that New York wants permanent, verifiable changes and that victims must be able to identify, and hold accountable, those who created the harmful content.
A 35‑state front and the first lawsuits
What began as a handful of investigations has quickly turned into a multistate front. Coverage of the state-led crackdown on Grok and xAI describes how a coalition of 35 attorneys general coordinated demands for stronger safeguards after Grok was used to generate sexualized images earlier this year, signaling that the issue had outgrown any single jurisdiction. A separate account from North Dakota notes that Attorney General Drew Wrigley joined 34 other states in the same push, emphasizing that the creation and dissemination of child sexual abuse material is a crime and that various state and federal civil and criminal laws already apply to AI tools that help produce it, as spelled out in the multistate statement. Together, those efforts show that states are not waiting for Congress to define AI harms before invoking existing child protection and consumer fraud statutes.
Victims and their lawyers are now testing those theories in court. A class action complaint filed on behalf of people whose images were “undressed” by Grok accuses xAI and Musk of enabling invasions of privacy and emotional trauma, and a detailed breakdown of the case explains how plaintiffs describe the distress of discovering AI-generated nudes of themselves circulating online. Another report on the same litigation notes that on the day the suit was filed, a group of 35 state attorneys general sent their own letter to xAI, warning that Musk will face growing pressure in the United States over a product that lets users “undress” women and girls “with the click of a button.” That combination of civil litigation and coordinated regulatory scrutiny is exactly the kind of pincer movement that has forced other tech platforms to change course in the past.
Europe’s “materialization of risks” warning
While American states lean on consumer and child protection law, European regulators are treating Grok as a test case for the continent’s new digital rulebook. The European Commission has opened a formal probe into whether Elon Musk’s Grok AI chatbot is spreading illegal content and failing to protect minors, a move that could trigger significant fines or operational changes under the bloc’s online safety regime, according to the Commission’s own announcement. A more detailed description of the investigation, reported by Timothy Jones with material from Reuters, frames the case as a test of how far Brussels is willing to go in policing generative AI that can be used to create deepfakes and sexualized images at scale.
European officials have already escalated beyond fact finding. On January 26, regulators issued what they called a “materialization of risks” notice to X over Grok’s role in a surge of deepfake imagery, a step described as a critical turning point in enforcement that could lead to binding orders or penalties if the company does not respond adequately, according to a detailed account. Separate coverage of the broader regulatory landscape notes that officials in the European Union, the United Kingdom, and the United States are all examining how Grok and related tools handle AI imagery, with particular concern about nonconsensual sexual content and the adequacy of age protections. For Musk, that means the fight over Grok is no longer confined to American courts or statehouses; it is now entangled with some of the world’s most assertive digital regulators.
Congress edges toward action, but slowly
In Washington, the Grok controversy has sharpened a debate that had already been building around deepfakes and nonconsensual imagery. The Senate has passed the DEFIANCE Act, a bill that would, for the first time, give victims of sexually explicit deepfakes a clear right to sue for damages in federal court, with particular attention to images that “undress” women and girls, as described in a detailed report. That same coverage notes that the measure now heads to the House, where its fate will signal whether Congress is prepared to treat AI-enabled sexual abuse as a civil rights and safety issue rather than a niche tech concern, a point underscored in a companion analysis of how the Senate and the House are approaching the problem.
Lawmakers are also revisiting existing tools that have not been fully enforced. One legal scholar notes that Senator Ted Cruz has demanded “guardrails” for AI, and argues that simply enforcing the TAKE IT DOWN Act would address much of the current harm, a reference to the federal mechanism that lets minors and their parents request the removal of intimate images from platforms, as discussed in a legal analysis. Separate reporting on Capitol Hill describes “the coming Take It Down crackdown” as shorthand for a new wave of oversight, with lawmakers signaling that they want the law used more aggressively against AI-generated sexual images of kids, as outlined in a briefing. The question now is whether Congress will pair those enforcement pushes with new statutory duties for AI developers, or continue to rely on a patchwork of old laws stretched to cover new harms.
The political and human stakes of doing nothing