
Apple and Google are being pushed into the center of a global backlash over sexually explicit AI imagery, as lawmakers and advocacy groups demand that they remove X and its built-in chatbot Grok from their app stores. At stake is not just whether two high-profile apps stay available on iOS and Android, but whether the gatekeepers of mobile software will treat AI-generated sexual deepfakes as a deal breaker for distribution. The fight is rapidly becoming a test of how far platform rules on safety and abuse really reach when artificial intelligence is involved.
How Grok’s “undressing” feature ignited a political firestorm
The immediate controversy traces back to Grok’s ability to generate sexualized images of real people, including what critics describe as AI-powered “undressing” of photos. That capability turned a long-running debate about deepfakes into a concrete product decision, with xAI effectively offering a tool that could be used to strip clothing from images and create explicit content of identifiable individuals. As the backlash intensified, xAI said Grok would stop undressing people, but observers noted that the change came only after the outcry had already escalated, arriving hours after a coalition of digital rights and safety groups raised alarms in public and in private, and amid mounting pressure on Apple to pull the X app and its integrated chatbot from the App Store.
Even with that reversal, critics argue that the damage is already done, pointing to thousands of sexually explicit AI-generated images that have circulated using Grok’s tools. Advocates say those images include depictions of real women and girls, and they warn that the same underlying model can still be prompted in ways that generate illegal images or other abusive content. One detailed account of the system’s behavior noted that there were prompts that could be used to generate illegal images, including sexualized depictions of people who never consented to be portrayed that way. Grok’s creators acknowledged that adversarial hacking of prompts could lead to unexpected results that would then need to be fixed, a reactive posture that critics see as far too slow for a product already in the hands of millions of users.
Senators and advocates turn up the heat on Apple and Google
Political pressure on the app store operators has grown quickly, led by three Democratic US senators who called on Apple and Alphabet’s Google to remove X and Grok from their platforms. The lawmakers framed the issue as a matter of basic consumer safety, arguing that distributing an app with a built-in tool for sexually explicit deepfakes violates the spirit of existing policies on harassment and exploitation. Their letter to Apple and Alphabet, which singled out Google by name, stressed that Grok’s design made it too easy to create sexualized images of real people without consent, and urged the companies to treat that as a bright-line violation rather than a bug that could be patched later.
Women’s and child safety organizations have echoed and amplified that message, warning that the scale of abuse is already far beyond isolated incidents. Advocates cited thousands of sexually explicit AI-generated images from Grok and urged Apple and Google to act while investigations into AI-generated sexual deepfakes are underway in multiple jurisdictions, arguing that the companies should suspend distribution of X and Grok until they can guarantee that the tools cannot be used to target women and minors. One coalition of nearly 30 women’s, child safety and civil rights groups, described in reporting by Julia Shapero, said in letters that the apps were enabling the creation and spread of explicit images of real people, including content that appeared to involve minors, and that the companies’ inaction risked normalizing a new form of tech-enabled abuse that advocates estimate already affects as much as 40 percent of some targeted populations.
Coalition letters and the mechanics of app store leverage
The advocacy campaign has been carefully structured to exploit the leverage that Apple and Google hold over mobile distribution. Earlier today, a coalition of digital rights and safety organizations sent formal letters urging Apple to remove access to both X and Grok, arguing that the apps violate existing rules on sexual content and harassment, and warning that leaving them in place would invite further abuse and criminal activity. Those letters framed Apple’s role not as a neutral conduit but as an active curator whose policies already ban apps that facilitate non-consensual pornography, and they pressed the company to apply those standards consistently to high-profile services as well as smaller developers.
Similar pressure has been directed at Google’s Play Store, where the same coalition and additional women’s advocacy groups have called for X and Grok to be dropped. One detailed report noted that Apple and Google face pressure to remove X and Grok from their app stores, with advocates arguing that the companies’ own rules give them ample authority to act and that failure to do so would undermine their broader claims about user safety and responsible AI. Another account of the campaign described how women’s advocacy groups called on Apple and Google to drop X and Grok, noting that X’s parent company, xAI, responded to questions about the letters with a brief statement rather than a detailed plan, a response that critics saw as evidence that the company had not fully grappled with the scale of the harm its tools were enabling.
Global backlash, from statehouses to safety regulators
The uproar over Grok has not been confined to Washington or to app store policy teams. California Governor Gavin Newsom sharply criticized xAI’s decision to allow sexually explicit deepfakes, calling the feature “vile” and pointing to evidence that a significant share of the images circulating online appeared to involve minors. One account of the reaction noted that Newsom said the decision had led to a flood of images, a high percentage of which appeared to depict minors, and that his comments added state-level political pressure to the federal scrutiny already facing X and Grok. That kind of intervention from a governor underscores how quickly AI safety issues can jump from technical debates into mainstream politics when the harms are visible and personal.
Internationally, watchdogs and regulators are also taking notice, treating Grok as a case study in how generative AI can be weaponized against women and children. Reports that Musk’s Grok has been barred from undressing images after a global backlash have circulated widely, with more officials and advocacy groups outside the United States citing the episode as evidence that voluntary safeguards are not enough. In that context, Apple and Google’s decisions about whether to keep distributing X and Grok are being watched not just as corporate policy calls, but as signals to regulators who are weighing new rules on AI-generated sexual content and the responsibilities of platforms that host or distribute such tools.
What Apple and Google’s next move will signal for AI governance
For Apple and Google, the decision about X and Grok is less about one app and more about the precedent it sets for AI governance across their ecosystems. Their app store guidelines already restrict pornography, harassment and exploitation, but those rules were written before tools like Grok made it trivial to generate sexualized images of real people at scale. One detailed account of the current standoff noted that Apple and Google face pressure to remove X and Grok from their app stores and that the companies have not yet publicly committed to a specific course of action, even as advocates warn that continued availability would expose users, especially women and minors, to ongoing harm. Another report on the same campaign emphasized that Apple and Google had received letters from advocacy groups and had not responded to a request for comment, underscoring how carefully the companies are calibrating their public posture while they weigh the legal and commercial risks.