
Grok’s AI image editor has become a political and cultural flashpoint, but the scandal around explicit deepfakes on X only scratches the surface of a much larger problem. App stores run by the biggest tech companies are saturated with “nudify” tools that promise to strip clothes from photos, quietly turning ordinary snapshots into nonconsensual sexual content at scale. I want to look at how those apps slipped through, what they are doing to victims, and why regulators are suddenly treating app stores as part of the deepfake supply chain rather than neutral storefronts.

The Grok flashpoint exposed a wider ecosystem

When Grok’s AI image editor on X started generating explicit deepfakes of women and girls, it was easy to treat the outrage as a story about one reckless platform. The focus on Grok and X, however, obscured the fact that similar tools were already thriving inside the supposedly curated mobile marketplaces that most people trust by default. The controversy around Grok simply made visible a type of abuse that had been normalized in the shadows of mainstream app stores.

Investigators who examined the mobile ecosystem found that Apple and Google were already hosting dozens of AI “nudify” apps that marketed themselves with promises to remove clothing or create “sexy” versions of any uploaded image. Separate reporting described how a new investigation concluded that the Apple and Google app stores had quietly become distribution hubs for tools that specialize in nonconsensual deepfakes, even as those same companies publicly touted their efforts to fight image-based abuse.

Dozens of nudify apps hiding in plain sight

The scale of the problem came into focus when the Tech Transparency Project (TTP), a watchdog group, audited the major app stores and counted how many nudifying tools were live and easy to download. In its January review, TTP reported finding 47 such apps on Apple's App Store and 55 on Google Play, all identified as tools that facilitate the nonconsensual sexualization of people. These apps often used coy branding and euphemistic descriptions, but their core function was to take a clothed image and algorithmically generate a nude or sexualized version, sometimes explicitly advertising that they could be used on photos of acquaintances.

Separate reporting on the same review noted that Apple and Google were not only hosting these apps but also approving updates and allowing them to monetize through subscriptions and in-app purchases. Another investigation described how dozens of apps that came up in searches for “nudify” or “undress” were still live on the App Store even after Grok’s image editor drew intense scrutiny, a pattern that critics said showed Apple was willing to tolerate a thriving market in fake sexual images as long as it remained relatively quiet.

Victims, children, and the human cost of “nudify”

Behind the statistics are people whose lives are being upended by images they never posed for. Reports on the nudify ecosystem describe victims discovering that their faces had been pasted onto explicit bodies or that ordinary social media photos had been run through AI tools to create realistic nude versions. In some cases, the resulting images were shared in group chats or posted on anonymous forums, where they were treated as entertainment rather than as a form of sexual violence.

One investigation into the app store listings highlighted that some of these tools had already been used to create AI-generated sexualized photos of children, a detail that appeared in the same review that counted dozens of nudify apps on Apple and Google platforms. Another report on the same wave of scrutiny noted that removing or restricting access to Grok’s AI image editor would not be enough to stop the flood of nonconsensual sexualized images, because similar tools were already embedded across mobile marketplaces. The TTP report also documented complaints from at least one victim who came forward to describe the harm in detail.

Apple and Google’s moderation gap

For years, Apple and Google have marketed their app stores as safer alternatives to the open web, with review processes that screen out harmful content. The discovery of dozens of nudify apps that explicitly facilitate nonconsensual sexualization has exposed a gap between those promises and the reality of what slips through. In practice, the companies have allowed developers to use suggestive branding and vague language to mask the true purpose of their tools, while relying on reactive takedowns once public pressure mounts.

After TTP’s January review, Apple reportedly removed the apps that the watchdog had flagged, including tools that let users paste women’s faces onto nude bodies, while Google began restricting similar offerings on Google Play. Yet separate reporting on the same period noted that dozens of apps surfacing in searches for “nudify” remained available on the App Store, suggesting that Apple’s enforcement was narrow and reactive rather than systemic. That criticism was echoed in coverage describing the App Store as rife with AI nudify tools and arguing that Apple was perpetuating harm by not decisively cutting off the category.

Political pressure to ban X and Grok

The Grok scandal did more than spark outrage; it triggered a coordinated political campaign to treat app store access as leverage over platforms that enable explicit deepfakes. A coalition of lawmakers wrote to Apple and Google urging them to remove X and Grok from their stores, warning that continuing to host the apps would signal that the companies were comfortable profiting from tools that make it easier to produce and distribute nonconsensual sexual content. The letter framed the issue as a test of whether app store policies were meaningful or simply public relations.

In one of those letters, Sens. Ron Wyden of Oregon and Ben Ray Luján of New Mexico were among the signatories who told the companies that turning a blind eye to X’s egregious behavior would make a mockery of their moderation practices, a warning detailed in coverage of the senators’ coalition letter. Women’s groups echoed that demand, with advocates telling reporters that women were bearing the brunt of AI-generated sexual images and calling on Apple and Google to ban X and Grok from their app stores, a push described in detail in reporting on the women’s groups’ campaign.

Congress moves on deepfake liability

While activists targeted app store policies, Congress moved to give victims new legal tools. The Senate passed a bill that would allow people whose images are turned into explicit deepfakes to sue those who create or distribute them, building on an earlier law that made it a federal crime to knowingly publish nonconsensual intimate images. Lawmakers framed the new measure as a way to close gaps exposed by AI tools that can fabricate sexual content without any physical encounter.

Coverage of the vote noted that the Senate unanimously approved legislation to permit lawsuits over explicit deepfake images, and that senators urged Apple and Google to remove Grok and X from app stores in the same burst of activity, a linkage that underscored how lawmakers now see distribution platforms as part of the problem. Another account of the same bill explained that a key supporter framed the measure as building on the earlier law that made it a federal crime to knowingly publish nonconsensual intimate images, and noted that the new bill would make platforms that knowingly host such content civilly liable for damages, a point laid out in the bill summary.

Federal “TAKE IT DOWN” rules and notice-and-removal

Alongside the new deepfake liability bill, federal policymakers have been working on a broader framework for how platforms should respond when victims discover explicit images online. One legal analysis, “The TAKE IT DOWN Act Targets Deepfakes: Are Online Platforms Caught in the Crosshairs?”, describes how the measure would require sites and services to maintain a clear process for victims to demand removal of nonconsensual intimate images, including AI-generated ones. The idea is to standardize what is currently a patchwork of voluntary reporting tools and opaque moderation queues.

According to that same legal analysis, the TAKE IT DOWN framework would obligate covered platforms to implement a notice-and-takedown process by May 19, 2026, and would expose companies that ignore valid requests to potential penalties, a timeline and enforcement structure laid out in the briefing. A separate summary of the same measure emphasized that the framework is designed to put online services on notice that they can no longer treat intimate image abuse as a purely user-to-user problem, a point reinforced in the briefing’s section on what the act does.

State laws and the Colorado AI Act raise the stakes

Even as Congress moves, states are racing ahead with their own rules for AI and app distribution. Legal analysts point out that AI companies now face a patchwork of state laws that govern how automated systems can be deployed and how quickly platforms must respond to harmful outputs. One U.S. artificial intelligence law update notes that AI companies should pay close attention to the evolving state landscape, which increasingly includes requirements for a notice-and-removal process when AI tools are used to generate abusive content.

Another legal forecast highlights the Colorado AI Act, which is scheduled to take effect in June 2026 and is expected to impose new obligations on developers and deployers of high-risk AI systems, including transparency and accountability requirements that could apply to nudify tools. Although the final contours of that law may still change, the same forecast notes that, in the absence of a comprehensive federal AI statute, states are increasingly willing to treat AI providers as responsible for the downstream harms of their systems rather than allowing them to argue that users alone should be blamed, a shift described in the 2026 AI forecast.

New App Store accountability laws put platforms on the hook

Alongside AI-specific statutes, states are also targeting the gatekeepers that decide which apps reach consumers. One briefing, “New App Store Accountability Laws in 2026: If Your Business Has an App, Read On,” explains that state regulators are increasingly prioritizing rules that treat app stores as accountable intermediaries rather than passive conduits. These laws can require platforms to vet apps for compliance with state privacy and safety standards, to maintain clear complaint channels, and in some cases to remove or block apps that facilitate illegal activity.

The same analysis notes that state regulators are rolling out measures such as the Louisiana ASA law, which is pending and expected to take effect on May 6, 2026, and that these frameworks will apply to any business that distributes software through major marketplaces, a warning spelled out in the same briefing. For Apple and Google, that means the decision to host or remove nudify apps is no longer just a matter of internal policy; it is becoming a compliance question that could carry legal and financial consequences if they are seen as enabling nonconsensual sexualization through their stores.

Why removing Grok is not enough

All of these developments point to a simple conclusion: focusing on Grok alone will not solve the problem of AI-powered sexual abuse. Removing or restricting access to Grok’s AI image editor might reduce one highly visible source of explicit deepfakes, but as long as dozens of nudify apps remain available in mainstream app stores, the underlying capability will continue to spread. The tools are cheap, easy to use, and marketed as entertainment, which makes them far more dangerous than a single controversial feature on a high-profile platform.
