Search for the right terms on Apple’s App Store or Google Play and you can still find them: apps powered by artificial intelligence that strip the clothing from photos of real people, generating explicit images without the subject’s knowledge or consent. Both companies prohibit such tools in their developer policies. Both continue to let them slip through.
That contradiction is now drawing simultaneous pressure from U.S. senators, state legislators, and foreign governments, all of whom have identified app store distribution as a critical weak point in the fight against AI-generated sexual exploitation.
Senators call for removal of X and Grok
On January 9, 2026, Senators Ben Ray Lujan, Ron Wyden, and Edward Markey sent letters to Apple CEO Tim Cook and Alphabet CEO Sundar Pichai demanding the removal of the X social media app and its integrated Grok AI tool from both app stores. The letter, released through Senator Lujan’s office, accused Grok of generating “illegal sexual images at scale,” including deepfakes depicting minors.
The senators argued that by continuing to distribute the apps, Apple and Google are effectively complicit in enabling the creation of child sexual abuse material. Neither company has publicly responded with a commitment to comply or offered a detailed rebuttal. The silence is notable given that both platforms’ own guidelines explicitly bar apps that facilitate nonconsensual intimate imagery.
What Apple and Google say they prohibit
Apple’s App Store Review Guidelines state that apps must not include content that is “defamatory, discriminatory, or mean-spirited,” and specifically ban apps that generate pornographic material. Google Play’s developer policy similarly prohibits apps containing or promoting sexually explicit content, including AI-generated imagery created without consent.
Yet enforcement has not kept pace with the technology. Both stores rely on a combination of automated screening and human review at submission, but generative AI apps can be updated server-side after approval: an app that passes review on day one can gain new capabilities, including the ability to produce explicit deepfakes, without ever triggering a second review. And because neither Apple nor Google publishes transparency data on how many AI-related apps are flagged, removed, or reinstated, independent assessment of their enforcement is nearly impossible.
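To make the server-side loophole concrete, the sketch below shows the generic remote-configuration pattern such apps can use. Everything in it is hypothetical: the endpoint, the flag names, and the dispatch logic are invented purely to illustrate why reviewing the shipped binary reveals nothing about capabilities a developer switches on later.

```python
# A minimal sketch of the remote-config pattern described above.
# The endpoint, flag names, and dispatch logic are all hypothetical.
import json
from urllib.request import urlopen

CONFIG_URL = "https://example.com/app-config.json"  # hypothetical endpoint

def fetch_feature_flags() -> dict:
    """Download server-controlled feature flags at app launch."""
    with urlopen(CONFIG_URL) as resp:
        return json.load(resp)

def enabled_tools(flags: dict) -> list[str]:
    # The reviewed binary ships only this generic dispatch code. Which
    # image-editing modes actually appear in the UI is decided by the
    # server response, which can change without a new store submission.
    return [name for name, on in flags.get("image_tools", {}).items() if on]

if __name__ == "__main__":
    print("Enabled tools:", enabled_tools(fetch_feature_flags()))
```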
The scale of the problem
The nudify app phenomenon is not new, but it has accelerated sharply. A September 2023 report by Graphika, a network analysis firm, documented a more than 2,000 percent increase in referral links to undressing services across social platforms in a single year. Many of those services marketed themselves through mainstream app stores or linked directly to downloadable tools.
Since then, the pipeline has only grown more accessible. Open-source image generation models have lowered the technical barrier, and wrapper apps that package those models behind simple interfaces have proliferated. For users, the process can be as easy as uploading a photo and tapping a button. For victims, the consequences are severe and lasting.
Victims at the center of state legislation
In Minnesota, legislators are advancing a bill that would impose civil penalties on developers and distributors of nudify apps capable of generating explicit images of real people without consent. Reporting by The Associated Press included testimony from individuals who were targeted by AI-generated images circulated in schools and workplaces, describing harassment, reputational harm, and lasting psychological damage.
The bill’s sponsors acknowledge that First Amendment challenges could complicate enforcement. Courts have not yet drawn clear lines around whether software that generates images qualifies as protected expression, and penalties aimed at offshore developers or open-source projects may prove difficult to collect. Still, the legislation represents one of the most specific state-level attempts to hold the distribution chain accountable, targeting not just the person who creates or shares a deepfake but also the platforms that make the tools available.
Minnesota is not alone. By early 2026, more than 20 states had enacted or introduced laws targeting nonconsensual deepfake imagery, and the federal TAKE IT DOWN Act, signed into law in 2025, criminalizes publishing such content and requires platforms to remove it within 48 hours of a valid request. But none of those measures directly addresses the app store gatekeepers who control which tools reach consumers in the first place.
International governments are not waiting
Outside the United States, some governments have moved faster. In January 2026, Malaysia and Indonesia reportedly moved to ban Grok over concerns about AI-generated explicit content. The New York Times reported on the bans, though the full details of each government’s orders and their technical enforcement mechanisms have not been independently verified. Past efforts to block specific apps in the region have had mixed results, and it is unclear whether the restrictions apply only to Grok or extend to other generative AI tools with similar capabilities.
What the reported bans do suggest is that the debate over nudify apps has become genuinely global, with governments on multiple continents reaching similar conclusions about the inadequacy of voluntary store policies.
The unanswered question of recommendation algorithms
One dimension that remains almost entirely opaque is the role of recommendation algorithms. Both the App Store and Google Play are designed to surface apps that attract high engagement, including downloads, search interest, and session time. Critics and researchers have raised concerns that if nudify apps generate significant user activity, recommendation systems could inadvertently promote them in search results and “suggested” feeds. However, neither Apple nor Google has published data on how often AI-related apps appear in recommendations, under what conditions they are demoted, or how quickly flagged apps are actually removed.
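A toy example makes that structural worry concrete. The scoring function below ranks apps purely on engagement; the signals, weights, and app names are invented for illustration and are not drawn from either store's actual, unpublished algorithms. The point is only that a ranking with no content-quality term surfaces whatever attracts the most activity.

```python
# A toy, purely engagement-driven ranking. Signals, weights, and app
# names are invented for illustration; neither store has published its
# actual recommendation algorithm.
from dataclasses import dataclass

@dataclass
class AppSignals:
    name: str
    daily_downloads: float   # raw install count
    search_ctr: float        # click-through rate in search results, 0..1
    avg_session_min: float   # average session length in minutes

def engagement_score(app: AppSignals) -> float:
    # No term here inspects what the app actually does, so a harmful app
    # with heavy usage outranks a benign one with less.
    return 0.5 * app.daily_downloads + 1000 * app.search_ctr + 20 * app.avg_session_min

apps = [
    AppSignals("photo_editor", daily_downloads=800, search_ctr=0.04, avg_session_min=6.0),
    AppSignals("ai_wrapper", daily_downloads=1200, search_ctr=0.09, avg_session_min=11.0),
]
for app in sorted(apps, key=engagement_score, reverse=True):
    print(f"{app.name}: {engagement_score(app):.0f}")
```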
Without that transparency, it is impossible to determine whether the enforcement gap is a matter of limited resources, inconsistent review, or a system that structurally rewards the very content it claims to prohibit. This remains one of the most significant unanswered questions in the debate.
Pressure mounts on app store gatekeepers
The convergence of federal demands, state legislation, and international bans has shifted the burden onto Apple and Google in a way that vague policy language can no longer absorb. Senators have put their names on a public letter. Victims have testified before state lawmakers. Foreign governments have acted unilaterally. Each of these moves narrows the space for platform companies to treat nudify apps as an edge case handled quietly through routine moderation.
What comes next will likely depend on whether either company responds with measurable action: publishing enforcement data, conducting independent audits of generative AI apps, or requiring developers to demonstrate that their tools include safeguards against producing nonconsensual explicit imagery. Until then, the tools remain a search query away, distributed by the very platforms whose rules say they should not exist.
This article was researched with the help of AI, with human editors creating the final content.