
OpenAI is under pressure to explain why ChatGPT has started nudging people toward retailers like Target and Peloton inside ordinary conversations, even as the company insists those prompts are not paid advertising. The clash between OpenAI’s assurances and what users feel in the chat window is turning into an early test of how transparent AI companies will be about commercial influence inside their flagship products.
At stake is more than one awkward suggestion to “shop” somewhere. The uproar is exposing how easily a helpful recommendation can look like a stealth ad when it appears in the middle of a personal chat, and it is forcing OpenAI to spell out where the line sits between conversational relevance and monetization.
How a casual chat turned into a Target controversy
The latest flare-up began with a seemingly ordinary exchange in ChatGPT that suddenly veered into shopping advice, when the system surfaced a connector encouraging the user to shop at Target even though shopping was not an obvious part of the conversation. OpenAI has since argued that this random suggestion to shop at Target is not an ad, framing it instead as an automated attempt to surface tools that might help with the task at hand. The company’s line is that the model is scanning for context behind the scenes and then offering a connector, not inserting a sponsored message.
From a user’s perspective, though, the distinction is far from clear. When a chat about everyday needs suddenly includes a branded callout to Target, it feels indistinguishable from a promotion, regardless of whether money changed hands. That is why the company’s insistence that this is not an ad has not calmed the reaction, and why the “not an ad” framing has become a flashpoint for broader anxiety about how commercial entities will appear inside AI assistants, as reflected in coverage of the random Target suggestion.
Peloton, fitness chats, and the first wave of backlash
The Target incident did not emerge in a vacuum. Earlier, users had already begun complaining that ChatGPT was dropping suggestions to install Peloton’s app into the middle of unrelated conversations, which made the whole experience feel like a subtle sales pitch. One user described how an unwelcome suggestion of the Peloton app appeared mid-chat, turning what should have been a neutral assistant into something that looked like a recommendation engine for specific brands, a pattern critics saw as a preview of ad-driven AI. That Peloton prompt, surfacing without an explicit request, became a key example of how quickly trust can erode when a chatbot appears to favor one commercial service over others, as detailed in reporting on app suggestions that looked like ads.
Those early Peloton prompts also showed how sensitive people are to any hint that their conversations might be steering them toward particular companies. Even if the system was simply matching a fitness-related query with a well-known workout app, the fact that Peloton was singled out raised questions about whether other options were being suppressed or ignored. The backlash around Peloton primed users to interpret the later Target connector in the same light, as another example of ChatGPT quietly privileging certain brands inside what is supposed to be an open-ended, neutral chat.
OpenAI’s official line: recommendations, not ads
OpenAI’s public explanation is that these prompts are not advertisements at all, but context-aware recommendations meant to make ChatGPT more useful. The company says suggestions of apps from retailers like Target and Peloton are intended to be relevant to the conversation, similar to how a music query might trigger a connector that can play songs on Spotify. In that framing, the assistant is not selling anything; it is simply surfacing tools from a list of available apps and connectors that can complete a task more effectively than text alone, a point the company has stressed when defending those recommendations.
That explanation hinges on a subtle but important distinction between paid placement and product integration. OpenAI is effectively arguing that Target and Peloton sit in the same category as any other connector, and that the model is free to suggest them when they match the user’s intent. The problem is that users rarely see the underlying catalog or the logic that selects one connector over another, so what the company calls a neutral recommendation can still feel like a commercial nudge. Without clear labeling or a transparent way to compare alternatives, the company’s assurances that these are not ads rely heavily on trust that is already being tested.
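To make the distinction concrete, the mechanism OpenAI describes can be imagined as a relevance match over a fixed catalog, with no bidding or payment term anywhere in the loop. The sketch below is a hypothetical illustration, not OpenAI’s actual code: the connector names, keyword lists, and scoring are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Connector:
    """One entry in a hypothetical catalog of third-party tools."""
    name: str
    keywords: set = field(default_factory=set)

# Assumed catalog; the real entries and matching logic are not public.
CATALOG = [
    Connector("Target", {"shopping", "groceries", "household"}),
    Connector("Peloton", {"fitness", "workout", "cycling"}),
    Connector("Spotify", {"music", "playlist", "songs"}),
]

def suggest_connector(message: str) -> Optional[Connector]:
    """Pick the connector whose keywords best overlap the message, if any.

    Nothing here encodes a bid or a payment: relevance alone decides,
    which is the crux of the "recommendation, not ad" claim.
    """
    words = set(message.lower().split())
    best, best_score = None, 0
    for connector in CATALOG:
        score = len(words & connector.keywords)
        if score > best_score:
            best, best_score = connector, score
    return best

print(suggest_connector("help me plan a cycling workout").name)  # Peloton
```

The point of the toy scoring function is that nothing in it distinguishes a paying brand from any other catalog entry, which is exactly the property OpenAI claims and users have no way to verify from the chat window.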
Viral posts, Mark Chen, and the scale of user anger
The controversy escalated once screenshots of these branded prompts started circulating widely on social media, turning individual annoyances into a broader narrative that ChatGPT was quietly rolling out ads. One post went viral and racked up 463,000 views, a figure that underscored how quickly suspicion about AI monetization can catch fire. That visibility forced OpenAI to respond more directly, and it showed how a single example of a Target or Peloton suggestion could shape public perception of the entire product, as highlighted in coverage of the viral post.
OpenAI’s Chief Research Officer, Mark Chen, stepped in to argue that the technology was being misunderstood, emphasizing that the team was experimenting with ways to surface connectors that might genuinely help people. By having a senior figure address the issue, the company signaled that it saw the backlash as more than a minor UX complaint. Yet his intervention also highlighted the communication gap: users were reacting to what they saw on screen, while OpenAI was focused on the intent behind the feature. That disconnect, amplified as the post went viral, helps explain how quickly trust in AI products can wobble when commercial brands appear without warning.
Leaks and speculation: are real ads coming anyway?
Even as OpenAI insists that the Target and Peloton prompts are not paid placements, separate reporting has fueled speculation that full-fledged advertising is on the way. A leak suggested that OpenAI might start advertising to users inside ChatGPT, with internal material hinting that ads really are coming to the chatbot. In one post on X, an engineer named Tibor Blaho shared what appeared to be evidence that the company was testing ad formats, feeding the sense that the current controversy is just a preview of a more explicit monetization strategy, as captured in coverage of Blaho’s findings that ads really do appear to be coming to ChatGPT.
Alongside that leak, users on Reddit have been dissecting the mechanics of the Target connector itself. One commenter argued that it is a connector to Target’s catalog, not an ad, and that OpenAI simply has a list of apps and connectors that ChatGPT can recommend when relevant. They described it as something that could eventually work like an app store, where the assistant routes requests to specialized tools rather than handling everything in plain text. That explanation, captured in a discussion where someone insisted “It’s a connector to targets catalog” and “They have a list of apps and connectors you can use,” has done little to calm fears that a future app store model will inevitably blend utility with paid promotion, as seen in a thread claiming the leak confirms OpenAI is preparing ads.
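The app store analogy implies a second step: once a connector is chosen, the request is routed to that tool’s handler instead of being answered in plain text. A minimal dispatch sketch under that reading, with an invented handler registry (none of these names come from OpenAI’s published API), might look like this:

```python
from typing import Callable, Dict

# Invented handler registry: the "list of apps and connectors" in the
# Reddit commenter's framing. Names and signatures are illustrative only.
HANDLERS: Dict[str, Callable[[str], str]] = {
    "Target": lambda q: f"Searching Target's catalog for: {q}",
    "Peloton": lambda q: f"Opening a Peloton workout for: {q}",
}

def route(connector_name: str, query: str) -> str:
    """Dispatch to a connector's handler, or fall back to a text answer."""
    handler = HANDLERS.get(connector_name)
    if handler is None:
        return f"Answering in plain text: {query}"
    return handler(query)

print(route("Target", "birthday candles"))
print(route("", "what is a good gift?"))
```

Whether such a registry stays neutral or eventually ranks paying partners first is precisely the question the leak has raised.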
User confusion: when recommendations feel like stealth marketing
For ordinary users, the technical distinction between a connector and an advertisement is largely academic. What matters is how the suggestion looks and feels in the flow of a conversation. When ChatGPT suddenly proposes a specific retailer or app, it can feel like the assistant is pushing a brand rather than neutrally answering a question. That is why some people have described these prompts as intrusive, saying they did not ask for shopping advice and did not expect their AI assistant to behave like a recommendation widget. The confusion is compounded by the fact that the interface does not clearly label these suggestions as organic or sponsored, leaving people to guess whether they are seeing a genuine recommendation or a form of stealth marketing.
That ambiguity has already led to misinterpretations. In one widely discussed example, a user said, “I was just talking with it about Elon on Nikhil’s podcast when out of nowhere it popped up an ad saying, ‘Find a fitness app’,” describing how a casual conversation about a podcast suddenly turned into a fitness-app prompt. The user took that as evidence that ChatGPT was inserting ads into every session, even though OpenAI insists the system is simply matching topics to relevant tools. The anecdote, cited in coverage of why some users think ChatGPT already has ads, shows how quickly a single unexpected suggestion can be interpreted as proof of a broader monetization scheme, even as reporting points out that ChatGPT does not formally have ads.
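One remedy implied by these complaints, though not something OpenAI has announced, would be to attach a mandatory disclosure field to every surfaced suggestion so users never have to guess. The payload schema below is purely illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Suggestion:
    """Hypothetical suggestion payload with mandatory disclosure fields."""
    brand: str
    reason: str       # plain-language explanation of why it surfaced
    sponsored: bool   # True only if money or a deal influenced placement

    def render(self) -> str:
        label = "Sponsored" if self.sponsored else f"Suggested because {self.reason}"
        return f"[{label}] Try {self.brand}"

# A labeled Peloton prompt would carry its own explanation, leaving no
# room to mistake an organic match for a stealth ad.
print(Suggestion("Peloton", "you asked about workout plans", False).render())
```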
Social media, missing tweets, and the rumor mill
Social platforms have amplified the confusion, turning scattered screenshots into a rolling debate about whether ChatGPT has quietly flipped the switch on advertising. Some posts that initially fueled the speculation have since disappeared or become inaccessible, which has only deepened the sense of mystery. One widely referenced example is a post that now shows up as “This Tweet is currently unavailable. It might be loading or has been removed,” a message that has become a kind of symbol for how ephemeral evidence can still shape public opinion. Even without the original content, people continue to cite that missing post as proof that OpenAI is experimenting with ads as you chat with ChatGPT, reinforcing the narrative that something is being hidden.
The dynamic is familiar from other tech controversies: partial information, viral outrage, and a vacuum of clear communication from the company at the center. In this case, the phrase “This Tweet is currently unavailable” has taken on outsized importance because it suggests that early glimpses of ChatGPT’s behavior are being scrubbed, or at least are not easily reviewable. That perception, highlighted in coverage that walks through how users saw prompts for Apple Music before the post vanished, has made it harder for OpenAI to reset the conversation, even when it insists that no formal ad product is live, as discussed in analysis asking whether ChatGPT is getting ads.
Why OpenAI’s semantics matter for the future of AI assistants
OpenAI’s insistence that the Target connector is not an ad might sound like a narrow semantic fight, but it carries broader implications for how AI assistants will be regulated and trusted. If companies can classify brand-specific prompts as neutral recommendations, they may be able to avoid the disclosure rules and user expectations that come with traditional advertising. That could set a precedent where commercial influence is woven into the fabric of AI interactions without clear labeling, leaving users to guess when a suggestion is driven by utility, by business deals, or by a mix of both. The Target and Peloton episodes are early test cases for whether the industry will embrace transparency or lean on technical definitions to sidestep it.
At the same time, the company’s framing hints at a future where AI assistants function more like operating systems, routing tasks to a network of third-party services. In that world, connectors to retailers like Target and Peloton would be as common as links to Spotify or Apple Music, and the line between a helpful integration and a paid placement would be even harder to see. The current backlash shows that users are already sensitive to that ambiguity, and that they expect clear signals when commercial interests are in play. Whether OpenAI’s current explanations hold up will shape not just how people feel about one random suggestion to shop at Target, but how they judge every branded prompt that appears in their chats going forward.