
Within days of TikTok’s U.S. spinoff launching, thousands of American users began reporting that their videos were stuck at “zero views,” comments vanished, and certain political topics seemed unusually hard to find. The company insists these are technical hiccups, not a coordinated effort to silence anyone, but the pattern has already triggered a fierce debate over whether the new U.S.-controlled TikTok is quietly shaping what people can say and see. I want to unpack what is actually known so far, what remains unverified, and why the line between a glitch and censorship is suddenly so contested.
At stake is more than one app’s reputation. TikTok has become a central arena for political speech, from immigration protests to conspiracy investigations, and the new ownership structure was sold to the public as a way to protect Americans from foreign influence. If the result is a different kind of control over speech, routed through U.S. investors and political allies, then the question of whether TikTok is censoring U.S. users right now becomes a test case for how power over social media is shifting.
What changed inside “U.S. TikTok” and why users are suspicious
The first thing I look at is the structural shakeup behind the app that Americans now open on their phones. TikTok in the United States is now majority-owned by a group of U.S.-backed investors, including Oracle, Silver Lake, and MGX, under a joint venture structure that carved the American business away from its former Chinese parent. Oracle's executive chair, Larry Ellison, is widely described as a close ally of President Donald Trump, a political alignment that has fueled skepticism among users who already distrusted the app's opaque recommendation systems. The Trump administration's push for this deal was framed as a national security fix, but it also concentrated control of a massive speech platform in the hands of a small circle of politically connected firms.
That context matters because the rollout of the new U.S. TikTok has been rocky in ways that map uncomfortably onto political fault lines. Reporting on the joint venture describes a platform that, almost immediately after changing hands, began experiencing widespread glitches that disproportionately appeared to affect posts about topics like Immigration and Customs Enforcement and Jeffrey Epstein. At the same time, the company launched a U.S.-specific algorithm, a separate system for recommending content to American users, which critics argue creates a technical channel through which domestic political interests could influence what goes viral without ever issuing an explicit censorship order. No available reporting verifies that any such order has actually been given, but the structural capacity is clearly there.
The “zero views” problem and TikTok’s denial
From a user’s perspective, the most visible sign that something is off has been the sudden wave of videos that never seem to leave the launchpad. Just days after the ownership change, thousands of American creators reported that new posts were stuck at “zero views,” even when they had sizable followings and a history of steady engagement. Some described seeing their clips appear briefly on their own profiles, then vanish from search results and the For You feed, a pattern that feels less like a random outage and more like a silent throttle on distribution. These complaints have been especially loud among creators posting about immigration raids, protests, and high-profile abuse scandals.
TikTok U.S. has pushed back hard on the idea that this amounts to targeted suppression. The company has publicly denied that it is censoring content and has instead blamed the "zero views" phenomenon on technical issues that followed a major power incident affecting its infrastructure. Executives attribute the problems U.S. TikTok users are experiencing to technical fallout from that power outage, which Oracle has also acknowledged, and say engineers are working to restore normal performance. One detailed account of the disruption notes that the problems began shortly after the new U.S. algorithm went live, but the company insists that any correlation between sensitive topics and stalled view counts is coincidental and not the product of deliberate censorship.
Anti-ICE videos, Epstein content, and the “glitch” explanation
The most explosive allegations center on specific themes that appear to be hit hardest. Creators who post critical videos about Immigration and Customs Enforcement say their uploads either fail to publish, receive no views, or disappear from search, even while lighter content on the same accounts performs normally. Some of these users have tested the system by reposting identical clips with neutral captions, only to see the neutral versions circulate while the anti-ICE versions stall. This pattern has led activists to argue that the new U.S. TikTok is functionally blocking anti-ICE speech, regardless of whether anyone inside the company is using that word. Reports on why TikTokers cannot upload anti-ICE videos describe a platform where the technical systems that flag and route content are so complex that a “glitch” can look indistinguishable from a policy choice to the people affected.
Company representatives, for their part, have leaned heavily on that glitch narrative. They say that a combination of the power outage, the migration to Oracle's cloud, and the rollout of the U.S.-specific algorithm produced cascading errors in how certain keywords and hashtags were processed. In this telling, posts that mention ICE or Jeffrey Epstein are not being singled out for political reasons, but are caught in the crossfire of a messy technical transition. One account of the new U.S. TikTok notes the irony that President Trump helped create the U.S. version of the app that now appears to be censoring people, though the platform's official line is that any such appearance is accidental and temporary. The same reporting highlights how videos that reference Epstein, or use related keywords, have been especially likely to stall, a pattern that critics see as evidence of topic-based filtering and defenders describe as an unfortunate side effect of automated moderation.
Political influence, Oracle’s role, and the trust gap
Even if every bug explanation is accurate, the political context around the new TikTok makes trust a scarce commodity. Coverage of the deal describes Oracle executives, including executive chair Larry Ellison and Safra Catz, as close Trump allies, and President Trump himself helped create the U.S. version of the app through a process that mixed national security rhetoric with open political theater. That history colors how users interpret any anomaly in how their content performs. When a platform that was restructured under presidential pressure appears to be muting criticism of a federal agency like ICE, it is not surprising that many assume a political motive rather than a software bug.
The company's own messaging has not fully closed that trust gap. Statements that the problems U.S. TikTok users are experiencing stem from technical issues following the power outage, which Oracle and its partners are addressing, sound plausible on their face but do little to explain why certain topics seem disproportionately affected. Analysts have pointed out that the launch of TikTok's new U.S.-specific algorithm raises the stakes, because it creates a separate decision-making engine for American content that can be tuned differently from the global system. One detailed commentary argues that this architecture opens the door to new forms of censorship that operate through ranking and recommendation rather than outright deletion, especially around keywords such as "Epstein," and that the current controversy is an early glimpse of how that power might be used or misused.