A peer-reviewed field experiment published on February 18, 2026, found that X’s algorithmic “For You” feed shifted users’ political opinions in a conservative direction over roughly seven weeks. The study, which randomly assigned active U.S.-based users to either the algorithm-driven feed or a chronological “Following” feed, measured increased engagement and rightward movement on policy issues including immigration, crime, and inflation. The findings arrive as regulators in Europe prepare to enforce new transparency rules on how recommendation systems shape what people see online.
Seven Weeks on the Algorithm Changed Political Views
The experiment, conducted in 2023 and published in Nature, is the most direct causal test yet of whether X’s recommendation engine alters what people believe about politics. Researchers randomly split active U.S. users into two groups: one saw the standard algorithmic “For You” feed, while the other saw only posts from accounts they chose to follow, displayed in reverse chronological order. Over approximately seven weeks, the algorithmic group not only spent more time on the platform but also reported measurably more conservative positions on policy questions compared to the chronological group.
What makes this result hard to dismiss is the study's design. Random assignment eliminates the usual chicken-and-egg problem in social media research, where it is unclear whether conservative users simply gravitate toward certain content or whether the platform itself is doing the pushing. Here the answer is unambiguous: the feed's ranking logic alone produced the shift. A separate analysis found that these opinion changes did not fade quickly after the experiment ended; participants retained more conservative views on topics such as crime and immigration weeks later. That persistence raises questions about the lasting influence of even short-term algorithmic exposure on democratic attitudes.
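For readers who want the logic in concrete terms, here is a minimal sketch of the comparison that random assignment makes possible. The group sizes, attitude scores, and effect size below are hypothetical placeholders, not the study's data or analysis code.

```python
# Minimal sketch of the causal comparison behind a randomized feed experiment.
# All numbers here are hypothetical placeholders, not the study's data.
import random
import statistics

def difference_in_means(treated, control):
    """Estimated average treatment effect: mean(treated) - mean(control)."""
    return statistics.mean(treated) - statistics.mean(control)

random.seed(0)
# Post-study attitude scores (higher = more conservative). Because users were
# assigned to feeds at random, the two groups differ in expectation only in
# which feed they saw, so the gap below can be read causally.
for_you_group   = [random.gauss(0.15, 1.0) for _ in range(500)]
following_group = [random.gauss(0.00, 1.0) for _ in range(500)]

print(f"Estimated feed effect: {difference_in_means(for_you_group, following_group):+.3f}")
```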
A Pattern That Predates the Musk Era
The Nature study did not emerge in a vacuum. Years before Elon Musk acquired the platform, a large-scale randomized experiment published in the Proceedings of the National Academy of Sciences found that Twitter’s algorithm systematically amplified right-leaning content over left-leaning content in six of seven countries studied. In the United States specifically, mainstream conservative news sources received greater algorithmic promotion than their left-leaning counterparts. Twitter itself acknowledged these results at the time, and the company’s own researchers were involved in the work. That 2021 finding established a baseline: the rightward tilt was not a byproduct of one owner’s politics but appeared baked into the recommendation architecture itself.
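The core quantity in this line of work is an amplification measure: how much more reach a set of sources gets in algorithmic timelines than in chronological ones. The sketch below illustrates that idea; the reach counts and group labels are hypothetical, not figures from the PNAS paper.

```python
# Illustrative amplification comparison: reach in algorithmic timelines versus
# chronological timelines. A ratio above 1.0 means the ranker gives a group
# more exposure than follow graphs alone would. Counts are hypothetical.
def amplification_ratio(algorithmic_reach: int, chronological_reach: int) -> float:
    if chronological_reach == 0:
        return float("nan")
    return algorithmic_reach / chronological_reach

reach = {
    # group: (impressions in algorithmic feeds, impressions in chronological feeds)
    "right_leaning_outlets": (1_300_000, 1_000_000),
    "left_leaning_outlets":  (1_050_000, 1_000_000),
}

for group, (algo, chrono) in reach.items():
    print(f"{group}: amplification = {amplification_ratio(algo, chrono):.2f}x")
```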
More recent evidence reinforces the pattern. An independent audit using 120 sock-puppet accounts captured personalized “For You” timelines over a three-week window ahead of the 2024 U.S. presidential election and documented a right-leaning bias in the default recommendations served to new accounts. Separately, a Wall Street Journal investigation analyzed approximately 26,000 posts and found that X’s “For You” feed heavily pushed political posts even at accounts that signaled nonpolitical interests, with the most visible political accounts skewing rightward. Taken together, these findings suggest the tilt is not a fluke of one study period or one research team’s methodology but a persistent structural feature of the platform’s recommendation system.
Ranking Alone Can Shift How People Feel About Opponents
A separate line of research sharpens the stakes further. A preregistered field experiment involving 1,256 participants during the 2024 U.S. campaign season used a platform-independent tool to rerank X feeds in real time, increasing or decreasing users’ exposure to content expressing antidemocratic attitudes and partisan animosity. Over 10 days, the experiment produced causal evidence that reordered feeds alone shifted how much hostility participants felt toward political opponents, even though the underlying set of posts remained the same.
This result matters because it isolates the mechanism. Critics of algorithmic-bias research often argue that platforms merely reflect the preferences users already hold. But when researchers manipulated only the order of posts, not the content itself, they still moved the needle on affective polarization. The implication is that the sequence in which a feed presents information is itself a form of editorial influence, one that operates below most users’ awareness. Combined with the Nature study’s finding that the “For You” feed nudges policy opinions rightward, the picture that emerges is of a recommendation system that both selects which political content users encounter and shapes how intensely they react to it.
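A short sketch makes that isolation concrete: the candidate posts stay fixed and only their ordering changes. The posts, scores, and `rerank` function below are hypothetical stand-ins for the researchers' actual classifier and tooling.

```python
# Reranking sketch: the feed's post set is held fixed; only the ordering moves
# posts scored high on partisan animosity up or down. Posts and scores are
# hypothetical; the real experiment scored live timelines with a classifier.
posts = [
    {"id": 1, "animosity": 0.9},
    {"id": 2, "animosity": 0.1},
    {"id": 3, "animosity": 0.5},
]

def rerank(feed, boost_animosity: bool):
    """Return the same posts, sorted to raise or lower hostile content."""
    return sorted(feed, key=lambda p: p["animosity"], reverse=boost_animosity)

up   = rerank(posts, boost_animosity=True)   # hostile posts surface first
down = rerank(posts, boost_animosity=False)  # hostile posts sink

# Same content either way; only exposure order differs.
assert {p["id"] for p in up} == {p["id"] for p in down}
print([p["id"] for p in up], [p["id"] for p in down])
```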
Lessons From Other Platforms’ Massive Experiments
Evidence from other social networks underscores how powerful ranking systems can be while also revealing their limits. During the 2020 U.S. election cycle, Meta partnered with outside academics on a series of randomized experiments across Facebook and Instagram that collectively enrolled tens of millions of users. One of these large-scale studies, published in Science, tested changes to the composition of political news in users’ feeds by downranking content from highly partisan sources. A companion article in the same research program evaluated how exposure to different kinds of political information affected users’ knowledge and attitudes over the same period.
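As a rough illustration of what downranking partisan sources can look like, the sketch below applies a multiplicative penalty to flagged sources before the final engagement sort. The source list, scores, and penalty value are hypothetical; this is not Meta's ranking code.

```python
# Hypothetical downranking sketch: posts from flagged partisan sources get a
# multiplicative penalty before the feed's final engagement sort. The sources,
# scores, and 0.5 penalty are illustrative, not Meta's actual parameters.
PARTISAN_SOURCES = {"hyperpartisan_site_a", "hyperpartisan_site_b"}
PENALTY = 0.5

posts = [
    {"source": "local_news",           "engagement": 0.80},
    {"source": "hyperpartisan_site_a", "engagement": 0.95},
]

def ranking_score(post):
    score = post["engagement"]
    if post["source"] in PARTISAN_SOURCES:
        score *= PENALTY  # demote, but do not remove, partisan content
    return score

feed = sorted(posts, key=ranking_score, reverse=True)
print([p["source"] for p in feed])  # local_news now outranks the partisan post
```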
These Facebook and Instagram experiments generally found that while feed changes could dramatically alter what people saw and how they engaged, effects on core vote choice and partisan identity were modest over the time frames studied. That contrast is instructive when set against X’s rightward nudges and the hostility shifts documented in reranking trials. It suggests that platform design can reliably move specific attitudes, such as views on immigration or feelings toward out-partisans, without necessarily flipping party allegiance. For regulators and platform designers, the lesson is that subtle, targeted opinion shifts may be more realistic and more consequential than wholesale conversions.
Regulatory Pressure Builds in Europe
These academic findings land at a moment when European regulators are tightening their grip on exactly this kind of algorithmic opacity. Under the Digital Services Act, very large online platforms including X must disclose information about their recommender system parameters, provide researcher data access, and follow standardized reporting templates. The European Commission has harmonized reporting rules and set July 1, 2025, as the start date for data collection under these transparency requirements. That timeline means the first round of mandated disclosures should already be underway, though no platform-specific compliance reports from X have been made public as of this writing.
The gap between what researchers can now demonstrate and what platforms are required to reveal remains wide. The Nature experiment, the PNAS amplification study, and the 2024 reranking trial all relied on external workarounds: browser extensions, sock-puppet accounts, and custom reranking tools built outside the platform. None had access to X’s internal recommendation weights or training data. If the DSA’s transparency provisions work as designed, future researchers and regulators would gain a clearer view of why the algorithm behaves the way it does (whether particular engagement signals, content categories, or network structures are responsible for the documented rightward drift). For now, the best evidence comes from carefully constructed field experiments and audits at the platform’s edges, all pointing in the same direction: how X chooses to rank and recommend content is not politically neutral, and even a few weeks of exposure can leave a measurable mark on what users believe.
This article was researched with the help of AI, with human editors creating the final content.