
ChatGPT’s newest model has been caught leaning on Grokipedia, an AI-written encyclopedia created by Elon Musk’s xAI, to answer some of the internet’s most obscure questions. Instead of drawing only on vetted, human-edited references, the system is now quietly echoing material from another AI, raising fresh concerns about bias, accuracy, and the long-term health of online knowledge. The discovery turns a technical design choice into a public trust problem for one of the world’s most widely used AI tools.

At stake is not just whether a handful of answers are wrong, but whether a flagship chatbot is starting to recycle unverified AI output as if it were established fact. As Grokipedia positions itself as a rival to Wikipedia, the revelation that ChatGPT is already ingesting and citing it shows how quickly experimental projects can seep into mainstream information flows without users realizing it.

How Grokipedia slipped into ChatGPT’s information stream

The latest version of ChatGPT, powered by the GPT-5.2 model, has been shown to pull from Grokipedia when users probe niche topics that are poorly covered elsewhere. In tests, the system did not just paraphrase similar wording; it explicitly cited the AI-written encyclopedia as a source, confirming that Grokipedia has entered its reference stack for long-tail queries. A detailed report on GPT-5.2 notes that the model now cites Grokipedia by name, and that tests by the Guardian found it sourcing some of its most obscure answers from that database.

Independent checks have gone further, showing that GPT-5.2 is not just occasionally influenced by Grokipedia but is actively using it as a fallback when human-curated sources run thin. One technical analysis describes how ChatGPT’s latest model, identified explicitly as GPT-5.2, is sourcing data from Grokipedia, xAI’s all-AI-generated Wikipedia competitor, particularly for questions that sit on the fringes of mainstream coverage. That same assessment warns that this kind of recursive training, where one AI leans on another’s synthetic output, accelerates a phenomenon called “model collapse,” in which systems gradually lose touch with original human-generated data, a risk laid out in detail in the GPT-focused reporting.

The Guardian tests that exposed the pipeline

The alarm over Grokipedia’s influence did not start with abstract theory; it began with concrete fact checks that caught ChatGPT repeating debunked claims. In one set of experiments, testers asked the chatbot about contested historical and political topics and found that it cited Grokipedia when repeating information that earlier investigations had already shown to be false. The same testing campaign documented how the latest ChatGPT model uses Grokipedia as a source, highlighting that the system was comfortable leaning on an AI-written encyclopedia even when it led straight back to material that had been publicly discredited, a pattern described in depth in the tests that first surfaced the issue.

Those checks did more than catch a single bad answer; they mapped a repeatable path from Grokipedia entries to ChatGPT responses. In one example, the chatbot echoed a narrative about a public figure that had already been dismantled in court, yet still cited Grokipedia when pressed for its source, confirming that the AI encyclopedia had become part of its internal canon. A follow-up analysis drilled into how ChatGPT cited Grokipedia when repeating information that the Guardian had debunked in coverage of the writer David Irving and his libel trial, showing that the system was not just generically biased but was willing to surface contested claims tied to real legal disputes, a link documented in the more granular Irving-focused reporting.

What Grokipedia is, and why its design matters

To understand why this matters, it helps to look closely at what Grokipedia is trying to be. The project is described as an online encyclopedia powered by AI rather than human editors, launched by Elon Musk as part of xAI’s broader push to build alternatives to existing information institutions. Musk has framed Grokipedia as a way to counter what he calls Wikipedia’s left-leaning bias, positioning his system as a corrective to the volunteer-edited reference that has dominated the web for two decades. That mission statement, and the explicit framing of Grokipedia as a counterweight to Wikipedia, is laid out in coverage of Grokipedia and its ideological positioning.

Grokipedia’s architecture is just as important as its politics. Instead of relying on a global community of human editors who argue over sources and wording in public, it leans on AI systems to generate and update entries at scale, with far less transparent oversight. Earlier reporting on how Grokipedia entered the ecosystem notes that xAI launched the site in October after Musk criticized Wikipedia as biased and suggested that an AI-driven alternative could be more objective. That same account of how Grokipedia entered ChatGPT’s information orbit underscores that the new encyclopedia is not just another website; it is a deliberate attempt by Musk to reshape what counts as neutral knowledge online, in direct competition with Wikipedia.

A conservative-leaning AI encyclopedia meets a mainstream chatbot

Once Grokipedia’s content started surfacing inside ChatGPT, the ideological stakes became impossible to ignore. Analysts who examined the encyclopedia’s entries describe it as conservative-leaning, with topic choices and framing that reflect Musk’s own critiques of mainstream media and academia. When information from the conservative-leaning, AI-generated encyclopedia developed by Elon Musk’s xAI begins to appear in answers from a widely used chatbot, the risk is that users encounter a partisan slant without any clear label or context, a dynamic spelled out in detail in coverage of how information from Grokipedia is already shaping responses.

Researchers who track AI bias worry that this quiet integration could normalize Grokipedia’s worldview far beyond its own user base. One assessment notes that ChatGPT now cites Grokipedia, an AI-created encyclopedia, as a source, and that researchers fear Grokipedia could spread biased information through the chatbot’s massive audience, especially because the interface makes it hard to see where each fact is coming from. That same report stresses that Grokipedia is modeled on Wikipedia but works very differently, with automated generation taking the place of human editorial debate, a contrast highlighted in the analysis of Grokipedia and its spread through ChatGPT.

The long-term risk of AI feeding on AI

Beyond immediate questions of bias, the Grokipedia episode exposes a deeper structural risk in how large language models evolve. When GPT-5.2 pulls from an AI-written encyclopedia that itself was likely trained on earlier model outputs and scraped web text, the system starts to loop synthetic content back into its own diet. Technical observers warn that this feedback loop can lead to degraded performance over time, as models lose access to the messy, contradictory, but ultimately grounding signal of human-generated writing. One detailed examination of ChatGPT’s latest model notes that GPT-5.2 has been found to be sourcing data from Grokipedia, xAI’s all-AI-generated Wikipedia competitor, and explicitly flags the danger of a phenomenon called “model collapse” if such practices continue unchecked, a warning spelled out in the 5.2-focused coverage.
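The mechanics behind that warning can be demonstrated at toy scale. The sketch below is a minimal illustration of the model-collapse feedback loop under stated assumptions, not anything drawn from GPT-5.2 or Grokipedia: each “generation” fits a simple statistical model to samples produced by the previous generation, so synthetic output becomes the only training data.

    # Toy simulation of "model collapse" (illustrative assumptions only):
    # each generation trains on synthetic data sampled from the previous
    # generation's model, with no fresh human-written input.
    import numpy as np

    rng = np.random.default_rng(42)

    def fit_gaussian(samples):
        # "Training" here is just estimating a mean and spread from data.
        # The maximum-likelihood spread estimate is biased slightly low, so
        # each synthetic generation loses a little of the original variance.
        return samples.mean(), samples.std()

    # Generation 0 learns from "human" data: a wide, grounded distribution.
    mean, std = fit_gaussian(rng.normal(loc=0.0, scale=1.0, size=20))

    for generation in range(1, 51):
        synthetic = rng.normal(mean, std, size=20)  # the model's own output
        mean, std = fit_gaussian(synthetic)         # the next model trains on it
        if generation % 10 == 0:
            print(f"generation {generation:2d}: estimated spread = {std:.3f}")

In this simplified setup the estimated spread steadily shrinks across generations, and the toy model ends up confidently reproducing a narrow sliver of what the original data contained, which is the qualitative failure mode researchers describe when language models train on one another’s output.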

Regulators and policymakers are beginning to notice the scale of the issue. One summary of the current landscape points out that reporting reveals that OpenAI’s GPT-5.2 model cites Grokipedia and that tests conducted by the Guardian have already documented how this affects real outputs, with concerns now reaching into discussions about government and regional oversight of AI systems. That same overview notes that GPT models are being deployed in contexts that touch public services and civic information, making the prospect of a user base of that scale consuming answers shaped by an AI-written encyclopedia more than a niche technical worry, a connection drawn explicitly in the GPT-centric policy discussion.
