ChatGPT Is Quoting Elon Musk’s Grokipedia — And It’s Raising Eyebrows
It started quietly. In tests run by the Guardian, ChatGPT, running its latest GPT-5.2 model, began citing a new source: Grokipedia, Elon Musk’s AI-generated encyclopedia. At first glance, it seemed unremarkable — an AI referencing a website, as it often does. But the more researchers looked, the more concerning patterns emerged.
What Happened?
Grokipedia launched in October 2025 as a competitor to Wikipedia. Unlike Wikipedia, it doesn’t allow human editors; AI generates all content. The site has drawn criticism for favoring right-leaning viewpoints and for presenting contested claims on topics like gay marriage, the January 6th insurrection, and political figures in Iran.
GPT-5.2 began referencing Grokipedia on topics that were not widely covered elsewhere. In practical terms, this meant ChatGPT was pulling details about:
- Iranian organizations like the Basij paramilitary force and the Mostazafan Foundation.
- Biographical and trial-related information about Sir Richard Evans, the historian who testified against Holocaust denier David Irving.
In some cases, the information cited was debunked or exaggerated compared to other sources.
Why Experts Are Concerned
Even when the AI isn’t outright spreading falsehoods, the mere act of citation can lend credibility to unreliable sources. Disinformation researcher Nina Jankowicz warns that this subtle influence is dangerous: users might assume that anything cited by ChatGPT is fact-checked and trustworthy.
“There’s a risk that Grokipedia’s narratives will seep into AI outputs in ways most people won’t notice,” Jankowicz explained. “Once that happens, it’s very hard to remove.”
This phenomenon, known in AI research as LLM grooming, happens when large volumes of biased or misleading content are fed into models, intentionally or not, shaping what the AI “knows.”
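To make that mechanism concrete, here is a minimal, self-contained Python sketch of how sheer volume can tilt a naive retrieval step. It is a toy model under stated assumptions — the documents, domains, and scoring are invented — and not a description of how ChatGPT or any production pipeline actually ranks sources: one contested claim, duplicated across many near-identical pages, crowds a single more careful source out of the top results.

```python
# Toy model: volume, not accuracy, dominates a naive bag-of-words ranker.
# Purely illustrative; no real system works this simply.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# One contested claim repeated across 50 near-identical pages,
# versus a single source that flags the claim as disputed.
corpus = [("groomed.example", "the foundation is controlled by the supreme leader")] * 50
corpus += [("careful.example", "the foundation's ties to the supreme leader are disputed")]

def top_k(query, docs, k=5):
    q = Counter(tokenize(query))
    scored = [(cosine(q, Counter(tokenize(text))), domain) for domain, text in docs]
    scored.sort(reverse=True)
    return scored[:k]

for score, domain in top_k("who controls the foundation", corpus):
    print(f"{score:.2f}  {domain}")
# Every top result comes from the repeated narrative.
```

Real rankers are far more sophisticated, but the pressure researchers describe is the same: repetition at scale raises a narrative’s visibility to the model.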
It’s Not Just ChatGPT
Anthropic’s Claude AI has also been observed referencing Grokipedia on various topics, from Scottish ales to petroleum production. This suggests that multiple language models are absorbing content from Musk’s platform — sometimes without transparency about its reliability.
OpenAI responded by noting that its web search draws from “a broad range of publicly available sources” and that safety filters aim to reduce exposure to harmful content. But critics say that AI citing Grokipedia or similar sources could normalize misinformation, regardless of internal safeguards.
Examples That Raised Eyebrows
- Claims that MTN-Irancell has ties to Iran’s supreme leader, a statement stronger than what is found on Wikipedia.
- Misstatements regarding Sir Richard Evans’ role in the David Irving libel trial.
These examples illustrate how even “obscure” facts can carry hidden inaccuracies, which may then propagate as users rely on AI for research, homework, or reporting.
The Human Factor
A striking issue is how AI can perpetuate mistakes even after they’re corrected. Jankowicz shared an example where a large news outlet published a fabricated quote from her. After it was removed from the article, AI models continued to cite it as fact. “Most people won’t do the work necessary to figure out where the truth actually lies,” she said.
Why It Matters
The Grokipedia situation highlights a bigger challenge for AI:
- Source quality is critical. If the underlying content is flawed, the AI will reflect that.
- Citations can mislead. Users often equate a cited source with verified truth.
- Subtle bias spreads quickly. Even without intentional misinformation, AI can reinforce certain narratives by repeating them across queries.
For developers and policymakers, this is a reminder that AI output is only as good as its inputs, and that platform design decisions — like allowing AI-only editing on a public encyclopedia — can have far-reaching consequences.
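One design response on the developer side is to gate candidate citations through a curated domain-reliability list before they reach users. The Python sketch below is a hypothetical illustration: the domains, scores, and threshold are placeholder assumptions, not ratings used by OpenAI or any real system.

```python
# Hypothetical domain-reliability gate; all ratings and the threshold
# below are placeholder assumptions for illustration only.
from urllib.parse import urlparse

RELIABILITY = {
    "wikipedia.org": 0.80,   # assumed rating, for illustration
    "grokipedia.com": 0.30,  # assumed rating, for illustration
}
DEFAULT_SCORE = 0.50         # unknown domains get a cautious default
THRESHOLD = 0.60             # unknown domains fail this bar by design

def citation_allowed(url: str) -> bool:
    """Return True if the URL's domain clears the reliability bar."""
    host = urlparse(url).netloc.lower()
    parts = host.split(".")
    # Walk from the full host down to shorter suffixes (en.wikipedia.org,
    # then wikipedia.org) and use the most specific rating we find.
    for i in range(len(parts) - 1):
        score = RELIABILITY.get(".".join(parts[i:]))
        if score is not None:
            return score >= THRESHOLD
    return DEFAULT_SCORE >= THRESHOLD

print(citation_allowed("https://en.wikipedia.org/wiki/Basij"))  # True
print(citation_allowed("https://grokipedia.com/page/Basij"))    # False
```

A real deployment would also need provenance tracking and regular review of the list itself; the point is only that source quality can be a first-class, inspectable parameter rather than an afterthought.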
What Users Should Do
- Cross-check information. Don’t rely solely on a single AI response.
- Be skeptical of obscure claims. AI often draws from niche sources when popular sources are unavailable.
- Understand AI limitations. Chatbots summarize content but do not independently verify facts.
- Pay attention to citations. Not all cited sources are reliable; some may be AI-generated or biased. A small helper sketch follows this list.
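For that last point, a reader comfortable with a little code can automate the habit. The Python sketch below pulls the linked domains out of an AI answer and flags any that appear on a personal watchlist; the watchlist entry and example URLs are illustrative assumptions, not an authoritative rating of any site.

```python
# User-side helper: flag cited domains that appear on a personal watchlist.
# The watchlist and example URLs below are illustrative, not authoritative.
import re
from urllib.parse import urlparse

WATCHLIST = {"grokipedia.com"}  # domains you have decided to double-check

def flag_citations(answer: str) -> None:
    """Print each linked domain in the answer, marking watchlisted ones."""
    for url in re.findall(r"https?://\S+", answer):
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        status = "CHECK MANUALLY" if domain in WATCHLIST else "ok"
        print(f"{status:>14}  {domain}")

answer = (
    "The Basij is an Iranian paramilitary force "
    "(https://en.wikipedia.org/wiki/Basij). Some details also come "
    "from https://grokipedia.com/page/Basij."
)
flag_citations(answer)
# Output:
#             ok  en.wikipedia.org
# CHECK MANUALLY  grokipedia.com
```

It is a crude filter — it only checks where a claim came from, not whether it is true — but it turns “pay attention to citations” into a repeatable step.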
Final Thoughts
ChatGPT citing Grokipedia may not be an immediate crisis, but it’s a wake-up call for how easily unreliable content can infiltrate AI responses. As more people turn to AI for information, the subtle amplification of questionable sources could reshape public understanding — sometimes without anyone noticing.
Transparency, critical thinking, and careful source evaluation remain more important than ever in the age of AI.
