Morning Overview

South Africa pulls draft AI policy after fake AI-generated citations found

South Africa’s government has withdrawn its Draft National Artificial Intelligence Policy after discovering that the document’s reference list contained fabricated citations, a remarkable failure for a policy designed to govern the very technology suspected of producing the errors. Minister of Communications and Digital Technologies Solly Malatsi confirmed the retraction in April 2026, saying the government could not stand behind a document whose integrity had been compromised by fictitious sources.

The episode has stalled South Africa’s effort to establish a national AI regulatory framework and raised pointed questions about how the bogus references made it into an official government publication in the first place.

How the draft unraveled

The Department of Communications and Digital Technologies had published the draft AI policy in the Government Gazette and opened it for public comment. The document was meant to set ethical guidelines, outline risk management principles, and assign institutional responsibilities for AI oversight across the country. Stakeholders, researchers, and civil society groups began reviewing the text and preparing formal submissions.

Then an internal review flagged serious problems with the policy’s sourcing. Minister Malatsi publicly announced the withdrawal after the audit confirmed “various fictitious sources” in the reference list. His rationale was blunt: a policy document built on fabricated citations cannot credibly guide national technology strategy.

The draft text had already circulated through official channels before the fictitious references were caught. The retraction therefore costs not only the government credibility but also the time and effort invested by outside parties who engaged with the consultation process in good faith.

Within the draft, the reference list was supposed to underpin sections on ethical principles, risk management, and comparative international practice. Citations to academic research and global policy frameworks are not decorative in a document like this; they signal that recommendations rest on established knowledge. Once it emerged that some of those references pointed to works that do not exist, the evidentiary foundation of the entire policy was called into question.

A policy about AI, undermined by AI’s signature flaw

The situation carries a specific and uncomfortable irony. A national policy designed to manage the risks of artificial intelligence was itself undermined by one of the best-known failure modes of AI tools: hallucinated citations. Large language models routinely generate references that look authentic, complete with author names, journal titles, and publication years, but correspond to no real work. The problem is well documented. In 2023, a New York attorney faced sanctions after submitting a legal brief to federal court containing fictitious case citations generated by ChatGPT, a case that became a global cautionary tale.

Universities and research bodies across multiple countries have since flagged similar incidents in academic submissions. But South Africa’s case stands apart because the compromised document was specifically about governing AI. The failure suggests that the drafting process lacked adequate human review at the referencing stage, precisely the kind of oversight gap that well-designed AI policy is supposed to close.

Malatsi’s decision to pull the draft rather than quietly patch it signals an awareness of the reputational stakes. A corrected version would have invited ongoing questions about what else in the document might have been generated without sufficient verification. By withdrawing the policy entirely, the government chose a clean break over a compromised repair.

Critical questions the government has not answered

Several important details remain unaddressed in the official record. The government has not disclosed which specific references were fictitious or how many entries in the bibliography were affected. Without that information, outside researchers cannot assess whether the fabricated sources were confined to a few stray entries or woven into the analytical core of the document, shaping its substantive recommendations.

Equally unclear is who conducted the internal review and what methodology was used. Malatsi’s statement confirmed that an internal process identified the fictitious sources, but the department has not named the individuals or teams responsible. The credibility of the review depends on whether it was carried out by subject-matter experts capable of verifying academic and institutional citations, or whether it was a more limited check prompted by external complaints.

The government has also not confirmed whether AI tools were deliberately used in drafting the policy. The pattern of fabricated references strongly resembles AI-generated hallucinations, but other explanations are possible: manual errors, reliance on unvetted external consultants, or the reuse of secondary material that already contained inaccurate citations. Attributing the error to AI without official confirmation remains an inference, not a verified fact.

No timeline for a replacement policy has been announced. South Africa’s AI sector, along with international partners and investors watching the country’s regulatory direction, currently has no official guidance on when a revised framework will appear.

What this means for South Africa’s AI ambitions

The withdrawal lands at a sensitive moment. Across the continent, governments are racing to establish AI governance frameworks. The African Union adopted a Continental AI Strategy in 2024, and countries including Rwanda, Kenya, and Mauritius have moved forward with national AI policies or strategies of their own. South Africa, the continent’s most industrialized economy, was expected to be a leader in this space. The retraction of its draft policy is a setback that could delay the country’s ability to attract AI investment and shape regional standards.

Domestically, South Africa already has data protection legislation in place through the Protection of Personal Information Act (POPIA), but a dedicated AI policy was seen as necessary to address issues that data privacy law alone cannot cover: algorithmic bias, automated decision-making in government services, and the responsible deployment of generative AI tools across industries.

For South African technology companies, researchers, and civil society groups that had been preparing to engage with the public comment process, the practical consequence is immediate: the consultation is suspended, and any submissions prepared for the withdrawn draft will need to be revised once a new version appears. Those stakeholders should monitor the Department of Communications and Digital Technologies for updates on a replacement timeline.

The broader signal is harder to ignore. If a government cannot verify the sources in its own AI policy, it faces a credibility problem when asking the private sector and research community to meet rigorous standards of transparency and accountability. Whatever replaces this draft will need to demonstrate not just sound policy thinking but an airtight drafting process, one that proves the lessons of this episode were absorbed rather than repeated.

*This article was researched with the help of AI, with human editors creating the final content.*