Morning Overview

Indian high court reprimands judge for relying on fake AI-generated Supreme Court rulings

India’s Andhra Pradesh High Court has reprimanded a trial judge for citing fabricated Supreme Court rulings in a property dispute, after finding the judge relied on an artificial intelligence tool for legal research without verifying the results. The order, issued by Justice Ravi Nath Tilhari on 21 January 2026 in Civil Revision Petition No. 2487 of 2025, titled Gummadi Usha Rani and Anr v. Sure Mallikarjuna Rao and Anr, set aside the lower court’s decision and exposed a growing vulnerability in how AI is being absorbed into judicial work. The case is an early, high-profile example of a sitting judge formally admitting to using an AI tool without verifying whether the cited case law actually existed.

The High Court’s ruling underscores that the problem was not the mere use of AI, but the uncritical acceptance of its output as authoritative law. By documenting the trial judge’s admission and reversing the order built on nonexistent Supreme Court decisions, the High Court effectively drew a line: technology may assist, but it cannot replace a judge’s duty to test every cited precedent against authentic records. The consequences of skipping that step played out in a concrete dispute over property rights, turning what should have been a routine evidentiary ruling into a cautionary tale for courts across the country.

Phantom Citations in a Property Dispute

The trouble began with a routine property and injunction suit in Vijayawada. On 19 August 2025, the V Additional Junior Civil Judge issued an order rejecting objections to an Advocate Commissioner’s survey report. That order cited what appeared to be two Supreme Court judgments supporting the judge’s reasoning on evidence admissibility. But when the losing party challenged the decision through a civil revision petition, the High Court traced those citations and found they did not exist. The supposed precedents could not be located in any official database, including the Supreme Court e-Committee platforms that the trial court order itself had cited as a source.

The fabricated citations were not minor references buried in a footnote; they were central to the trial court’s reasoning in dismissing objections to the Advocate Commissioner’s report. The parties had raised specific challenges to the survey and its methodology, relying on their understanding of existing case law. By invoking phantom Supreme Court rulings to sweep aside those objections, the trial court effectively created the impression that the highest court had already addressed and resolved similar issues. When the High Court later determined that those authorities were invented, it became clear that the litigants’ arguments had been evaluated under a legal framework that simply did not exist.

How the High Court Caught the Error

Justice Ravi Nath Tilhari’s verification process was straightforward but revealing. The High Court checked the trial court’s cited Supreme Court judgments against official online sources, including the Supreme Court’s e-Committee resources and other court databases. Neither citation matched any real judgment. The references appeared to be AI-generated, complete with plausible-sounding case names and legal reasoning, but with no basis in actual Indian jurisprudence. This pattern is consistent with a well-documented flaw in large language models, which can produce confident but entirely invented legal citations, a phenomenon researchers and legal professionals have termed “hallucination.”

Once the discrepancy surfaced, the High Court sought an explanation from the trial judge. According to the case file and the report placed on record, the judicial officer acknowledged using an AI tool to search for relevant Supreme Court decisions and admitted that the results were not cross-checked against authenticated databases before being incorporated into the order. By reproducing this explanation in its own ruling, the High Court transformed an internal lapse into a publicly visible example of how AI misuse can infiltrate judicial reasoning. The court then set aside the impugned order, restoring the parties’ objections to the Advocate Commissioner’s survey to be decided afresh on the basis of genuine precedent.

Why Fabricated Case Law Carries Real Consequences

The practical stakes here extend well beyond one property suit in Vijayawada. When a trial court cites nonexistent Supreme Court rulings, it creates a false record of law that opposing counsel, appellate judges, and future litigants may rely on. In a legal system where lower courts are bound by higher court precedent, a phantom citation can distort outcomes in ways that are difficult to detect without independent verification. The parties in this case, Gummadi Usha Rani and others, had their objections dismissed on the basis of law that never existed, as reflected in the case record accessible via the eCourts portal. Their property rights were adjudicated using fabricated authority, forcing them to undertake the time and expense of appellate proceedings simply to have the case evaluated under real legal standards.

This is not an abstract risk. AI tools are increasingly accessible to judges, lawyers, and clerks across India’s sprawling court system. The appeal of using AI for research is obvious: it can surface seemingly relevant precedent in seconds rather than hours. But the Vijayawada case demonstrates what happens when speed replaces accuracy. A judge who trusts an AI tool’s output without cross-referencing it against verified databases is, in effect, delegating judicial reasoning to a system that has no capacity to distinguish real law from plausible fiction. The harm is compounded because AI-generated citations often look convincing, making it harder for parties without robust research resources to spot the error.

A Gap in Institutional Safeguards

One of the most striking aspects of the Andhra Pradesh case is the absence of any formal protocol that might have caught the error before it entered the court record. The trial judge’s admission that an AI tool was used for research, without any mention of a required verification step, suggests that no binding guidelines governed the use of such tools at the trial court level. Based on the material referenced in the High Court’s order, there is no publicly documented Supreme Court directive or e-Committee circular that mandates verification procedures when judges use AI for case-law research. This institutional gap left the Vijayawada judge relying entirely on personal judgment about whether to trust the AI’s output, even for something as critical as Supreme Court precedent.

Most discussion of AI in Indian courts has focused on the technology’s promise: faster case disposal, better access to legal information, and reduced backlogs through digitization and search tools. The Vijayawada order is a corrective to that framing. It shows that without enforceable verification requirements, AI tools can introduce errors that are harder to detect than traditional research mistakes. A judge who misreads a real case at least leaves a trail that can be checked and debated. A judge who cites a case that was never decided leaves no trail at all, only a dead end that wastes the time and resources of everyone involved in the appeal. The High Court’s intervention in Civil Revision Petition No. 2487 of 2025 thus highlights the need for institutional safeguards that keep AI in a supporting role rather than an unexamined authority.

What This Case Signals for Indian Courts

The High Court’s decision to place the trial judge’s AI admission on the public record is itself a significant act. It creates a formal precedent, not in the doctrinal sense, but as an institutional signal that reliance on unverified AI output will be treated as a serious lapse in judicial standards. Justice Ravi Nath Tilhari’s order did not merely reverse the lower court on procedural grounds. It identified the root cause of the error and documented it in a way that future courts, judicial academies, and administrative bodies can study when drafting rules on technology use. By clearly stating that the fabricated citations emerged from an AI tool and that the judge failed to cross-check them, the High Court has left future benches a documented account of how such errors arise and how they can be caught.

Going forward, the Andhra Pradesh ruling is likely to be cited in debates over how Indian courts should integrate AI into daily work. One probable outcome is the development of explicit protocols requiring judges and court staff to verify any AI-generated citation against official sources such as the Supreme Court’s e-Committee databases and the national eCourts system before relying on it in an order or judgment. Training programs for judicial officers may also begin to emphasize the limits of AI, treating tools that generate text as starting points for research rather than substitutes for authoritative case law. The Vijayawada episode shows that the cost of ignoring those limits is not theoretical: it is borne by real litigants whose rights are decided on the basis of law that, as the High Court ultimately confirmed, was never law at all.

*This article was researched with the help of AI, with human editors creating the final content.