A federal judge has sanctioned a former U.S. attorney for filing a court brief that cited legal cases fabricated by ChatGPT, imposing some of the sharpest penalties yet seen in the growing wave of AI-related courtroom misconduct. The attorney, who had previously been terminated from the Department of Justice, submitted the brief without verifying whether the cases it relied on actually existed. None of them did.
The sanctions order, which has been referenced in secondary reporting but whose full text has not been independently reviewed for this article, reportedly included a public reprimand, removal from the case, and referral to bar disciplinary authorities. Each penalty carries lasting professional consequences: a reprimand attaches a permanent mark to the attorney’s record, removal forces the client to find new counsel mid-litigation, and a disciplinary referral can lead to suspension or disbarment. The names of the sanctioned attorney and the presiding judge have not been confirmed through the sourcing available for this article.
A problem courts keep seeing
The case is not the first time a federal judge has punished a lawyer for presenting AI-generated fiction as binding precedent. In 2023, a New York federal judge sanctioned attorneys in Mata v. Avianca after they filed a brief containing bogus case citations produced by ChatGPT. That case, handled by Judge P. Kevin Castel in the Southern District of New York, became a cautionary tale across the legal profession and prompted dozens of federal judges to issue standing orders requiring lawyers to disclose when they use AI tools in preparing filings.
More recently, a federal judge in Alabama sanctioned lawyers defending the state’s prison system for the same kind of failure. According to AP reporting, the Alabama court imposed remedies that mirrored the penalties in the latest case: public reprimand, removal, and referral. The consistency of these rulings across different courts and circuits signals that federal judges are converging on a unified stance. Submitting unverified AI output is not a technical hiccup; it is a breach of the duty of candor every lawyer owes to the tribunal.
Why hallucinations keep slipping through
Large language models like ChatGPT generate text by predicting the next likely word in a sequence. They do not search legal databases or confirm that a case exists before citing it. When asked for supporting case law, these tools can produce citations that look convincing, complete with plausible party names, volume numbers, and reporter abbreviations, but point to opinions that were never written. The legal profession calls this “hallucination,” borrowing a term from AI research.
Established legal research platforms such as Westlaw and LexisNexis have built-in verification: if a case does not appear in their databases, it almost certainly does not exist. Lawyers who use generative AI to draft briefs can catch hallucinated citations in minutes by cross-checking against these tools. The attorneys sanctioned in the recent cases skipped that step. Courts have made clear that this omission is not excusable, regardless of how polished the AI output appeared.
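The cross-checking step described above is mechanical enough to sketch in code. The snippet below is an illustrative example only: it pulls citation-like strings out of a brief with a regular expression and flags any that are absent from a set of verified citations. In practice that set would be replaced by a lookup against a real legal database such as Westlaw or LexisNexis; the `known_citations` set here is a hypothetical stand-in, and the regex covers only a few common federal reporter formats.

```python
import re

# Matches citation patterns like "410 U.S. 113", "598 F.3d 456",
# or "999 F. Supp. 2d 321" (a small subset of real reporter formats).
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.[234]d|F\. Supp\.(?: [23]d)?)\s+\d{1,4}\b"
)

def flag_unverified(brief_text: str, known_citations: set[str]) -> list[str]:
    """Return citations found in the text that are not in the verified set."""
    found = CITATION_RE.findall(brief_text)
    return [c for c in found if c not in known_citations]

brief = "As held in 410 U.S. 113 and 999 F. Supp. 2d 321, the motion fails."
verified = {"410 U.S. 113"}  # stand-in for a database lookup
print(flag_unverified(brief, verified))  # the second citation is flagged
```

The point of the sketch is not the regex but the workflow: extraction is trivial, and the expensive step a hallucinated citation survives only if the lawyer skips the lookup.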
What is still unclear
Several details about the former U.S. attorney’s situation remain unconfirmed through publicly available primary documents as of May 2026. The full text of the sanctions order has not been widely circulated, so the exact number of fabricated citations and how deeply they shaped the legal argument are not independently verifiable. As noted, the identities of the attorney and the presiding judge, ordinarily matters of public court record, remain unconfirmed through the sources available for this article. Whether the attorney used ChatGPT to draft the entire brief or only to locate supporting authorities is also unknown. That distinction matters: using AI as a research shortcut without verification may be characterized as reckless, while delegating full drafting to a tool and signing without review could be treated as a more serious abdication of professional duty.
The circumstances of the attorney’s earlier departure from the DOJ add another layer of uncertainty. Reporting references a firing tied to unrelated controversies, but no official personnel records, statements, or confirmed dates of termination have surfaced to clarify the timeline. Drawing a direct connection between the attorney’s employment history and the AI misconduct would be speculative at this point.
A broader question looms behind these individual cases: how many similar filings have gone undetected? Courts typically discover fabricated citations only when opposing counsel or a judge independently checks the references. No systematic audit of AI-assisted legal filings has been conducted at the federal level, leaving the true scope of the problem unknown.
Where bar associations and courts stand on AI oversight
Bar associations in several states, including California, Florida, and New York, have issued formal guidance on AI use in legal practice. As of May 2026, no jurisdiction appears to have adopted a mandatory AI literacy certification for practicing attorneys, though this landscape is evolving and a comprehensive survey of all jurisdictions has not been conducted for this article. Whether the mounting sanctions accelerate formal rule changes will depend on how quickly state bars move from advisory opinions to binding ethical rules.
For now, the enforcement mechanism is the one that has always existed: judges punishing lawyers who present false information to the court. The added dimension is that the false information was generated by a machine the lawyer chose to trust without verification. Federal courts have treated these cases not as novel technology disputes but as straightforward failures of honesty and diligence, the same standards that governed legal practice long before anyone had heard of a large language model.
The practical lesson for any attorney using generative AI is blunt: treat the output the way you would treat a memo from an unsupervised intern. Verify every citation. Read every case. Confirm that quoted language matches the actual opinion. The attorney’s name on the filing is the attorney’s responsibility, and no court has shown any willingness to accept “the AI made it up” as a defense.
More from Morning Overview
*This article was researched with the help of AI, with human editors creating the final content.*