
Courts are starting to treat generative AI less like a marvel and more like a malfunctioning appliance, something that can be tolerated as long as it does not explode in the user’s face. In the process, judges are quietly redefining what counts as “good enough” for machine‑generated legal work, even as the technology continues to hallucinate cases, misstate facts, and tempt lawyers into cutting corners. The result is a legal system that is trying to domesticate unreliable software by regulating the humans around it, rather than demanding that the tools themselves stop making things up.

That shift is clearest in a growing body of rulings, sanctions, and ethics opinions that accept AI as part of everyday practice while punishing its worst failures. From federal judges who have apologized for citing phantom precedents to disciplinary authorities in California and the United Kingdom, the emerging consensus is that generative systems can stay in the courtroom, but only if lawyers and vendors accept a new tier of responsibility for every hallucinated word.

The courtroom learns to live with flawed machines

In just a few years, generative systems have gone from novelty to infrastructure in legal practice, handling everything from quick email drafts to first‑cut contracts and research memos. In the United Kingdom, regulators already describe AI as “embedded in everyday legal practice,” a phrase that captures how tools that once sat at the experimental fringe now sit in the center of client work, billing, and even risk management. When I talk to practitioners, they describe junior associates toggling between a large language model and a traditional database in the same way earlier generations bounced between Westlaw and Lexis.

That normalization is happening even as the same systems are known to hallucinate, inventing plausible‑sounding citations and factual narratives that collapse under scrutiny. UK commentary now warns that firms face rising regulatory and litigation exposure if they let those hallucinations seep into client advice, with some analysts flagging potential criminal liability for law firms that fail to supervise AI‑assisted work. The paradox is stark: courts and regulators are accepting that flawed AI will be used, while simultaneously raising the stakes for anyone who treats its output as self‑authenticating.

The Lindell sanctions and the price of blind trust

The most vivid warning shot came in the defamation litigation involving MyPillow and its founder. In Colorado, a federal judge ordered two attorneys representing MyPillow CEO Mike Lindell to pay thousands of dollars after they filed a brief laced with AI‑fabricated citations. The lawyers had relied on a generative tool to produce case law, then failed to verify that the authorities actually existed, a basic step that any first‑year associate would be expected to perform before signing their name to a filing.

The episode drew national attention in part because Mike Lindell is already a polarizing figure, closely associated with efforts to overturn the 2020 election and with rallies of Donald Trump supporters in places like Palm Beach, Fla., near Trump’s residence. But the real story was procedural, not political. The Colorado court treated the AI tool as irrelevant and focused instead on the lawyers’ duty of candor, making clear that delegating research to a chatbot does not dilute professional obligations. In effect, the judge accepted that AI would be used, yet drew a bright line: hallucinations are the lawyer’s problem, not the machine’s.

California’s early blueprint for disciplining AI misuse

If Colorado supplied the headline‑grabbing sanctions, California is quietly building the rulebook. Ethics analyst David Cameron Carr has cataloged a trio of opinions from the California Courts of Appeal that grapple directly with AI‑tainted filings, each one reinforcing that judges will not tolerate fabricated authorities. In one matter, the court flagged that a brief contained non‑existent cases, then referred the lawyers to state bar investigators, signaling that misuse of generative tools can trigger not just judicial scolding but full‑blown disciplinary probes.

Those opinions, discussed in detail in Carr’s November analysis, sketch a model that other jurisdictions are likely to copy. Judges are not banning AI or treating its use as per se misconduct. Instead, they are insisting that any lawyer who leans on a generative system must still read, understand, and independently verify the authorities that end up in a brief. The message is that AI can be a drafting assistant but never a substitute for human judgment, and that “I trusted the software” will not save anyone from a referral to the bar.

When judges themselves get fooled

The anxiety around hallucinations is not limited to advocates. In a remarkable development, two federal judges recently apologized after discovering that their own written opinions contained AI‑generated errors. According to one detailed account, the Administrative Office of the U.S. Courts convened a task force on generative AI and issued interim guidance that cautions judges against delegating core judicial functions to automated tools, a sign that the judiciary understands how tempting it can be to lean on software for drafting and research.

Legal scholar Josh Blackman has described how those apologies exposed a deeper institutional worry: if judges quietly use generative systems to summarize briefs or outline opinions, hallucinations can slip directly into binding precedent. The new guidance, discussed in Blackman’s analysis of the two judges’ apologies, effectively tells the bench to treat AI as a tool for clerical support, not as a silent coauthor. It is another example of the system redefining “good enough” as human‑checked AI, rather than AI left to its own devices.

Standing orders and the new minimum standard

Across the country, courts are translating those anxieties into standing orders and local rules that set a floor for AI use. Commentators tracking these developments note that judges have begun to require lawyers to disclose when they use generative tools in drafting, to certify that all citations have been checked against traditional databases, and in some cases to refrain from uploading confidential materials into public models. The goal is not to outlaw AI, but to keep hallucinations and data leaks from entering the judicial record in the first place.

Looking ahead to 2026, one influential set of predictions for AI and the law argues that these orders are only the beginning. So far, courts have tried to manage hallucinated filings through procedural rules, but the next frontier is likely to be more substantive, including questions about whether AI vendors themselves can be held responsible when their products generate defamatory or misleading content that ends up in litigation. For now, though, the operative standard is simple: AI is allowed in the drafting room, but nothing it produces is “good enough” until a human has checked every line.

Trump’s executive order and the limits of federal preemption

Into this unsettled landscape stepped President Donald Trump, whose administration has tried to shape AI policy from the top down. His executive order seeks to challenge and preempt state artificial intelligence regulations, reflecting a broader push to keep a patchwork of local rules from constraining national innovation. The order signals the White House’s preference for a lighter regulatory touch, especially around emerging technologies that companies argue are still evolving too quickly for rigid statutory schemes.

Yet even as the executive branch asserts itself, legal analysts point out that courts will continue to define the real contours of AI accountability. A detailed commentary arguing that Trump’s order can’t stop courts from shaping AI accountability notes that judges are already wrestling with design liability, product defect theories, and professional negligence claims tied to generative tools. No executive order can prevent a state court from finding that a vendor’s interface encouraged overreliance on hallucination‑prone output, or that a lawyer breached a duty by failing to supervise an AI assistant. In practice, the judiciary is setting the operational definition of “good enough” AI, one sanctions order at a time.

Ethics guidance and best practices: from theory to checklists

As the case law evolves, bar associations and law firms are racing to translate abstract duties into concrete workflows. Compliance experts now urge firms to adopt written policies that spell out when and how generative tools can be used, which models are approved, and what verification steps are mandatory before any AI‑assisted work product goes to a client or a court. In the UK, that conversation is explicitly tied to the risk of regulatory enforcement and even criminal exposure if partners fail to supervise junior lawyers who rely on AI for substantive analysis.

Inside large U.S. firms, the conversation has shifted from “should we use this at all” to “how do we keep it from hurting us.” In one widely cited discussion, partners Amy Longo and Shannon Kirk walk through how courts are already cracking down on hallucinations and urge litigators to cross‑check any AI‑generated citations against trusted legal databases before filing. Their advice reads less like blue‑sky futurism and more like a safety checklist: know your tool, limit what you feed it, and never skip the human review that turns a hallucination‑prone draft into something fit for a docket.

Predictions, pessimism, and the culture of “good enough”

Despite all this activity, there is a growing sense that the legal profession is still behind the curve. One wry year‑end column, titled “7 Predictions For 2026 That Should Come True But Won’t,” captures the mood with a mix of frustration and dark humor. The author notes, “Honestly, one would’ve thought lawyers could’ve cleared this one in 2025,” referring to the basic task of stamping out AI‑hallucinated citations, before adding that skeptics “said that of 2025 too,” a line that neatly summarizes how slowly cultural change moves in law compared with technological adoption.

That pessimism is not entirely misplaced. Even after the first wave of sanctions and apologies, there are still lawyers who treat generative tools as black boxes that magically produce arguments, and judges who are only beginning to ask how often AI is lurking behind the briefs on their desks. The prediction piece, which opens with the deadpan phrase “Honestly, one would’ve thought,” suggests that the real obstacle is not a lack of rules but a persistent culture of “good enough” that tolerates sloppy supervision as long as nothing explodes in public. Until that culture shifts, courts will keep encountering hallucinations, and each new incident will force another incremental redefinition of what responsible AI use looks like.

Redefining “good enough” without saying it out loud

What ties these threads together is an unspoken recalibration of standards. Judges are not demanding perfection from AI‑assisted work, any more than they expect human lawyers to be infallible. Instead, they are drawing a line between acceptable error and reckless indifference, treating hallucinations as tolerable only when they are caught and corrected before they reach the record. In that sense, the system is quietly blessing a world in which flawed AI is omnipresent, as long as its flaws are managed through human diligence, disclosure, and, when necessary, discipline.

I see that as both pragmatic and risky. Pragmatic, because generative tools are not going away, and insisting on a fantasy of zero hallucinations would simply drive their use underground. Risky, because every time a court shrugs off a near miss or limits sanctions to a slap on the wrist, it reinforces the idea that a hallucination‑prone assistant is “good enough” for serious legal work. The next few years will test whether the emerging mix of standing orders, ethics opinions, and high‑profile sanctions can keep that compromise stable, or whether another spectacular AI failure will force judges to rethink how much unreliability they are willing to tolerate in the machinery of justice.
