When AI Hallucinates, Federal Courts Are Drawing Differing Lines on Lawyer Sanctions

April 16, 2026
by Nicholas S. Bobb

Summary

  • Generative AI has become a routine part of legal practice, but as its use grows, appellate courts are increasingly confronting briefs that contain hallucinated citations and misstatements of law.
  • Recent Sixth and Seventh Circuit cases agree that AI changes lawyers’ workflow, not their ethical duties of competence, candor, and verification.
  • The courts diverge on sanctions, with the Sixth Circuit imposing serious consequences for AI-related errors that caused systemic harm, while the Seventh Circuit opted for admonishment where errors were limited and non-prejudicial.
  • Intent is not required for sanctions, but the scope and impact of AI-generated errors influence how courts respond.
  • Both decisions emphasize that lawyers remain fully responsible for every citation and quotation, even when errors involve real cases inaccurately summarized by AI tools.

The rapid integration of generative artificial intelligence into the practice of law has moved from novelty to inevitability. Lawyers now use AI tools to draft briefs, summarize records, and generate legal arguments. Every day, more lawyers offload critical thought and oversight onto AI models they do not understand. With that shift has come a predictable consequence: federal appellate courts are confronting filings that contain hallucinated quotations, fabricated citations, or misrepresentations of case law.

Two recent appellate decisions, one from the Sixth Circuit (United States v. Farris, No. 25-5623, 2026 LX 161289, 2026 WL 915082 (6th Cir. Apr. 3, 2026)) and one from the Seventh (Dec v. Mullin, No. 25-2417, 2026 LX 175382, 2026 WL 861530 (7th Cir. Mar. 30, 2026)), illustrate that courts are united on the principle that lawyers must verify AI‑generated work, but divided on how harshly to respond when they do not. Together, the cases signal both a warning and a roadmap for the bar.

Courts Agree: AI Changes the Workflow, Not the Duty

Both courts begin from the same premise: artificial intelligence does not dilute a lawyer’s ethical obligations. Competence, candor, and accuracy apply regardless of whether a brief was typed from scratch, produced by an associate, or generated by a large language model (even a law-specific LLM).

The Sixth Circuit made this point emphatically, tying AI use to long‑standing rules requiring lawyers to understand and supervise the tools they employ and to personally verify citations and quotations. The Seventh Circuit echoed the same idea more succinctly, reminding practitioners that citation checking is “easier now than ever” and that failures waste court resources.

In this sense, the courts are aligned. Neither treats AI hallucinations as a novel ethical category deserving special leniency. Instead, AI errors are simply another way lawyers can misquote, miscite, or mischaracterize the law, and the lawyer’s duty of candor toward the tribunal remains intact.

Courts React Differently, Weighing Intent and Impact

Where the courts diverge is not on principle, but on response.

In Farris, the Sixth Circuit confronted a court‑appointed criminal defense attorney whose briefs contained multiple fabricated quotations and material misstatements of circuit precedent. The errors triggered a show‑cause proceeding, delayed resolution of the criminal appeal, and forced the court to divert substantial resources. The court also emphasized that this occurred in the context of publicly funded representation of an indigent defendant.

Those factors mattered. The Sixth Circuit framed the misconduct as “inexcusable” regardless of intent, focusing on the systemic harm rather than counsel’s contrition. That framing justified serious consequences: forfeiture of fees, removal from the case, and referrals to the Chief Judge of the Sixth Circuit for potential discipline.

By contrast, the Seventh Circuit in Dec faced a smaller number of hallucinated citations tucked into a standard‑of‑review section of a civil immigration appeal. The panel was persuaded that the errors did not affect the merits, were not strategic, and were met with genuine acceptance of responsibility. While still troubled, the Seventh Circuit opted for admonishment rather than escalation.

Neither court required a finding of bad faith, and both were clear that even negligent reliance on AI can warrant sanctions. But Dec shows that intent, scope, and effect can influence the outcome.

The lesson for practitioners is subtle but critical: apologies matter, but they don’t erase downstream harm. The more an AI‑generated error affects the court’s work, the client’s rights, or the integrity of the proceeding, the less likely a court will stop at a warning.

A striking feature of Farris is the court’s insistence that citing real cases incorrectly can be just as serious as inventing cases outright. The opinion rejects any notion that hallucinations are less problematic when they are “anchored” to real authority.

This is a cautionary point for lawyers relying on AI research tools that confidently summarize or quote cases. A brief built around a familiar reporter citation may pass casual review, increasing the risk that fabricated language will slip through. Courts are signaling that this kind of error cuts directly at candor and competence.

These cases are unlikely to be outliers. As generative AI becomes a routine part of legal drafting, courts will continue to refine how they distinguish between forgivable mistakes and sanctionable misconduct. What is already clear is that AI use is no longer novel enough to excuse carelessness.

For now, the message from the courts is increasingly consistent and unmistakable: AI may draft the words, but the lawyer owns every citation on the page.
