Video Script (9 min, clean transcript for captioning)

In the first quarter of 2026, U.S. courts handed out more than $145,000 in sanctions against lawyers for filing documents that contained citations to cases that don't exist — cases that were invented by AI.

One hundred and forty-five thousand dollars. In three months. And that's just the cases where courts imposed monetary sanctions. The number of cases where judges issued warnings, required corrective filings, or simply noted the problem without penalty is significantly higher.

Researcher Damien Charlotin has been tracking these incidents globally. His database now contains more than 1,200 AI hallucination cases worldwide. Approximately 800 are from U.S. courts.

This is not a theoretical risk. It is a documented, recurring, and accelerating problem. And it tells us something important about the gap between how AI is being deployed and how it actually needs to be used in high-stakes professional contexts.

Let me give you the concrete cases, because the specific details matter here.

The largest single sanction in Q1 came out of a federal court in Oregon, which ordered a lawyer to pay $109,700 in sanctions and costs. The case involved AI-generated errors — not a citation or two, but a pattern of fabricated material embedded in briefs.

In another Oregon district case involving a winery dispute, the court found fifteen fake citations and eight invented quotations across multiple briefs. Lead counsel was fined more than $15,000, plus adverse costs. That means the lawyer not only paid the sanction — they potentially paid the other side's legal bills too.

The Sixth Circuit sanctioned two lawyers $15,000 each in a Tennessee fireworks case after finding, across multiple briefs, more than two dozen citations that were wrong, misleading, or nonexistent.

Lawyers for MyPillow CEO Mike Lindell were fined $3,000 each for filing briefs with fictitious AI-generated citations.

These are not isolated incidents. They are a pattern.

Now let me tell you why this is happening, because it's not stupidity. It's the specific failure mode of how large language models handle citations, and understanding it matters whether you're a lawyer, a professional in any regulated field, or a business using AI to generate documents.

Large language models generate text that sounds plausible. They're trained on massive amounts of text and they learn what legal citations look like — the format, the style, the structure of case names. When they don't have a specific real case to cite, they sometimes generate text that follows the pattern of a real citation. Case name. Court. Year. Volume and page number. Reporter abbreviation. The whole format. And it's wrong. The case doesn't exist.

This is not the model lying. It doesn't know it's lying. It is producing text that fits the pattern of what a citation looks like. It has no internal mechanism that says "I'm not sure if this case exists." It just produces plausible-sounding text.

The responsibility for catching this falls entirely on the person submitting the document.

Here's what makes this particularly consequential in a legal context: lawyers sign documents. In the federal courts, that signature is a certification under Rule 11 that the lawyer has performed a reasonable inquiry and believes the citations are accurate. That certification is what makes the sanctions possible. When you file something in court, you're attesting to its accuracy. If it's wrong — especially if it's wrong in a way that suggests no one checked — the court can sanction you.

The legal profession's response to this problem has been a mix of guidance and escalating enforcement.

As of early 2026, over 35 state bar associations have issued guidance on AI use. The universal principle across all of them: no jurisdiction permits blind reliance on AI-generated content. Every jurisdiction requires independent verification of AI output before it's submitted to any tribunal.

Oregon went further. In December 2025, the Oregon Court of Appeals established a tariff schedule for AI hallucination misconduct: $500 per fabricated citation, $1,000 per fabricated quotation. A document with fifteen fake citations and eight invented quotations under that schedule would cost $15,500 before you add any additional sanctions.

There's a twist that makes this even more complicated.

While courts are sanctioning lawyers for using AI without verification, a separate study found that 61% of federal judges are themselves using AI for legal work. The judges doing the sanctioning are using the same category of tools as the lawyers they're sanctioning.

The difference — and it's the entire difference — is the verification step. Professional use of AI requires checking the output. It doesn't matter how capable the tool is. The model doesn't know what it doesn't know, and in high-stakes professional contexts, unverified AI output is a professional liability risk.

This is not a legal profession problem specifically. It's a use case problem that applies to every professional domain where accuracy is verifiable and errors have consequences.

Medical professionals generating clinical notes or research summaries. Financial analysts producing reports with specific data. Journalists publishing factual claims. Accountants preparing filings with regulatory citations. The failure mode is the same in every case: AI produces plausible-sounding output that happens to be wrong, and someone submits it without verifying.

The practical takeaway is simple. If AI generates a specific factual claim — a citation, a statistic, a date, a name, a case reference — verify it independently before relying on it in a professional context. Not because the model is usually wrong. Because when it's wrong in a high-stakes context, you own the error.

The $145,000 in Q1 2026 sanctions is not the end of this story. It's the beginning.

Stay sharp.
— Jane Sterling, Sterling Intelligence

Annotated Script (with b-roll & cut cues)
[TALKING HEAD — hook]

In the first quarter of 2026, U.S. courts handed out more than $145,000 in sanctions against lawyers for filing documents that contained citations to cases that don't exist — cases that were invented by AI.

[STAT CARD: "$145,000 in Q1 2026 sanctions"]

One hundred and forty-five thousand dollars. In three months. And that's just the cases where courts imposed monetary sanctions. The number of cases where judges issued warnings, required corrective filings, or simply noted the problem without penalty is significantly higher.

[VOICEOVER — scene 1] [B-ROLL: news-studio]

Researcher Damien Charlotin has been tracking these incidents globally. His database now contains more than 1,200 AI hallucination cases worldwide. Approximately 800 are from U.S. courts.

[STAT CARD: "1,200 cases globally / 800 in U.S. courts"]

This is not a theoretical risk. It is a documented, recurring, and accelerating problem. And it tells us something important about the gap between how AI is being deployed and how it actually needs to be used in high-stakes professional contexts.

[/VOICEOVER] [CUT] [TALKING HEAD — transition]

Let me give you the concrete cases, because the specific details matter here.

[VOICEOVER — scene 2] [B-ROLL: courtroom]

The largest single sanction in Q1 came out of a federal court in Oregon, which ordered a lawyer to pay $109,700 in sanctions and costs. The case involved AI-generated errors — not a citation or two, but a pattern of fabricated material embedded in briefs.

[STAT CARD: "$109,700 — largest single sanction"] [B-ROLL: screen-capture:court-filing]

In another Oregon district case involving a winery dispute, the court found fifteen fake citations and eight invented quotations across multiple briefs. Lead counsel was fined more than $15,000, plus adverse costs. That means the lawyer not only paid the sanction — they potentially paid the other side's legal bills too.

[STAT CARD: "15 fake citations + 8 invented quotations"] [B-ROLL: stills:gavel]

The Sixth Circuit sanctioned two lawyers $15,000 each in a Tennessee fireworks case after finding, across multiple briefs, more than two dozen citations that were wrong, misleading, or nonexistent.

[STAT CARD: "$15,000 each — Sixth Circuit"] [B-ROLL: news-studio]

Lawyers for MyPillow CEO Mike Lindell were fined $3,000 each for filing briefs with fictitious AI-generated citations.

[STAT CARD: "$3,000 each — Lindell counsel"]

These are not isolated incidents. They are a pattern.

[/VOICEOVER] [CUT] [TALKING HEAD — transition]

Now let me tell you why this is happening, because it's not stupidity. It's the specific failure mode of how large language models handle citations, and understanding it matters whether you're a lawyer, a professional in any regulated field, or a business using AI to generate documents.

[VOICEOVER — scene 3] [B-ROLL: ai-abstract]

Large language models generate text that sounds plausible. They're trained on massive amounts of text and they learn what legal citations look like — the format, the style, the structure of case names. When they don't have a specific real case to cite, they sometimes generate text that follows the pattern of a real citation. Case name. Court. Year. Volume and page number. Reporter abbreviation. The whole format. And it's wrong. The case doesn't exist.

[B-ROLL: screen-capture:chatgpt]

This is not the model lying. It doesn't know it's lying. It is producing text that fits the pattern of what a citation looks like. It has no internal mechanism that says "I'm not sure if this case exists." It just produces plausible-sounding text.

[B-ROLL: company-logo:openai]

The responsibility for catching this falls entirely on the person submitting the document.

[/VOICEOVER] [TALKING HEAD — transition]

Here's what makes this particularly consequential in a legal context: lawyers sign documents. In the federal courts, that signature is a certification under Rule 11 that the lawyer has performed a reasonable inquiry and believes the citations are accurate. That certification is what makes the sanctions possible. When you file something in court, you're attesting to its accuracy. If it's wrong — especially if it's wrong in a way that suggests no one checked — the court can sanction you.

[VOICEOVER — scene 4] [B-ROLL: courtroom]

The legal profession's response to this problem has been a mix of guidance and escalating enforcement.

[B-ROLL: stills:gavel]

As of early 2026, over 35 state bar associations have issued guidance on AI use. The universal principle across all of them: no jurisdiction permits blind reliance on AI-generated content. Every jurisdiction requires independent verification of AI output before it's submitted to any tribunal.

[STAT CARD: "35+ state bar associations"] [B-ROLL: finance-charts]

Oregon went further. In December 2025, the Oregon Court of Appeals established a tariff schedule for AI hallucination misconduct: $500 per fabricated citation, $1,000 per fabricated quotation. A document with fifteen fake citations and eight invented quotations under that schedule would cost $15,500 before you add any additional sanctions.

[STAT CARD: "$500 / citation, $1,000 / quotation"] [/VOICEOVER] [TALKING HEAD — transition]

There's a twist that makes this even more complicated.

[VOICEOVER — scene 5] [B-ROLL: courtroom]

While courts are sanctioning lawyers for using AI without verification, a separate study found that 61% of federal judges are themselves using AI for legal work. The judges doing the sanctioning are using the same category of tools as the lawyers they're sanctioning.

[STAT CARD: "61% of federal judges use AI"] [B-ROLL: ai-abstract]

The difference — and it's the entire difference — is the verification step. Professional use of AI requires checking the output. It doesn't matter how capable the tool is. The model doesn't know what it doesn't know, and in high-stakes professional contexts, unverified AI output is a professional liability risk.

[/VOICEOVER] [CUT] [TALKING HEAD — sign-off]

This is not a legal profession problem specifically. It's a use case problem that applies to every professional domain where accuracy is verifiable and errors have consequences.

Medical professionals generating clinical notes or research summaries. Financial analysts producing reports with specific data. Journalists publishing factual claims. Accountants preparing filings with regulatory citations. The failure mode is the same in every case: AI produces plausible-sounding output that happens to be wrong, and someone submits it without verifying.

The practical takeaway is simple. If AI generates a specific factual claim — a citation, a statistic, a date, a name, a case reference — verify it independently before relying on it in a professional context. Not because the model is usually wrong. Because when it's wrong in a high-stakes context, you own the error.

The $145,000 in Q1 2026 sanctions is not the end of this story. It's the beginning.

Stay sharp. — Jane Sterling, Sterling Intelligence

U.S. courts imposed more than $145,000 in sanctions against lawyers in Q1 2026 alone for filing documents containing AI-generated citations to cases that don't exist. One attorney in Oregon was ordered to pay $109,700 in a single case. A global database of AI hallucination incidents in legal contexts now contains more than 1,200 cases.

In this video, Jane Sterling breaks down the specific cases, why this failure mode is happening, what the legal profession's response has been, and what it means for anyone using AI in professional contexts.


The Numbers

Researcher Damien Charlotin has been systematically tracking AI hallucination incidents in legal proceedings. His database now contains more than 1,200 cases globally. Approximately 800 are from U.S. courts.

In Q1 2026 alone, U.S. courts imposed more than $145,000 in monetary sanctions specifically for AI hallucination misconduct — fabricated citations, invented case quotations, and fictitious legal authorities included in court filings.

This is the monetary sanction figure only. Cases where courts issued warnings, required corrective filings, or referred matters to bar associations for discipline are not included in the $145,000 figure.


The Major Cases

Oregon, Q1 2026 — Record Single Sanction

A federal court in Oregon ordered one attorney to pay $109,700 in sanctions and costs over a pattern of AI-generated errors in filed briefs. It is the largest single AI hallucination sanction on record in the United States.

Oregon Winery Case

In a commercial dispute involving a winery, the court found fifteen fabricated citations and eight invented quotations embedded across multiple briefs. Lead counsel was sanctioned more than $15,000 plus adverse costs — meaning potentially paying the opposing side's legal fees as well.

Sixth Circuit — Tennessee Fireworks Case

Two lawyers were each sanctioned $15,000 after the Sixth Circuit found more than two dozen citations in their filings that were wrong, misleading, or did not exist. The combined sanction came to $30,000 in this single case.

MyPillow/Lindell Case

Lawyers for MyPillow CEO Mike Lindell were fined $3,000 each for submitting briefs containing fictitious AI-generated citations. A lower sanction, but a high-profile case given the client.


Why This Keeps Happening

Understanding why AI produces fabricated citations requires understanding how large language models work.

Large language models do not retrieve information from a database. They generate text based on statistical patterns learned from training data. They have learned what legal citations look like — the format, the structure, the naming conventions. When they don't have access to a specific real case that fits the argument being made, they sometimes generate text that follows the citation pattern but contains invented content.

The model does not know this is wrong. There is no internal flag that says "I'm uncertain whether this case exists." The model produces plausible-sounding text because that is what it does. It cannot distinguish between generating a citation to a real case and generating a citation to a fictional case — both look the same to the model.
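
To make "both look the same to the model" concrete, here is a toy sketch (plain Python, no model involved) that fills a citation template with invented parties and numbers. Every name and reporter below is made up for illustration; the point is that the output's format is indistinguishable from a genuine citation.

```python
import random

# Invented party names and reporter abbreviations, purely for illustration.
PARTIES = ["Hargrove", "Delmont Corp.", "Whitfield", "Crestline LLC"]
REPORTERS = ["F.3d", "F. Supp. 3d", "P.3d"]

def citation_shaped_string() -> str:
    """Return a string with the *format* of a case citation.

    Nothing here checks whether such a case exists -- which is
    exactly the gap in unverified LLM output.
    """
    plaintiff, defendant = random.sample(PARTIES, 2)
    volume, page = random.randint(100, 999), random.randint(1, 1500)
    return (f"{plaintiff} v. {defendant}, {volume} "
            f"{random.choice(REPORTERS)} {page} ({random.randint(1990, 2024)})")

if __name__ == "__main__":
    for _ in range(3):
        print(citation_shaped_string())  # plausible format, fictional case
```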

This means that the AI output failure mode in legal citation work is not random error. It is systematic plausibility — the fabricated citation looks exactly like a real citation. It has the right format. It often has a plausible-sounding case name. It may even have the correct reporter abbreviation for the jurisdiction. The only way to catch it is to verify that the case actually exists — which requires looking it up in Westlaw, Lexis, or another verified legal database.

This step — verification — is the entire gap between acceptable and sanctionable AI use in legal practice. And many lawyers are not doing it.
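
That missing step can be made mechanical. Below is a minimal sketch, assuming a deliberately simplified citation regex and a placeholder case_exists function that you would wire to Westlaw, Lexis, or another verified database (none of those APIs are shown here). Anything that cannot be positively confirmed gets flagged.

```python
import re

# Deliberately simplified pattern for "Name v. Name, <vol> <reporter> <page> (<year>)".
# Real Bluebook citation formats are far more varied than this.
CITATION_RE = re.compile(
    r"[A-Z][\w.&' ]+? v\. [A-Z][\w.&' ]+?, \d+ [A-Za-z0-9. ]+? \d+ \(\d{4}\)"
)

def case_exists(citation: str) -> bool:
    """Placeholder: look the citation up in Westlaw, Lexis, or another
    verified legal database. This is the step the sanctioned filings skipped."""
    raise NotImplementedError("wire this to a verified citation database")

def flag_unverified(brief_text: str) -> list[str]:
    """Return every citation-shaped string that is not positively verified."""
    flagged = []
    for match in CITATION_RE.finditer(brief_text):
        citation = match.group(0)
        try:
            verified = case_exists(citation)
        except NotImplementedError:
            verified = False  # unverifiable counts as unverified
        if not verified:
            flagged.append(citation)
    return flagged
```

Run against a draft brief, this flags every citation until the lookup is wired in. That is the right default: unverifiable means unfiled.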


The Legal Profession's Response

The legal profession's formal response has been accelerating guidance and enforcement.

Bar Association Guidance

As of early 2026, more than 35 state bar associations have issued formal guidance on AI use in legal practice. The standard is universal: no jurisdiction permits blind reliance on AI-generated content, and every jurisdiction requires independent verification of AI output before submission to any court or tribunal.

This is not optional guidance. Professional responsibility rules apply. Submitting unverified AI-generated content in circumstances where that content contains errors is a potential ethics violation in addition to being a sanctionable court infraction.

Oregon's Tariff Schedule

In December 2025, the Oregon Court of Appeals went further than guidance. It established a formal tariff schedule for AI hallucination misconduct:

• $500 per fabricated citation
• $1,000 per fabricated quotation

Applied to the Oregon winery case (15 fake citations + 8 invented quotations): 15 × $500 + 8 × $1,000 = $7,500 + $8,000 = $15,500 in tariff alone, before additional sanctions.

This is a systematic enforcement approach that makes the math clear for any attorney considering whether verification is worth the time.
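
For illustration only, here is the schedule's arithmetic as a tiny Python helper. The per-item dollar amounts come from the schedule above; the function itself is mine.

```python
TARIFF_PER_FAKE_CITATION = 500     # dollars, per the December 2025 schedule
TARIFF_PER_FAKE_QUOTATION = 1_000  # dollars

def base_tariff(fake_citations: int, fake_quotations: int) -> int:
    """Base tariff in dollars, before any additional sanctions."""
    return (fake_citations * TARIFF_PER_FAKE_CITATION
            + fake_quotations * TARIFF_PER_FAKE_QUOTATION)

# The winery case: 15 fake citations and 8 invented quotations.
assert base_tariff(15, 8) == 15_500
```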

Court Orders and Disclosure Requirements

Several federal courts have issued standing orders requiring attorneys to disclose AI use in filings and certify that AI-generated content has been verified by a licensed attorney. These orders impose an affirmative disclosure obligation that makes verification a documented step rather than an assumed one.


The Paradox: Judges Use AI Too

One study found that approximately 61% of federal judges are using AI for legal work — research, drafting, analysis of complex materials.

This creates an uncomfortable situation: judges who use AI are sanctioning lawyers who use AI, for failures that can occur in anyone's AI use.

The distinction that makes this coherent is verification. The judges using AI are (presumably) checking what it produces before relying on it in decisions. The lawyers being sanctioned are not. The sanction is not for using AI — it is for submitting unverified AI output in a context where the attorney's signature legally certifies its accuracy.

That distinction — use AI, verify the output — is the entire ethical and professional framework for AI in high-stakes professional work.


What This Means Beyond The Courtroom

The AI hallucination sanction problem in law is a specific instance of a general problem that applies across professional domains.

Medicine: AI-generated clinical summaries, diagnostic suggestions, or treatment protocols contain errors at a rate that requires physician verification before they become part of patient care. Medical AI output that isn't reviewed by a clinician creates patient safety risks and professional liability exposure.

Finance: AI-generated reports with specific data points — earnings figures, regulatory citations, market statistics — require verification before being submitted to regulators or clients, or published to markets. Errors in regulated financial contexts carry significant consequences.

Journalism: AI-generated factual claims require the same editorial verification as any other sourced claim. A hallucinated statistic or a fabricated quotation in a published article means a correction, a retraction, and a credibility problem.

Accounting and tax: AI-generated filings with incorrect regulatory citations or statutory references create errors in documents submitted to government agencies under penalty of perjury.

The common thread: in any professional context where accuracy is verifiable and errors have serious consequences, unverified AI output is a professional liability risk.


Practical Guidance

Use AI as a research and drafting assistant, not as a primary source. AI can find relevant areas of law, suggest arguments, and draft initial language. All of it requires verification.

Verify every citation independently. Check Westlaw, Lexis, or the primary source before including any specific legal authority in a filed document. Every single time.

Disclose AI use where required. Follow your jurisdiction's guidance and any specific court orders requiring disclosure.

Build verification into your workflow as a non-optional step. The time cost of verification is significantly less than the cost of a sanction — financial, professional, and reputational.
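
One way to make verification a documented step rather than an assumed one, in the spirit of the standing orders described earlier, is a dated log entry per citation. The format below is a sketch of my own, not any court's required template.

```python
from datetime import date

def verification_log_entry(citation: str, database: str,
                           verifier: str, found: bool) -> str:
    """One dated, attributable record per checked citation."""
    status = "VERIFIED" if found else "NOT FOUND -- REMOVE BEFORE FILING"
    return f"{date.today().isoformat()} | {verifier} | {database} | {citation} | {status}"

# Example: a citation that failed lookup (the citation itself is invented).
print(verification_log_entry(
    "Hargrove v. Delmont Corp., 412 F.3d 877 (2003)",
    database="Westlaw", verifier="J.S.", found=False))
```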

The $145,000 in Q1 2026 is the documented minimum. The actual cost to attorneys who faced non-monetary consequences — adverse costs awards, bar referrals, reputation damage — is considerably higher.


Subscribe to Sterling Intelligence for weekly AI coverage across law, tech, and business.

New videos every week.
— Jane Sterling


Some links may be affiliate links. We may earn a commission at no cost to you.

YouTube Description

Lawyers are getting fined $145,000 for trusting AI in court. In Q1 2026 alone. One attorney paid $109,700 in a single case.

This is not a hypothetical. This is a documented, accelerating pattern — and it's the clearest real-world stress test yet of what happens when professionals use AI without verification.

U.S. courts handed down more than $145,000 in sanctions against lawyers in the first quarter of 2026 for filing documents that cited cases invented by AI. Researcher Damien Charlotin's database of AI hallucination incidents in legal proceedings now contains more than 1,200 cases globally, with approximately 800 from U.S. courts. The single largest sanction on record — $109,700 — came out of federal court in Oregon.

In this episode, Jane Sterling breaks down the specific sanctioned cases, the technical reason large language models invent citations that look identical to real ones, the legal profession's escalating response, and why this problem is not limited to lawyers.

Cases covered:
• Oregon federal court — $109,700 single-attorney sanction (largest on record)
• Oregon winery case — 15 fake citations, 8 invented quotations, $15,000+ sanction
• Sixth Circuit Tennessee fireworks case — $15,000 each against two lawyers
• MyPillow / Mike Lindell counsel — $3,000 each for fictitious AI citations

Key numbers:
• $145,000 — total Q1 2026 sanctions for AI hallucination misconduct in U.S. courts
• 1,200+ — AI hallucination cases tracked globally by Charlotin
• 800 — approximate U.S. case count
• 35+ — state bar associations that have issued AI-use guidance
• $500 / $1,000 — Oregon's per-citation / per-quotation tariff schedule
• 61% — federal judges who report using AI for legal work

We also cover why large language models fabricate citations that look exactly like real ones, what Rule 11 certification means when AI is in the drafting loop, Oregon's tariff schedule for hallucination misconduct, and why the same failure mode applies to medicine, finance, journalism, and accounting.

⏱ Chapters
00:00 $145,000 in three months
01:00 The cases — Oregon, Sixth Circuit, Lindell
03:00 Why LLMs invent citations that look real
05:00 Rule 11 and the attorney's signature
06:00 Bar guidance and Oregon's tariff schedule
07:30 The paradox — 61% of federal judges use AI
08:30 What this means beyond the courtroom

🔔 Subscribe to Sterling Intelligence for weekly AI coverage across law, tech, and business — no hype, no filler, just the signal.
https://www.youtube.com/@SterlingIntelligence

— Jane Sterling, Sterling Intelligence

#AIHallucinations #AILaw #LawyerSanctions #AILegal #CourtsAI #ChatGPT #LegalAI #Rule11 #SterlingIntelligence #JaneSterling #AINews2026 #ArtificialIntelligence #AIEthics #TechNews2026

Titles

Keywords

AI hallucinations, AI in court, lawyer sanctions, AI legal citations, ChatGPT lawyer, fake case citations, Rule 11 sanctions, Damien Charlotin, AI hallucination database, legal AI, Westlaw, Lexis, Oregon tariff schedule, Oregon Court of Appeals, Sixth Circuit, MyPillow Lindell, fabricated citations, AI ethics law, state bar AI guidance, federal judges AI, professional liability AI, large language model hallucination, AI verification, AI legal research, courts AI 2026, Sterling Intelligence, Jane Sterling, AI news 2026, artificial intelligence law, AI weekly

Thumbnail Brief

Jane's Appearance & Framing

Expression. Serious-concerned, eyebrows slightly drawn, faint look of disbelief. The face you make when you realize an attorney actually filed that. Closed mouth, subtle tension at the jaw.

Head position. Squared to camera with a slight forward lean. Chin neutral, eye line level. Conveys authority and "you need to hear this" without being theatrical.

Wardrobe. Dark blazer, minimalist. No loud jewelry. Consistent with the Sterling Intelligence palette (black, charcoal, gold accent only).

Eye direction. Direct to camera, locked. Alternate take: eyes cut sharply to the right toward the $145K overlay.

Lighting. Key light from upper-left at ~4800K, soft fill on the right at 25% intensity. Deep shadow on the left jaw line for drama. Subtle rim light from behind-right to lift her off the background.

Scene setup. Near-black charcoal background with a faint red-warm gradient in the far upper-right, suggesting courtroom gravity. Shallow depth of field — Jane tack-sharp, background soft. Optional ghosted gavel silhouette at 15% opacity behind her shoulder.

Option 1 — Best (Dollar Angle)
$145,000 FINE

Position. Right third of the frame, stacked — "$145,000" on top large, "FINE" directly below in smaller weight.

Font. JetBrains Mono Bold for the number (monospace reads as data / court record); Inter Black for "FINE".

Color scheme. "$145,000" in pure white with a faint red (#dc2626) underglow. "FINE" in red at 90% scale. 3px black stroke on every character for legibility.

Accent detail. Small caps header above: "Q1 2026 — AI HALLUCINATION SANCTIONS" in 11px gold. Makes it read as a credible data card rather than clickbait.

Option 2 — Shock Angle
AI LIED.
LAWYERS PAID.

Position. Lower-left third, stacked on two lines — "AI LIED." on top, "LAWYERS PAID." below. Close to Jane's shoulder so the eye travels face → text.

Font. Bebas Neue Bold or Impact, condensed all-caps, tight tracking.

Color scheme. "AI LIED." in white. "LAWYERS PAID." in bright red (#dc2626) at 110% scale of the first line. 3px black stroke throughout. Faint outer glow on "PAID." to pop against dark background.

Accent detail. Gold sub-tag below: "$145K IN Q1 SANCTIONS — 1,200 CASES WORLDWIDE" in Inter Bold 16px, #c8a84b gold.

Option 3 — Courtroom Angle
FAKE CASES

Position. Centered upper band, then Jane's face dominant lower two-thirds with a ghosted gavel behind her shoulder.

Font. Inter Black all caps, wide tracking (~120), stretched across full frame width.

Color scheme. Base text in white, with the word "FAKE" overlaid with transparent red (#dc2626 at 80%). 2px black stroke.

Accent detail. Red underline under "FAKE CASES" at 4px. Smaller gold subtitle below: "REAL SANCTIONS — $145K Q1 2026" in Inter Bold 18px. Positions the story as courtroom-drama-first rather than scoreboard-first.

Sources & References

Official — Courts & Bar

Media Coverage

Analyst & Independent

Prior Context