
A federal judge just sanctioned a “Tiger King” lawyer for filing court papers packed with made-up cases—an AI-era failure that’s now forcing the legal system to confront what happens when professionals stop verifying the facts.
Quick Take
- A U.S. District Court in Indiana dismissed Joe Exotic’s Endangered Species Act lawsuit for lack of Article III standing.
- The judge sanctioned attorney Roger Roots $1,500 and referred him to Rhode Island disciplinary authorities over filings that cited nonexistent cases and misstated legal authority.
- The court said the errors appeared consistent with “AI hallucinations,” but emphasized the core problem was the lawyer’s failure to do basic verification.
- The episode fits a growing national pattern: courts are increasingly punishing lawyers who outsource accuracy to tools and then submit unvetted work as truth.
What the Indiana court did—and why it matters
On April 1, 2026, the U.S. District Court for the Northern District of Indiana threw out a lawsuit brought by Joseph Maldonado—better known as Joe Exotic—against Black Pine Animal Sanctuary under the Endangered Species Act’s citizen-suit provisions. The court found Maldonado lacked Article III standing, meaning he could not show the kind of concrete, personal injury required to be in federal court. The judge also imposed a $1,500 sanction on Maldonado’s lawyer, Roger Roots, and ordered a disciplinary referral in Rhode Island.
[Jonathan H. Adler] Tiger King Attorney Sanctioned for Filing Complaint with AI Hallucinations https://t.co/C8Fgp2tpKn
— Volokh Conspiracy (@VolokhC) April 12, 2026
The standing dismissal is familiar to anyone who has watched federal courts narrow who can sue, especially in hot-button regulatory disputes. The sanctions are the headline because they turn a celebrity-adjacent case into a warning for the broader legal system: courts are signaling that “good enough” drafting is not good enough when filings affect real parties, real animals, and real judicial resources. The court’s message was simple—accuracy is not optional, no matter what software generated the text.
The filing problems: fabricated citations and misrepresented authority
The lawsuit began on August 29, 2025, when Maldonado alleged Black Pine “wounded,” “harmed,” and “harassed” four tigers formerly associated with him, including by spaying and neutering, allowing public observation, and maintaining allegedly inadequate enclosures. Over time, the court identified serious defects in the pleadings and related papers, including imaginary citations, nonexistent cases, and misstatements about legal authority. Those are not technical mistakes; they are the kind of errors that can mislead a court and force the opposing side to spend money responding to arguments that are literally not real.
After months of defective filings, the judge issued a show-cause order on February 27, 2026, demanding an explanation for the inaccuracies. Roots responded on March 27, accepting responsibility but arguing there was no bad faith and citing a medical emergency that had led him to rely on a paralegal. The court was not persuaded that those circumstances excused what it described as a lack of even “rudimentary” checking. The opinion treated the problem as a duty-of-care failure: if you sign it and file it, you own it.
AI “hallucinations” are the suspicion; negligence is the proven issue
Reporting on the decision notes that the court viewed the errors as consistent with “hallucinations” produced by AI tools: confident-sounding text that invents sources. Importantly, the judge did not need to identify which tool was used; the conclusion that warranted sanctions was that the filings contained fabricated authorities and misrepresentations that any competent review should have caught. That distinction matters because it keeps the focus on professional responsibility rather than turning the case into a simplistic “ban AI” morality play.
From a conservative, limited-government perspective, this is the kind of breakdown that breeds public cynicism about institutions. Courts operate on trust: judges trust lawyers not to fabricate, and citizens trust courts to sort fact from fiction. When filings contain fake cases, the public sees an “elite” professional class playing games with systems ordinary people are forced to respect. At the same time, the court’s response shows an institutional guardrail still working—accountability is being enforced through sanctions and bar oversight.
A growing pattern: courts are drawing a bright line on verification
This case is not an isolated episode. Other courts have already penalized lawyers for submitting AI-generated material containing fake citations, including a high-profile matter tied to Mike Lindell and separate reports of a large law firm facing judicial criticism over apparently fabricated or distorted authority. The pattern is straightforward: judges are not treating AI mistakes as novel “tech glitches,” but as the modern version of careless lawyering—only faster, harder to detect at a glance, and more damaging when it slips through.
Tiger King Attorney Sanctioned for Filing Complaint with AI Hallucinations https://t.co/NtihhRaqkv via @volokhc
— Jonathan H. Adler (@jadler1969) April 12, 2026
The practical takeaway for voters and taxpayers is bigger than one lawyer’s embarrassment. As government agencies, courts, and contractors adopt AI to cut costs, the temptation grows to replace human judgment with automated output while keeping the public-facing veneer of competence. That’s how errors scale. The Indiana judge’s modest $1,500 sanction is less important than the disciplinary referral and the precedent it reinforces: if professionals use labor-saving tools, they still must prove their work is grounded in reality before the state’s power gets involved.
Sources:
Tiger King Attorney Sanctioned for Filing Complaint with AI Hallucinations
Tiger King attorney sanctioned for AI hallucinations
A recent high-profile case of AI hallucination serves as a stark warning