General Discussion
AI hallucinations in Mike Lindell case
July 10, 2025 · 1:49 PM ET
Jaclyn Diaz
A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell in a Colorado defamation case to pay $3,000 each after they used artificial intelligence to prepare a court filing filled with a host of mistakes and citations of cases that didn't exist.
Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed the document in February, which contained more than two dozen mistakes, including hallucinated cases (fake cases made up by AI tools), Judge Nina Y. Wang of the U.S. District Court in Denver ruled ...
"Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it," Wang wrote in her decision. "Indeed, federal courts rely upon the assistance of attorneys as officers of the court for the efficient and fair administration of justice."
The use of AI by lawyers in court is not itself illegal. But Wang found that the lawyers violated a federal rule that requires lawyers to certify that claims they make in court are "well grounded" in the law. Turns out, fake cases don't meet that bar ...
https://www.npr.org/2025/07/10/nx-s1-5463512/ai-courts-lawyers-mypillow-fines
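
For the curious, the specific failure here, citing cases that don't exist, is mechanically checkable before a filing ever goes out. Below is a minimal sketch, assuming Python with the requests library and CourtListener's free case-law search API; the exact endpoint path, the "type" parameter, and the "count" response field are assumptions to verify against the current API docs, and the two citations in the demo (one real, one invented) are illustrative.

```python
# Minimal sketch: sanity-check case citations against CourtListener's free
# search API before filing. The endpoint path, the "type" parameter, and the
# "count" response field are assumptions -- confirm against the current docs.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_has_hits(citation: str) -> bool:
    """Return True if the search finds at least one opinion matching the
    citation text. Zero hits is a red flag, not proof of fabrication; a hit
    still needs a human to actually read the opinion."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": citation, "type": "o"},  # "o" = case-law opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

if __name__ == "__main__":
    for cite in [
        "Brown v. Board of Education, 347 U.S. 483",  # real case
        "Smith v. Example Corp., 999 F.3d 1234",      # hypothetical, invented
    ]:
        flag = "found" if citation_has_hits(cite) else "NOT FOUND -- verify by hand"
        print(f"{cite}: {flag}")
```

Even a crude pass like this would have flagged the fabricated citations at issue; a hit only means some case matched the query text, so it narrows the problem rather than replacing verification.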

Torchlight
(5,138 posts)
I'll admit, the hallucinations are often pretty amusing to watch in real time as the program begins feeding off its own output, but I'll wait another five years or so before I rely on it for anything critical or important in my own little world.
ThoughtCriminal
(14,596 posts)
There must be some "source material" that the AI hoovers up from right-wing cesspools.
In my experience, RW trolls will fabricate court cases in online debates, or cite real cases and claim the ruling was the opposite of the actual outcome. They know they are lying; they just don't care. I'm guessing these discussions might be one of the sources for gullible AIs, which do not seem to have the ability to filter out bad data.