Trouble with AI 'hallucinations' spreads to big law firms
Source: Reuters
Another large law firm was forced to explain itself to a judge this week for submitting a court filing with made-up citations generated by an artificial intelligence chatbot.
Attorneys from Mississippi-founded law firm Butler Snow apologized to U.S. District Judge Anna Manasco in Alabama after they inadvertently included case citations generated by ChatGPT in two court filings.
-snip-
The 400-lawyer firm, which did not immediately respond to a request for comment, is defending former Alabama Department of Corrections Commissioner Jeff Dunn in an inmate's lawsuit alleging he was repeatedly attacked in prison. Dunn has denied wrongdoing. The judge has not yet said whether she will impose sanctions over the filings.
-snip-
Last week a lawyer at law firm Latham & Watkins, which is defending AI company Anthropic in a copyright lawsuit related to music lyrics, apologized to a California federal judge after submitting an expert report that cited an article title invented by AI.
-snip-
Read more: https://www.reuters.com/legal/government/trouble-with-ai-hallucinations-spreads-big-law-firms-2025-05-23/
Just call it Abandoning Intelligence.
Btw, Latham & Watkins is one of the big law firms that made deals with Trump.
And they failed to catch an AI hallucination while representing an AI company.

SheltieLover
(69,257 posts)
If people are too lazy to research & write, why become a lawyer?
CrispyQ
(39,737 posts)

Karasu
(1,301 posts)
becoming a fucking fascist state.
Mustellus
(381 posts)
.. is the implicit belief that all knowledge is already out there on the intertubes. It's not. And compared to, for example, a library, the intertubes has such knowledge as there is diluted by gigabytes of dreck.
ramapo
(4,764 posts)
Why pay a lawyer when the AI can sort of do it? I suspect there would be little tolerance for a young associate making these mistakes, although I bet somebody will have to answer for this.
Picaro
(2,016 posts)
I think that the large language model based AIs are rapidly becoming unusable. This is probably going to be something like the tulip bulb craze that almost brought down the Dutch economy in the 1630s.
A lot of companies and a lot of people got really, really excited about this stuff. Everybody forgot all the caveats. You suddenly had software that could pass the Turing test. All this really showed is that the Turing test was completely inadequate, and the development of software that could pass it really didn't mean anything. It certainly didn't mean that artificial intelligence had actually been achieved. Large language models are, on a certain level, just very sophisticated pattern matching. The LLMs are written to mimic the content that has been loaded into them.
That is what we're seeing now. When prompted to come up with legal decisions that buttress the case that's trying to be proved, well, that's what they do. It doesn't matter that the case law is fictional.
That seems to be the first law of large language models: an answer has to be generated, even if that answer doesn't exist.
What this means to all these companies that are trying to roll this out and reduce their headcount and lower their costs is that they are not going to be able to trust the output from LLMs. And that means that this wonderful new tool will rapidly become completely unusable.
The sad truth is that people will still continue to try to create real AI. If that ever happens the universe as we know it may cease to exist.