
highplainsdem

(56,462 posts)
Fri May 23, 2025, 04:24 PM May 23

Trouble with AI 'hallucinations' spreads to big law firms

Source: Reuters

Another large law firm was forced to explain itself to a judge this week for submitting a court filing with made-up citations generated by an artificial intelligence chatbot.

Attorneys from Mississippi-founded law firm Butler Snow apologized to U.S. District Judge Anna Manasco in Alabama after they inadvertently included case citations generated by ChatGPT in two court filings.

-snip-

The 400-lawyer firm, which did not immediately respond to a request for comment, is defending former Alabama Department of Corrections Commissioner Jeff Dunn in an inmate's lawsuit alleging he was repeatedly attacked in prison. Dunn has denied wrongdoing. The judge has not yet said whether she will impose sanctions over the filings.

-snip-

Last week a lawyer at law firm Latham & Watkins, which is defending AI company Anthropic in a copyright lawsuit related to music lyrics, apologized to a California federal judge after submitting an expert report that cited an article title invented by AI.

-snip-

Read more: https://www.reuters.com/legal/government/trouble-with-ai-hallucinations-spreads-big-law-firms-2025-05-23/



Just call it Abandoning Intelligence.

Btw, Latham & Watkins is one of the big law firms that made deals with Trump.

And they failed to catch an AI hallucination while representing an AI company.
Trouble with AI 'hallucinations' spreads to big law firms (Original Post) highplainsdem May 23 OP
Abandoning intelligence! Love it! SheltieLover May 23 #1
Abandoning Intelligence. Ha ha! I love that HPD!! CrispyQ May 23 #2
This is what happens when we still have ZERO federal regulation of this fucking shit. Hell, we needed it yesterday, BEFORE Karasu May 23 #3
The fundamental flaw in AI.... Mustellus May 23 #4
It's money and profits. ramapo May 23 #5
Spot on with Abandoning Intelligence Picaro May 23 #6

SheltieLover

(69,257 posts)
1. Abandoning intelligence! Love it!
Fri May 23, 2025, 04:27 PM
May 23

If people are too lazy to research & write, why become a lawyer?

Karasu

(1,301 posts)
3. This is what happens when we still have ZERO federal regulation of this fucking shit. Hell, we needed it yesterday, BEFORE
Fri May 23, 2025, 04:33 PM
May 23

becoming a fucking fascist state.

Mustellus

(381 posts)
4. The fundamental flaw in AI....
Fri May 23, 2025, 05:04 PM
May 23

.. is the implicit belief that all knowledge is already out there on the intertubes. It's not. And compared to, for example, a library, what knowledge the intertubes does hold is diluted by gigabytes of dreck.

ramapo

(4,764 posts)
5. It's money and profits.
Fri May 23, 2025, 07:47 PM
May 23

Why pay a lawyer when the AI can sort of do it? I suspect there would be little tolerance for a young associate making these mistakes, although I bet somebody will have to answer for this.

Picaro

(2,016 posts)
6. Spot on with Abandoning Intelligence
Fri May 23, 2025, 11:51 PM
May 23

I think that large language model-based “AI” is rapidly becoming unusable. This is probably going to be something like the tulip bulb craze that nearly brought down the Dutch economy in the 1630s.

A lot of companies and a lot of people got really, really excited about this stuff. Everybody forgot all the caveats. You suddenly had software that could pass the Turing test. All that really showed was that the Turing test was inadequate, and that developing software that could pass it didn't really mean anything. It certainly didn't mean that artificial intelligence had actually been achieved. Large language models are, on a certain level, just very sophisticated pattern matching: they are trained to mimic the content that has been loaded into them.

That is what we're seeing now. When prompted to come up with legal decisions that buttress the case being argued, that's exactly what they do. It doesn't matter that the case law is fictional.

That seems to be the first law of large language models: an answer has to be generated, even if that answer doesn't exist.
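
To make that concrete, here is a minimal, purely hypothetical Python sketch (not any real model or library, just an illustration) of why a system built only to recombine familiar-looking patterns will always return something fluent, including a citation for a case that was never decided:

import random

# Toy "model": it has only seen fragments of legal text, and its sole job
# is to stitch those fragments into something that looks like an answer.
CASE_NAMES = ["Smith v. Jones", "Doe v. State", "Miller v. Acme Corp."]
REPORTERS = ["F.3d", "F. Supp. 2d", "So. 3d"]

def cite_case(prompt: str) -> str:
    """Recombine familiar-looking pieces into a plausible citation.
    Nothing here checks whether the cited case actually exists."""
    name = random.choice(CASE_NAMES)
    volume = random.randint(100, 999)
    page = random.randint(1, 1500)
    year = random.randint(1980, 2024)
    return f"{name}, {volume} {random.choice(REPORTERS)} {page} ({year})"

if __name__ == "__main__":
    # The "model" never says "no such case": every prompt gets an answer.
    print(cite_case("Find precedent supporting my client's position"))

A real LLM is vastly more sophisticated than this, but the failure mode has the same shape: the output is generated to look right, not verified to be right.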

What this means for all the companies trying to roll this out so they can reduce headcount and lower costs is that they are not going to be able to trust the output from LLMs. And that means this wonderful new tool will rapidly become completely unusable.

The sad truth is that people will still continue to try to create real AI. If that ever happens the universe as we know it may cease to exist.
