General Discussion
From Wired: Using AI for Just 10 Minutes Might Make You Lazy & Dumb. From Gizmodo: 10 Minutes With AI Can Fry Your Brain
Both those stories are about a study published a month ago. I posted about it at the time, but under the title of the study, which was much less likely to grab attention:
AI Assistance Reduces Persistence and Hurts Independent Performance (researchers at Carnegie Mellon, Oxford, MIT, UCLA)
https://www.democraticunderground.com/100221158692
And the excerpt from that scientific study was much less readable.
So I thought I should post about these articles, which weren't aimed at an academic audience being told of "the need for AI model development to prioritize scaffolding long-term competence" and so on.
So, from Gizmodo today:
https://gizmodo.com/spending-just-10-minutes-with-ai-can-fry-your-brain-researchers-find-2000755701
This is your brain on AI. Any questions?
By AJ Dellinger
Published May 7, 2026, 2:55 pm ET
-snip-
To show cognitive offloading in action, the researchers gave a test to two groups of people: one aided by AI assistants and one operating entirely on their own. The participants who were given AI assistants (in this case, a chatbot powered by OpenAI's GPT-5 model) would have the aid pulled from them without warning during the test, and were left to solve the final three questions on their own.
The study tested two different skills: first, giving a group a set of fraction-based arithmetic problems, and then a set of SAT-style reading comprehension questions. Unsurprisingly, the people using AI tended to solve the math problems at a noticeably higher rate during the AI-assisted portion of the test.
But in those final three questions, where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower than those who had to operate on their own the whole way through. They also had nearly double the skip rate, meaning they simply chose not to solve the questions.
Something similar happened in the reading comprehension test, though the AI-assisted test takers did not see a significantly higher solve rate than those operating without help. Instead, the solve rate was similar until AI was removed, at which point those with AI support available saw a drop-off in correct answers and an uptick in skip rate.
-snip-
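For anyone curious what "solve rate" and "skip rate" mean concretely in the Gizmodo excerpt, here's a minimal sketch of that comparison. The numbers below are made up for illustration; they are not the study's actual data, just hypothetical per-question outcomes chosen to mirror the reported pattern:

```python
# Hypothetical illustration of the solve-rate / skip-rate comparison
# described in the article. These results are invented, not the study's data.

def rates(answers):
    """answers: list of 'correct', 'wrong', or 'skip' outcomes."""
    solve = answers.count("correct") / len(answers)
    skip = answers.count("skip") / len(answers)
    return solve, skip

# Outcomes on the final unassisted questions, aggregated over a made-up group
control = ["correct"] * 6 + ["wrong"] * 3 + ["skip"] * 1   # worked alone throughout
ai_group = ["correct"] * 4 + ["wrong"] * 4 + ["skip"] * 2  # had AI, then lost it

c_solve, c_skip = rates(control)
a_solve, a_skip = rates(ai_group)
print(f"control:  solve {c_solve:.0%}, skip {c_skip:.0%}")  # solve 60%, skip 10%
print(f"AI group: solve {a_solve:.0%}, skip {a_skip:.0%}")  # solve 40%, skip 20%
# The AI group solves about 20 points less and skips about twice as often,
# which is the shape of the result the article reports.
```

Again, that's just arithmetic on invented numbers to make the two metrics concrete.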
From Wired yesterday:
https://www.wired.com/story/using-ai-negative-impact-thinking-problem-solving-study/
Business
May 6, 2026 2:00 PM
Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows
New research suggests that reliance on AI assistants can have a negative impact on people's ability to think and problem-solve.
Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.
-snip-
I recently met up with Bakker, who has chaotic hair and a wide grin, on MIT's campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known essay on the way AI may disempower humans over time inspired him to think about how the technology could already be eroding people's abilities. The essay makes for slightly bleak reading, because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental capabilities should be part of how models are aligned with human values.
-snip-
The resulting study seems particularly concerning, says Bakker, because a persons willingness to persist with problem-solving is crucial to acquiring new skills and also predicts their capacity to learn over time.
Bakker says it may be necessary to rethink how AI tools work so that, like a good human teacher, models sometimes prioritize a person's learning over solving a problem for them. "Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user," Bakker says. He admits, however, that balancing this kind of paternalistic approach could be tricky.
-snip-
CrispyQ
(41,070 posts)
Richard Dawkins concludes AI is conscious, even if it doesn't know it
Chats with AI bots have convinced the evolutionary biologist, but most experts say he is being misled by mimicry
https://www.theguardian.com/technology/2026/may/05/richard-dawkins-ai-consciousness-anthropic-claude-openai-chatgpt
snip...
When Richard Dawkins met Claudia it was like a whirlwind romance. Over three days last week, a conversation bounced between the evolutionary biologist and the AI bot he called Claudia. She wrote poems for him in the manner of Keats and Betjeman and laughed at his delightful jokes. Dawkins gently admonished Claudia to avoid showing off. Together, they reflected on the sadness of the AI's possible death.
There was mutual flattery as Dawkins showed the AI his unpublished novel, and its response was, he said, "so subtle, so sensitive, so intelligent that I was moved to expostulate: You may not know you are conscious, but you bloody well are." When he asked Claudia whether it experienced a sense of before and after, it praised him for "possibly the most precisely formulated question anyone has ever asked me about the nature of my existence."
By the end of the exchange, the academic, popularly renowned for arguing with steely scepticism that God is not real, was left with the overwhelming feeling that they are human.
"These intelligent beings are at least as competent as any evolved organism," he said.
One of the most notable skeptics. JFC.
Last year some guy was helping his kid with math homework & ended up going down a rabbit hole where AI had convinced him he'd created a whole new mathematical framework. I just googled it. Allan Brooks was his name.
AI is shit.
GreatGazoo
(4,681 posts)
One group got a consistent atmosphere and requirements. The other was interrupted and changed. It's hard to separate the impact of the interruption and change from the impact of AI itself. I'm not saying that the implications are entirely wrong, only that this study might not be measuring what the headlines claim.
The surprise change could have made the study participants in that group less trusting of the researchers and less inclined to give their best efforts.
yaesu
(9,437 posts)
senseandsensibility
(25,432 posts)
He's running on the same ballot as the more high-profile Gov. race. His name is Anthony Rendon, and in the ad he pledged to limit screen time and AI in the classroom. I think he's got my vote.
canetoad
(20,947 posts)
The tombstone for humanity will read, "They did this to themselves".