AI Will Make You Stupid – If You Let It
A new MIT Media Lab report, “Your Brain on ChatGPT,” found that frequent AI users not only recalled less from their writing but also showed less executive activity during tasks.
In other words, they were letting the machine “take over.”
Within days, the media picked up the study with headlines along the lines of “AI Will Make You Stupid,” and the LinkedIn thinkfluencers were off to the races.
But the real story, buried beneath the clickbait, is more complex, and far more actionable for leaders: How do we intentionally collaborate with AI, rather than unconsciously offloading our most valuable skills?
Let’s dive in and break down what the MIT experiment actually found, and how to avoid the “cognitive debt” trap.
What the experiment actually showed
Let’s get the facts straight first.
Fifty-four adults from the Boston area, aged 18 to 39, wrote three timed SAT-style essays while wearing 32-channel EEG caps. (Sadly, no photos were shared.)
Participants were split into three groups: one used only their own brains, one used Google Search (explicitly excluding “AI Overviews”), and one used ChatGPT-4o.
The AI group produced decent essays the fastest, but it also showed the weakest alpha- and beta-band connectivity, the neural signature of something called “executive engagement”: higher-level cognitive functions like attention, working memory, and decision-making.
In plain English: the less those brain regions “talked” to each other, the more the heavy lifting was outsourced to the AI, not to the writer’s own mind.
These AI-fueled essayists also produced the “most formulaic language” and showed the poorest recall of their own text. In other words: forgettable “AI slop.”

Four months later, the gap had widened, leading the authors to warn of a “cognitive-debt spiral.”
But for full context:
- This was a small, homogeneous sample. Fifty-four educated Bostonians do not represent everyone, from students to senior specialists to multilingual, multicultural teams. For the final assignment, on which the conclusions are based, only 18 participants came back.
- The researchers looked at the writing task as a whole. The authors “did not divide our essay writing task into subtasks like idea generation, writing, and so on, which is often done in prior work.”
- The study hasn’t been peer-reviewed yet. It was released as a preprint, and peer review may well call for a larger sample and an expanded set of tests.
Now, none of these caveats removes the fact that under-engagement was observed.
Especially since these findings build on what’s been proven before:
- A Microsoft–Carnegie Mellon survey of 319 knowledge workers found that higher confidence in AI answers was associated with lower effort in critical thinking, while higher self-confidence was associated with the opposite.
- A December 2024 laboratory trial involving 117 university students warned of “metacognitive laziness”: learners who relied on ChatGPT to revise essays spent less time planning or monitoring their work and retained less of the material.
And let’s not forget that this isn’t specific to AI, either.
In one study, frequent GPS users showed reduced hippocampal activity and weaker spatial memory, whereas London taxi drivers, who build mental maps, displayed enlarged hippocampi.
Different technology, same principle: passive use erodes skill; active use sharpens it.
And this is exactly the point.
AI is an amplifier, not destiny
As with all things AI, my position remains that it is what you make of it.
This is a general-purpose technology that can be used for good and for bad.
For getting lazier or getting smarter.
In the study, participants in the AI group were not explicitly encouraged (nor systematically trained) to think critically after getting ChatGPT’s outputs.
Their instructions were simply to use ChatGPT as their sole resource for essay writing—no additional prompts, coaching, or requirements to critique or modify the AI’s suggestions.
This is why training is so important. Without understanding AI and how best to use it, we risk slowing ourselves down. Business leaders need to work with AI as a senior thinking partner, not just an army of smart interns.
That’s how we combine the best of human thinking with the capabilities of AI.
In a paper, ex-MIT professor Douglas Youvan frames AI as “an amplifier of human nature, intensifying pre-existing tendencies rather than equalising abilities.”
AI widens the gap between high-agency and low-agency behaviour already present in your teams.
In other words, without our active involvement, there will be winners and losers once everyone starts using AI. We need to keep pushing hard on training people to work with AI the right way.
As AI in HR Today author Anthony Onesto wrote:
“It’s about augmenting human skills, not replacing them. Stay sharp, scrutinize AI outputs, and understand its limitations. Over-reliance without critical thinking is a trap.”
Treat every prompt as a force multiplier for the mindset behind it, and the technology becomes an accelerant to expertise, not a substitute for it.
Practical ways to keep the amplifier working for you
In coaching over 1,000 business leaders on AI, I’ve seen some great examples of executives who use AI as an amplifier, not a brain anaesthetic:
- Start with your own outline or hypothesis. Don’t open ChatGPT until you’ve put your thoughts down. This preserves framing and exposes true knowledge gaps.
- Run discrepancy checks. After getting an AI answer, ask: “Where could this be wrong? What assumption should we test?” This surfaces automation bias and hidden flaws.
- Force retrieval recaps. Summarize or explain your AI-assisted work, out loud or in writing. This actively re-encodes learning, instead of letting it slip away.
And to truly get the most out of AI, build these into regular workflows as team rituals. Measure not just speed, but also originality, error-catch rates, and post-project recall.
The Bottom Line: Get the Most From AI
The MIT study is an early warning: unstructured, passive use of AI leads to disengagement, skill decay, and memory loss. But “cognitive debt” isn’t a law of nature; it’s a side effect of how we design our relationship with technology.
Use AI to make yourself a more organized, more creative, more insightful leader. Those who cultivate inquiry, reflection, and transparent tooling will turn AI into a cognitive exoskeleton.
Those who accept copy-paste culture will end up trapped in the very debt spiral the research warns about. AI magnifies whatever you feed it—so feed it active curiosity, not complacency.
If you have insights or practical tips for beating the “cognitive debt” trap, reply and I’ll include them in a future issue.