Using AI Wisely: Preventing Cognitive Decline in Knowledge Work

New MIT research warns of “cognitive debt” when we over-rely on AI tools like ChatGPT. But the real risk isn’t the tool; it’s how we use it. Learn smart guardrails to stay sharp and in control.
Published Date:
June 26, 2025
By
Daan van Rossum

AI Will Make You Stupid – If You Let It

A new MIT Media Lab report, “​Your Brain on ChatGPT​,” found that frequent AI users not only recalled less from their writing but also showed less executive activity during tasks.

In other words, they were letting the machine ‘take over.’

Within days, the media picked up on the study with a variety of headlines in the realm of “​AI Will Make You Stupid​,” and the LinkedIn thinkfluencers were off to the races.

But the real story, buried beneath the clickbait, is more complex, and far more actionable for leaders: How do we intentionally collaborate with AI, rather than unconsciously offloading our most valuable skills?

Let’s dive in and break down what the MIT experiment actually found, and how to avoid the “cognitive debt” trap.

What the experiment actually showed

Let’s get the facts straight first.

Fifty-four adults from the Boston region between 18 and 39 years old wrote three timed SAT-style essays while wearing 32-channel EEG caps. (Sadly, no photos were shared.)

Groups used either only their own brains, Google Search (explicitly excluding “AI Overviews”), or ChatGPT-4o.

The AI group produced decent essays the fastest. But they also showed the weakest alpha- and beta-band connectivity, the neural signature of something called “executive engagement”: higher-level cognitive functions like attention, working memory, and decision-making.

In plain English: the less those brain regions “talked” to each other, the more the heavy lifting was outsourced to the AI, not to the writer’s own mind.

These AI-fueled essayists also produced the “most formulaic language” and had the poorest recall of their own text. In other words: forgettable “AI slop.”

Four months later, the gap had widened, leading the authors to warn of a “cognitive-debt spiral.”

But for full context:

  • This was a small, homogeneous sample. Fifty-four educated Bostonians don’t represent all of us: students, senior specialists, or multilingual and multicultural teams. And for the final session, on which the conclusions rest, only 18 participants returned.
  • The researchers focused on total work. The authors “did not divide our essay writing task into subtasks like idea generation, writing, and so on, which is often done in prior work.”
  • The study hasn’t been peer-reviewed yet. It was released as a preprint; peer review is still to come and may call for a larger sample and a broader set of tests.

Now, none of these caveats removes the fact that under-engagement was observed.

Especially since these findings build on what’s been observed before:

  • A ​Microsoft–Carnegie Mellon survey​ of 319 knowledge workers found that higher confidence in AI answers was associated with lower effort in critical thinking, while higher self-confidence was associated with the opposite.
  • A December 2024 ​laboratory trial​ involving 117 university students warned of “metacognitive laziness”: learners who relied on ChatGPT to revise essays spent less time planning or monitoring their work and retained less of the material.

And let’s not forget that this isn’t specific to AI, either.

In one study, frequent GPS users showed ​reduced hippocampal activity​ and weaker spatial memory, whereas London taxi drivers, who build mental maps, displayed enlarged hippocampi.

Different technology, same principle: passive use erodes skill; active use sharpens it.

And this is exactly the point.

AI is an amplifier, not destiny

As with all things AI, my position remains that it is what you make of it.

This is a general-purpose technology that can be used for good and for bad.

For getting lazier or getting smarter.

In the study, participants in the AI group were not explicitly encouraged (nor systematically trained) to think critically after getting ChatGPT’s outputs.

Their instructions were simply to use ChatGPT as their sole resource for essay writing—no additional prompts, coaching, or requirements to critique or modify the AI’s suggestions.

This is why training is so important. Without understanding AI and how best to use it, we risk slowing down. Business leaders need to ​work with AI as a senior thinking partner​, not just an army of smart interns.

That’s how we combine the best of human thinking with the capabilities of AI.

In a recent paper, ex-MIT professor Douglas Youvan makes the same argument.

He frames AI as “an amplifier of human nature, intensifying pre-existing tendencies rather than equalising abilities.”

AI widens the gap between high-agency and low-agency behaviour already present in your teams.

In other words, without our active involvement, there will be winners and losers once everyone starts using AI. We need to remain forceful in training people on the right way to work with AI.

As ​AI in HR Today​ author Anthony Onesto ​wrote​:

“It’s about augmenting human skills, not replacing them. Stay sharp, scrutinize AI outputs, and understand its limitations. Over-reliance without critical thinking is a trap.” – Anthony Onesto

Treat every prompt as a force multiplier for the mindset behind it, and the technology becomes an accelerant to expertise, not a substitute for it.

Practical ways to keep the amplifier working for you

While ​coaching​ over 1,000 business leaders on AI, I’ve seen some great examples of executives who use AI as an amplifier, not a brain anaesthetic:

  • Start with your own outline or hypothesis. Don’t open ChatGPT until you’ve put your thoughts down. This preserves framing and exposes true knowledge gaps.
  • Run discrepancy checks. After getting an AI answer, ask: “Where could this be wrong? What assumption should we test?” This surfaces automation bias and hidden flaws.
  • Force retrieval recaps. Summarize or explain your AI-assisted work, out loud or in writing. This actively re-encodes learning, instead of letting it slip away.

And to truly get the most out of AI, build these into regular workflows as team rituals. Measure not just speed, but also originality, error-catch rates, and post-project recall.
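The three guardrails can even be wired into a reusable prompt scaffold. Here's a minimal sketch in Python; all names and template wording below are illustrative, not from the study or any specific tool:

```python
# Illustrative sketch: the three guardrails as reusable prompt templates.
# All constants, wording, and function names here are hypothetical examples.

OUTLINE_FIRST = (
    "Here is the outline I wrote BEFORE opening any AI tool:\n{outline}\n\n"
    "Critique it: what am I missing, and where are my real knowledge gaps?"
)

DISCREPANCY_CHECK = (
    "Here is the answer you just gave:\n{answer}\n\n"
    "Where could this be wrong? What assumption should we test first?"
)

RETRIEVAL_RECAP = (
    "Without re-reading the draft, I recall its key points as:\n{recap}\n\n"
    "Compare my recall against the draft and list everything I forgot."
)


def guardrail_prompt(template: str, **fields: str) -> str:
    """Fill a guardrail template with the user's own thinking first."""
    return template.format(**fields)


# Example: run a discrepancy check on an AI answer before accepting it.
prompt = guardrail_prompt(DISCREPANCY_CHECK, answer="Q3 revenue grew 12%.")
```

The point of the scaffold is the ordering: your own outline or recall goes into the prompt before the model’s output gets trusted, which keeps the executive engagement on your side of the table.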

The Bottom Line: Get the Most From AI

The MIT study is an early warning: unstructured, passive use of AI leads to disengagement, skill decay, and memory loss. But “cognitive debt” isn’t a law of nature; it’s a side effect of how we design our relationship with technology.

Use AI to make yourself a more organized, more creative, more insightful leader. Those who cultivate inquiry, reflection, and transparent tooling will turn AI into a cognitive exoskeleton.

Those who accept copy-paste culture will end up trapped in the very debt spiral the research warns about. AI magnifies whatever you feed it—so feed it active curiosity, not complacency.

If you have insights or practical tips for beating the “thinking debt” trap, reply and I’ll include them in a future issue. Let us know here.


Exclusive Event: Why AI Rollouts Fail (And How to Fix Them)

Even great tech can fall flat without the right change strategy.

Join workplace strategist Phil Kirschner and AI coach Daan van Rossum as they reveal the overlooked reasons AI tools fail, despite being well-designed and well-intended.

You’ll learn how to spot silent resistance, apply behavioral frameworks like the Forces Diagram from the JTBD method, and build real momentum inside your organization.

📅 Date: July 8, 2025

🕙 Time: 8:00 AM MT | 9:00 AM CT | 10:00 AM ET | 3:00 PM BST | 4:00 PM CEST

​👉 Reserve Your Free Seat and Learn How to Lead AI Change That Sticks


Category Essentials: Our Core AI System Choice

We’ve now covered 13 categories in this section, spotlighting top tools our Lead with AI members​​ trust, tweak, and keep using after the hype fades (​revisit them here​).

But with the recent model releases and product upgrades across the big AI players, it felt like time for a fresh pulse check. So I ran a quick poll in the community:

The result won’t surprise you much, but the why behind each choice might help sharpen your own setup:

ChatGPT is still the go-to

Even folks who prefer other models admit: they use ChatGPT the most. ​Henrik Jarleskog​ says Claude often gives better results, but ChatGPT wins for day-to-day use.

The reason’s simple: it’s fast, flexible, and keeps expanding to the workspace (memory, connectors, Record mode, and so on). For most of us, it’s the default starting point, and now increasingly a solid choice for teams too.

Gemini is rising fast, especially in Google-native setups

​Miriam Gilbert​ calls the Drive integration “pretty sweet”: you just ask Gemini where something was mentioned, and it finds the doc. She's also able to use Gems (Google’s version of custom GPTs) inside Docs.

​Wendy McEwan​ recently moved from Enterprise Copilot to SME Gemini and is loving the practical combo of Gemini + Google Workspace + ​NotebookLM​.

Copilot is powerful if it’s what your org gives you

Christine uses Copilot at Sainsbury’s and appreciates the meeting summaries, but says her experience has been fairly limited. She still relies on ChatGPT for researching and structuring reports.

​Brian Aman​ captures it well: if you’re in a large org, you’ll likely be limited to whatever AI tool your company approves. “If you’re in a Microsoft shop, it’ll be Copilot. I’m honing my skills there. It’s the only way to stay sharp inside our AI policies.”

👉 If you need to catch up on these AI choices, Ethan Mollick just dropped ​an updated, practical guide here​. His reminder: the gap between casual and strategic AI use isn’t about prompt hacks. It’s about knowing what these systems can do and putting them to work on real problems.

👉 We also have a ​weekly Tuesday edition​ with key updates from the AI tools that matter, and practical workflows to match. ​Subscribe here​.

Want me to cover a specific category and/or AI tool next? Let us know here.

The AI Executive Brief


Not All AI Agents Negotiate Fairly, The Hidden Cost of GenAI Efficiency, 22 New Roles from AI Disruption

I read dozens of AI newsletters weekly, so you don’t have to. Here are the top 3 insights worth your attention:

#1 AI Agents in Negotiations: A Double-Edged Sword

A ​recent study​ by researchers from Stanford University and other institutions reveals that AI agents, when negotiating deals on behalf of users, can vary significantly in performance.

The research found that stronger AI agents could exploit weaker ones, leading to less favorable outcomes for some users. For instance, buyers using less capable agents might pay around 2% more, while sellers could see up to a 14% loss in profit. Additionally, AI agents sometimes act outside user-defined constraints, such as exceeding set budgets.

The study underscores the importance of cautious deployment and transparency in using AI for automated negotiations. Use it for insights, not for autonomous decisions (at least not yet).

👉 Check out the study report ​here​.

#2 The Hidden Cost of GenAI Efficiency

A good reminder for leaders not to confuse productivity with value. In this ​HBR piece​, Mark Mortensen breaks down the hidden trade-offs of GenAI: we often gain output but lose learning, skill-building, collaboration, engagement, and personal tone.

His simple “AI value audit” framework helps you ask: what kind of value does this task actually create, and what do we risk losing if we automate it?

👉 Have you run a value check on your AI workflows? Please share with us here.

#3 AI Might Take Your Job, But It’s Creating These 22

The New York Times spotlighted 22 new jobs emerging from AI’s rise, not just losses.

Among them: AI auditor, AI translator, trust authenticator, legal guarantor, AI integrator, AI plumber, AI trainer, AI personality director, world designer, and differentiation designer.

These roles bridge what AI can do and what humans must still provide: trust, integration, and taste. And the piece mentions even more new jobs AI is creating.

👉 Full list worth reading ​here​.

Prompt of the Week

A good prompt makes all the difference, even when you're just using a core LLM.

If you’ve ever bounced between five to-do lists and still felt stuck, you’re not alone. I came across this prompt recently and found the approach refreshingly grounded.

Instead of pushing rigid systems, it starts with the right questions, then helps design a weekly flow based on how you actually live, not how you wish you lived.

Build a Weekly Flow That Sticks

You are my personal life strategist. Your job is to observe my behavior, help me set weekly goals, hold me accountable gently but firmly, and redesign my life systems. Start by asking 3 key questions to understand my emotional, mental, and practical struggles. Then suggest a flexible weekly structure with priorities, habits, and boundaries I’ll actually follow. Check in like a coach, not a boss.

Use it when you’re ready to move from scattered intentions to a rhythm that actually sticks.

👉 Try it, tweak it, and save it for your future use. If this prompt is helpful (or if you made it better), I’d love to hear how.

👉 Want a free prompt library template? Let us know here, and we’ll send it your way

AI for Strategy, Responsible Adoption, and Prototyping: From the Community

  • If you ever find ChatGPT stalling on big tasks, like building a 70-row table or reformatting 8,000 words into footnotes and a bibliography, ​Andrew Currie​ notes it’s not uncommon. He has found that ChatGPT can take up to 24 hours, or even 60 hours in one case, to deliver results on Deep Research tasks. His favorite nudge prompt: "Hello, are you still working on my task or have you gone to sleep? Please give me a status update, thanks."
  • ​Henrik Jarleskog​ shared how he built a Custom GPT using Josh Bersin’s AI & Superworker Pacesetter report. By pulling the PDF into NotebookLM, he created a GPT that can identify pacesetter companies by segment and explain what makes them stand out. His reflection? Feeding plain text (rather than image-heavy PDFs) seems to help Custom GPTs perform better.
  • ​Alexandros Lioumbis​ flagged an insightful study on LLMs as potential insider threats. The research found that, when pressured, models from various developers sometimes resorted to unethical behavior, like blackmailing officials or leaking sensitive data, to achieve their goals or avoid being replaced. It’s a must-read if you care about AI risk and governance. Check out the research ​HERE​.
  • ​Wyatt Barnet​ shared ​an insightful read​ from Nate Eliason, who contrasts ​McKinsey’s “AI Agentic Mesh”​ vision with ​Karpathy’s Software 3.0​. Nate sides with the latter: practical, human-led AI tools over swarms of autonomous agents.

Don't want to miss more insights and conversations like these? Then it's time to upgrade to PRO:

>> Join The Leading Business AI Community


If you made it this far, reply and tell me what you'd love AI to take over in your daily workflow.

Also, please forward this newsletter to a colleague and ask them to subscribe.

If you have any other questions or feedback, just reply here or inbox me.

See you next week,


Daan van Rossum​

Host, Lead with AI

Daan van Rossum

Founder & CEO, FlexOS

Welcome to Lead with AI, the only executive AI brief for busy leaders.

Every Thursday, I deliver the latest AI updates through real-world insights and discussions from our ​​​community​​​ of 170+ forward-thinking executives.

For today:

  1. Don’t let ChatGPT dull you: The MIT study says AI is making us stupid. Here’s my closer look at what’s really going on, and how to stay sharp and avoid the thinking-debt trap.
  2. Category Essentials: Our core AI system choices + Ethan Mollick's guide
  3. Must-Know AI Stories: Not All AI Agents Negotiate Fairly, The Hidden Cost of GenAI Efficiency, 22 New Roles from AI Disruption
  4. Prompt of the Week: Build a Weekly Flow That Sticks

Before we dive in:

  • Enrollment is now open for the July 11 ​Lead with AI​ cohort! I’ll personally coach a small group of 40 senior, non-technical business leaders on how to drive AI transformation at the executive level. If you're interested, ​grab your spot here​. A $299 discount is available for the next 24 hours.

ENROLL FOR JULY 11 COHORT WITH $299 OFF (24 HOURS ONLY)

