Cognitive Sovereignty - Use It or Lose It
We’ve already arrived in the era of ‘Workslop’, and need to decide if this is just an amusing meme, or something much more perilous.

The adoption of Generative AI within businesses represents the most fundamental change to cognitive work since the Internet. And it presents us with a real paradox: it simultaneously has the potential to deaden our minds or energise them. Your cognitive capabilities (which are what you are paid for) are either going to atrophy or be augmented. And which it is comes down entirely to you.
Executive Summary
Generative AI presents knowledge workers with a stark binary: cognitive atrophy or augmentation. Research shows that 40% of professionals now receive “workslop” (AI-generated content that appears substantive but lacks genuine insight) and those who produce it suffer significant reputational damage (50% are viewed as less capable, 42% as less trustworthy).
The risk isn't hypothetical. Unlike calculators or GPS - occasional tools for discrete tasks - AI is being embedded into the core daily work of nearly every knowledge professional. Persistent cognitive offloading leads to documented degradation: erosion of critical judgement, memory atrophy, automation bias, and loss of problem-solving capability.
But the same technology can augment rather than replace thinking. This requires intentional engagement: using AI as a cognitive scaffold for routine tasks while maintaining "strategic friction" to preserve deep thinking capabilities. This newsletter presents a five-pillar Intentional Intelligence Framework grounded in cognitive science:
- Generative Primacy: attempt problems independently before consulting AI.
- Strategic Friction: time-box AI access - schedule deep work without it.
- Metacognitive Monitoring: maintain awareness of your thinking processes.
- Contemplative Presence: use micro-pauses to interrupt autopilot behaviour.
- Weekly Practice: commit to analog problem-solving and digital sabbaths.
The choice between atrophy and augmentation isn't made by your employer or the technology. It's made in dozens of small decisions daily about how you engage with AI. The consequences, for your cognitive capability and professional value, are now well documented.
“Workslop”
This hideously ugly word was recently coined in a Harvard Business Review article - ‘AI-Generated “Workslop” Is Destroying Productivity’.
The authors define it as ‘AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.’ They report that 40% of the 1,150 US-based participants in the study received ‘workslop’ in the last month, and that 15.4% of the content they receive at work qualifies.
It seems to be having a corrosive effect on the workplace.
“Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent.”
To be clear, AI didn't invent shallow work. Many corporate incentive structures have long rewarded visible activity over genuine insight. What has changed is that AI has now made the production of this plausible-sounding 'slop' nearly instantaneous and infinitely scalable, turning a chronic issue into an acute crisis for productivity and trust.
None of which is good. In fact, this is a flashing red sign of something likely to get very much worse unless steps are taken to mitigate it. After all, these are high numbers considering the still very low day-to-day usage of Generative AI in business. So what is going on?
Setting the Scene: The Competitive Edge
What is happening is a consequence of the fundamental purpose of AI systems. They are designed to remove friction and generate efficiencies. The aim is to eliminate effort and difficulty. That is their job.
Unfortunately, we humans are naturally lazy, and when given the opportunity to take the path of least resistance we take it. In a work environment, the option to delegate thinking and accept the easy, immediate answer is too much for many to resist. And many don’t resist, which leads to what researchers call ‘metacognitive laziness’ and business people call loss of ‘competitive edge’.
Actions Have Consequences
This has consequences that are well known, and have been seen before many times. With the rise of calculators we lost the ability to perform mental arithmetic; when GPS became competent we lost the ability to read maps; and when Google arrived we reconfigured our memory to remember where to find facts rather than the facts themselves (known as the “Google” effect). All of this is supported by extensive academic research, as well as being something we can all relate to.

What is happening with AI, though, is more pervasive and more dangerous. We all have easy access to calculators, seldom need to read a map, and know how to ‘Google’. So having diminished cognitive function in these areas has little downside.
The threat from AI is of a different magnitude entirely. Calculators automate a single, discrete task. GPS is used only when navigating unfamiliar territory. These are edge cases. Generative AI, however, is being integrated into the core daily workflow of nearly every knowledge worker. It's not an occasional tool for a minor task; it's a constant partner for our most important work: writing, analysing, strategising, and creating. The cognitive offloading isn't occasional and peripheral; it's becoming constant and central.
The trouble is that AI, as a general purpose technology, is rapidly acquiring higher-level knowledge skills. If we lose our ability to think, our intrinsic value trends downwards very rapidly. Pushing out ‘workslop’ is like having a sign above your head saying ‘I’m not needed’.
The Atrophy Thesis
The actual consequences of ‘cognitive offloading’ are broad, and frankly scary. They are academically noted as the ‘atrophy thesis’ and in headline terms are:
- Erosion of Critical Judgement
- Long-term Memory Atrophy
- Loss of Problem Solving Capability
- Automation Bias (accepting AI results unconditionally)
- Reduced Trust in Own Judgement
- Reduced Capacity for Sustained, Deep Work
- Reduced Ability at Self-Monitoring and Critical Self-evaluation
- Loss of Divergent Thinking and Creative Confidence
That is a lot of downside in return for a spot of laziness, and it is overwhelmingly supported by converging evidence. Consistently offload your thinking to an AI and this will be you.
All things
#SpaceasaService
Exploring how AI and technology are reshaping real estate and cities to serve the future of work, rest, and play.

Cohort 14 starts 7 November #GenerativeAIforRealEstatePeople
Exclusively for real estate professionals looking to embrace the future and the myriad opportunities AI offers.
How to Do It Right: Augmentation as Strategy
There is, though, another way. The same technology that will deaden your brain if you let it can also be leveraged as a catalyst for growth. This requires pedagogical intent and a bias towards augmentation rather than automation: using the technology as an interactive tool to support and extend thinking, rather than simply replace it.
Augmentation in Practice
1. Scaffolding and Cognitive Load Optimisation:
AI should function as a "cognitive scaffold", managing the extraneous mental effort of routine, lower-level tasks. By delegating routine components, such as data formatting, compiling standard environmental disclosures, or summarising long, non-critical market reports, you release and reallocate cognitive resources. We all have limited working memory, so the more ‘admin’ type work we can delegate, the more we’ll have available for hard thinking.
2. Enhanced Quality and Speed:
Mostly, human+AI outperforms either alone. A 2025 study, ‘The Cybernetic Teammate’, showed this clearly: teams working with AI greatly outperformed teams working without it. The same applied to individuals using AI, though they still lost out to teams with AI.
3. The Metacognitive Mirror:
This academic term refers to using an AI as a thinking partner. Engage with it intentionally and it can reason back to you, illuminating assumptions and exposing blind spots. By asking it to adopt different personas you can stress-test your arguments against a range of interlocutors.
Discipline and the Power of the Pause
The single greatest barrier to using AI as an augmentation tool is our own ingrained habit of seeking the fastest, easiest answer. The technology is designed for frictionless, immediate output, which triggers our brain's reward system.
To counteract this, we need a practical method to interrupt this automatic impulse. This is where the work of the Buddhist monk Gelong Thubten becomes surprisingly relevant to corporate strategy. He teaches a method of inserting “micro-moments of meditation” (brief moments of mindful awareness) into our daily workflow. By very consciously pausing at apposite moments we can move from reactive to reflective engagement.
The crucial factor in transforming AI use from a threat into an advantage is Intentionality. This intentionality must be cultivated through discipline, precisely because the natural impulse is towards automaticity.
Based on this thinking, a useful process might be as follows:
- The Pre-Prompt Pause: Before typing a query into the AI (e.g., "Summarise this 50-page lease abstract"), take one conscious in-breath and out-breath. In that brief space, ask: "What is my intention? What do I truly seek to understand or achieve with this interaction?" This interrupts the automated, immediate impulse and introduces purpose.
- The Post-Response Pause: After the AI generates the summary or draft, take another conscious breath before copying or acting on the information. Ask: "What is my critical evaluation of this output? Does it align with my professional judgement? What is the next wise action?"
These micro-moments are not a "wellness add-on"; they are a direct form of cognitive training, building the metacognitive muscle required to resist passive delegation.
Building Cognitive Muscle Memory
Intentional engagement needs to be for us what muscle memory is for athletes: something just baked into how we operate.
At first glance, some of these principles might seem contradictory. How can we use AI as a 'cognitive scaffold' while also practising 'strategic friction'? This is not a contradiction; it is a necessary duality for effective augmentation. Scaffolding helps us manage cognitive load for the task at hand, while Strategic Friction ensures the long-term health of our cognitive abilities, preventing the scaffold from becoming a permanent crutch.
Here is an Intentional Intelligence Framework:
1. Generative Primacy
What it is: This is the core principle of maintaining the cognitive effort required for learning by always generating your own answer or work before consulting AI. This counteracts the loss of the ‘generation effect’ and ensures the cognitive work that drives durable memory and skill building is performed.
What should you do: You should attempt problems independently for a set time (e.g., a 10–30 minute "try-first" period) or draft initial responses manually before using AI for refinement or comparison.
2. Strategic Friction
What it is: This principle involves deliberately re-introducing productive difficulty and effort into workflows to counteract the "frictionless" design of modern AI, which otherwise leads to skill degradation. This preserves the desirable difficulties necessary for long-term retention and transfer.
What should you do: You should implement time-boxed AI access (using it only during specific windows, not continuously) and schedule deep work blocks (e.g., 90–120 minutes daily) entirely without AI to ensure core faculties are exercised.
3. Metacognitive Monitoring
What it is: This is the practice of maintaining conscious awareness and regulation of one's own thinking, serving as the user's primary defence against automation bias and the "illusion of competence". It involves thinking about how you are thinking, assessing comprehension, and recognising habitual offloading patterns.
What should you do: You should practice pre-task intention setting by asking, "What is my intention?" before using AI, and perform post-task evaluation by asking, "What did I genuinely learn?". You should also engage in weekly pattern recognition to identify areas of over-reliance.
4. Contemplative Presence
What it is: This pillar integrates mindfulness practice to prevent the user from resorting to autopilot reactivity and habitual searching. It involves cultivating awareness of the present moment and one's internal impulses, which trains the metacognitive muscle needed for intentional AI engagement.
What should you do: You should practice the "Pre-Prompt/Post-Response Pause"—taking a conscious in-breath and out-breath before initiating or acting on AI interaction—to transform reactive impulses into conscious choices.
5. Weekly Practice
What it is: This component refers to structured, routine activities designed to sustain mental fitness and prevent AI from becoming a constant cognitive crutch. These practices serve as focused workouts for the most at-risk cognitive skills.
What should you do: You should commit to at least one complex, non-trivial problem-solving session weekly using only analog tools (pen and paper), and/or schedule a Digital Sabbath (a 12–24 hour technology-free period).
Conclusion
None of the above is hard. But the downsides of not doing this are definitely harsh. As we’ve seen above this really is an important choice that each of us needs to make. If we take the easy route and offload our thinking to AIs, then our brains will atrophy, and we’ll genuinely not be of much use to any employer. I suspect an awful lot of people will go this way, unaware of just how cognitively damaging their behaviour is. And the consequences for them will be bad.
The choice is now explicit. The consequences are documented. And having read this far, you don't have the excuse of ignorance. I'd guess you either aren't offloading too much anyway, or this will prompt you to adjust your behaviour.
I knew this was a big deal, but until doing the research for this newsletter I was not aware just how much research and evidence already exists around the topic.
I hope the above is enough to highlight the issue, but if you want to deep dive into this there is a mountain of material to consult.
PS. For reference I prompted Claude, Gemini and ChatGPT to each produce Deep Research reports which I then added to NotebookLM, and from there had long discussions, produced multiple reports, and three different audio overviews. In other words, not as substitutes for thinking, but as research assistants. The difference matters.
OVER TO YOU
What’s your behaviour with AI like? How are your habits? What are you going to change? How do you remain ‘the Boss’? I would love to hear.