Lifelong Learning Club

3 Mental Models from AI Research to Sharpen Your Thinking

A guide to using AI's internal logic to redesign your own cognitive workflow.

Eva Keiffenheim MSc
Oct 06, 2025
Credits: Landiva Weber

Welcome to issue #213 of the Lifelong Learning Club. I’m so glad you’re here. Every Monday, I share a free preview to help you learn smarter and turn “one day…” into Day One. If you want full access, join as a paid member.

There’s a strange kind of vertigo that hits when you watch AI evolve in real time.

One week, a model writes code for 30 hours straight. Next, it’s designing microchips. What used to take years of expertise is now happening in minutes. And suddenly, the mental models we’ve relied on—built through study, practice, and reflection—start to feel… incomplete.

Most people treat these tools as shortcuts. Ways to move faster, write quicker, and get more done. But speed isn’t the real opportunity here.

What if, instead of outsourcing your thinking, you upgraded it?

This edition is about cognitive augmentation—how to use AI not just to produce more, but to think better. To dissect how these tools reason, learn, and improve—so you can take what works and integrate it into your own workflow.

By the end of this piece, you’ll walk away with three concrete thinking tools inspired by the most advanced AI models, like reasoning trajectories and attentional anchoring. Tools you can apply immediately to deepen your learning and sharpen your judgment.

I’ve written before about the idea of becoming an intellectual blacksmith—someone who shapes their ideas with care and craft. But now, the hammer is reshaping itself.

Which means we have an opportunity to do the same.

Let’s explore what recursive self-improvement really means (aiming for nearly no buzzwords) and how you can redesign your own cognitive architecture for a world that’s moving fast.


When the Tool Starts Tuning Itself

At the heart of all this vertigo is a simple feedback loop that tightens with every turn.

It’s like learning. You learn a new technique. That technique helps you learn the next one faster. That next one unlocks even deeper insight. With every round, your capacity expands.

This is because long-term memory (LTM) is essentially limitless. It stores everything you’ve already mastered: facts, skills, experiences. Every time you learn something new, your brain tries to connect it to what you already know.

The more you know, the more hooks your brain has to latch new ideas onto. When information is well-learned and organised into schemas or mental models, you can pull it from LTM to make sense of new input faster. This makes learning not just easier but more effective.

So while building that foundation takes effort, it literally rewires your brain—strengthening connections, building capacity, and making deeper, more complex learning possible down the line.

That’s what’s starting to happen with AI.

The most advanced systems are no longer just completing tasks—they’re improving the way they complete tasks. Rewriting their own instructions. And in some cases, even helping humans build better versions of the tools that built them.

A tool that’s starting to tune itself, in a feedback loop that mirrors how we learn and grow.


Recursive Improvement Is No Longer Theoretical

We’re not living in a sci-fi world (yet). We don’t have fully self-improving, self-directing AI. But the signals are here—and they’re accelerating.

  • Scientific discovery: AI is proposing drug compounds that would take humans months to identify.

  • Autonomous coding: Models like Anthropic’s can now write and debug software for 30+ hours without human input.

  • Chip design: AI is creating next-gen microchips faster and more efficiently than human engineers.

  • System architecture: AutoMaAS can reconfigure its internal logic. Walmart’s “super agent” WIBEY is already orchestrating 200+ specialized AIs to run real-world operations.

  • Prompt-to-prompt loops: Developers are using AI to generate better prompts for other AIs—already forming mini feedback loops that increase speed and quality.
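That last loop is simple enough to sketch. Below is a minimal, hypothetical Python outline of the structure: generate a response, critique it, then rewrite the prompt itself for the next round. The three helper functions are stubs standing in for real model calls—the point is the shape of the feedback loop, not any particular API.

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"response to: {prompt}"

def critique(prompt: str, response: str) -> str:
    # Stub critic: a second model call would assess the response.
    return f"make '{prompt}' more specific"

def refine_prompt(prompt: str, feedback: str) -> str:
    # Stub refiner: a third call rewrites the prompt using the critique.
    return f"{prompt} (refined per: {feedback})"

def prompt_loop(initial_prompt: str, rounds: int = 3) -> str:
    """Each round: generate -> critique -> rewrite the prompt itself."""
    prompt = initial_prompt
    for _ in range(rounds):
        response = call_model(prompt)
        feedback = critique(prompt, response)
        prompt = refine_prompt(prompt, feedback)
    return prompt
```

Notice that the output of each round becomes the input of the next—the same tightening spiral described above, just made explicit.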

In other words: recursive improvement is already happening.


AI Is Hitting These 3 Physical Walls

By now, this might sound like a runaway train. So why isn’t recursive AI sweeping across every industry, every workflow, every screen?

Because intelligence is tied to the physical world, shaped by scientific limits, and riddled with friction. And these limits say a lot about what intelligence actually is.

Let’s unpack three of them:

1. The Data Wall: AI Can’t Learn What Was Never Written

AI learns from mixed modalities (text, code, images, audio, video). There’s a finite amount of high-quality, human-generated data out there—and we’re approaching it fast.

You can’t train a smarter model just by shoveling in more Wikipedia pages.

Estimates suggest high-quality public text could be effectively exhausted by ~2026–2032 under current practices.

Future gains won’t come from more data, but from better ways to use the data we already have. That’s why researchers are now questioning whether expanding context windows (giving models more information) actually improves reasoning. So far, they’ve found that it often doesn’t.


2. The Energy Wall: AI Has a Massive Carbon Footprint

Training state-of-the-art AI, driven by the “bigger-is-better” scaling doctrine, requires massive computational power (or “compute”) and consumes a staggering amount of energy.

This is a hardware and resource problem, driven by the need for specialized chips (GPUs), massive data centers (infrastructure), electricity, and water for cooling, all of which necessitate the extraction of finite raw materials.

As corporate demand, fueled by intense competition, exponentially increases—doubling compute use much faster than historic technological rates—we are hitting severe physical and environmental limits, including strain on energy grids, water supplies, and climate mitigation goals.

Think of it less like building a faster brain—and more like an accelerating industrial revolution demanding unprecedented extraction and infrastructure that rapidly overwhelms local resources and sustainability pledges.


3. The Reality Wall: Knowing the Steps ≠ Dancing the Dance

AI systems, fundamentally functioning as statistical pattern matchers trained on vast text archives, can flawlessly describe a perfect tennis serve. But because they lack consciousness, physical embodiment, and subjectivity, they cannot authentically feel the shift in your weight or the grip of the racket.

True, resilient intelligence relies on abstract concepts, deep reasoning (causal relationships), common sense, and the ability to navigate unstructured environments—strengths which remain uniquely human. AI’s current ability to mimic traits like empathy and creativity is often a convincing illusion.


Rather than bugs in the system, these bottlenecks are the terrain. They’re why now, maybe more than ever, we need humans who know how to think with clarity, alignment, and intention.

So the question becomes: How do you think better? And how do you use these AI tools not to replace your thinking, but to refine it?


Augment Your Thinking With These 3 Mental Models

If the last year was about learning how to use AI, the next one will be about learning with it. Not just automating tasks, but enhancing how we reason, decide, and create.

The leverage lies in how you shape your attention, refine your logic, and guide the tool to produce meaningful output (and not just more noise). Here are three mental models you can start applying this week.

This post is for paid subscribers

© 2025 Eva Keiffenheim