Barbara Oakley on the #1 Mistake Learners Make With AI
And the two simple prompting methods she recommends to build deep, lasting knowledge instead.

It was the day after my birthday. March 2020. I had just said goodbye to my class, sensing something none of us could name. “Look at each other closely,” I told them. “It might be a while.”
That weekend, unsure what to do with myself, Vienna’s streets empty, I turned to Coursera. I clicked on Learning How to Learn by Barbara Oakley without much expectation, not knowing that this course would give me words for things I had always felt as a teacher and a learner. It felt like someone had turned the lights on inside my mind. It changed how I teach. It changed how I learn.
Since then, I’ve read all of Oakley’s books. So when I saw she had a course on learning with AI, I signed up. I also watched her recent talks on turning generative AI into a tool that deepens understanding.
Here, I will share two ideas that stayed with me and how you can make use of them. If you’re curious how to use AI not as a shortcut but as a companion on the learning path, this is for you.
“AI won't replace humans, but humans who use AI will replace humans that don't.”
— Fei-Fei Li
The Brain Learns in Two Ways
Barbara Oakley draws a beautiful parallel between generative AI and the brain’s dual learning systems. She calls them the declarative and procedural pathways.
The declarative system, managed by the hippocampus, handles conscious, effortful learning. This is where you focus on memorizing facts, solving equations, or following instructions. It’s deliberate, slow, and mentally taxing. It’s the system you engage when you're learning something entirely new.
The procedural system, tied to the basal ganglia and cerebellum, learns through repetition. It automates patterns—like typing, driving, or speaking a second language. Once trained, it operates unconsciously and quickly. You don’t think through every move; you just do it.
These systems work together. Declarative learning sets the foundation. Procedural learning builds fluency.
AI mimics this architecture. When it “hallucinates,” it's similar to what happens when your conscious system isn’t fully engaged, and the procedural one fills in the gaps based on past patterns.
Yesterday, I experienced this firsthand. I was cycling home from my co-working space. After ten minutes, I realized I was biking toward my old apartment. My procedural memory, shaped by nine years of repetition, overrode my conscious intention.
The point: automation isn’t always accurate.
When you use AI to simply retrieve answers—especially for learning—you’re bypassing declarative effort. You get speed, but you don’t build structure. No neural consolidation. No flexible understanding. You get the result. But you lose the wiring.
AI as a Conceptual Translation Engine
The most powerful mental model for what generative AI offers an expert is not "artificial intelligence" but "conceptual translation."
Not translation between languages like French and English, but translation between systems of meaning. From a field you don’t know into one you do; from complexity into familiarity.
This is how generative AI can accelerate learning. Not by reducing difficulty, but by restructuring it.
AI allows you to take a foreign concept—like the structure of a neural network—and express it using the language of something you already understand, like ecosystems, city planning, or chess strategy. This mental re-mapping makes learning faster and more durable.
There are two practical ways to apply this:
The Metaphor Engine
A metaphor is a mental shortcut. It uses a strong, pre-existing set of neural links (something you already understand well, like how water flows) to create a "bridge" to a new, unfamiliar concept (like how electricity flows).
This is the most direct application of the translation principle. Instead of asking for a dry explanation, you force the AI to build a bridge from a concept you don't know to a domain you know intimately. The pattern is simple:
Prompt:
Explain [complex topic] to me using analogies and metaphors from the world of [your domain of expertise].
Examples:
For a global strategist: "Explain the principles of quantum entanglement using analogies from international relations theory and treaty negotiations."
For a health scholar: "Explain the architecture of an AI transformer model (like in GPT-4) using the language and concepts of epidemiology and viral transmission."
For a tech expert: "Explain the military strategy of 'defense in depth' using principles of modern cybersecurity and network architecture."
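If you prefer to run this pattern outside a chat window, the sketch below wires the same prompt into the OpenAI Python SDK. Treat it as a minimal illustration: the model name, the topic, and the domain are placeholders, and any chat-capable model or library would work the same way.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def metaphor_engine(topic: str, domain: str, model: str = "gpt-4o") -> str:
    """Ask the model to translate an unfamiliar topic into a familiar domain."""
    prompt = (
        f"Explain {topic} to me using analogies and metaphors "
        f"from the world of {domain}."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: translate neural networks into the language of city planning
print(metaphor_engine("the structure of a neural network", "city planning"))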
The Persona Mirror
This pattern involves telling the AI to explain something not just to you, but to a specific professional persona. This is incredibly useful for testing arguments, preparing presentations, or understanding how your ideas will land with different stakeholders.
Prompt:
Explain [topic] to a [specific professional persona]. Now, explain the same topic to [a different professional persona] and highlight the key differences in framing and emphasis.
Examples:
"Explain the new SEC climate disclosure rules to a Silicon Valley venture capitalist. Now, explain the exact same rules to a tenured professor of environmental ethics."
"Explain the business case for adopting a four-day work week to a Chief Financial Officer focused on quarterly targets. Now, explain it to a Head of People focused on talent retention and burnout."
The difference in the output—the choice of vocabulary, the focus, the assumed values—is an immediate, powerful lesson in communication and perspective.
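The Persona Mirror works the same way programmatically: ask once per persona and read the answers side by side. Here is a minimal sketch, again assuming the OpenAI Python SDK and a placeholder model name; the personas are lifted from the examples above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def persona_mirror(topic: str, personas: list[str], model: str = "gpt-4o") -> dict[str, str]:
    """Explain the same topic to several professional personas and collect the answers."""
    answers = {}
    for persona in personas:
        prompt = f"Explain {topic} to a {persona}."
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answers[persona] = response.choices[0].message.content
    return answers

# Example: compare how the framing shifts between two stakeholders
results = persona_mirror(
    "the business case for a four-day work week",
    ["Chief Financial Officer focused on quarterly targets",
     "Head of People focused on talent retention and burnout"],
)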
A Final Thought
The throughline in all of this is a shift in intent.
Don’t use AI to get the fastest answer. Use it to deepen your thinking and understanding.
Treat it less like a search engine, more like a sparring partner. Let it introduce friction, reveal blind spots, and translate complexity into metaphors that connect with your existing expertise.
I’m curious — how are you using AI to accelerate your learning?