How AI Outputs Quietly Erode Your Expertise
It’s not hallucinations you should fear but the illusion of knowing. These 3 protocols turn AI into an Apprentice that strengthens your judgment.

Welcome to issue #211 of the Lifelong Learning Club. I’m really glad you’re here. Each Monday, I send a preview of a paywalled article to help you learn smarter and turn “one day...” into Day One. For the full suite of science-backed strategies, expert AI prompts, direct support, and a global community designed for consistent action, consider becoming a paid member.
A few weeks ago, I asked ChatGPT to alphabetize a bibliography for one of my articles. In seconds, it produced the organized list.
The problem?
Most of the links were invented, and several journal names had been altered. I had been handed a confident, constructed illusion.
The experience was humiliating (see my comment here), but I didn’t fully grasp what went wrong until last week.
I was at a philosophy x AI x art convening in a French castle and saw a tarot card reader in the co-working space. The whole process was completely transparent. I watched her hands, saw the shuffle, and observed the draw. She handed me two cards, and the interpretation was left to me. My role was to assign my own meaning to a process I had witnessed from start to finish.
With AI, this is inverted and far more opaque.
We get the results without seeing the shuffle, and that opacity tempts us into a transaction of faith: we accept the AI’s proclamations without ever witnessing how they were constructed.
But what if the phantom bibliography wasn’t a sign of AI’s failure, but of my own?
What if we stopped treating AI like an Oracle and started treating it like an Apprentice—a powerful but still raw partner that needs your clear direction?
This piece is a refined mental model for working with AI, backed by a 3-step framework of practical protocols. It’s a method for turning the opaque oracle you obey into a transparent cognitive partner you direct, ensuring you build—not erode—your own professional judgment.
A New Mental Model for AI
Most people’s default way of using AI is a transaction of faith. They approach it like an Oracle in a digital temple. You bring it a prompt, and in return, it delivers a proclamation. The process is mystical, the reasoning a black box, and your role is passive.
This Oracle model is seductive. It promises the end product of thought without the messy, effortful process of actually thinking. But it leaves you with answers you can’t defend, insights you haven’t earned, and a terrifying blind spot for its sophisticated errors.
The alternative is to leave the temple and enter the workshop.
In the workshop, the dynamic is not about faith, but about construction. You are the one with the vision, the context, and the critical judgment that comes from experience. The AI is your new Apprentice. It is immensely powerful and fast, but it lacks true understanding, has no sense of consequence, and needs your firm, structured direction.
In this model, your goal is not to receive a finished product from a black box. It’s to engage in a transparent process where you and your apprentice build the final thing together. The real value isn’t just the output, but the clarity you build along the way.
The Brain’s Case Against the Oracle
This shift from Oracle to Apprentice is aligned with how your brain actually learns. The Oracle model, for all its power, works directly against the way your brain builds durable knowledge.
1. You’re Mistaking Ease for Expertise
The Oracle’s output feels incredible. It’s clean, confident, and immediate. But your brain is being tricked. Cognitive psychologist Robert Bjork has shown that we consistently mistake this feeling of “cognitive fluency”—the ease of processing information—for actual learning.
Your brain uses a simple, and often wrong, shortcut: if it feels easy, I must know it.
But real learning isn’t passive. It’s an effortful process. The struggle to remember a fact, articulate a half-formed idea, or connect two concepts is the very signal your brain needs to strengthen its connections. That effort—what Bjork calls “desirable difficulty”—is the mental equivalent of lifting a weight to build muscle.
The Oracle is designed to eliminate this productive struggle. It hands you the perfect paragraph, robbing you of the hard but essential work of drafting it yourself. You’re left with the hollow feeling of knowing about something, without truly knowing it. You’ve rented the information, but you haven’t built the mental muscle.
2. You’re Collecting Bricks, Not Building a House
Real expertise isn’t a list of facts you can look up; it’s a network of connections you’ve built inside your own mind. A new idea only becomes useful when it hooks into what you already know.
Information that isn’t connected is what’s known as “inert knowledge.” It’s like a perfectly made brick sitting in a box. You technically have it, but because it isn’t part of a larger structure, it provides no support. You can’t build with it.
The Oracle is a factory for these clean, unconnected bricks. Its answers are syntactically perfect but contextually isolated. It doesn’t know what you know, so it cannot help you make the connections that are the basis of genuine understanding. It gives you the brick, but the mortar—the real work of learning—is still up to you.
The Oracle model is a shortcut that bypasses how your brain is built to learn. It offers the illusion of knowledge without the underlying neural structure. If you want to use AI to get smarter—and not just to generate outputs that make you seem so—you have to trade the Oracle’s temple for the craftsman’s workshop.
You have to put the friction back in.
Three Protocols for the Workshop
If the Oracle model removes the friction your brain needs to learn, the question is: how do you put it back in? How do you force the apprentice to show its work, turning its black-box magic into a transparent, collaborative process?
You need a new set of workshop protocols.
These aren’t just “prompts” for getting a better output. They are structured routines designed to force the entire thinking process—the assumptions, the logic, the weak spots—out into the open before you ever see a final product. They make you the director of the thinking, not just the recipient of the answer.
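As a concrete illustration of what such a routine might look like in practice (this is my own sketch, not one of the article’s three protocols), a “show your work” directive can be encoded as a reusable template that forces assumptions, reasoning, and weak spots to surface before any final product appears:

```python
def apprentice_prompt(task: str, context: str) -> str:
    """Build a 'workshop protocol' prompt that makes the model expose its
    thinking scaffolding before producing a deliverable.

    Illustrative sketch only; the exact wording and structure are assumptions,
    not the article's paid protocols.
    """
    return "\n".join([
        "You are my apprentice, not an oracle. Before any final answer:",
        "1. List every assumption you are making about my request.",
        "2. Outline your reasoning step by step.",
        "3. Flag the weakest point in that reasoning and how I could verify it.",
        "Only then produce the deliverable, clearly labeled FINAL.",
        "",
        f"Context: {context}",
        f"Task: {task}",
    ])

# Example: the bibliography task from the opening anecdote,
# with an explicit guard against invented entries.
prompt = apprentice_prompt(
    task="Alphabetize this bibliography by author surname.",
    context="Do not invent or alter any entry; mark anything you cannot verify.",
)
print(prompt)
```

The point of the template is the ordering: by demanding assumptions and weak spots first, you stay the director of the thinking rather than the recipient of a polished answer.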

