Lifelong Learning Club
How to Treat AI Like an Unhinged Intern (And Become a Rigorous Thinker)

The day ChatGPT almost broke my rental car.

Eva Keiffenheim MSc
Jun 24, 2025

I was driving a rental car on my way to a conflict transformation training in a future-ready German village, pulling into a Tankstelle (a gas station), when I realized I had no idea how to open the fuel tank.

The car looked like a spaceship. There was no obvious button or lever. So I did what I do these days: I snapped a photo and asked my ChatGPT app for help.

I got a confident step-by-step answer.

Oh wow, I thought, here we are in the future.

But the answer didn't work.

So I prompted again, this time with more precision. GPT gave me a reply so specific and persuasive that I almost broke the fuel cap off the car.

Frustrated, I finally opened the glovebox and did something radical: I read the manual.

In 30 seconds, I had the answer.

Standing there with the nozzle in hand, I laughed at myself, and something clicked into place.

If I hadn’t been grounded in physical reality—struggling with an unforgiving chunk of plastic—I’d never have realized the AI was wrong.

In the physical world, bad advice breaks things, or simply doesn’t work. In knowledge work, it just seeps in unnoticed. You don’t see the crack in the foundation. You just build on it.

Since that day, I've started paying closer attention. Not just to the answers AI gives, but to the nature of the technology itself.

I believe the default ways in which many of us interact with AI are flawed.

In this article, I share four principles I now follow to think better with these systems.

Level 1: Treat AI as an Unhinged Intern

Our instinct when working with AI should be healthy skepticism. A generative large language model, like ChatGPT, Gemini, or Claude, is a dream engine, designed to generate the most plausible continuation of a pattern. It has no concept of “truth,” only of “coherence.”

This isn't an accidental design flaw but a business model.

Our daily interactions with AI are shaped by the commercial goals of a few tech giants. These tools are designed for seamless use, not thoughtful reflection, favoring confident, plausible outputs to keep us engaged.

But this smooth experience hides how the system actually works: it draws on unvetted internet data and the invisible labor of low-paid annotators who make it more agreeable. To use AI without being used by it, we need to see through this illusion.

That means building a mindset and framework around it, one that reintroduces cognitive friction and questions not just the answers, but the data, systems, and motives behind them.

Assume everything is a confident hallucination until proven otherwise. Treat AI outputs as drafts from a brilliant, unsupervised, and occasionally unhinged intern.

Always verify critical information. Confidence is not accuracy. If it cites a study, find the PDF and confirm the claim. If it mentions a name, look it up. Respect the AI’s output, but verify before you act on it.

And most importantly, force it to argue with itself. Don't accept the first answer.

To start, use this prompt boundary:

For this conversation, I need you to act as a critical and honest thinking partner. Your goal is to make my thinking better, not to make me feel good. Please:

1. Acknowledge your potential biases based on your training data and alignment (e.g., your tendency towards agreeableness or specific ethical frameworks).

2. Instead of giving one answer, present at least two distinct and well-reasoned viewpoints on this topic.

3. Do not just affirm my ideas. Actively challenge them. Ask probing questions, point out potential flaws, and help me see what I'm missing.

4. Explain the reasoning behind your challenges and suggestions, referencing the core principles or data that inform your perspective.
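
One way to make this boundary stick is to set it as the system prompt, so it governs the whole conversation rather than a single reply. Here’s a minimal sketch, assuming the official `openai` Python SDK and an API key in your environment; the model name is a placeholder, so use whichever chat model you prefer:

```python
# A minimal sketch: apply the critical-partner boundary as a system prompt,
# so it shapes every exchange in the conversation.
# Assumes the official `openai` Python SDK and an OPENAI_API_KEY in the
# environment; the model name below is a placeholder.
from openai import OpenAI

CRITICAL_PARTNER = (
    "Act as a critical and honest thinking partner. Acknowledge your potential "
    "biases, present at least two distinct viewpoints, actively challenge my "
    "ideas with probing questions, and explain the reasoning behind your challenges."
)

client = OpenAI()

def ask_critic(question: str) -> str:
    """Ask a question with the critical-partner boundary as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any capable chat model works here
        messages=[
            {"role": "system", "content": CRITICAL_PARTNER},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_critic("Is my plan to self-publish my first book a good idea?"))
```

The point of the system role is that the boundary survives the back-and-forth: you don’t have to re-ask for pushback on every turn.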

Level 2: Build a Specialized Cabinet

Would you hire one person to be your quantitative analyst, empathetic speechwriter, and avant-garde graphic designer? Of course not.

Yet we treat a single AI model as a one-size-fits-all assistant, which leads to generic outputs.

"AI" is no magic monolith, but a pool of specialized talent. Clarify your task first, then select the right specialist for the job. Here’s how I currently think of my AI team.

The Power-Tier Thinker:

Platforms offer junior-level models (like GPT-4o, Claude Sonnet, or Gemini Flash) and senior-level expert models (like OpenAI's o3, Claude Opus, or Gemini 2.5 Pro).

Default models prioritize speed and cost savings. For tasks that require critical analysis, manually switch to the most powerful model: pick the "sports car," not the default "pickup truck." Here’s how:

[Self-recorded video: switching to a more powerful model in Google AI Studio]
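
If you work through an API rather than the web UI, the same principle applies: the model is just a parameter, so you can route routine work to a junior model and critical analysis to a senior one. A rough sketch, again assuming the `openai` Python SDK; both model names are assumptions and will date quickly:

```python
# A rough sketch of power-tier routing: a junior model for routine work,
# a senior model for anything you will build decisions on.
# Assumes the official `openai` Python SDK; both model names are assumptions.
from openai import OpenAI

client = OpenAI()

JUNIOR = "gpt-4o-mini"  # fast and cheap: summaries, reformatting, quick drafts
SENIOR = "o3"           # slower and pricier: critical analysis

def ask(question: str, critical: bool = False) -> str:
    """Route to the senior model only when the task warrants the cost."""
    response = client.chat.completions.create(
        model=SENIOR if critical else JUNIOR,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

ask("Turn these bullet points into a polite email: ...")                      # junior
ask("Find the weakest assumption in this strategy memo: ...", critical=True)  # senior
```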

The Deep Researcher:
