
Lesson 2: How AI “Thinks” (Spoiler: It Doesn’t)

Peeking Behind the Curtain

Here’s a question that might surprise you: What if I told you that ChatGPT, Claude, and every other AI you’ve chatted with has absolutely no idea what it’s saying? Seriously. When you ask an AI “What’s the capital of France?” it doesn’t know the answer the way you do. It doesn’t have a little mental map of Europe stored somewhere. It doesn’t remember learning this in school. It’s doing something much stranger and, honestly, kind of beautiful in its simplicity.

Understanding what that “something” is will completely change how you write prompts. You’ll stop expecting AI to think like a human and start working with how it actually operates. And that’s when things get really good. Let’s peek behind the curtain.

Core Concepts

The World’s Most Sophisticated Autocomplete

Remember the last time you typed a text message and your phone suggested the next word? Maybe you typed “See you” and it suggested “later” or “tomorrow” or “soon.” That’s basically what AI does. Just… a lot more impressively.

Large Language Models (LLMs) are prediction machines. Given everything that came before, they predict what word (or piece of a word) should come next. Then they predict the next one. And the next. Over and over until they’ve generated a complete response.

When you ask “What’s the capital of France?” the AI isn’t retrieving a fact from a database. It’s essentially thinking: “Given all the text I’ve seen during training, what words are most likely to follow this question?” And because it’s seen millions of examples where “France” and “capital” appear near “Paris,” that’s what it predicts.

Here’s why this matters for prompting: the words you use shape what predictions the AI makes. A vague prompt leads to generic predictions. A specific, well-crafted prompt steers the AI toward more useful predictions.
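To make the “prediction machine” idea concrete, here’s a toy sketch in Python. The frequency table is invented purely for illustration; a real LLM learns billions of statistical patterns over tokens, not a tiny hand-written dictionary of phrases:

```python
import random

# Toy "language model": a hand-made table of how often each word
# followed a given phrase. Entirely made up for illustration.
next_word_counts = {
    "the capital of France is": {"Paris": 95, "a": 3, "not": 2},
    "see you": {"later": 50, "tomorrow": 30, "soon": 20},
}

def predict_next(prompt: str) -> str:
    """Sample the next word in proportion to how often it followed the prompt."""
    counts = next_word_counts[prompt]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the capital of France is"))  # "Paris" about 95% of the time
```

Notice there is no lookup of a “fact” anywhere: the model only samples whatever most often came next in its (here, fake) training data. Repeat the prediction over and over and you get a full response.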

Tokens: The Building Blocks of AI Language

AI doesn’t actually read words the way you do. It reads tokens: chunks of text that might be a whole word, part of a word, or even just punctuation. Think of tokens like LEGO bricks. The word “understanding” might be broken into three tokens: “under” + “stand” + “ing”. The word “cat” is probably one token. A period? That’s a token too.

Why should you care about this? A few reasons:
  1. Token limits are real. Every AI has a maximum number of tokens it can handle at once. This includes both your prompt AND the response. If you’re working with long documents or detailed instructions, you might hit this ceiling.
  2. Tokens affect cost. Many AI services charge by the token. Understanding this helps you write efficient prompts, getting great results without unnecessary verbosity.
  3. It explains some weird behavior. Ever notice AI struggling with counting letters in words? Or making odd spelling mistakes? That’s because it’s not seeing individual letters; it’s seeing tokens. The word “strawberry” might be tokenized in a way that makes counting the R’s surprisingly tricky.
A rough rule of thumb: one token is approximately 4 characters in English, or about three-quarters of a word. So a 100-word paragraph is roughly 130-140 tokens.
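That rule of thumb is easy to turn into a quick back-of-the-envelope estimator. To be clear, this is only an approximation for English text, not how any real tokenizer works (real tokenizers use learned vocabularies and vary between models):

```python
def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, round(len(text) / 4))

# A 100-word paragraph ("word " is 5 characters, including the space)
paragraph = "word " * 100
print(estimate_tokens(paragraph))  # 125 -- in the same ballpark as 130-140
```

Estimates like this are handy for guessing whether a long document plus your instructions will fit under a model’s limit before you paste it in.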

The Context Window: AI’s Short-Term Memory

Here’s something crucial: AI has no long-term memory between conversations. Every time you start a new chat, you’re talking to a blank slate. But within a single conversation, AI does remember what you’ve discussed, up to a point. That memory is called the context window, and it has a fixed size (measured in tokens, of course). Think of the context window like a whiteboard that can only hold so much writing. As your conversation grows, older content might effectively “fall off” the edge. The AI can only pay attention to what fits on the whiteboard right now. This has practical implications:
  • Long conversations can “forget” early details. If you established something important at the start of a very long chat, the AI might lose track of it.
  • Everything costs context space. Your system instructions, the AI’s previous responses, your prompts: it all counts against the limit.
  • Starting fresh isn’t always bad. Sometimes a new conversation (with a well-crafted initial prompt) beats trying to course-correct a confused long thread.
Modern AI models have gotten much better here. Some can handle hundreds of thousands of tokens, equivalent to a novel or more. But even with large context windows, the principle remains: AI only knows what’s in front of it right now.
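One way to picture the whiteboard analogy in code: a minimal sketch of a sliding context window, using the same rough four-characters-per-token estimate. Real chat systems are more sophisticated (for example, they usually pin the system instructions so those never fall off), but the basic trimming idea is the same:

```python
def rough_tokens(text: str) -> int:
    """Rule-of-thumb token count: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Keep the newest messages that fit the budget; older ones fall off."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest -> oldest
        cost = rough_tokens(msg)
        if used + cost > max_tokens:
            break                    # this message and everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

chat = [
    "You are a helpful assistant.",  # oldest: the first thing to fall off
    "Hi!",
    "Tell me about Paris.",
    "Paris is the capital of France.",
]
print(trim_to_window(chat, max_tokens=15))  # the oldest message is gone
```

Run this and the very first instruction disappears from the window, which is exactly why very long conversations can “forget” things you established early on.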

Hallucinations: When Prediction Goes Wrong

Now for the uncomfortable truth: because AI is predicting what sounds right rather than consulting a source of truth, sometimes it confidently generates complete nonsense. This is called hallucination, and it’s one of the most important things to understand about AI. Here’s how it happens. Remember, the AI is playing the “predict the next likely word” game. Sometimes that game leads it astray:
  • You ask about a topic where it has limited training data, so it fills gaps with plausible-sounding fiction
  • You ask for specific details (like citations, URLs, or statistics) and it generates ones that look real but aren’t
  • You ask leading questions and it follows your lead, even when it shouldn’t
How to spot hallucinations:
  1. Be skeptical of specific claims. Dates, statistics, quotes, and citations are hallucination hotspots. Always verify these independently.
  2. Watch for confident nonsense. AI rarely volunteers “I’m not sure about this.” It tends to speak with the same confident tone whether it’s right or completely making things up.
  3. Notice when details are suspiciously perfect. If an AI-generated response has exactly the example you needed or quotes that perfectly support your point, double-check. Real sources are usually messier.
  4. Test with questions you know the answers to. Before trusting AI on topics you’re less familiar with, try it on topics where you can verify the response.
How to reduce hallucinations:
  • Provide more context (less guessing needed)
  • Ask AI to cite its reasoning or acknowledge uncertainty
  • Be specific about what you want (vague questions invite creative filling)
  • Use AI alongside other sources, not as your only source

Try It Yourself

Let’s make these concepts concrete with some hands-on experiments.

Exercise 1: Watch Prediction in Action

Try this prompt with any AI:
Complete this sentence five different ways: "The best thing about mornings is..."
Notice how each completion sounds natural but goes in different directions? That’s prediction at work. The AI isn’t sharing its personal opinion about mornings; it’s predicting plausible completions based on patterns it learned. Now try:
Complete this sentence five different ways as a coffee shop owner would: "The best thing about mornings is..."
See how adding context changes the predictions? You’ve steered the probability landscape.

Exercise 2: Token Awareness

Ask an AI:
How many times does the letter "r" appear in the word "strawberry"?
Then ask:
Please spell out the word "strawberry" letter by letter, then count how many times "r" appears.
The second prompt often gets better results because you’re working with how tokens function rather than against them. Breaking it into steps helps the AI process what it actually “sees.”

Exercise 3: Catch a Hallucination

Ask your AI for something very specific that you can verify:
What papers did Dr. Geoffrey Hinton publish in 2019? List the titles and journals.
Then actually look up the answer. Did the AI get it right? If it generated plausible-sounding paper titles, check whether they actually exist. This exercise builds your skepticism muscles.

Exercise 4: Context Window Experiment

Start a new conversation and establish a “secret word”:
For this conversation, whenever I say "banana," you should respond with "Got it!" instead of answering normally. Confirm you understand.
Have a normal conversation for 15-20 exchanges about various topics. Then drop in “banana” and see if the AI remembers your instruction. With most modern AI, it will, but this shows you how context persists (and helps you intuit when very long conversations might start losing track of early details).

Common Pitfalls

Pitfall 1: Treating AI Like a Search Engine

AI isn’t Googling things when you ask questions. It’s generating text that sounds like a good answer. This means it won’t give you the latest information, it might not know about niche topics, and it definitely won’t cite real sources unless specifically designed to do so. Instead: Think of AI as a knowledgeable colleague who might misremember details. It’s useful for drafts, brainstorming, and explanations, but not for facts that need to be airtight.

Pitfall 2: Assuming AI “Understands” You

When AI gives a great response, it’s tempting to think it truly understood your needs. But it’s matching patterns, not comprehending them. This means subtle context you thought you implied might not register at all. Instead: Be explicit. If something matters, say it directly. Don’t rely on AI to pick up on hints or read between the lines.

Pitfall 3: Not Verifying Important Information

Hallucinations look exactly like correct answers. There’s no red flag, no “I made this up” warning. The more confident you get with AI, the easier it is to let things slip through. Instead: Build a verification habit. For any fact, statistic, or quote that matters, spend 30 seconds confirming it independently.

Pitfall 4: Ignoring Token Limits in Complex Tasks

If you’re giving AI a massive document plus detailed instructions plus asking for a lengthy response, you might hit the context limit. The AI might truncate its response, miss parts of your input, or give lower-quality answers. Instead: For very long content, break it into chunks. Process sections separately, then synthesize. Or explicitly tell the AI to focus on specific portions.
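The chunking advice can be sketched as a small helper. The chunk size and overlap values below are arbitrary placeholders; in practice you’d size them to your model’s context window, and the overlap exists so text near a boundary stays visible in both neighboring chunks:

```python
def chunk_text(text: str, max_chars: int = 2000, overlap: int = 200) -> list[str]:
    """Split long text into overlapping character chunks.

    Each chunk fits comfortably in a prompt; the overlap repeats a little
    text across boundaries so context isn't lost mid-sentence.
    """
    if overlap >= max_chars:
        raise ValueError("overlap must be smaller than max_chars")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

document = "x" * 5000        # stand-in for a long report or transcript
pieces = chunk_text(document)
print(len(pieces))           # 3 chunks of at most 2000 characters each
```

You would then send each chunk to the AI separately (with the same instructions) and ask for a final pass that synthesizes the per-chunk answers.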

Level Up

Here’s your challenge: Become a hallucination detective. Pick a topic you know well (your profession, a hobby, your hometown, whatever you’re an expert in). Ask an AI to explain something specific about that topic. Something detailed enough that you’d know if the answer was wrong. Your mission:
  1. Identify at least one error or questionable claim in the response
  2. Figure out why the AI might have made that error (limited training data? Confused with something similar? Made up a specific detail?)
  3. Rewrite your original prompt in a way that would reduce the chance of that error
This exercise builds two crucial skills: healthy skepticism and the ability to craft prompts that minimize AI’s weaknesses.

Key Takeaway

AI doesn’t think; it predicts the next likely word based on patterns in its training data. Understanding this helps you write prompts that lead it toward the answers you actually want. Be specific to guide predictions, be aware of token limits and context windows, and always verify important information because even confident AI can be confidently wrong.

What’s Next

Now you’ve got a mental model for how AI works under the hood. You understand that it’s predicting text, operating within token limits and context windows, and sometimes confidently generating nonsense. But here’s the exciting part: this knowledge is power. Because if you understand that AI is a prediction engine, you can learn to write prompts that guide those predictions exactly where you want them to go. In Lesson 3: The Anatomy of a Great Prompt, we’ll break down the core components that separate frustrating prompts from fantastic ones. You’ll learn a simple framework (Task, Context, and Format) that you can apply to any AI interaction. It’s time to move from understanding how AI works to making it work for you.