Lesson 2: How AI “Thinks” (Spoiler: It Doesn’t)
Peeking Behind the Curtain
Here’s a question that might surprise you: What if I told you that ChatGPT, Claude, and every other AI you’ve chatted with has absolutely no idea what it’s saying? Seriously. When you ask an AI “What’s the capital of France?” it doesn’t know the answer the way you do. It doesn’t have a little mental map of Europe stored somewhere. It doesn’t remember learning this in school. It’s doing something much stranger and, honestly, kind of beautiful in its simplicity.

Understanding what that “something” is will completely change how you write prompts. You’ll stop expecting AI to think like a human and start working with how it actually operates. And that’s when things get really good. Let’s peek behind the curtain.

Core Concepts
The World’s Most Sophisticated Autocomplete
Remember the last time you typed a text message and your phone suggested the next word? Maybe you typed “See you” and it suggested “later” or “tomorrow” or “soon.” That’s basically what AI does. Just… a lot more impressively.

Large Language Models (LLMs) are prediction machines. Given everything that came before, they predict what word (or piece of a word) should come next. Then they predict the next one. And the next. Over and over until they’ve generated a complete response.

When you ask “What’s the capital of France?” the AI isn’t retrieving a fact from a database. It’s essentially asking: “Given all the text I’ve seen during training, what words are most likely to follow this question?” And because it’s seen millions of examples where “France” and “capital” appear near “Paris,” that’s what it predicts.

Here’s why this matters for prompting: the words you use shape what predictions the AI makes. A vague prompt leads to generic predictions. A specific, well-crafted prompt steers the AI toward more useful predictions.
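To make that prediction loop concrete, here’s a toy illustration in Python. It’s a simple word-frequency (bigram) model, nothing like the neural networks inside real LLMs, but the generate-one-word-at-a-time loop is the same idea. The tiny training text is invented for the example:

```python
from collections import Counter, defaultdict

# Toy "training data" -- real models see trillions of tokens.
training_text = (
    "the capital of france is paris . "
    "paris is the capital of france . "
    "the capital of italy is rome ."
)

# Count which word tends to follow which (a bigram model).
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

def generate(start, n=5):
    """Predict, append, repeat -- the core loop of an LLM."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # the model "knows" Paris only through co-occurrence
```

Notice there’s no geography database anywhere in this code. “France” leads to “Paris” purely because those words sat near each other in the training text.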
AI doesn’t actually read words the way you do. It reads tokens: chunks of text that might be a whole word, part of a word, or even just punctuation. Think of tokens like LEGO bricks. The word “understanding” might be broken into three tokens: “under” + “stand” + “ing”. The word “cat” is probably one token. A period? That’s a token too.

Why should you care about this? A few reasons:

1. Token limits are real. Every AI has a maximum number of tokens it can handle at once. This includes both your prompt AND the response. If you’re working with long documents or detailed instructions, you might hit this ceiling.
2. Tokens affect cost. Many AI services charge by the token. Understanding this helps you write efficient prompts, getting great results without unnecessary verbosity.
3. It explains some weird behavior. Ever notice AI struggling with counting letters in words? Or making odd spelling mistakes? That’s because it’s not seeing individual letters; it’s seeing tokens. The word “strawberry” might be tokenized in a way that makes counting the R’s surprisingly tricky.

A rough rule of thumb: one token is approximately 4 characters in English, or about three-quarters of a word. So a 100-word paragraph is roughly 130-140 tokens.
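You can turn that rule of thumb into a quick back-of-the-envelope estimator. This is a rough sketch, not a real tokenizer (libraries like OpenAI’s tiktoken compute exact counts for specific models):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters or ~3/4 of a word per token."""
    by_chars = len(text) / 4
    by_words = len(text.split()) / 0.75
    return round((by_chars + by_words) / 2)  # average the two heuristics

paragraph = "word " * 100          # stand-in for a 100-word paragraph
print(estimate_tokens(paragraph))  # roughly 130, matching the rule of thumb
```

Estimates like this are good enough for budgeting prompts; switch to a real tokenizer when you need exact counts for billing or hard limits.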
Here’s something crucial: AI has no long-term memory between conversations. Every time you start a new chat, you’re talking to a blank slate. But within a single conversation, AI does remember what you’ve discussed, up to a point.

That memory is called the context window, and it has a fixed size (measured in tokens, of course). Think of the context window like a whiteboard that can only hold so much writing. As your conversation grows, older content might effectively “fall off” the edge. The AI can only pay attention to what fits on the whiteboard right now.

This has practical implications:
- Long conversations can “forget” early details. If you established something important at the start of a very long chat, the AI might lose track of it.
- Everything costs context space. Your system instructions, the AI’s previous responses, your prompts: it all counts against the limit.
- Starting fresh isn’t always bad. Sometimes a new conversation (with a well-crafted initial prompt) beats trying to course-correct a confused long thread.
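Here’s a sketch of how a chat application might keep a conversation inside a fixed context window: estimate each message’s token cost and let the oldest messages fall off when the budget runs out. The message format, token estimator, and budget numbers are all invented for illustration:

```python
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough rule of thumb: ~4 chars per token

def trim_history(messages, budget_tokens):
    """Keep only the most recent messages that fit within the budget."""
    kept, used = [], 0
    for msg in reversed(messages):               # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget_tokens:
            break                                # older messages fall off
        kept.append(msg)
        used += cost
    return list(reversed(kept))                  # restore chronological order

history = [
    {"role": "user", "content": "My secret word is 'walrus'."},
    {"role": "assistant", "content": "Got it, I'll remember that."},
    {"role": "user", "content": "Now help me outline a blog post " * 20},
]
trimmed = trim_history(history, budget_tokens=170)
# With a small budget, the early "secret word" message gets dropped --
# exactly the "forgetting" behavior described above.
```

Real chat products use smarter strategies (summarizing old turns, pinning system instructions), but the underlying constraint is the same fixed budget.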
Hallucinations: When Prediction Goes Wrong
Now for the uncomfortable truth: because AI is predicting what sounds right rather than consulting a source of truth, sometimes it confidently generates complete nonsense. This is called hallucination, and it’s one of the most important things to understand about AI.

Here’s how it happens. Remember, the AI is playing the “predict the next likely word” game. Sometimes that game leads it astray:
- You ask about a topic where it has limited training data, so it fills gaps with plausible-sounding fiction
- You ask for specific details (like citations, URLs, or statistics) and it generates ones that look real but aren’t
- You ask leading questions and it follows your lead, even when it shouldn’t
So how do you protect yourself? Build these habits:
- Be skeptical of specific claims. Dates, statistics, quotes, and citations are hallucination hotspots. Always verify these independently.
- Watch for confident nonsense. AI rarely says “I’m not sure about this.” It speaks with the same confident tone whether it’s right or completely making things up.
- Notice when details are suspiciously perfect. If an AI-generated response has exactly the example you needed or quotes that perfectly support your point, double-check. Real sources are usually messier.
- Test with questions you know the answers to. Before trusting AI on topics you’re less familiar with, try it on topics where you can verify the response.
You can also reduce hallucinations through better prompting:
- Provide more context (less guessing needed)
- Ask AI to cite its reasoning or acknowledge uncertainty
- Be specific about what you want (vague questions invite creative filling)
- Use AI alongside other sources, not as your only source
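The prompting tips above can be baked into a reusable template. This is one hypothetical approach, not an official technique; the company and question below are invented for illustration:

```python
def build_careful_prompt(question: str, context: str) -> str:
    """Build a prompt that supplies context and invites uncertainty."""
    return (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "Answer using only the context above. If the context does not "
        "contain the answer, say 'I don't know' instead of guessing, "
        "and briefly explain what information is missing."
    )

prompt = build_careful_prompt(
    question="What year was the company founded?",
    context="Acme Corp makes widgets and is headquartered in Ohio.",
)
print(prompt)
# The context doesn't mention a founding year, so a well-behaved model
# should now admit it doesn't know rather than invent a date.
```

Providing the context yourself and explicitly permitting “I don’t know” removes the two biggest invitations to hallucinate: gaps in training data and pressure to sound certain.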
Try It Yourself
Let’s make these concepts concrete with some hands-on experiments.

Exercise 1: Watch Prediction in Action
Try this prompt with any AI:

Exercise 2: Token Awareness
Ask an AI:

Exercise 3: Catch a Hallucination
Ask your AI for something very specific that you can verify:

Exercise 4: Context Window Experiment
Start a new conversation and establish a “secret word”:

Common Pitfalls
Pitfall 1: Treating AI Like a Search Engine
AI isn’t Googling things when you ask questions. It’s generating text that sounds like a good answer. This means it won’t give you the latest information, it might not know about niche topics, and it definitely won’t cite real sources unless specifically designed to do so.

Instead: Think of AI as a knowledgeable colleague who might misremember details. It’s useful for drafts, brainstorming, and explanations, but not for facts that need to be airtight.

Pitfall 2: Assuming AI “Understands” You
When AI gives a great response, it’s tempting to think it truly understood your needs. But it’s pattern matching, not comprehending. This means subtle context you “implied” might not register at all.

Instead: Be explicit. If something matters, say it directly. Don’t rely on AI to pick up on hints or read between the lines.

Pitfall 3: Not Verifying Important Information
Hallucinations look exactly like correct answers. There’s no red flag, no “I made this up” warning. The more confident you get with AI, the easier it is to let things slip through.

Instead: Build a verification habit. For any fact, statistic, or quote that matters, spend 30 seconds confirming it independently.

Pitfall 4: Ignoring Token Limits in Complex Tasks
If you’re giving AI a massive document plus detailed instructions plus asking for a lengthy response, you might hit the context limit. The AI might truncate its response, miss parts of your input, or give lower-quality answers.

Instead: For very long content, break it into chunks. Process sections separately, then synthesize. Or explicitly tell the AI to focus on specific portions.

Level Up
Here’s your challenge: Become a hallucination detective.

Pick a topic you know well (your profession, a hobby, your hometown, whatever you’re an expert in). Ask an AI to explain something specific about that topic. Something detailed enough that you’d know if the answer was wrong.

Your mission:
- Identify at least one error or questionable claim in the response
- Figure out why the AI might have made that error (limited training data? Confused with something similar? Made up a specific detail?)
- Rewrite your original prompt in a way that would reduce the chance of that error

