
Lesson 7: Think Step by Step

Why Showing Your Work Isn’t Just for Math Class

Here’s a riddle for you: A farmer has 17 sheep. All but 9 run away. How many sheep does the farmer have left? Quick, what’s your gut answer? If you said “8,” you’re not alone. Many people do. But the correct answer is 9. (“All but 9” means 9 remain.) Here’s the fascinating part: when researchers asked AI models this exact question, the models often got it wrong too, unless they were asked to think through the problem step by step. When prompted to reason out loud, accuracy jumped dramatically. This lesson is about that simple but powerful idea: asking AI to show its work doesn’t just help you understand the answer; it helps the AI find the right answer in the first place.

Core Concepts

Why “Just Give Me the Answer” Often Fails

Remember back in Lesson 2 when we talked about how AI doesn’t really “think”? It predicts the most likely next word based on patterns. This works beautifully for simple tasks, but it can trip up on anything that requires multiple logical steps. Think about it this way. Imagine I asked you to multiply 23 by 47 in your head. Could you do it? Maybe. But it would be much easier (and you’d be much less likely to make mistakes) if you could write out the steps:
23 x 47
= 23 x 40 + 23 x 7
= 920 + 161
= 1,081
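If you want to sanity-check that decomposition yourself, one line of Python (or any calculator) does it:

```python
# The mental-math decomposition, written out and verified.
assert 23 * 47 == 23 * 40 + 23 * 7 == 920 + 161 == 1081
print(23 * 47)  # 1081
```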
AI faces the same challenge. When you ask for just the answer to a complex problem, you’re essentially asking it to do mental math. Sometimes it nails it. Sometimes it confidently gives you the wrong answer. And you might not even realize it’s wrong because the response sounds so sure of itself. The solution? Give the AI permission (even explicit instructions) to work through the problem step by step.

The Magic Phrase: “Let’s Think Step by Step”

Researchers discovered something almost comically simple: adding “Let’s think step by step” to a prompt can dramatically improve AI performance on reasoning tasks. This technique is called Chain of Thought (CoT) prompting. Here’s why it works. When you ask AI to explain its reasoning, you’re essentially creating a trail of logic. Each step builds on the previous one. If the AI makes a wrong turn early on, it’s more likely to catch it, and you’re more likely to spot it too. Without Chain of Thought:
Prompt: If I have 3 apples and buy 2 bags with 6 apples each, then give away half of all my apples, how many do I have?
AI Response: You have 6 apples.
Maybe that’s right? Let’s see. With Chain of Thought:
Prompt: If I have 3 apples and buy 2 bags with 6 apples each, then give away half of all my apples, how many do I have? Let’s think through this step by step.
AI Response:
  • Starting apples: 3
  • Apples from 2 bags (6 each): 2 x 6 = 12
  • Total apples: 3 + 12 = 15
  • After giving away half: 15 / 2 = 7.5
You would have 7.5 apples. (In practical terms, you’d have 7 apples and would have given away 8.)
Now you can see the reasoning and catch that the answer is actually 7.5, not 6. You can also see exactly how the AI got there and verify that each step makes sense.
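One bonus of seeing the steps: each one is trivially checkable. Here’s the same chain of arithmetic as a quick Python sketch, just to confirm the AI’s work:

```python
# Verify the chain-of-thought arithmetic from the apple problem.
starting = 3
from_bags = 2 * 6              # 2 bags of 6 apples each
total = starting + from_bags   # 3 + 12 = 15
after_giving_half = total / 2  # 15 / 2 = 7.5

print(total)             # 15
print(after_giving_half) # 7.5
```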

Breaking Complex Tasks Into Smaller Pieces

“Let’s think step by step” is great, but sometimes you need to be even more explicit about how to break down a problem. This is especially true for tasks that aren’t purely mathematical. Let’s say you want AI to help you plan a community event. You could ask:
“Help me plan a summer barbecue for 50 people.”
And you’ll get a generic list. Serviceable, maybe. But compare that to:
“I need to plan a summer barbecue for 50 people. Walk me through this step by step:
  1. First, what key decisions do I need to make?
  2. Then, what food and supplies will I need?
  3. Next, what’s a realistic timeline for preparation?
  4. Finally, what could go wrong and how should I prepare for it?”
By explicitly laying out the steps you want the AI to walk through, you get a much more thorough, organized, and useful response. You’ve essentially given the AI a roadmap for its thinking. Pro tip: You don’t always know the right steps in advance, and that’s okay! You can ask the AI to figure out the steps first:
“I want to learn how to brew coffee at home like a pro. Before giving me instructions, first break this goal down into the key skills or areas I’d need to learn, then walk me through each one.”
This two-stage approach (ask for the framework, then ask for the details) is incredibly powerful for learning and tackling unfamiliar topics.
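If you ever build prompts programmatically, the step-by-step scaffold above is just a template you can reuse. A minimal sketch (the `cot_prompt` helper and its wording are illustrative, not from any particular library):

```python
def cot_prompt(task: str, steps: list[str]) -> str:
    """Wrap a task in an explicit step-by-step scaffold."""
    lines = [task, "", "Walk me through this step by step:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = cot_prompt(
    "I need to plan a summer barbecue for 50 people.",
    ["First, what key decisions do I need to make?",
     "Then, what food and supplies will I need?",
     "Next, what's a realistic timeline for preparation?",
     "Finally, what could go wrong and how should I prepare for it?"],
)
print(prompt)
```

The point isn’t the code itself; it’s that a good step-by-step prompt has a predictable shape you can apply to any task.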

When to Ask for Reasoning vs. Just Results

Not every prompt needs Chain of Thought. Asking AI to “think step by step” about what time zone Tokyo is in is overkill. The answer is simple and factual. So when should you deploy this technique? Here’s a handy guide: Use Chain of Thought when:
  • The task involves multiple steps or calculations
  • You need to verify the AI’s logic (not just trust the answer)
  • The problem has potential for misinterpretation
  • You’re dealing with word problems, puzzles, or conditional logic
  • You’re making a decision and want to see the trade-offs considered
  • The stakes are high enough that a wrong answer matters
Skip Chain of Thought when:
  • The question has a simple, factual answer
  • You’re doing creative tasks where the “why” doesn’t matter
  • Speed is more important than verification
  • You already trust the straightforward response
Think of it like asking for an itemized receipt versus just getting a total. Sometimes you just want to pay and leave. But if you’re submitting an expense report or checking for billing errors, you need to see the breakdown.

Try It Yourself

Exercise 1: The Classic Test

Try this prompt in your favorite AI tool:
“A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. How much does the ball cost?”
Note the answer. Now try:
“A bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. How much does the ball cost? Work through this step by step before giving your final answer.”
Compare the responses. Did you get the intuitive (but wrong) answer of $0.10 the first time? The correct answer is $0.05, because if the ball costs $0.05, the bat costs $1.05, which is $1.00 more, and together they’re $1.10.
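You can also confirm the answer with a line of algebra: if the ball costs b, the bat costs b + 1.00, and b + (b + 1.00) = 1.10 gives b = 0.05. A quick check in Python:

```python
# Bat-and-ball problem: ball costs b, bat costs b + 1.00,
# together they cost 1.10. Solving 2b + 1.00 = 1.10:
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

print(f"{ball:.2f}")       # 0.05
print(f"{ball + bat:.2f}") # 1.10
```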

Exercise 2: Real-World Reasoning

Pick a genuine decision you’re facing (or make one up), then try this prompt:
“I’m trying to decide whether to [your decision]. Think through this systematically:
  • First, list the key factors I should consider
  • Then, evaluate the pros and cons of each option
  • Consider what information I might be missing
  • Finally, suggest which option seems strongest and why”
Notice how the structured reasoning gives you not just an answer, but a framework for thinking about the problem yourself.

Exercise 3: Explain It To Me

Choose a concept you’d like to understand better (how compound interest works, why the sky is blue, how vaccines work, anything). Try:
“Explain [concept] to me. Build up the explanation step by step, starting from the most basic foundation and adding one piece at a time until I understand the full picture.”
This forces the AI to construct a logical progression rather than dumping information on you all at once.

Common Pitfalls

Pitfall 1: Forgetting that step-by-step still needs good context
“Let’s think step by step” isn’t magic fairy dust that fixes a vague prompt. If you ask “Help me with my project step by step,” the AI has no idea what project you’re working on. You still need to apply everything you’ve learned about context (Lesson 4) and specificity (Lesson 3).

Pitfall 2: Not actually reading the steps
The whole point of Chain of Thought is that you can see and verify the reasoning. If you just skip to the final answer, you’ve gained nothing. When the AI shows its work, actually check it. You’ll catch mistakes, and you’ll learn something in the process.

Pitfall 3: Over-engineering simple questions
If you ask the AI to think step by step about what the capital of France is, you’ll get three paragraphs explaining that Paris is indeed the capital. It’s correct, but it’s wasted effort. Match the technique to the complexity of the task.

Pitfall 4: Accepting confident-sounding steps uncritically
AI can sound very confident while walking through completely flawed logic. Just because it looks like careful reasoning doesn’t mean it is careful reasoning. Stay engaged. Ask yourself: does each step actually follow from the previous one?

Level Up

Ready for a challenge? Try this: You’re planning to read 24 books this year. It’s now March, and you’ve read 3 books. You typically read for 30 minutes a day, and the average book takes you about 6 hours to complete. Ask your AI to help you figure out: Are you on track? If not, what adjustments would you need to make? Have it show its reasoning completely. Then, critically evaluate the AI’s step-by-step work. Did it make any assumptions you disagree with? Did it miss any factors? Did any of its math steps have errors? This is the real skill: using Chain of Thought not just to get better answers from AI, but to become a better critical thinker yourself.
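Before you hand this problem to an AI, it helps to know what the arithmetic actually says so you can judge its reasoning. Here’s a rough sketch under stated assumptions (it’s the start of March, so January and February have elapsed, and the 30-minutes-a-day pace holds; your own assumptions may differ, which is exactly the point of the exercise):

```python
# Reading-pace check for the Level Up exercise.
# Assumptions: start of March (Jan + Feb elapsed), non-leap year,
# and a steady 30 minutes of reading per day.
goal_books = 24
books_read = 3
hours_per_book = 6
minutes_per_day = 30

days_elapsed = 31 + 28                 # January + February
days_remaining = 365 - days_elapsed    # 306 days left

books_remaining = goal_books - books_read
hours_needed = books_remaining * hours_per_book
hours_available = days_remaining * minutes_per_day / 60

print(hours_needed)    # 126 hours of reading still required
print(hours_available) # 153.0 hours available at the current pace
```

If an AI’s step-by-step answer disagrees with numbers like these, one of you has made an assumption worth examining.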

Key Takeaway

For complex problems, asking AI to show its work leads to dramatically better answers and gives you the ability to catch mistakes before they matter. “Let’s think step by step” is one of the most powerful phrases in your toolkit. Use it whenever accuracy matters more than speed, and always actually read the steps to verify the reasoning.

What’s Next

You’ve now learned how to get AI to reason through problems step by step. But here’s the truth: even with perfect technique, your first prompt is rarely your best prompt. Great results usually come from refining, clarifying, and iterating. In Lesson 8: The Art of Iteration, we’ll explore how to build on AI’s responses, redirect when things go off track, and develop the back-and-forth conversation skills that turn good prompts into great outcomes. Because prompting isn’t a single shot; it’s a dialogue.