Lesson 9: Troubleshooting
When AI Goes Wrong
You’ve learned the techniques. You’ve practiced iteration. But sometimes things still go sideways: the AI writes a novel when you wanted a tweet, answers a question you never asked, or confidently states something that’s just plain wrong. Every AI user has been there. The difference between frustrated beginners and confident prompters isn’t that experts never get bad outputs. It’s that they’ve learned to read those bad outputs like clues in a mystery, then apply the right fix. In this lesson, we’re turning you into a prompt detective.

Core Concepts
“That’s Not What I Meant!”: Fixing Misunderstandings
This is probably the most common AI problem, and it almost always comes down to one thing: the AI didn’t have enough information to understand what you actually wanted.

The Restaurant Analogy

Imagine you walk into a restaurant and tell the server, “I’d like something good.” They bring you a plate of liver and onions. Technically, someone thinks that’s good. But it’s not what you meant. Now imagine you said, “I’d like something light, maybe a salad with grilled chicken, no onions, with the dressing on the side.” Much harder to mess that up, right? AI works the same way. When it “misunderstands” you, it’s usually because your prompt left too much room for interpretation.

The Fix: Get Specific About What You Actually Want

Look at your failed prompt and ask yourself:
- Did I explain the context? (Who is this for? What’s the situation?)
- Did I specify the format? (Length, style, structure?)
- Did I mention any constraints? (What should it NOT include?)
“Write about dogs.”
What you got: A 500-word essay about the evolutionary history of canines.
What you wanted: A fun Instagram caption for your new puppy photo.
The fix:
“Write a short, playful Instagram caption (under 20 words) for a photo of my new golden retriever puppy playing in autumn leaves. Keep it lighthearted and use 1-2 relevant emojis.”
See the difference? You didn’t just say what you wanted - you said who it’s for, how long it should be, and what tone to use.
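That checklist (context, format, constraints) can be captured in a tiny helper. A minimal sketch for illustration only: the function `build_prompt` and its field names are invented for this lesson, not part of any AI library, but the habit it encodes is the real fix.

```python
# Hypothetical helper for assembling a specific prompt.
# The function name and fields are illustrative, not from any real library.

def build_prompt(task, context="", fmt="", constraints=""):
    """Combine a bare task with context, format, and constraints."""
    parts = [task]
    if context:
        parts.append("Context: " + context)
    if fmt:
        parts.append("Format: " + fmt)
    if constraints:
        parts.append("Constraints: " + constraints)
    return "\n".join(parts)

prompt = build_prompt(
    task="Write an Instagram caption for a photo of my golden retriever puppy in autumn leaves.",
    fmt="Under 20 words, playful tone, 1-2 relevant emojis.",
    constraints="No hashtags, no long paragraphs.",
)
print(prompt)
```

Even if you never write code, thinking in those three slots before you hit send is the whole trick.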
Responses Too Long, Too Short, or Too Generic
Sometimes the AI understands your request but delivers it in the wrong “size” or with all the personality of a corporate memo.

The Goldilocks Problem

- Too long: You asked for a summary and got an essay. The AI is trying to be thorough, but you needed brevity.
- Too short: You wanted a detailed explanation and got a sentence. The AI defaulted to concise when you needed comprehensive.
- Too generic: The response is technically correct but could apply to anyone or anything. It lacks the specific details that would make it useful.
If the response is too long:
- Add explicit length constraints: “in 2-3 sentences,” “under 100 words,” “in bullet points (max 5)”
- Tell it what to leave out: “Skip the background context and get straight to the actionable steps”
- Use formatting instructions: “Give me the headline version, not the full article”
If the response is too short:
- Ask for specifics: “Include at least 3 examples,” “Explain your reasoning,” “Cover both the pros and cons”
- Request depth: “Go into detail about each step,” “Elaborate on the most important points”
- Add “comprehensive,” “thorough,” or “in-depth” to your request
If the response is too generic:
- Add your specific context: instead of “marketing tips,” try “marketing tips for a small bakery in a college town”
- Include details about your situation: your constraints, your audience, your goals
- Ask it to tailor the response: “Make this specific to my situation as a…”
“Give me tips for better sleep.”
What you got: Generic advice about dark rooms and avoiding screens.
The fix:
“I’m a new parent with a 3-month-old who wakes up twice a night. I work from home and need to be functional for 9am meetings. Give me realistic sleep optimization tips that work around unpredictable baby schedules.”
Now the AI has something to work with. The tips will actually fit your life.
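The “too long” fix also works as a concrete follow-up rather than a vague “try again.” Here’s an illustrative sketch (all names invented for this lesson) that checks a reply against your length target and writes the specific follow-up you’d send:

```python
# Sketch of the Goldilocks fix as a follow-up generator: if the AI's reply
# blows past your length target, produce a specific follow-up message
# instead of a vague "try again". All names here are invented for illustration.

def length_followup(reply, max_words):
    """Return a follow-up message if the reply is too long, else None."""
    words = len(reply.split())
    if words > max_words:
        return ("That's about {} words. Condense it to under {} words, "
                "keeping only the actionable points.").format(words, max_words)
    return None

too_long = "Sleep is important for many reasons. " * 30  # stand-in for a rambling AI reply
followup = length_followup(too_long, max_words=50)
```

The point of the sketch: a useful follow-up names the problem (word count) and the target (under 50 words), which is exactly what a human editor would do.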
AI Confidently Giving Wrong Information
This is perhaps the trickiest problem because the AI doesn’t signal when it’s uncertain. It delivers wrong information with the same confident tone as correct information. Welcome to the world of “hallucinations.”

Why This Happens

Remember from Lesson 2 that AI predicts the most likely next words based on patterns. Sometimes those patterns lead to plausible-sounding but incorrect information. The AI isn’t lying - it doesn’t know the difference between true and false. It’s just generating text that sounds right.

Red Flags to Watch For

- Specific numbers, dates, or statistics (especially recent ones) - these are often fabricated
- Quotes attributed to real people - AI frequently makes these up
- Claims about current events - AI’s knowledge has a cutoff date
- Technical details in specialized fields - the AI may blend similar concepts incorrectly
- Anything that feels “too perfect” - if it sounds exactly like what you wanted to hear, double-check it
How to Protect Yourself

- Ask the AI to flag uncertainty: Add to your prompt: “If you’re not certain about something, say so. It’s okay to say ‘I’m not sure’ or ‘you should verify this.’”
- Request sources or reasoning: “Explain how you arrived at this answer” or “What’s this based on?” This doesn’t guarantee accuracy, but it helps you spot shaky logic.
- Break it into verifiable pieces: Instead of asking “Tell me everything about X,” ask specific questions you can fact-check.
- Use AI as a starting point, not the final word: For anything important, treat AI output as a first draft that needs verification, not a finished answer.
- Cross-reference important facts: If the AI gives you statistics or claims that matter for your work, take 30 seconds to verify them with a quick search.
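The first strategy above is easy to make a habit of: keep the uncertainty instruction in one place and append it to every factual prompt. A minimal sketch, with the wording and function name as examples only:

```python
# Minimal sketch: append an uncertainty instruction to any factual prompt
# before sending it. The wording and function name are just examples.

UNCERTAINTY_NOTE = ("If you're not certain about something, say so. "
                    "Flag any numbers, dates, or quotes I should verify.")

def with_uncertainty_flag(prompt):
    """Return the prompt with the standing uncertainty instruction appended."""
    return prompt + "\n\n" + UNCERTAINTY_NOTE

flagged = with_uncertainty_flag("Summarize the key dates in the history of the telegraph.")
```

This doesn’t make the AI accurate, but it gives it permission to hedge, which makes the shaky parts easier to spot.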
When the AI Refuses to Help (And What to Do)
Sometimes you get a response that essentially says “I can’t help with that.” This can be frustrating, especially when your request seems perfectly reasonable.

Why AI Says No

AI systems have built-in guidelines that cause them to decline certain requests. These typically fall into a few categories:
- Safety guardrails: Requests that could potentially cause harm
- Content policies: Topics the AI is instructed to avoid
- Misinterpreted intent: The AI thinks you’re asking for something problematic when you’re not
- Ambiguous requests: The AI isn’t sure if the request is okay, so it errs on the side of caution
Try It Yourself
Exercise 1: Diagnose the Problem
Here are three “failed” prompts and their unsatisfying outputs. For each one, identify what went wrong and write an improved version.

Prompt A: “Explain quantum computing.”
Output: A 1,500-word technical explanation filled with jargon about qubits, superposition, and quantum entanglement.
What went wrong? ___
Your improved prompt: ___

Prompt B:
“I need a business plan.”
Output: A generic template with sections like “Executive Summary” and “Market Analysis” - no specifics, just headers and placeholder text.
What went wrong? ___
Your improved prompt: ___

Prompt C:
“What’s the best restaurant in Chicago?”
Output: The AI names a specific restaurant with made-up details about awards it supposedly won in 2024.
What went wrong? ___
Your improved prompt: ___
Exercise 2: The Troubleshooting Checklist
Take a prompt that recently gave you a disappointing result. Run it through this diagnostic:
- Clarity check: Could a smart stranger understand exactly what I want?
- Context check: Did I provide enough background?
- Format check: Did I specify length, style, or structure?
- Constraint check: Did I say what to avoid or exclude?
- Accuracy check: Am I asking about facts I should verify?
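The five checks above can even run as a quick self-audit. This is a teaching sketch with invented names: you supply honest yes/no answers, and it lists what still needs fixing before you resend the prompt.

```python
# The five diagnostic checks as a quick self-audit. A teaching sketch:
# you supply honest yes/no answers, it reports the checks you failed.

CHECKLIST = {
    "clarity": "Could a smart stranger understand exactly what I want?",
    "context": "Did I provide enough background?",
    "format": "Did I specify length, style, or structure?",
    "constraints": "Did I say what to avoid or exclude?",
    "accuracy": "Did I plan to verify any facts I'm asking about?",
}

def failed_checks(answers):
    """Return the checklist questions answered 'no' (or left unanswered)."""
    return [question for key, question in CHECKLIST.items()
            if not answers.get(key, False)]

# Example: a prompt with background provided but nothing else pinned down.
todo = failed_checks({"context": True})
```

A prompt that passes all five checks isn’t guaranteed to work, but one that fails most of them almost certainly won’t.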
Exercise 3: Salvage an Output
You asked: “Write me a cover letter for a marketing job.”
You got: A generic cover letter that could be for any job, any company, any person.
Without starting over, write 2-3 follow-up messages that would transform this generic output into something you could actually use. (Hint: Think about what specific information you need to add.)

Common Pitfalls
Pitfall 1: Blaming the AI instead of the prompt
When you get a bad output, the instinct is to think “this AI is terrible.” But 90% of the time, the AI is doing exactly what the prompt asked for - you just asked for the wrong thing. Train yourself to look at the prompt first.

Pitfall 2: Starting completely over instead of iterating
A bad output isn’t a dead end. It’s information. What specifically was wrong? Too long? Wrong tone? Missing context? Use that feedback to refine rather than restart.

Pitfall 3: Not being specific enough about “wrong”
When something isn’t working, vague follow-ups like “That’s not right” or “Try again” don’t help. Be specific: “That’s too formal - use a conversational tone like you’re talking to a friend.” Give the AI something to work with.

Pitfall 4: Trusting confident-sounding facts
AI doesn’t have a “not sure” voice. It states made-up facts with the same confidence as well-established ones. Build in verification for anything that matters.

Pitfall 5: Fighting the refusal instead of understanding it
When AI declines a request, your first instinct shouldn’t be to find a way around it. First, consider: is there a legitimate reason for the hesitation? Then decide if reframing (vs. circumventing) is the right approach.

Level Up
Here’s a challenge that puts all your troubleshooting skills to work.

The Scenario: You’re helping a friend who’s new to AI. They show you this exchange:

Their prompt: “Write a speech for my dad’s retirement party.”
AI output: An 800-word formal speech filled with corporate jargon about “leveraging synergies” and “transitioning to the next chapter,” plus made-up references to “his 30 years at the company.”
Your friend says: “See? AI is useless. It doesn’t know anything about my dad.”

Your challenge:
- Explain to your friend what went wrong (in a kind way that doesn’t make them feel bad).
- Write 3 specific questions you’d ask them to gather the information needed.
- Craft the improved prompt using the information they might provide.
- Add a sentence to the prompt that would prevent the AI from making up facts it doesn’t know.

