
Lesson 9: Troubleshooting

When AI Goes Wrong

You’ve learned the techniques. You’ve practiced iteration. But sometimes things still go sideways: the AI writes a novel when you wanted a tweet, answers a question you never asked, or confidently states something that’s just plain wrong. Every AI user has been there. The difference between frustrated beginners and confident prompters isn’t that experts never get bad outputs. It’s that they’ve learned to read those bad outputs like clues in a mystery, then apply the right fix. In this lesson, we’re turning you into a prompt detective.

Core Concepts

“That’s Not What I Meant!”: Fixing Misunderstandings

This is probably the most common AI problem, and it almost always comes down to one thing: the AI didn’t have enough information to understand what you actually wanted.
The Restaurant Analogy
Imagine you walk into a restaurant and tell the server, “I’d like something good.” They bring you a plate of liver and onions. Technically, someone thinks that’s good. But it’s not what you meant. Now imagine you said, “I’d like something light, maybe a salad with grilled chicken, no onions, with the dressing on the side.” Much harder to mess that up, right? AI works the same way. When it “misunderstands” you, it’s usually because your prompt left too much room for interpretation.
The Fix: Get Specific About What You Actually Want
Look at your failed prompt and ask yourself:
  • Did I explain the context? (Who is this for? What’s the situation?)
  • Did I specify the format? (Length, style, structure?)
  • Did I mention any constraints? (What should it NOT include?)
Example of a Misunderstanding
Your prompt:
“Write about dogs.”
What you got: A 500-word essay about the evolutionary history of canines.
What you wanted: A fun Instagram caption for your new puppy photo.
The fix:
“Write a short, playful Instagram caption (under 20 words) for a photo of my new golden retriever puppy playing in autumn leaves. Keep it lighthearted and use 1-2 relevant emojis.”
See the difference? You didn’t just say what you wanted - you also said who it’s for, how long it should be, and what tone to use.

Responses Too Long, Too Short, or Too Generic

Sometimes the AI understands your request but delivers it in the wrong “size” or with all the personality of a corporate memo.
The Goldilocks Problem
  • Too long: You asked for a summary and got an essay. The AI is trying to be thorough, but you needed brevity.
  • Too short: You wanted a detailed explanation and got a sentence. The AI defaulted to concise when you needed comprehensive.
  • Too generic: The response is technically correct but could apply to anyone or anything. It lacks the specific details that would make it useful.
The Fixes
For responses that are too long:
  • Add explicit length constraints: “in 2-3 sentences,” “under 100 words,” “in bullet points (max 5)”
  • Tell it what to leave out: “Skip the background context and get straight to the actionable steps”
  • Use formatting instructions: “Give me the headline version, not the full article”
For responses that are too short:
  • Ask for specifics: “Include at least 3 examples,” “Explain your reasoning,” “Cover both the pros and cons”
  • Request depth: “Go into detail about each step,” “Elaborate on the most important points”
  • Add “comprehensive,” “thorough,” or “in-depth” to your request
For responses that are too generic:
  • Add your specific context: instead of “marketing tips,” try “marketing tips for a small bakery in a college town”
  • Include details about your situation: your constraints, your audience, your goals
  • Ask it to tailor the response: “Make this specific to my situation as a…”
Example of Fixing a Generic Response
Your prompt:
“Give me tips for better sleep.”
What you got: Generic advice about dark rooms and avoiding screens.
The fix:
“I’m a new parent with a 3-month-old who wakes up twice a night. I work from home and need to be functional for 9am meetings. Give me realistic sleep optimization tips that work around unpredictable baby schedules.”
Now the AI has something to work with. The tips will actually fit your life.

AI Confidently Giving Wrong Information

This is perhaps the trickiest problem because the AI doesn’t signal when it’s uncertain. It delivers wrong information with the same confident tone as correct information. Welcome to the world of “hallucinations.”
Why This Happens
Remember from Lesson 2 that AI predicts the most likely next words based on patterns. Sometimes those patterns lead to plausible-sounding but incorrect information. The AI isn’t lying - it doesn’t know the difference between true and false. It’s just generating text that sounds right.
Red Flags to Watch For
  • Specific numbers, dates, or statistics (especially recent ones) - these are often fabricated
  • Quotes attributed to real people - AI frequently makes these up
  • Claims about current events - AI’s knowledge has a cutoff date
  • Technical details in specialized fields - the AI may blend similar concepts incorrectly
  • Anything that feels “too perfect” - if it sounds exactly like what you wanted to hear, double-check it
The Fixes
  1. Ask the AI to flag uncertainty: Add to your prompt: “If you’re not certain about something, say so. It’s okay to say ‘I’m not sure’ or ‘you should verify this.’”
  2. Request sources or reasoning: “Explain how you arrived at this answer” or “What’s this based on?” This doesn’t guarantee accuracy, but it helps you spot shaky logic.
  3. Break it into verifiable pieces: Instead of asking “Tell me everything about X,” ask specific questions you can fact-check.
  4. Use AI as a starting point, not the final word: For anything important, treat AI output as a first draft that needs verification, not a finished answer.
  5. Cross-reference important facts: If the AI gives you statistics or claims that matter for your work, take 30 seconds to verify them with a quick search.
The Healthy Mindset
Think of AI as a very knowledgeable but sometimes overconfident friend. They know a lot and can help you think through problems, but you wouldn’t stake your reputation on their memory of specific facts without double-checking.

When the AI Refuses to Help (And What to Do)

Sometimes you get a response that essentially says “I can’t help with that.” This can be frustrating, especially when your request seems perfectly reasonable.
Why AI Says No
AI systems have built-in guidelines that cause them to decline certain requests. These typically fall into a few categories:
  1. Safety guardrails: Requests that could potentially cause harm
  2. Content policies: Topics the AI is instructed to avoid
  3. Misinterpreted intent: The AI thinks you’re asking for something problematic when you’re not
  4. Ambiguous requests: The AI isn’t sure if the request is okay, so it errs on the side of caution
The Fixes
For misinterpreted intent - clarify your purpose: If you’re asking about something sensitive for a legitimate reason, explain that reason.
Instead of: “How do I pick a lock?”
Try: “I’m a landlord and I’m locked out of my own rental property. What are my legitimate options for gaining entry? I’m also happy to call a locksmith - what should that process look like?”
Instead of: “Write a story where the villain wins.”
Try: “I’m writing a literary fiction piece that explores moral complexity. I’d like to write a scene from the antagonist’s perspective that helps readers understand their motivation, even if they don’t agree with their actions.”
For content the AI seems hesitant about - reframe the request: Sometimes how you ask matters as much as what you ask.
Instead of: “Write a negative review of [competitor product]”
Try: “Help me write an honest comparison between Product A and Product B, including the weaknesses of each.”
For topics that require expertise the AI shouldn’t provide: If the AI declines to give medical, legal, or financial advice, that’s actually appropriate. Reframe your request to get useful help without asking AI to play professional:
Instead of: “What medication should I take for my symptoms?”
Try: “What questions should I prepare to ask my doctor about these symptoms? What information will they likely need from me?”
When to Accept the “No”
Sometimes the AI is right to decline. If you’re asking for something that could harm others, violate privacy, or cross ethical lines, the refusal is doing its job. Take a moment to consider whether your request might need rethinking.
The Workaround Approach
If you genuinely need help with something the AI seems reluctant about, try breaking it into smaller, clearly appropriate pieces. Often the AI can help with components of a task even if it’s hesitant about the whole thing.

Try It Yourself

Exercise 1: Diagnose the Problem

Here are three “failed” prompts and their unsatisfying outputs. For each one, identify what went wrong and write an improved version.
Prompt A:
“Explain quantum computing.”
Output: A 1,500-word technical explanation filled with jargon about qubits, superposition, and quantum entanglement.
What went wrong? ___
Your improved prompt: ___
Prompt B:
“I need a business plan.”
Output: A generic template with sections like “Executive Summary” and “Market Analysis” - no specifics, just headers and placeholder text.
What went wrong? ___
Your improved prompt: ___
Prompt C:
“What’s the best restaurant in Chicago?”
Output: The AI names a specific restaurant with made-up details about awards it supposedly won in 2024.
What went wrong? ___
Your improved prompt: ___

Exercise 2: The Troubleshooting Checklist

Take a prompt that recently gave you a disappointing result. Run it through this diagnostic:
  1. Clarity check: Could a smart stranger understand exactly what I want?
  2. Context check: Did I provide enough background?
  3. Format check: Did I specify length, style, or structure?
  4. Constraint check: Did I say what to avoid or exclude?
  5. Accuracy check: Am I asking about facts I should verify?
Which check revealed the problem? Rewrite your prompt addressing that gap.

Exercise 3: Salvage an Output

You asked: “Write me a cover letter for a marketing job.”
You got: A generic cover letter that could be for any job, any company, any person.
Without starting over, write 2-3 follow-up messages that would transform this generic output into something you could actually use. (Hint: Think about what specific information you need to add.)

Common Pitfalls

Pitfall 1: Blaming the AI instead of the prompt
When you get a bad output, the instinct is to think “this AI is terrible.” But 90% of the time, the AI is doing exactly what the prompt asked for - you just asked for the wrong thing. Train yourself to look at the prompt first.
Pitfall 2: Starting completely over instead of iterating
A bad output isn’t a dead end. It’s information. What specifically was wrong? Too long? Wrong tone? Missing context? Use that feedback to refine rather than restart.
Pitfall 3: Not being specific enough about “wrong”
When something isn’t working, vague follow-ups like “That’s not right” or “Try again” don’t help. Be specific: “That’s too formal - use a conversational tone like you’re talking to a friend.” Give the AI something to work with.
Pitfall 4: Trusting confident-sounding facts
AI doesn’t have a “not sure” voice. It states made-up facts with the same confidence as well-established ones. Build in verification for anything that matters.
Pitfall 5: Fighting the refusal instead of understanding it
When AI declines a request, your first instinct shouldn’t be to find a way around it. First, consider: is there a legitimate reason for the hesitation? Then decide if reframing (vs. circumventing) is the right approach.

Level Up

Here’s a challenge that puts all your troubleshooting skills to work.
The Scenario: You’re helping a friend who’s new to AI. They show you this exchange:
Their prompt: “Write a speech for my dad’s retirement party.”
AI output: An 800-word formal speech filled with corporate jargon about “leveraging synergies” and “transitioning to the next chapter,” plus made-up references to “his 30 years at the company.”
Your friend says: “See? AI is useless. It doesn’t know anything about my dad.”
Your challenge:
  1. Explain to your friend what went wrong (in a kind way that doesn’t make them feel bad).
  2. Write 3 specific questions you’d ask them to gather the information needed.
  3. Craft the improved prompt using the information they might provide.
  4. Add a sentence to the prompt that would prevent the AI from making up facts it doesn’t know.

Key Takeaway

AI isn’t trying to frustrate you; it’s doing its best with the information you gave it. When something goes wrong, diagnose first: Misunderstanding? Add context and specifics. Wrong length? Add explicit constraints. Too generic? Include your unique situation. Potentially wrong? Ask for reasoning and verify key facts. Refused? Clarify your purpose or reframe the request. The more you practice diagnosing problems, the faster you’ll write prompts that work the first time.

What’s Next

You’ve now learned how to craft prompts, iterate on them, and troubleshoot when things go wrong. In our final lesson, Lesson 10: Putting It All Together, we’ll synthesize everything into a practical framework you can use every day. You’ll create your own prompt templates, work through real-world scenarios, and leave with a complete toolkit for making AI a genuine amplifier of your work. You’ve come a long way. Let’s bring it all home.