Lesson 1: Welcome to the Conversation
You Already Know How to Do This
Here’s a secret that the tech industry doesn’t want you to know: you’ve been training for AI your entire life. Every time you’ve asked someone for help and realized you needed to explain things differently… every time you’ve rephrased a question because the first version didn’t land… every time you’ve given more context because someone misunderstood what you meant… you were prompt engineering. You just didn’t know it had a fancy name.

The Google Trap
Let’s start with an experiment. Think about the last time you used Google. You probably typed something like:

best italian restaurants near me

Short. Choppy. Keywords. No full sentences needed. That’s because search engines are like vending machines. You punch in a code, and they spit out a list of links. The machine doesn’t care about you, your preferences, or your situation. It just pattern-matches your keywords against a database.

Now here’s the thing: AI doesn’t work like that. When people first try ChatGPT, Claude, or other AI assistants, they often treat them like fancy search engines. They type keyword-style queries and get confused when the results feel… off. Too generic. Not quite right.

That’s not a failure of the AI. It’s a mismatch in expectations. Because AI isn’t a vending machine. It’s more like a very eager new colleague who’s brilliant but doesn’t know anything about you yet.
Core Concepts
What Is an LLM, Really?
LLM stands for Large Language Model. But let’s skip the jargon and talk about what it actually does. An LLM is a system that has read an almost incomprehensible amount of text (books, articles, websites, conversations, code) and learned patterns from all of it. Not facts, exactly. Patterns. It learned how humans communicate, how ideas flow, how questions get answered.

Think of it like this: imagine someone who has read every book in the world’s largest library. They can talk about almost any topic, write in almost any style, and help with almost any task. But they weren’t there when those books were written. They don’t have personal experiences. And they sometimes mix things up or fill in gaps with confident-sounding guesses.

That’s an LLM. Incredibly capable. Sometimes surprisingly insightful. But not all-knowing, and definitely not infallible.

Why should you care? Because LLMs are now woven into the tools you use every day, whether you realize it or not. Email assistants, customer service chatbots, writing tools, coding helpers, search features. Understanding how they work gives you a superpower: you can actually get them to help you instead of frustrating you.

The Conversation Mindset
Here’s the mental shift that changes everything: stop thinking of AI as a tool. Start thinking of it as a conversation partner.

When you talk to a friend about a problem, you don’t bark keywords at them. You explain the situation. You share context. You describe what you’re trying to accomplish. And if they misunderstand, you clarify. AI works the same way.

The difference between someone who’s frustrated with AI and someone who gets amazing results often comes down to this single mindset shift. One person is typing commands into a machine. The other is having a conversation with a helpful (if sometimes literal-minded) collaborator.

Your role in this conversation:
- You bring the goals, the context, and the judgment
- You know what “good” looks like for your situation
- You can steer, clarify, and redirect

The AI’s role:
- It brings vast knowledge and pattern recognition
- It can generate, analyze, and transform at incredible speed
- It never gets tired, impatient, or annoyed when you ask for changes
Your First Prompt: What’s Actually Happening
Let’s say you type this into an AI assistant:

Write me a poem

The AI will write you a poem. It will probably be… fine. Generic. A little Hallmark-y. Not bad, but not special. Now let’s say you type this instead:

Write me a short poem in the style of Mary Oliver about finding unexpected peace while doing mundane chores

This time, you’ll get something much more interesting. Why? Because in the first prompt, the AI had to guess at everything: What kind of poem? What tone? What subject? What length? Who’s the audience? It made safe, middle-of-the-road choices because you gave it nothing to work with.

In the second prompt, you gave it constraints and direction. You told it what “good” looks like for your specific situation. The AI didn’t have to guess; it could focus all its capability on the actual creative work.

Here’s the principle: AI doesn’t read minds. Every piece of information you leave out is a blank the AI fills in with its best guess. Sometimes those guesses are great. Often they’re not what you wanted.
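If you happen to be comfortable with a little code, the same principle can be made concrete: a prompt is just a string, and every context field you leave empty is a blank the model has to guess at. Here is a minimal illustrative sketch (the function and field names are invented for this example, not part of any AI product’s API):

```python
def build_prompt(task, audience=None, style=None, constraints=None):
    """Assemble a prompt from explicit context fields.

    Any field left as None is a blank the AI will fill in
    with its own best guess."""
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}")
    if style:
        parts.append(f"Style: {style}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)


# The vending-machine version: everything left to guesswork
vague = build_prompt("Write me a poem")

# The conversation version: "good" is spelled out
specific = build_prompt(
    "Write me a short poem about finding unexpected peace "
    "while doing mundane chores",
    style="in the style of Mary Oliver",
    constraints="keep it short",
)
```

The point of the sketch is only that specificity is cheap: adding a line of context costs you seconds and removes an entire dimension of guessing.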
What AI Can and Can’t Do
Before we go further, let’s set some realistic expectations. This will save you a lot of frustration.

What AI is genuinely good at:
- Generating drafts and starting points. Need a first version of an email, a blog post, an outline? AI can give you something to react to instead of a blank page.
- Explaining things in different ways. Confused about a concept? AI can reframe it, simplify it, or explain it from a different angle.
- Brainstorming and ideation. AI can generate lots of options quickly. Most won’t be winners, but often that spark of an idea is all you need.
- Transforming content. Summarizing, expanding, reformatting, translating: AI excels at taking something that exists and turning it into something else.
- Answering questions about things it’s been trained on. Within its knowledge base, AI can be remarkably helpful for research and learning.
- Tedious but straightforward tasks. Formatting data, generating variations, filling in templates. These are tasks that are boring for humans but clear in their requirements.
What AI is not good at:
- Being current. AI has a knowledge cutoff date. It doesn’t know about yesterday’s news (unless it has access to the web, and even then, verification matters).
- Accuracy for critical facts. AI can sound confident while being completely wrong. Always verify important information.
- Understanding your specific context without being told. The AI doesn’t know your company’s culture, your personal preferences, or your audience unless you share that information.
- Genuine creativity and originality. AI remixes patterns it has seen. It can be surprisingly creative within those patterns, but true novelty is hard.
- Judgment calls that require human values. AI can inform a decision, but the final call, especially on ethical matters, should be yours.
- Doing things in the real world. AI generates text (and sometimes images, code, etc.). It can’t actually send emails, make purchases, or take physical actions on your behalf unless connected to other systems.
Try It Yourself
Time to get your hands dirty. Open your favorite AI assistant (ChatGPT, Claude, Gemini, whatever you have access to) and try these exercises.

Exercise 1: The Vending Machine vs. Conversation Test
First, try a “vending machine” style prompt:

marketing tips

Note what you get. Is it useful? Specific to your situation? Probably not. Now try a conversational version:

I run a small bakery in a college town. We’re great at making artisan bread but terrible at social media. What are three realistic marketing ideas I could start this week with zero budget?

Compare the two responses. Notice how the second one gives you actionable, specific ideas because you gave it context about who you are and what you actually need?
Exercise 2: The Clarifying Follow-Up
Take whatever response you got from Exercise 1 and practice the conversation. Try saying:

That second idea sounds interesting, but I’m worried it might come across as gimmicky. Can you help me think through how to do it in a way that feels authentic to our brand?

Notice how the AI adjusts based on your feedback? That’s the conversation in action. You’re not starting over; you’re refining together.
Exercise 3: The “Wrong” Response
On purpose, give the AI a vague prompt and see what happens:

Help me with my project

The AI will probably ask clarifying questions or give you something very generic. That’s not the AI being dumb; it’s the AI showing you exactly where your prompt was unclear.
Common Pitfalls
As you start this journey, watch out for these traps that catch almost everyone:

The Keyword Habit
You’ve spent years training yourself to type keywords into search boxes. Breaking that habit takes conscious effort. When you catch yourself typing choppy keyword phrases, stop and ask yourself: “How would I explain this to a helpful colleague?”

The One-and-Done Mentality
Your first prompt rarely gives you the perfect result. That’s not failure; that’s normal. The magic of AI is in the back-and-forth. Think of your first prompt as starting a conversation, not placing an order.

Over-Trusting the Confident Tone
AI always sounds confident, even when it’s making things up. This is probably its most dangerous feature. If something seems off, or if the information is important, verify it. AI is a great starting point for research, not the end point.

Under-Sharing Context
People often worry about making prompts “too long.” In reality, more context almost always leads to better results. You’re not bothering the AI by explaining your situation; you’re helping it help you.

Expecting Perfection
AI outputs are drafts, not final products. They’re meant to give you something to work with, react to, and refine. If you expect perfection on the first try, you’ll always be disappointed.

Level Up: Your Challenge
Here’s a challenge to test what you’ve learned: think of a real task you need to accomplish this week, something you might actually use AI for. Maybe it’s drafting an email, brainstorming ideas for a presentation, or explaining a concept to someone.

Write two versions of a prompt for this task:
- The “old you” version: what you would have typed before reading this lesson
- The “conversation mindset” version: a prompt that includes context, specifics, and clear direction

