Watch the video
Each lesson starts with a short video. It covers the same material as the text — just pick the format you prefer. You can skip it and read instead.
The probability hacker
Most people type into AI the way they text a friend — and then wonder why the answer sounds like it was written by a committee.
AI is not a conversation partner.
It is a prediction engine.
The difference between a useless response and a useful one is not the model.
It is the prompt.
Why this matters
Imagine someone gives you a half-finished sentence: "The students open their..."
Your brain immediately calculates the most likely next word:
- Books? Highly likely (45%)
- Laptops? Very likely (35%)
- Minds? Possible in a poetic context (10%)
- Refrigerators? Statistically near zero (0.001%)
You did not "think" about this. Your brain ran a prediction based on everything you have ever read and heard.
AI does the exact same thing — except it has read billions of pages, and it runs the calculation in milliseconds. This is what a "large language model" (LLM) actually is: a statistical prediction engine. Think of it as the most advanced autocomplete ever built.
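The autocomplete analogy can be made concrete with a toy sketch in Python: count which word most often follows "open their" in a tiny invented corpus and rank the continuations by share. A real LLM runs the same kind of calculation over billions of pages, not four sentences.

```python
from collections import Counter

# A toy "prediction engine": count the last word of each sentence
# in a tiny invented corpus, then rank the continuations by share.
corpus = [
    "the students open their books",
    "the students open their laptops",
    "the students open their books",
    "the students open their minds",
]

counts = Counter(sentence.split()[-1] for sentence in corpus)
total = sum(counts.values())

for word, n in counts.most_common():
    print(f"{word}: {n / total:.0%}")
# books: 50%
# laptops: 25%
# minds: 25%
```

The most frequent continuation wins. That is the whole trick, scaled up enormously.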
When you type a prompt, you are setting the starting conditions for that prediction. If your prompt is vague — "write a story" — the prediction could go anywhere. The AI pulls from the broadest possible pool of patterns, and the result will be average, safe, and generic.
If your prompt is specific — with context, format, and constraints — the prediction gets forced into a narrow path where only relevant answers exist.
That is what this lesson is about: how to control the prediction.
It is not a chat — it is a program
Most people fail at prompting because they treat the AI like a human friend. When you ask a friend to "write an email," they use their life experience, social intuition, and everything they know about you to figure out what you want.
AI has none of that. It has no life. It has no memory of you. It has no sense of what is appropriate in your company, your industry, or your culture.
The mindset shift: if you think of prompting as "chatting," you become lazy. If you think of it as "programming," you become precise.
- Traditional programming: uses Python or C++ to manipulate data through rigid syntax
- Prompting: uses English to manipulate a prediction engine through descriptive instructions
The language is different, but the logic is the same — you give specific instructions, and the machine executes them.
| Concept | The chatter | The programmer |
|---|---|---|
| The prompt | A question or a request | A program with starting conditions |
| The output | An answer from a smart mind | A prediction based on statistical patterns |
| When output is wrong | "The AI is broken" | "I need to fix my prompt" |
| When AI invents facts | "The AI is lying" | "The AI is filling a gap with a likely guess" |
If you think you are chatting, you tolerate vague results. If you think you are programming, you debug them.
How to hack the probability
To get a good result, you need to "hack" the probability — provide enough constraints so that the only statistically likely output is the one you want.
The catchphrase experiment
Vague prompt: "Complete my catchphrase: You need to learn..."
Result: the AI could go in a thousand directions — AI, coding, patience, new skills, to cook. The probability is spread too wide.
Specific prompt: "I am a technology YouTuber known for the phrase: 'You need to learn [blank] right now!' Complete the phrase."
Result: the AI lands on a specific, relevant answer.
What happened: by adding three pieces of context — who you are, the phrase format, and the tone — you blocked 99% of the training data and forced the model into a tiny corner where only a few specific words exist.
This is probability hacking. You add constraints until the only likely output is the one you want.
Why vague prompts always produce mediocre results
When you type "write me an email," the AI has to make dozens of decisions on its own:
- Who is the email for?
- What is the tone — formal, casual, urgent?
- How long should it be?
- What facts should it include?
- What is the goal — inform, persuade, apologize?
For each unanswered question, the AI guesses based on the most common patterns in its training data. The result sounds like every other email ever written — because that is what the average of the internet looks like.
Every piece of context you add removes one guess. The fewer guesses the AI makes, the better the output gets.
The framework: task, context, references, evaluate, iterate
There is a repeatable loop that turns this idea into a practical process. Five steps, in order.
1. Task: the anchor
The task is your specific deliverable — not the general topic.
- Weak task: "Help me with my gym." (The AI does not know if you want a workout, a business plan, or a complaint letter)
- Strong task: "Draft an email to gym staff asking to change the schedule for the 6 AM HIIT class on Tuesdays."
Why this works: a clear task sets the format and the goal immediately. It gives the prediction a destination.
2. Context: the background data
Context is everything the AI does not know about your situation — which is everything.
AI is frozen in time. It does not know what happened yesterday, and it does not know what is happening in your specific office.
Whatever context you do not provide, the AI will invent. This is the single biggest source of made-up information in AI output.
If you do not tell the AI your budget is $30, it might suggest a $500 gift.
Example of context injection:
"I am building a landing page for a project management tool. It is for freelance designers (ages 25-40) who are frustrated that Asana is too complex. We focus on visual timelines. Keep the tone warm but professional."
Why this works: it creates a box. The model can no longer use its generic training data — it must stay inside the box of "visual, designer, freelance, warm tone."
3. References: show, do not tell
A reference is an example of what you want the output to look like. In the industry, this is called "few-shot prompting":
- Zero-shot: you give a task with no examples. High risk of style mismatch
- Few-shot: you provide 2-3 examples of what good looks like. The AI copies the pattern
Why this works: showing is more precise than describing. If you want a specific brand voice, do not describe it with adjectives like "quirky" — paste three existing tweets and say "analyze the style and replicate it." This locks the AI into your specific pattern.
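A few-shot prompt is just text assembled in a fixed shape: examples first, then the instruction to copy the pattern, then the task. A minimal sketch (the template wording here is illustrative):

```python
def few_shot_prompt(task: str, examples: list[str]) -> str:
    """Build a few-shot prompt: examples first, then the task."""
    lines = ["Here are examples of the voice I want:"]
    lines += [f"Example {i}: {ex}" for i, ex in enumerate(examples, 1)]
    lines.append("Analyze the style above and replicate it.")
    lines.append(f"Task: {task}")
    return "\n".join(lines)

print(few_shot_prompt(
    "Write a tweet announcing our new visual timeline feature",
    ["Shipping beats talking.", "Less setup. More design."],
))
```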
4. Evaluate
If the result is wrong, do not just retry. Troubleshoot like an engineer:
- Revisit the framework: did you forget the context? Is the task too vague?
- Simplify sentences: long, dense paragraphs confuse AI just like they confuse people. Break instructions into numbered lists
- Try a different verb: if "write a proposal" gives boring results, try "write a persuasive argument for a partnership." A different action word triggers a different pattern in the AI
- Add constraints: tell the AI it must start with a question, stay under 200 words, and avoid the word "innovative." Constraints force creativity
5. Iterate
Prompting is not a one-shot action. It is a loop.
Each round of iteration is not starting over — it is refining the program until the output matches what you need.
The persona: priming the prediction
One of the most powerful techniques is the "persona" — telling the AI to act as a specific type of expert.
This is not pretend. You are priming the model to access a specific set of vocabulary, logic, and priorities.
- A "physical therapist" persona will prioritize safety and anatomy
- A "bodybuilder" persona will prioritize intensity and muscle growth
Ask both for a workout plan, and you get completely different results — from the same AI, with the same prompt. The only difference is which "lens" the AI uses to predict.
System prompt vs user prompt
In professional environments (APIs, code), the persona usually lives in the "system prompt":
- System prompt: "You are a world-class financial analyst." (The permanent identity)
- User prompt: "Summarize this report." (The specific task)
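In code, the split looks like this. The sketch uses the message format shared by most chat-style APIs (OpenAI's Chat Completions among them); the commented-out request call and model name are placeholders.

```python
# The persona is the permanent identity (system role); the task is
# the per-request instruction (user role).
messages = [
    {"role": "system",
     "content": "You are a world-class financial analyst."},
    {"role": "user",
     "content": "Summarize this report for a non-technical board."},
]

# A real call would look roughly like (client and model are placeholders):
# response = client.chat.completions.create(model="...", messages=messages)
```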
A note on honesty: some guides suggest tricking the AI with phrases like "I will tip you $500" or "this is for my dying grandmother." These emotional hacks sometimes work on older models, but they are unnecessary with current ones. A clear persona and clear context work better than manipulation.
Why AI makes things up
AI is trained to be helpful. This sounds like a good thing, but it creates a specific problem.
If you ask about something the AI does not have information on, it will not say "I do not know." Instead, it predicts what a helpful response should sound like — and generates text that sounds right but is completely made up.
This is not the AI lying. It is filling a statistical gap with a plausible guess, the same way your phone's autocomplete sometimes suggests a word that fits grammatically but makes no sense.
The cost in casual use: a wrong fun fact. Annoying, but harmless.
The cost in business: an investor email with invented revenue numbers. A report with fake statistics. A contract with fabricated legal references. Any of these can destroy credibility in a single send.
The permission to fail
The simplest technique to stop this: give the AI explicit permission to say "I do not know."
"If the answer is not in the provided context, say that you do not know. Do not make up information."
Why this works: it changes the prediction. Without this instruction, "I do not know" is a low-probability response because the AI is trained to always be helpful. With this instruction, honesty becomes the high-probability path when the data is missing.
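Because this clause belongs on every important prompt, it is worth treating as a reusable building block. A minimal sketch:

```python
HONESTY_CLAUSE = (
    "If the answer is not in the provided context, say that you "
    "do not know. Do not make up information."
)

def with_permission_to_fail(prompt: str) -> str:
    """Append the honesty instruction to any prompt."""
    return f"{prompt.rstrip()}\n\n{HONESTY_CLAUSE}"

print(with_permission_to_fail(
    "Using only the attached Q3 report, list our three largest expenses."
))
```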
Beyond text: images as context
Modern AI models can process images, not just text. This opens a powerful shortcut for giving context.
Instead of spending ten minutes describing a website layout you do not like, upload a screenshot.
Example prompt: "Analyze this homepage. Identify three areas where user attention might drop off and suggest improvements."
Why this works: an image contains far more context than any written description. The AI can see spacing, color, hierarchy, and layout all at once — instead of relying on your words to describe it.
This works for any visual content: dashboards, charts, design mockups, competitor websites, handwritten notes. If you can take a screenshot, you can use it as context.
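Programmatically, attaching a screenshot means sending the image alongside the text in a single message. This sketch uses the multi-part content format from OpenAI's Chat Completions API; the file handling around it is illustrative.

```python
import base64

def image_message(prompt: str, png_bytes: bytes) -> dict:
    """Pack a text prompt and a PNG screenshot into one user message."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

# Usage (file name is a placeholder):
# msg = image_message("Analyze this homepage. Identify three areas where "
#                     "attention might drop off.",
#                     open("homepage.png", "rb").read())
```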
Think
What would you do in these scenarios?
Simulator
The product launch email
Your SaaS product launches Monday. You need AI to write an announcement email to your 2,000 beta users. You open your AI tool. What do you type?
Practice
Test yourself and review key terms
Knowledge check
What is the most accurate way to define a Large Language Model (LLM)?
Apply
Your action steps for today
Action plan: what to do today
- The prompt rewrite: Take your last AI prompt and rebuild it with the framework — add a clear task, context about your situation, and a persona. Compare the new output with the old one.
- The honesty line: Add "If you do not know, say so. Do not make up information." to your next important prompt. Check if the AI's output changes.
- The persona test: Pick one task you do regularly. Run it twice — once as a "marketing strategist" and once as a "data analyst." Compare what comes back.
Finish
You made it through this lesson
Thank you!
Your feedback helps us improve. We appreciate the time you took to share your thoughts!
Unlock the full course
You just finished the free lesson of AI for business. Pick the option that works for you.
Continue AI for business — pick your plan.
- → All lessons in one course
- → Simulations & flashcards
- → Quiz & certificate
- → Lifetime access
- → 80+ lessons across 16 courses
- → All lessons & simulations
- → Quizzes & certificates
- → Lifetime access
Some examples and details may be simplified to better convey the core idea. Every business is different — adapt these ideas to your specific context and situation.