Lesson 4/5 · Growth · 5 min read

A/B testing: data over opinions

People are bad at predicting what customers want.

Even experts guess wrong.

Testing two options against real customers reveals the truth faster than any meeting or debate.

Deep dive theory

Why this matters

Imagine this conversation at a restaurant:

"We should put the special at the top of the menu. People read top-to-bottom."

"No, put it at the bottom. That is where people look last, so it sticks in their mind."

"Actually, put it in the middle. The eye naturally rests there."

Everyone has an opinion. Everyone sounds confident. But nobody actually knows until customers decide. And customers often surprise us.

A/B testing removes the guessing. You try both options with real customers and measure which one works. The data decides, not opinions.


1. What is A/B testing?

A/B testing means comparing two versions of something to see which one works better.

The basic process:

  1. Pick one thing to test. Just one. Changing multiple things at once makes results confusing.
  2. Create two versions: A (the current way) and B (the new idea).
  3. Split your customers randomly. Half see version A, half see version B.
  4. Measure results. Which version gets more sales, more signups, more repeat visits?
  5. Keep the winner. The version with better numbers becomes the new standard.
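Step 3, the random split, is the part people most often get wrong. A minimal sketch of one common approach, assuming each customer has a stable ID (the `cust-…` IDs here are made up): hashing the ID gives a deterministic 50/50 split, so the same customer always sees the same version across visits.

```python
# Deterministic 50/50 assignment by hashing a stable customer ID.
# Same input ID always yields the same variant.
import hashlib

def assign_variant(customer_id: str) -> str:
    """Return 'A' or 'B' based on a hash of the customer ID."""
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Across many customers the split comes out roughly even.
variants = [assign_variant(f"cust-{i}") for i in range(1000)]
share_a = variants.count("A") / len(variants)
```

Hash-based assignment beats flipping a coin per visit because a returning customer never bounces between versions, which would muddy the measurement.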

Example:

A salon wants to know whether "Book Now" or "Reserve Your Spot" works better as the button on their website.

Version A: "Book Now" button

Version B: "Reserve Your Spot" button

After 500 visitors, Version B has 15% more bookings. They switch to "Reserve Your Spot" and keep testing other things.
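The "15% more bookings" figure is a relative lift, not a 15-point jump. A quick sketch with hypothetical counts (the visitor and booking numbers below are invented, chosen so the lift works out to the 15% quoted above):

```python
# Relative lift between two variants, with made-up counts.
visitors_a, bookings_a = 250, 40   # Version A: "Book Now"
visitors_b, bookings_b = 250, 46   # Version B: "Reserve Your Spot"

rate_a = bookings_a / visitors_a           # 16.0% conversion
rate_b = bookings_b / visitors_b           # 18.4% conversion
lift = (rate_b - rate_a) / rate_a          # relative improvement of B over A

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  lift: {lift:.0%}")
```

Note that B converting at 18.4% versus A's 16.0% is only a 2.4-point absolute difference, yet a 15% relative lift. Always be clear which one you are reporting.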


2. What to test first

Not all tests are equal. Some changes barely move the needle. Others can double your results.

High-impact areas:

  • The main offer. What you are actually selling matters more than how you describe it.
  • The price or pricing structure. Small changes in price can change buying behavior a lot.
  • The headline. The first thing people read. A better headline can double attention.
  • The first impression. What customers see in the first 10 seconds.

Low-impact areas:

  • Font choices
  • Small wording changes in the middle of a page
  • Button colors (usually)
  • Minor layout adjustments

Real example:

A gym tested two offers:

  • Version A: $50/month membership
  • Version B: $50/month membership + free fitness assessment

Same price, but Version B added a free assessment. Result: Version B got 40% more signups. The offer change moved the needle far more than any design tweak could.

Test the big things first. Save the small optimizations for later.


3. How to test without a website

A/B testing is not just for online businesses. You can test anything.

In-person testing:

A restaurant wants to know which special sells better.

  • Week 1: Server suggests the fish special.
  • Week 2: Server suggests the pasta special.

Compare sales. Which week sold more specials?

Signage testing:

A retail store tests two window signs.

  • Monday-Wednesday: Sign A
  • Thursday-Saturday: Sign B

Count foot traffic and sales for each period.

Phone call testing:

A consultant calls 20 prospects.

  • First 10: Pitch A (focuses on saving money)
  • Next 10: Pitch B (focuses on saving time)

Which pitch gets more meetings booked?

The key is to change only one thing and measure the result. You do not need software. You need a notebook and discipline.
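If you do want to move the notebook into a spreadsheet or script later, the tally is simple. A sketch for the phone-pitch example, where every entry and number below is illustrative:

```python
# A notebook-style test log: (pitch, meeting_booked) per call.
calls = [
    ("A", True), ("A", False), ("A", False), ("A", True), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", True), ("B", False),
]

def booking_rate(log, pitch):
    """Share of calls for a given pitch that ended in a booked meeting."""
    results = [booked for p, booked in log if p == pitch]
    return sum(results) / len(results)

rate_a = booking_rate(calls, "A")  # 2 of 5 calls booked
rate_b = booking_rate(calls, "B")  # 3 of 5 calls booked
```

The point is not the code but the record-keeping discipline: one row per customer interaction, one column saying which version they got, one column for the outcome.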


4. Reading results correctly

A/B testing can mislead you if you do not understand the numbers. The most common mistake is stopping a test too early.

The problem with small samples:

If you flip a coin 5 times and get 4 heads, you might think the coin is rigged. But flip it 100 times and it will be close to 50-50. Small samples are noisy.

The same happens with A/B tests. If 10 people saw your website and 6 bought from Version B, that does not mean Version B is better. It could be luck.
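You can see this trap directly with a small simulation. In the sketch below, both versions convert at exactly the same true rate, yet a surprising share of tiny tests make one look clearly better. The 30% rate, the 5-visitor group size, and the "wins by 2+" threshold are all illustrative assumptions.

```python
# How often does an identical variant "win" a tiny test by pure luck?
import random

def misleading_share(trials=1000, visitors_per_variant=5, true_rate=0.3):
    """Fraction of tiny tests where B beats A by 2+ conversions,
    even though A and B convert at the same true rate."""
    rng = random.Random(42)  # fixed seed so the demo is repeatable
    misleading = 0
    for _ in range(trials):
        a = sum(rng.random() < true_rate for _ in range(visitors_per_variant))
        b = sum(rng.random() < true_rate for _ in range(visitors_per_variant))
        if b - a >= 2:
            misleading += 1
    return misleading / trials

share = misleading_share()  # roughly one in seven identical tests looks like a clear win
```

With no real difference at all, B still "wins decisively" in roughly one test in seven. That is why 6 sales out of 10 visitors proves nothing.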

Statistical significance:

This term means your result is probably real, not random noise. Most testing tools calculate this for you. A common rule: do not trust a result until it shows at least 95% confidence.

What to do:

  • Decide in advance how many customers or visitors you need before judging results.
  • Do not peek at the data and declare a winner early. Wait for enough data.
  • If you do not have software, run the test for a fixed period (e.g., two full weeks) before comparing.
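For the curious, the "95% confidence" rule is usually computed as a two-proportion z-test. This is a rough standard-library sketch, not a replacement for a real testing tool, and the counts in it are made-up numbers:

```python
# Two-sided p-value for the difference between two conversion rates,
# via the normal approximation (two-proportion z-test).
from math import sqrt, erfc

def p_value(conv_a, n_a, conv_b, n_b):
    """p-value for observing a rate difference this large by chance."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se        # standardized difference
    return erfc(abs(z) / sqrt(2))                 # two-sided tail probability

p = p_value(conv_a=40, n_a=500, conv_b=65, n_b=500)
significant = p < 0.05  # "95% confidence" means p below 0.05
```

A p-value below 0.05 is the conventional cutoff behind the "at least 95% confidence" rule: it means a difference this large would arise by chance less than 5% of the time if the versions were truly identical.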

5. Running more tests wins

The company that runs the most experiments usually wins. Not because every test is a home run, but because they learn faster.

The math of testing:

Imagine you run 50 tests per year. Most do nothing. But 5 of them improve results by 10% each. Those gains compound: 1.1 × 1.1 × 1.1 × 1.1 × 1.1 = 61% total improvement by the end of the year.

A competitor who runs 5 tests per year might find one improvement: a 10% gain against your 61%. That gap widens every year.
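The compounding arithmetic above, spelled out: five wins of 10% each multiply rather than add, which is why the total is 61% rather than 50%.

```python
# Five 10% improvements compound multiplicatively.
wins = [1.10] * 5            # five winning tests, each a 10% gain

total = 1.0
for gain in wins:
    total *= gain            # each win multiplies the running total

improvement = (total - 1) * 100  # about 61% overall improvement
```

The same logic is why small, frequent wins beat rare big swings: each new gain applies to an already-improved baseline.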

Velocity over perfection:

A quick test that teaches you something is better than a perfect test you never run.

Do not wait for ideal conditions. Test messy ideas. A failed test still teaches you what does not work, which saves time later.

Keep a log:

Write down every test you run, what you expected, and what happened. This log becomes a goldmine of knowledge. You stop repeating mistakes. You build on what works.


6. When A/B testing fails

Testing is powerful, but it has limits.

Low traffic or volume

If your business sees 10 customers a week, tests take months to produce useful data. In this case, direct conversations with customers teach you faster than waiting for statistical significance.

Testing the wrong problem

You can A/B test your menu all day, but if the food is bad, no layout will save you. Testing optimizes what already works. It does not fix broken fundamentals.

Overthinking small decisions

Not everything needs a test. If you spend a week debating two button colors, you wasted a week. Some things are not worth the effort. Test big decisions: the offer, the price, the first impression.

Ignoring qualitative feedback

Numbers tell you what happened but not why. If Version B wins, ask a few customers why they liked it. That context helps you design better tests next time.


Think

What would you do in these scenarios?

Simulator


The pricing debate

Two managers at a home cleaning company have been arguing for two weeks about pricing. One insists that a higher price signals quality. The other insists that a lower price will bring more volume and more repeat bookings. Both are confident. Neither has data. The company gets a steady flow of new inquiries every week. What do you recommend?


Practice

Test yourself and review key terms

Knowledge check


You ran an A/B test and after 12 visitors, Version B has twice the sales of Version A. Should you declare B the winner?

Concepts

Question

In the restaurant menu debate, why can nobody in the room actually know the right answer?


Answer

Because everyone has opinions but no data. Everyone sounds confident, but customers often surprise us. Only testing with real people reveals the truth.


Do

Your action steps for today

Action plan: what to do today

  • Pick a debate: Choose one decision you have been debating, where your team has opinions but no data. Design a simple A/B test to settle it.
  • Start a test log: Create a document with columns for test name, hypothesis, result, and lesson learned. Keep every test on record.
  • Run one test this week: It does not have to be perfect. Make a change, split your customers, and measure what happens.
Note

Some examples and details may be simplified to better convey the core idea. Every business is different — adapt these ideas to your specific context and situation.