How do you know if your offer actually made a difference?

Most eCommerce marketers think they can answer this question.
They run a promotion, check the conversion rate, compare it to the week before, and call it a win. The numbers are up. The shopping cart is fuller. Everyone's happy.
But that's not forecasting. That's storytelling. And there's a big difference between the two. Especially when your promotion budget and margin are on the line.
Your headline number is hiding a lot
Discounts and promotions are among the most widely used tools in eCommerce CRO. Run a sale, watch conversions go up, declare success. The logic feels airtight.
It isn't.
Your headline conversion rate during a promotion tells you one thing: people bought while the offer was live. What it doesn't tell you is far more important.
It doesn't tell you how many of those customers would have bought anyway. Did the offer just come along for the ride?
According to IAB research, 71% of advertisers now recognise incrementality (the net sales uplift that wouldn't have occurred without the campaign) as their most important KPI. Yet most retail brands still measure promotional performance by what happened during the sale, rather than by what their offers actually caused.
It doesn't tell you whether you accelerated a purchase that was coming next week at full price.
Think of a fashion retailer running a weekend flash sale. A customer who planned to buy on Monday ends up buying on Saturday, but at 20% off. That's not a conversion win. That's a margin loss with a delay.
It doesn't tell you whether some visitors were going to pay full price until you showed them a discount, and now they'll expect one every time. This is how full-price sales windows quietly disappear.
Customers learn the pattern. They wait. And the customer experience you're optimising for starts working against you.
It doesn't tell you whether 5% would have done the same job as 15%. Because you've never tested discount depth, and "more" always appears to work.
A beauty brand running 25% off sitewide versus 10% off for first-time buyers will see different conversion rates. But the margin impact is drastically different, and the headline number won't tell you which was smarter.

It doesn't tell you whether you're pulling in deal-seekers who won't return without an incentive. High conversion during a promotion can mask a dependency problem. That's not a customer base you're building, it's a costly habit you're reinforcing.
It doesn't tell you whether the promoted product grew at the expense of a higher-margin item in the same category. Research published in the Universal Library of Business and Economics (2026) found that in some flash sales, up to 40% of the sales spike was explained by cannibalisation of the brand's own higher-margin products.
The top-line looked great. The margin told a different story.
The raw conversion rate during a promotion gives you just enough signal to feel good, and not enough to know if you should.
Start with the hypothesis
Real forecasting in eCommerce starts before the offer goes live. It starts with a hypothesis.
Not a goal. Not a target. A hypothesis. A testable statement about what you expect your offer to do and why.
Something like:
"We believe showing a 10% off incentive to basket abandoners with a cart value over £50 will increase their conversion rate by 15% compared to eligible visitors who don't see the offer."
A good hypothesis has three parts:
- Who you're targeting
- What you're showing them
- What you expect to happen
That third part is what most retailers skip entirely. They know what they're doing, but they don't know what they're expecting. Which means they have no way to know if the promotion worked, or if it just happened at the same time as sales that were coming anyway.
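If it helps to make the three parts concrete, a hypothesis can be written down as a simple structured record before the offer goes live. The field names below are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromoHypothesis:
    """A testable statement, written down before the offer goes live."""
    audience: str          # who you're targeting
    treatment: str         # what you're showing them
    metric: str            # what you're measuring
    expected_lift: float   # what you expect to happen (relative lift)

h = PromoHypothesis(
    audience="basket abandoners, cart value > £50",
    treatment="10% off incentive",
    metric="conversion rate vs eligible visitors who don't see the offer",
    expected_lift=0.15,    # expect a 15% relative increase
)

# The number the post-campaign result gets judged against:
print(h.expected_lift)
```

Writing the expectation down as a number is the point: after the campaign, the result is either close to 0.15 or it isn't.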
This is where personalisation changes the game. Rather than running blanket discounts across your entire eCommerce platform, a hypothesis-led approach forces you to think about specific customer segments (first-time visitors, high-intent abandoners, loyalty members) and to design promotions that match their behaviour.

That's better for the customer experience and better for your margin.
Then test it properly
A hypothesis without a test is just an opinion. This is where control groups come in.
The method is straightforward. Take a portion of the eligible audience (typically 10%) and exclude them from the offer. Everyone else sees it. After the campaign, compare the two groups.
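A common way to carve out that holdout is to hash each customer ID, so the same customer lands in the same group on every visit and the two groups stay clean. Everything here (the function name, the salt, the IDs) is a hypothetical sketch, not any specific platform's API:

```python
import hashlib

def in_control_group(customer_id: str,
                     holdout_pct: float = 0.10,
                     salt: str = "promo-test-1") -> bool:
    """Deterministically assign ~holdout_pct of eligible customers to control.

    Hashing (rather than random.random()) means assignment is stable across
    sessions and devices that share the same customer ID.
    """
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1)
    return bucket < holdout_pct

# The same ID always gets the same assignment:
assert in_control_group("cust-42") == in_control_group("cust-42")
```

Changing the salt per campaign reshuffles who is held out, so no one customer is permanently excluded from every offer.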
If the treatment group converts at 8% and the control group converts at 5%, you have a three-percentage-point lift (a 60% relative increase). That's your incremental effect. That's what the offer actually did to conversions, not the background noise of customers who were going to buy regardless.
If both groups convert at the same rate, you have a problem. You've been giving away margin for nothing. And without the control group, you'd never have known.
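The post-campaign comparison itself is a few lines of arithmetic, plus a quick check that the gap is bigger than sampling noise (a standard two-proportion z-test, sketched here with the 8% vs 5% example and made-up group sizes):

```python
import math

def incremental_lift(treat_conv: int, treat_n: int,
                     ctrl_conv: int, ctrl_n: int):
    """Compare treatment vs control conversion rates from a holdout test."""
    p_t = treat_conv / treat_n
    p_c = ctrl_conv / ctrl_n
    lift_pp = p_t - p_c            # absolute lift, in percentage points
    lift_rel = lift_pp / p_c       # relative lift vs control
    # Two-proportion z-test: is the gap bigger than sampling noise?
    p_pool = (treat_conv + ctrl_conv) / (treat_n + ctrl_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / ctrl_n))
    z = lift_pp / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift_pp, lift_rel, p_value

# 8% treatment vs 5% control, with a 90/10 split of 10,000 eligible visitors
lift_pp, lift_rel, p = incremental_lift(720, 9000, 50, 1000)
print(f"{lift_pp:.1%} points lift, {lift_rel:.0%} relative, p = {p:.4f}")
```

A small p-value says the gap is unlikely to be chance; a p-value near 1 with identical rates is the "giving away margin for nothing" case.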
This approach works just as well for testing offer types (discount codes, free shipping, gift-with-purchase) as it does for testing discount depth or personalised timing. The eCommerce platforms that support this kind of experimentation give marketers a genuine edge in optimisation. The ones that don't leave them guessing.

Forecasting off real data
Here's where the hypothesis-led approach really pays off: once you have a few clean test results, you can forecast with confidence.
If a shopping cart abandonment offer consistently drives 3-4% of incremental conversion across multiple tests, you can model the revenue impact of scaling it, expanding into new markets, or tightening the discount to protect margin. You can make the case internally for promotional spend with evidence, not assumptions.
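As a sketch of that modelling step (every input below is a made-up illustration, not a benchmark), projecting the net impact of scaling a tested lift is simple arithmetic:

```python
def forecast_net_impact(eligible_visitors: int,
                        baseline_cr: float,
                        tested_lift_pp: float,
                        avg_order_value: float,
                        discount_rate: float) -> dict:
    """Project the net revenue impact of scaling a tested offer.

    tested_lift_pp is the absolute lift from the control-group test
    (e.g. 0.03 for 3 points), not the raw promo-period conversion rate.
    """
    incremental_orders = eligible_visitors * tested_lift_pp
    incremental_revenue = incremental_orders * avg_order_value * (1 - discount_rate)
    # Discount given away to buyers who would have converted anyway:
    baseline_orders = eligible_visitors * baseline_cr
    discount_leakage = baseline_orders * avg_order_value * discount_rate
    return {
        "incremental_orders": incremental_orders,
        "incremental_revenue": incremental_revenue,
        "discount_leakage": discount_leakage,
        "net_impact": incremental_revenue - discount_leakage,
    }

# Hypothetical: 50,000 eligible abandoners, 5% baseline conversion,
# a tested 3-point lift, £80 average order value, 10% discount.
print(forecast_net_impact(50_000, 0.05, 0.03, 80.0, 0.10))
```

Note that the leakage term only exists because the control group told you the baseline; without it, all 50,000 visitors' orders would look "driven" by the offer.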
That's what real eCommerce CRO looks like: not running more promotions, but running smarter ones, with a clear view of what each one is actually doing for your business.
You can also use clean test data to optimise inventory management. If a particular incentive drives a predictable uplift in a specific category, you can plan stock levels accordingly. If a promotion consistently pulls forward demand without creating new demand, you can factor that into your forecasting and avoid over-ordering for future peaks.
Think of a homeware brand that discovers its free delivery threshold promotion doesn't actually move more inventory; it just shifts the timing of purchases. That's a planning problem dressed up as a conversion win.
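One way to spot that pull-forward effect is to compare equal-sized treatment and control groups over the promo window and an equal-length window afterwards: if the treatment group's post-promo orders dip by roughly what the promo window gained, the offer shifted timing rather than creating demand. The numbers below are hypothetical:

```python
def pull_forward_share(treat_promo: int, treat_post: int,
                       ctrl_promo: int, ctrl_post: int) -> float:
    """Estimate what share of the promo-window uplift was borrowed from later demand.

    Assumes equal-sized treatment and control groups observed over the promo
    window and an equal-length post-promo window. Returns 1.0 when the
    post-window shortfall cancels the entire uplift (pure pull-forward),
    0.0 when nothing was borrowed.
    """
    uplift_during = treat_promo - ctrl_promo    # extra orders while live
    shortfall_after = ctrl_post - treat_post    # orders "missing" afterwards
    if uplift_during <= 0:
        return 0.0
    return max(0.0, min(1.0, shortfall_after / uplift_during))

# Hypothetical: the promo added 400 orders, but the weeks after ran 300 short.
print(pull_forward_share(treat_promo=1400, treat_post=700,
                         ctrl_promo=1000, ctrl_post=1000))  # 0.75
```

A share near 0.75, as here, means most of the "win" was demand you would have captured anyway, often at full price.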
The bottom line
Every offer you run without a control group is an assumption you're calling a result.
Every promotion you run without a hypothesis is a spend you can't defend, optimise, or learn from.
The good news is this isn't complicated. Write the hypothesis. Set up the control group. Compare the groups after. The data will tell you what's working, what's wasted, and what's worth scaling.
That's forecasting. And it starts long before the offer goes live.

