When marketers measure advertising impact, the numbers can feel abstract. Audience holdout experiments help reveal what advertising actually changes, but the results often include terms like confidence and margin of error.
To see how these two terms differ at a glance, imagine you're running an ice cream taste test to find out whether customers prefer your newest flavor, Decadent Nanaimo Bar, over vanilla. Together, confidence and margin of error help marketers trust the results and understand how precise they are.
In an audience holdout experiment, the confidence score describes how sure we can be that advertising caused the observed difference between exposed users and holdout users. When confidence is reached, the platform has enough data to confirm that the difference is real rather than the product of random variation.
Before confidence is reached, results are directional. They suggest a likely trend, but the numbers can shift as more data comes in. After confidence is reached, the impact results are reliable. Advertising is either driving change or it is not.
Let's return to the ice cream taste test: Before confidence, the data might indicate promising results, but they're not conclusive. After confidence, marketers can be sure people really do prefer one flavor over the other.
Margin of error explains how precise the reported number is. It does not change whether the impact is real; it only affects how exact the measurement is.
Let's go back to the ice cream analogy. If 70% of tasters prefer the new flavor with a ±5% margin of error, marketers can say the true figure is about 70%. If the margin is ±25%, the true share could fall anywhere from 45% to 95%, so the new flavor looks popular, but the exact percentage could vary a lot.
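The platform's exact statistical method isn't spelled out here, but as a rough sketch, the margin of error for a proportion like the taste-test result can be approximated with the standard normal-approximation formula, where the sample size is a hypothetical value chosen for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for a proportion.
    z = 1.96 corresponds to roughly 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.70  # 70% of tasters prefer the new flavor

# A small panel gives a wide margin; a large one tightens it.
print(f"n=50:    ±{margin_of_error(p, 50):.1%}")    # ±12.7%
print(f"n=1000:  ±{margin_of_error(p, 1000):.1%}")  # ±2.8%
```

The same 70% result is far more useful at n=1000 than at n=50, which is why margins shrink as an experiment collects more data.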
Here's an example.
A campaign shows a 460% incremental conversion lift at 90% confidence with a ±25% margin of error. The confidence level confirms that advertising caused the increase; the margin of error means the true lift likely falls near 460%, but it could be somewhat higher or lower within that range.
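To make the lift number itself concrete, here is a minimal sketch of how an incremental conversion lift is computed from exposed and holdout groups. The conversion counts below are hypothetical, chosen so the arithmetic reproduces the 460% figure from the example:

```python
# Hypothetical conversion data (not from a real study).
exposed_conversions, exposed_users = 2_800, 100_000
holdout_conversions, holdout_users = 50, 10_000

rate_exposed = exposed_conversions / exposed_users   # 2.8%
rate_holdout = holdout_conversions / holdout_users   # 0.5%

# Incremental lift: relative increase over the holdout baseline.
lift = (rate_exposed - rate_holdout) / rate_holdout
print(f"Incremental conversion lift: {lift:.0%}")  # 460%
```

The holdout group supplies the baseline: it shows what conversion looks like without ads, so the lift isolates the change advertising caused.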
Confidence levels such as 90% or 95% describe how sure the platform is that the true result falls within the reported range.
In marketing measurement, 90% confidence is widely accepted and often sufficient to make decisions. Higher confidence improves certainty but usually requires more time and a larger sample size.
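The time-and-sample-size tradeoff can be sketched with the standard sample-size formula for a proportion. The taste-test numbers below (70% preference, ±5% target margin) are illustrative, and the z-scores are the usual values for 90% and 95% confidence:

```python
import math

def required_n(p, moe, z):
    """Sample size needed for a proportion estimate p to reach
    margin of error `moe` at the given z-score (normal approximation)."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

p, moe = 0.70, 0.05
print(required_n(p, moe, z=1.645))  # 90% confidence -> 228 tasters
print(required_n(p, moe, z=1.960))  # 95% confidence -> 323 tasters
```

Moving from 90% to 95% confidence raises the required sample by roughly 40%, which in practice means a longer wait before results are usable.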
As margin of error decreases, the size of the impact becomes more precise. This helps with forecasting, budgeting, and comparing performance across campaigns.
To summarize: confidence means the platform has enough data to confirm that advertising causes the observed difference between exposed and holdout users. Before confidence, results remain directional. After confidence, marketers can trust the direction of impact and know that random variation does not explain the outcome.
Margin of error defines how precise the reported impact value is. A smaller margin shows a tighter estimate of impact size, while a larger margin shows less precision. Margin of error never changes whether impact exists. It only affects how exact the number is.
Confidence confirms whether advertising impact is real, while margin of error defines how precisely the size of that impact is measured. A study can reach confidence even with a wide margin. Precision improves later as more data reduces uncertainty around the estimate.
Confidence levels such as 90% or 95% express how often, if the study were repeated many times, the reported range would contain the true result. In marketing, 90% confidence usually supports decisions without long delays. Higher confidence increases certainty but requires more time and larger samples.
Before confidence, results indicate trends and guide early learning but not final decisions. After confidence, advertising impact is proven. Marketers can optimize budgets, evaluate performance, or expand campaigns. As margin of error shrinks, planning and forecasting improve.