Wisepops Experiments provide powerful insights into how your campaigns influence visitor behavior. This guide helps you interpret your experiment results.
Note: Detailed results are only available in the Wisepops Intelligence plan.
Understanding general A/B testing concepts
Number of visitors per variant: Shows how many visitors were exposed to each variant, based on your campaign’s targeting and triggering rules. The control group follows the same rules as the original campaign.
Learn more:
If targeting/triggering rules are consistent across variants, visitor distribution will match the allocations set during A/B test creation.
If rules differ between variants, the actual visitor distribution may vary from your initial setup. Be aware—this can introduce selection bias in your results.
Since results reflect only visitors exposed to campaigns, metrics may differ from overall website trends. For example, a campaign targeting return visitors may show higher average pageviews than your site-wide average.
Baseline: The control group serves as the baseline for your A/B test. Uplift metrics are calculated relative to this baseline. If no control group exists, the variant with the lowest main goal metric value becomes the baseline.
Note: In the absence of a control group, the variant assigned as the baseline might change over the course of the A/B test as the measured values of the main goal metric change.
Uplift: Indicates the positive or negative impact of a variant on a metric compared to the baseline.
For example, if a variant shows a +13.5% uplift in sessions per visitor with a ±5% range, we’re 95% confident the true uplift lies between +8.5% and +18.5%.
Significance: A green check appears when the estimated uplift is statistically significant (95% confidence or higher). Wait for this level of significance before drawing conclusions—earlier estimates are unreliable.
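To make the uplift and confidence-interval figures above concrete, here is a minimal sketch (not Wisepops’ internal calculation) of how a relative uplift on a conversion-style metric and an approximate 95% interval can be derived from raw counts. The visitor and conversion numbers are hypothetical.

```python
from math import sqrt

def uplift_with_ci(base_conv, base_visitors, var_conv, var_visitors, z=1.96):
    """Relative uplift of a variant vs. the baseline, with an approximate
    95% confidence interval (normal approximation on two proportions)."""
    p_base = base_conv / base_visitors
    p_var = var_conv / var_visitors
    uplift = (p_var - p_base) / p_base                  # e.g. 0.135 == +13.5%
    # Standard error of the difference in rates, expressed relative to the baseline.
    se_diff = sqrt(p_base * (1 - p_base) / base_visitors
                   + p_var * (1 - p_var) / var_visitors)
    margin = z * se_diff / p_base                       # e.g. 0.05 == ±5%
    return uplift, (uplift - margin, uplift + margin)

# Hypothetical counts: 100,000 visitors per variant.
uplift, (low, high) = uplift_with_ci(3_000, 100_000, 3_405, 100_000)
print(f"Uplift: {uplift:+.1%}, 95% CI: [{low:+.1%}, {high:+.1%}]")
# Prints roughly: Uplift: +13.5%, 95% CI: [+8.4%, +18.6%]
```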
Metrics explained
Learn more: All metrics reflect the full visitor journey, including activity before campaign display.
Bounce rate: Percentage of visitors who did not open another page after viewing the campaign.
Number of sessions per visitor: Average sessions per visitor for each variant.
Number of page views per visitor: Average pages browsed per visitor.
Revenue per visitor (USD): Total revenue generated per visitor across all revenue-associated goals.
Order rate: Percentage of visitors who completed a revenue-associated goal ("order").
Average Order Value: Average revenue per order.
Displays: Total campaign impressions. If the campaign is set to show once per visitor, this matches the number of targeted visitors.
Clicks: Total clicks on the campaign’s CTA button.
CTR (Click-Through Rate): Percentage of visitors who clicked (measured per visitor).
Attributed conversion rate: Percentage of clickers who converted within the attribution window.
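To show how these metrics relate to each other, here is a minimal sketch using made-up aggregate counts for one variant; the variable names are illustrative and not part of the Wisepops API.

```python
# Hypothetical aggregate counts for one variant (illustrative only).
visitors = 20_000          # visitors exposed to the variant
displays = 20_000          # "show once per visitor" => equals targeted visitors
clicks = 1_400             # clicks on the campaign's CTA button
converted_clickers = 210   # clickers who converted within the attribution window
orders = 520               # visitors who completed a revenue-associated goal
revenue_usd = 41_600.0     # total revenue across all revenue-associated goals

ctr = clicks / visitors                       # CTR, measured per visitor
attributed_cr = converted_clickers / clicks   # Attributed conversion rate
order_rate = orders / visitors                # Order rate
aov = revenue_usd / orders                    # Average Order Value
revenue_per_visitor = revenue_usd / visitors  # Revenue per visitor (USD)

print(f"CTR: {ctr:.1%} | attributed CR: {attributed_cr:.1%} | "
      f"order rate: {order_rate:.1%} | AOV: ${aov:.2f} | "
      f"revenue/visitor: ${revenue_per_visitor:.2f}")
```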
FAQs
FAQ 1. Why do the Experiments results look different from what I see in the regular campaign dashboard?
This is expected behavior, not a bug, and it comes up when an A/B test is created from a campaign that was already published and running before the experiment started.
The campaign dashboard shows the full historical performance of that campaign, including all data collected before the experiment began.
The Experiments dashboard only shows data collected from the moment the experiment started, which is when the proper A/B split and tracking began.
If you see a discrepancy, check when the experiment was created vs. when the campaign was first published. Any older data in the campaign dashboard is from before the controlled experiment period and should not be used to evaluate the test results.
Another, usually smaller, source of discrepancy is the different data refresh rate of the campaign dashboard (every couple of minutes) vs. the Experiments results (daily). The Experiments results refresh less often because they require calculating in-depth statistical indicators.
FAQ 2. What's the difference between "Attributed revenue/goals" and "Total revenue/goals"?
Attributed revenue/goals counts conversions that can be directly linked to visitors who clicked on your campaign and then converted within the attribution window. In other words, these are conversions where Wisepops had a plausible role.
Total revenue/goals counts all conversions that happened during the experiment period across all visitors in each variant group — regardless of whether they saw or interacted with the campaign.
Attributed revenue is useful for tracking the performance of your campaigns on a daily basis in the campaign dashboard, but it doesn't tell you the full story:
What if visitors who view the campaign without clicking on it end up leaving the website earlier and converting less often?
What if visitors convert after the attribution window?
etc.
To get the full incrementality picture of how a campaign influences your conversion funnel, you need to run an experiment and compare the total revenue of the cohort of visitors exposed to the campaign against the control group.
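The difference is easier to see on toy data. Below is a minimal sketch on hypothetical per-visitor records (not the Wisepops data model): attributed revenue only sums conversions from clickers inside the attribution window, while the incrementality comparison uses total revenue per visitor in each cohort.

```python
from dataclasses import dataclass

@dataclass
class Visitor:
    group: str                  # "variant" or "control"
    clicked: bool               # clicked the campaign CTA
    converted_in_window: bool   # converted within the attribution window
    revenue: float              # revenue from all goals during the experiment

def per_visitor_revenue(visitors, group):
    cohort = [v for v in visitors if v.group == group]
    total = sum(v.revenue for v in cohort) / len(cohort)
    attributed = sum(v.revenue for v in cohort
                     if v.clicked and v.converted_in_window) / len(cohort)
    return total, attributed

# Hypothetical records; in practice this would come from your analytics export.
visitors = [
    Visitor("variant", clicked=True,  converted_in_window=True,  revenue=80.0),
    Visitor("variant", clicked=False, converted_in_window=False, revenue=60.0),
    Visitor("variant", clicked=False, converted_in_window=False, revenue=0.0),
    Visitor("control", clicked=False, converted_in_window=False, revenue=70.0),
    Visitor("control", clicked=False, converted_in_window=False, revenue=0.0),
    Visitor("control", clicked=False, converted_in_window=False, revenue=0.0),
]

variant_total, variant_attributed = per_visitor_revenue(visitors, "variant")
control_total, _ = per_visitor_revenue(visitors, "control")
print(f"Variant: total ${variant_total:.2f} / attributed ${variant_attributed:.2f} per visitor")
print(f"Control: total ${control_total:.2f} per visitor")
# Incrementality compares TOTAL revenue per visitor between cohorts,
# not the attributed figure alone.
print(f"Incremental revenue per visitor: {variant_total - control_total:+.2f} USD")
```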
FAQ 3. Should I use CTR (click-through rate) as my main success metric when testing with a control group?
No, do not use CTR as your main goal if your experiment includes a control group.
The control group shows your campaign to 0% of visitors, so its CTR is 0% by definition. Any variant that shows the campaign at all will therefore always appear to have an infinite percentage uplift on CTR compared to the control, which makes the metric meaningless for comparison.
If you are running an experiment with a control group, use conversion/revenue-based metrics or behavioral metrics as your primary goal. These allow a fair comparison between visitors who were exposed to a campaign and those who were not.
If you want to optimize CTR, run an A/B test without a control group, comparing two variants of the same campaign against each other.
FAQ 4. How do I know if my site has enough traffic to run a meaningful experiment?
This is one of the most important questions to ask before launching an experiment. Running a test on a low-traffic site often leads to inconclusive results, no matter how long you wait.
General rule of thumb: you need approximately 50,000 visitors per variant over the full duration of the experiment to detect a meaningful effect (around a 10% uplift on conversion-related metrics). This means:
A 50/50 A/B test needs ~100,000 total visitors during the experiment window.
We recommend experiments run for no longer than 2 months to avoid confounding variables like seasonality.
At 2 months, you would need roughly 50,000 visitors per month just to hit that minimum threshold in the most optimistic scenario (where 100% of visitors are reached by the campaign, which is rarely the case for popups).
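As a rough sanity check on this rule of thumb, here is a minimal sketch using the standard two-proportion sample-size approximation (two-sided 95% confidence, 80% power). The 3% baseline conversion rate is an assumption chosen for illustration; with it, detecting a +10% relative uplift lands close to the 50,000-visitors-per-variant figure.

```python
from math import ceil, sqrt

def visitors_per_variant(baseline_rate, relative_uplift,
                         z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative uplift
    on a conversion rate (two-sided 95% confidence, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    delta = p2 - p1
    # Standard two-proportion sample-size approximation.
    n = ((z_alpha * sqrt(2 * p1 * (1 - p1))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / delta ** 2
    return ceil(n)

# Assumed 3% baseline conversion rate, +10% relative uplift to detect.
print(visitors_per_variant(0.03, 0.10))  # roughly 51,000 per variant
```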
FAQ 5. Why hasn't my experiment declared a winner even though one metric shows very high significance?
The winner is declared based solely on the primary goal metric you set when creating the experiment, not on all metrics shown in the results. If your primary goal (e.g., Revenue per visitor) hasn't reached the significance threshold, no winner will be declared, even if secondary metrics like sessions or page views show 90%+ significance. Secondary metrics give useful context but don't drive the winner determination.
What to do: If you're confident in the direction based on secondary metrics and the primary goal is trending positively, you can manually conclude the experiment. If the primary goal metric is flat after sufficient time, the variant likely isn't having the impact you hoped for.
FAQ 6. Can I edit the goal of an experiment after it's been created?
No. The success metric must be set when the experiment is created and cannot be changed once the experiment is running. Before launching, make sure you've chosen the right goal. If you realize you've set the wrong goal, the only option is to stop the current experiment and create a new one with the correct goal.
FAQ 7. What counts as an "order" for a non-ecommerce website?
For non-ecommerce sites that don't have a native order system, an order is defined as any goal that has revenue attributed to it. If you've set up a custom goal and associated a revenue value with it, each time that goal is reached it will be counted as an order in the Experiments results.