Revenue attribution is a complex topic. In practice, the same revenue is often credited to several marketing tools at the same time. While Wisepops uses rigorous attribution methods (see the related help doc), attribution alone cannot tell you with certainty whether a campaign actually caused additional revenue.
The only reliable way to know whether a Wisepops campaign truly made you more money is to run an A/B test: split your visitors between a group that sees the campaign and a control group that does not see it at all, then compare how much revenue each group generates.
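For readers who like to see the arithmetic, here is a minimal sketch of the comparison such a test makes. All numbers below are hypothetical placeholders, not Wisepops data:

```python
# Minimal sketch of the comparison an A/B test makes.
# All numbers below are illustrative placeholders, not Wisepops data.

control_revenue = 24_400.0   # total revenue from control-group visitors
control_visitors = 10_000    # visitors who never saw the campaign

variant_revenue = 25_500.0   # total revenue from exposed visitors
variant_visitors = 10_000    # visitors who saw the campaign

# Revenue per visitor is the success metric you will set in Step 1.
control_rpv = control_revenue / control_visitors
variant_rpv = variant_revenue / variant_visitors

print(f"Control: ${control_rpv:.2f} per visitor")   # $2.44
print(f"Variant: ${variant_rpv:.2f} per visitor")   # $2.55
```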
Step 1 - Create an A/B test
1. Click the “A/B” button to the right of the campaign you want to test.
2. Enable the control group and keep the traffic split at 50% / 50% to reach results as quickly as possible.
3. Set the success metric to “Revenue per visitor.”
4. Click “Done”, then make sure your experiment is published.
Step 2 - Analyze how much more money your campaign made
Once the experiment is running, you need to wait until it reaches a statistical conclusion on revenue per visitor. The Experiments Results page shows, with statistical confidence, whether visitors exposed to the campaign generated more revenue than visitors in the control group. This allows you to measure the real revenue impact of the campaign—not just attributed revenue.
1. Go to your Experiment dashboard and click “See results”.
2. Scroll down to “All metrics” and select the “Revenue” tab.
You can now compare “Revenue per visitor” between:
- the Control group (Baseline), and
- visitors exposed to the campaign (Variant).
In the example above, visitors exposed to the campaign generated $2.55 per visitor, compared to $2.44 for the control group. This represents an uplift of +4.8%.
This uplift is shown in grey, which means the result has not yet reached statistical significance. If you hover over it, you can see the current confidence level. The result becomes significant once it reaches 90% confidence.
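Wisepops computes this confidence level for you, and its exact statistical method is internal. Purely as an illustration, the sketch below shows one common way such a check can be done (Welch's t-test on simulated visitor-level revenue); the data, the confidence shorthand, and the 90% threshold handling are assumptions for the example, not Wisepops' implementation:

```python
# Illustrative only: one common way to test whether two groups differ
# in revenue per visitor. Wisepops computes its own statistics on the
# Experiments Results page; this is not its internal method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated per-visitor revenue: most visitors spend $0, a few convert.
control = np.where(rng.random(10_000) < 0.05, rng.exponential(49, 10_000), 0.0)
variant = np.where(rng.random(10_000) < 0.05, rng.exponential(51, 10_000), 0.0)

uplift = (variant.mean() - control.mean()) / control.mean()

# Welch's t-test (unequal variances). Treating confidence as 1 - p-value
# is a simplification; real dashboards may compute it differently.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
confidence = 1 - p_value

print(f"Uplift: {uplift:+.1%}, confidence: {confidence:.0%}")
print("Significant" if confidence >= 0.90 else "Not yet significant (shown in grey)")
```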
More insights on attributed vs. incremental revenue
Revenue attribution is still a useful metric. Unlike incremental revenue measured through experiments, attributed revenue can be tracked daily. Incremental revenue, by contrast, requires time (typically one to several weeks) to reach a statistically significant conclusion. You should always wait for statistical significance before comparing revenue between exposed and control groups.
In our experience, depending on the campaign type, 20% to 80% of attributed revenue is truly incremental. The highest incrementality is typically observed for Onsite Feed campaigns with product recommendations, because this format is non-intrusive and its product recommendation attribution logic is stricter than for other campaign types.
Once you have established the relationship between attributed revenue and actual incremental revenue for a campaign, you can usually assume that this ratio remains relatively stable for that campaign. You can then rely on attributed revenue to monitor performance on a daily basis, while using experiments periodically to validate the campaign’s true impact.
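As a rough sketch of that workflow (all figures below are hypothetical): divide the incremental revenue your experiment measured by the revenue attributed to the campaign over the same period, then apply the resulting ratio to daily attributed revenue:

```python
# Hypothetical numbers illustrating the attributed-vs-incremental ratio.

# From a finished experiment (exposed group, same period):
attributed_revenue = 12_000.0   # revenue attributed to the campaign
incremental_revenue = 6_000.0   # extra revenue vs. control, per the experiment

incrementality_ratio = incremental_revenue / attributed_revenue  # 0.50 here

# Day-to-day monitoring: scale attributed revenue by the established ratio.
daily_attributed = 450.0
estimated_daily_incremental = daily_attributed * incrementality_ratio
print(f"Estimated incremental revenue today: ${estimated_daily_incremental:.2f}")
```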