The way we look at data can change how we interpret an experiment's results.
Let’s take a clear example: an A/B test on the home page, with ‘Control’ (the original version of the page, with no modifications) and ‘Variation #1’ presenting a completely different design.
Visitors who see the Control version of the homepage are shown a pop-up overlay containing a 15% discount coupon. Users who land on Variation #1 are shown a visually different overlay (also a pop-up with a 15% discount coupon). In this scenario, a path has been created for your visitors. Let’s imagine a user landed on Variation #1 of the A/B test and was later shown version #2 of the pop-up. This user ends up buying a product worth $300.
Depending on how we look at this situation – and given that, on the way to converting, the user entered two different experiments – we could interpret the data, and the experiments’ performance, differently:
In the linear attribution model, each touchpoint in the conversion path – in this case, the A/B test and the overlay – shares equal credit for the sale (50% each). Since the product sold is worth $300 and each experiment is considered equally responsible for the conversion, $150 is attributed to the A/B test and $150 to the overlay.
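As a rough sketch, the linear split above can be computed by dividing the sale value evenly across the touchpoints in the path (the function and touchpoint names here are illustrative, not part of any analytics tool's API):

```python
def linear_attribution(sale_value, touchpoints):
    """Split the sale value evenly across all touchpoints in the path."""
    share = sale_value / len(touchpoints)
    return {touchpoint: share for touchpoint in touchpoints}

# The $300 sale, split between the two experiments the user entered.
credit = linear_attribution(300, ["A/B test", "Overlay"])
# Each touchpoint receives an equal share: $150 each.
```

With more touchpoints in the path, the same rule would simply divide the sale into smaller equal shares.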
When counting sales under the assisted conversion model, we no longer give each experiment 50% of the credit for making the user convert. Instead, each experiment is considered fully responsible for the conversion – 100%. Therefore, the $300 sale is attributed once to the A/B test the visitor participated in, and the same sale is also assigned to the overlay offering the 15% discount. In the assisted conversion model, the total amount of sales is less realistic: because we count $300 for the A/B test and $300 for the pop-up overlay, the cumulative total suggests $600 in sales, which is not the case.
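The double counting described above can be made concrete with a small sketch (again with illustrative names, assuming the same two touchpoints as before):

```python
def assisted_attribution(sale_value, touchpoints):
    """Give every touchpoint full (100%) credit for the same sale."""
    return {touchpoint: sale_value for touchpoint in touchpoints}

# The single $300 sale, credited in full to each experiment.
credit = assisted_attribution(300, ["A/B test", "Overlay"])
total_reported = sum(credit.values())
# total_reported is 600 – double the actual $300 sale,
# which is why assisted-conversion totals should not be summed as revenue.
```

This is why the per-experiment figures in the assisted model are useful for judging each experiment's involvement, but their sum overstates actual revenue.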
Below, you can see the difference when comparing the data from the same three experiments, with the results calculated by the ‘Assisted conversion’ and the ‘Linear distribution’ criteria, respectively. As you can observe, the number of users and views does not change; in terms of revenue, however, the differences are noticeable because the credit for the sale is attributed differently: