A/A test

What is A/A testing?

The term ‘A/A test’ refers to a technique in which two identical web pages or applications are tested against one another. It enables marketers to evaluate the effectiveness of the CRO tool being used to run the experiment and, at the same time, to check its statistical fairness.

Although A/A testing stems from A/B testing, the key distinction between the two is that A/B testing compares two different variants by the absolute or relative difference in their conversion rates, whereas A/A testing helps marketers determine the natural variability, or noise, of an app or web page by testing two identical user experiences.

Simply stated, a correctly implemented A/A test should report no statistically significant difference in conversion rate between the control and the variation. It helps verify that the experiment setup is correct and that the A/B testing platform is reliable.

A/A testing vs A/B testing

The main difference between A/B testing and A/A testing lies in how the results are interpreted. In an A/B test you are looking for a statistically significant result, typically at 95% confidence or above, whereas a successful A/A test should come up inconclusive. If an A/A test does return a positive result (above the 95% threshold), it usually means that either the CRO tool is not properly calibrated or the experiment itself was not implemented correctly, leading to skewed results. Keep in mind, however, that at a 95% threshold roughly one in twenty perfectly run A/A tests will cross it by pure chance, so a single positive result is a warning sign rather than definitive proof.

When should you run an A/A test?

Ideally, you should run an A/A test as the very first thing when starting to work with new CRO software, in order to check that its statistical algorithm is working properly. Working with a CRO tool that does not have a solid statistical engine means that all your future tests might be compromised, rendering your work irrelevant. Even worse, you might make decisions based on skewed data and end up with implementations on your web page that hurt your conversion rate.

How to correctly run an A/A test and what to expect?

In order to make sure your A/A test is set up correctly, you should pay attention to four main things: variation, segmentation, goals and traffic.

Variation: both the control and the variation should be the exact same page. Even minor differences can compromise the test results.

Segmentation: make sure you send the same segment of traffic to both the control and variation. All other things being the same, sending different segments of traffic to your pages can result in a false positive.

Goals: track the exact same goals for each page.

Traffic: it is extremely important to send enough traffic to the experiment. There is no fixed amount that is considered best, but the general consensus in the A/B testing world is a minimum of 10,000 visits and at least 100 goal conversions. A simple simulation of what this amount of traffic looks like in practice is sketched below.
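As a rough illustration of why this much traffic matters, the following sketch (plain Python with made-up numbers, not tied to any specific CRO tool) splits 10,000 visitors evenly between two identical variants that share the same true conversion rate and shows how the observed rates still differ slightly by chance:

import random

def simulate_aa_test(visitors=10_000, true_rate=0.03, seed=42):
    """Simulate one A/A test: two identical variants, same true conversion rate."""
    random.seed(seed)
    counts = {"control": {"visits": 0, "conversions": 0},
              "variation": {"visits": 0, "conversions": 0}}
    for _ in range(visitors):
        # Each visitor is randomly assigned to one of the two identical pages.
        arm = random.choice(["control", "variation"])
        counts[arm]["visits"] += 1
        # Both arms convert at the SAME true rate -- that is what makes it an A/A test.
        if random.random() < true_rate:
            counts[arm]["conversions"] += 1
    for arm, c in counts.items():
        rate = c["conversions"] / c["visits"]
        print(f"{arm}: {c['visits']} visits, {c['conversions']} conversions, "
              f"observed rate {rate:.2%}")
    return counts

if __name__ == "__main__":
    simulate_aa_test()

With traffic in this range, one variant might show, say, 2.9% and the other 3.2% even though the pages are identical; that gap is the natural noise an A/A test is meant to surface.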

How to interpret A/A test statistics

When interpreting the results of a correctly set-up A/A test, the main KPI to focus on is statistical significance. Provided your test has a solid sample size, neither variation should be declared a winner, meaning the result stays below the 95% significance threshold (equivalently, the p-value stays above 0.05). You may even see a small difference in conversion rate between the two variants; this is very common and does not mean there is something wrong with the test, as long as no winner is declared.
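If you want to sanity-check the numbers yourself, the sketch below shows one common approach, a two-proportion z-test run on illustrative (made-up) visit and conversion counts; the exact statistics your CRO tool uses may differ, but in a healthy A/A test the resulting p-value should stay above 0.05.

from math import sqrt, erf

def two_proportion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Return the two-sided p-value for the difference between two conversion rates."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled conversion rate under the null hypothesis of "no real difference".
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative counts for two identical pages (not real data).
p_value = two_proportion_z_test(conv_a=148, visits_a=5020, conv_b=161, visits_b=4980)
print(f"p-value = {p_value:.3f}")
# A p-value above 0.05 means the difference is not significant at the 95% level,
# so no winner is declared -- exactly what a correct A/A test should show.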
