Quite a few years ago, I discovered split testing and was instantly hooked. The fact that you could create two versions of a web page, landing page or advert, change one element, and let data tell you which version performed best was awesome.

So I set about testing everything across all the campaigns I was running – big changes, small changes, even button colours… The problem was that a lot of the time I shouldn’t have been testing at all, and years later I’m still seeing marketers make the exact same mistake.

The reason a lot of companies shouldn’t be split testing is that you need a lot of volume to get statistically significant results. Essentially, you need enough data to know, to a high degree of certainty, that luck or chance has not influenced the outcome.

A simple example of statistical significance is rolling a die. If you only rolled it six times, a 4 could come up half the time (50%) purely by chance. If you rolled it 60,000 times, however, the frequency of 4s would land close to the true probability (16.67%). The same applies to split testing: companies without much data are essentially running tests equivalent to throwing the die six times – not the level of evidence you should be making business decisions from.
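The die analogy is easy to verify yourself. Here is a minimal sketch in plain Python (the `observed_rate` helper is hypothetical, just for illustration) showing how the observed frequency of a 4 only settles near the true 1/6 once the number of rolls is large:

```python
import random

def observed_rate(rolls: int, target: int = 4, seed: int = 42) -> float:
    """Roll a fair six-sided die `rolls` times; return how often `target` appears."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(rolls) if rng.randint(1, 6) == target)
    return hits / rolls

# A handful of rolls can land anywhere; a large sample settles near 1/6 (~16.7%).
print(f"6 rolls:      {observed_rate(6):.1%}")
print(f"60,000 rolls: {observed_rate(60_000):.1%}")
```

Run it a few times with different seeds and you’ll see the 6-roll result jump around wildly while the 60,000-roll result barely moves – exactly the difference between a low-traffic and a high-traffic split test.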

I can already hear you saying, “if I don’t have enough traffic, I can just run the split test for longer until I hit statistical significance”. Well, I thought the same thing back then and started running tests for 2, even 3 months. The issue? Time itself will distort your split tests: the longer a test runs, the more likely it is that external factors such as temporary website changes, technical issues and users deleting their cookies will come into play, making the results inaccurate.

There is also a huge opportunity cost to waiting 2-3 months for tests to end, as during that time you won’t be able to touch or improve the page using other methods. You could get to the end of 3 months with a test that made no difference, having essentially wasted a quarter of a year.

When using test duration calculators, make sure you enter the numbers for just the page you’re testing, not your whole site. If the result shows the test would need to run for more than 4 weeks, I wouldn’t recommend split testing.
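The maths behind those calculators is a standard two-proportion test. The sketch below is a hedged approximation (the function name and the 2% baseline / 10% lift figures are illustrative assumptions, not from the article) showing why low-traffic pages blow past the 4-week mark:

```python
import math
from statistics import NormalDist

def required_sample_size(baseline_rate: float, relative_lift: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed *per variant* for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p2) ** 2
    return math.ceil(n)

# Illustrative figures: 2% baseline conversion rate, detecting a 10% relative lift.
per_variant = required_sample_size(0.02, 0.10)
print(f"~{per_variant:,} visitors needed per variant")
# At 500 visitors/day split across two variants, the test would need
# per_variant * 2 / 500 days to finish – months, not weeks.
```

With those assumptions the answer comes out in the tens of thousands of visitors per variant, which is exactly why a page getting a few hundred visits a day can’t reach significance inside four weeks.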

So what should you do instead of split testing?

Double Down On Qualitative Research

The main way you can improve your website and landing pages without split testing them is by doubling down on the qualitative research you’re doing prior to making any changes, so you’re confident they’re the right changes.

That means:

  • Running polls and surveys and asking open questions that will give you insights and ideas through a tool such as Omniconvert.
  • Heat mapping, click tracking and recording visitors as they use your website through a tool such as Hotjar to identify any issues.
  • User testing of your website to find out what your target market are actually thinking as they use it, which can be done either in person or via a service like usertesting.com.
  • Actually picking up the phone and talking to your customers and prospects who did not convert – their insights can be invaluable.


With enough qualitative data, you can work out the actual issues with your website or landing pages and set about improving them straight away. For example, the data you generate from polls and surveys could reveal exactly what your target market deem your key selling points or USPs to be. You can then use these insights to improve both the order and positioning of those features on your landing page, and update your copy to more closely match the voice of your target market – describing your key features the way they do.

You should also do the same thing with quantitative data: look for device or browser compatibility issues, page speed problems and any other errors that you can confidently fix without testing.

Monitor Your Competitors Closely

Along with research, I’d recommend monitoring your top competitors who get a lot of traffic. If they’re big, chances are they’re doing a lot of testing and if you closely monitor the tests they run and the website changes they make, you can see what is working in your industry and get ideas on what to implement.

For example, if you’re in the hotel booking industry, it would be very worthwhile reviewing booking.com’s website and checkout process regularly. They reportedly have over 1,000 tests running every single day and are constantly improving every aspect of their site. They also regularly speak at CRO events around the world which is another sign of a company to follow.

Whilst you can’t blindly implement competitors’ changes, they will give you some great ideas of things to try, and they can also help back up your qualitative findings.

There are a few tools available to help you monitor your competitors’ websites, such as kompyte.com or visualping.io. Don’t forget you can also look at offline competitors to find ideas you can use online. For example, eCommerce stores can learn a lot from supermarkets about pricing, product placement and sales techniques.

Use Best Practices… But Be Careful

If you follow anyone in the CRO industry, you’ll have heard them say “don’t follow best practices, test them for yourself as every industry is different” and that’s 100% true. If you’ve got a high traffic website and can afford to use the traffic to test every change on your website then you should.

If you have a low traffic website, however, and can’t test “best practices” to get a result within four weeks, should you just implement them? Justin Rondeau of Digital Marketer tested just that on 10 separate pages and found that 80% of the time, adding best practice elements improved the conversion rate of the pages.

In my opinion, best practices are a sensible starting point when improving your pages, and they can back up both the qualitative findings and the competitor research you’ve carried out. But you need to be careful. There are a lot of “CRO experts” out there, and unfortunately some of the best practices they preach are likely to do more harm than good. So make sure you’re getting your best practices from sources that are high quality and respected in the industry – like the ConversionXL Blog – and not sources that simply make data up or analyse experiments that aren’t statistically significant.

If you’re making untested “best practice” changes to a low traffic site, make sure to use annotations in Google Analytics for each change. You can then keep an eye on the conversion rates and results from your marketing campaigns over time and revert things back if need be.

Still Plan On Testing a Low Traffic Website?

If you’ve got a low traffic site and you’re still not convinced to stop split testing, make sure you’re only testing huge changes, like a total redesign. The bigger the improvement you can make, the fewer conversions it will take to reach statistical significance, and so the quicker the test will be. Ultimately, as long as you’re constantly improving your page you’re ahead of the pack – just don’t let the opportunity cost of lengthy split tests let you down.
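The “bigger changes need less traffic” point follows from the fact that required sample size scales with the inverse square of the effect size: detecting a lift twice as big takes roughly a quarter of the visitors. A rough sketch using the common rule-of-thumb n ≈ 16·p·(1−p)/δ² (approximately 95% confidence and 80% power; the 2% baseline is an illustrative assumption):

```python
def approx_visitors_per_variant(baseline: float, relative_lift: float) -> int:
    """Rule-of-thumb sample size: n ~= 16 * p * (1 - p) / delta^2
    (roughly 95% confidence, 80% power)."""
    delta = baseline * relative_lift
    return round(16 * baseline * (1 - baseline) / delta ** 2)

# Assuming an illustrative 2% baseline conversion rate:
for lift in (0.05, 0.10, 0.25, 0.50):
    n = approx_visitors_per_variant(0.02, lift)
    print(f"{lift:>4.0%} lift -> ~{n:,} visitors per variant")
```

A redesign you believe could lift conversions by 50% needs only a small fraction of the traffic that a 5% button tweak needs, which is why big swings are the only tests worth running on a low-traffic site.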

Duncan Jones is a growth focused marketing specialist who is Head of CRO at Digital Marketing agency Web Profits.
