Most optimization programs produce small results or fail altogether. One of the most important causes is the lack of a coherent system for prioritization. While it’s not hard to come up with a list of ideas, the challenge is to decide where to start and in what order to execute your tests. In this post, you’ll learn what prioritization means, which systems are the most widely used, and how and why you should build your own.

This article is the edited version of the presentation offered by Johann van Tonder, CEO at AWA Digital, a large conversion rate optimisation agency from London, during the third edition of the International E-commerce Day. The host was Valentin Radu, our very own CEO at Omniconvert.


Valentin: Hello and welcome to the second session of the International E-commerce Day! We are now live with Johann van Tonder straight from Cape Town, South Africa. Johann has huge experience in A/B testing and conversion rate optimisation. He is at AWA Digital, a large conversion rate optimisation agency from London, and Johann is there to dominate the world and to make it an international agency. Today he’s going to talk about prioritization, a very important thing in life and in conversion rate optimisation as well. The stage is yours, Johann!

Johann: Thank you, Valentin and thanks to all the viewers for joining us! Today we’ll be talking about prioritization, which I think is an area in CRO that is not talked about enough.

I’ve seen many programs not do too well or even fail, and when you get called in to rescue one and look at the details, this is often where it falls down. I’m gonna get straight into it, and I want to start by looking at a very interesting story out of Amazon. I’m sure you all know the famous product recommendation engine that Amazon became known for.

There’s an interesting story behind this, and I wonder how many of you know it? The guy who came up with this idea was a junior software developer at Amazon called Greg Linden. He was inspired by what he saw in the checkout line at a supermarket, that aisle you end up in before you get to pay, surrounded by all this stuff, things that you absolutely don’t want but you’re going to end up buying some of it anyway.

And he was playing with the idea of what that could look like online. He made a quick mock-up of an early version of this and shared it internally with his superiors. And to his surprise, not everyone was as excited about the idea as he was. There was one guy in particular, a senior VP of Marketing, a very influential role at Amazon, who was dead against it. He said that this idea would distract customers and that Amazon wasn’t ready for it. So Greg was ordered to abandon the idea. Now, a tenacious software developer wasn’t going to let this dude from marketing win without putting up a fight.

So he built a prototype and got it ready for testing. And well, of course, you know what happens next, because not only is this live on Amazon but it’s all over the web. And as Greg recounts the story on his blog, he concludes, “not only did it win this test but the feature won by such a wide margin that not having it live was costing Amazon”.


So this is a remarkable story that illustrates the value of having a system by which to evaluate ideas like this, some consistent criteria that you can apply to inject some level of objectivity. When you first hear the story you might simply dismiss it as HiPPO nonsense, but when you scratch a little deeper, it’s essentially somebody’s hunch with no evidence versus somebody else’s opinion.

And that’s where you need something like a triage framework, some predefined set of criteria that everyone has agreed on, this is important, and that you can apply consistently: the same set of criteria used every time to evaluate each idea.


And it also evolves: as you learn what works better and what works less well, you build on this. And the easiest thing in CRO is coming up with a list like this of things to do.

Put a few of us in a room for half an hour looking at a site and we walk away with a long list of ideas; I’m sure you’ve seen this movie before. The key thing is: where do you start, what is test number one, how do you sequence it, how do you stack it up into a roadmap, where are your quickest gains?

And there are a couple of realities that you’re up against.

Reality number one is that most A/B tests don’t win.

You’ll find many different stats online; this is just one. But the fact is that most A/B tests in most programs don’t win. Now there are things you can do to make that slightly better, for example by using a proven methodology. And in and of itself this isn’t even such a bad thing, because you can learn from losing tests, we all know that, and you can take that insight and build on it. But that’s reality number one.

Reality number two is of all those tests that do win, most of them are quite modest wins.


This is from Effective Experiments, and they have the benefit of data from thousands of tests stored in their system. When they did an analysis of this, they found that the vast majority of winning tests have a single-digit uplift. So inherently, every test you run is an opportunity cost. Because instead of running that test at that time, you could be running something else that gives you a better chance of winning, and a chance at a bigger win.

There’s an opportunity cost that you’ve got to mitigate, and this is where triage comes in. But triage is more than just ranking; if you do it well, it will determine or improve the success of your overall testing program. So here are the key drivers of success in a testing program: win rate; effect, which is your magnitude of uplift; and then your velocity, the number of tests you run.



And if you do triage well, if you do prioritization well, then you’ll improve each of these KPIs. So triage essentially is about ROI.

Fundamentally, triage is return on investment: the expected return of doing something weighed against the cost of doing it.

And if you look at the frameworks, the triage, the prioritization frameworks, you’ll see that theme coming through.

So let’s look at the first one just to give you that idea.

This is Bryan Eisenberg’s framework, a reasonably famous one, the TIR framework.

The TIR Framework

There you’ll see the cost and return, the two elements played up against each other.

Time, how long will it take to execute.

Importance, there’s your return, how much can we increase revenue or reduce costs.

And the resources cost element.

And the way this framework works is you assign a score out of five in each of those areas, and then you multiply them. The maximum score would be 125, five times five times five, and that would be ranked at number one. And then the list goes on: the lower the score, the lower the rank.
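To make the arithmetic concrete, here is a minimal Python sketch of the TIR calculation. The idea names and scores are purely hypothetical, and Time and Resources are scored so that a higher number means faster and cheaper, keeping “bigger score = better rank” consistent across all three areas:

```python
# TIR: score each idea 1-5 on Time, Importance, Resources, then multiply.
# Higher Time/Resources scores mean faster/cheaper here, so a bigger
# product always means a better rank. Ideas and scores are hypothetical.
ideas = {
    "streamline checkout form": {"time": 4, "importance": 5, "resources": 3},
    "rebuild homepage hero":    {"time": 2, "importance": 3, "resources": 2},
}

def tir_score(scores):
    # Maximum possible score: 5 * 5 * 5 = 125
    return scores["time"] * scores["importance"] * scores["resources"]

# Sort descending: the highest product sits at rank number one.
ranked = sorted(ideas.items(), key=lambda kv: tir_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {tir_score(scores)}")
```

Because the three scores are multiplied rather than added, one very low score drags an idea down hard, which is part of what makes TIR so decisive.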

To me, this is a very straightforward, very simple system and it brings those two components of ROI together very well and it’s used widely by a lot of people.


The other one that’s probably even more famous, arguably the most widely used framework in CRO is Chris Goward’s PIE framework.

Chris Goward’s PIE Framework


Let’s go through that a little bit and you’ll see the same elements at play here. So what you do in this case is assess each idea in each of these three areas: potential, importance and ease.

Potential is how much improvement can be made in this particular area that’s being tested.

Importance is about reach, how valuable the traffic is in that area.

Ease, which is the complexity of the test, so your cost coming in there.

I often find there is a bit of confusion between potential and importance, so I’m going to look a little bit deeper into each of these.

This is the overall framework, the calculation, how it’s put together. You’ve got a score out of ten in each of these areas, potential, importance, and ease, and then you add them together and divide by three to get the average.
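As a sketch, with hypothetical ideas and scores, the PIE calculation is just an average of the three scores out of ten:

```python
# PIE: score each idea 0-10 on Potential, Importance, Ease, then average.
# The ideas and scores below are made up for illustration.
def pie_score(potential, importance, ease):
    return (potential + importance + ease) / 3

ideas = [
    ("trust badges on product pages", pie_score(8, 9, 4)),   # harder, big upside
    ("reorder footer links",          pie_score(3, 2, 10)),  # easy, little upside
]

# Higher PIE score = higher rank.
ideas.sort(key=lambda item: item[1], reverse=True)
```

Note that because it is a plain average, a perfect ease score can offset weak potential, which is exactly the drawback discussed a little further on.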

I want to drill into each of these.

Potential is your evidentiary basis so this is where you look at your evidence, analytics for example, you might look at page value, bounce rates, journey mapping. But you’ll also look at your other data sources, you’ll triangulate these things and this is where you see the opportunities and how they stack up and you assess what the potential upside is in each of these.

This is where you look at the evidence in order to determine that expected business impact.


The next one, importance, is, as I said, about reach: mainly traffic volume, how much traffic is on this page. And of course that’s important because we all know that the less traffic you have in an area, the more difficult it is to run tests there. But also traffic cost: how much money are you spending on traffic in that particular area being investigated? The more money you’re spending to buy traffic to that page or area of the site, the higher it will rank ultimately. Because as you improve the performance of that area, you are, by extension, stretching that acquisition dollar of yours.

And that brings us to ease, which is the last one. Goward makes a distinction between technical ease and political ease. Technical ease covers things like what level of CSS, JavaScript or back-end changes are needed, advanced conditions, that sort of thing, but also your resources: how many people are involved from a dev perspective, design, copywriting. The more people involved, the more complex it becomes. You might look at things like restrictions, even reusability of code.

Political ease: I think no story illustrates that better than the Amazon one I opened with. An area that’s often politically difficult is the home page. We’ve often come up against it because it’s so visible: everyone has an interest in it, it’s looked at every day, and it carries brand tone. So often that’s politically not the easiest area of the site.

But you look at each of these areas, you assign a score and then your list evolves.

Now you’ve got that same list but with four additional columns: a score in each of potential, importance, and ease, and then the PIE score in the last column. The higher the score, the higher the idea will rank, so you sort by PIE score and that determines your ranking. Now, this is a great system and, as I said, probably the most widely used. And whether it’s this or the TIR that I just showed from Bryan Eisenberg, both are good starting points.


The one drawback with something like PIE is that in my mind it gives too much weight to ease.

Let me explain that a little bit. So you might have an idea, a test idea that has reasonable potential, but because it scores low on the ease side of things it won’t rank highly. The converse is also true and that’s probably more often where you have a test of limited potential, but because it’s easier to execute it finds its way to the top of your list.

And the worst case and I’ve seen so many of these roadmaps, the worst case is where you have a roadmap that leans towards those tests. You’ve got a list of tests at the top, tests from 1 to 10, all have limited potential but they sit at the top because they’re essentially easy to do. And that I think is not a great place to be.

So you need to think through that a little bit. And I think a good way to do that, to think through this, is to look at a matrix like this one.


Here you plot complexity, your cost, against impact, your return. And this is an interesting exercise, because now you can look at, for example, the top left: something of low complexity but high impact. That’s a no-brainer, that’s the kind of idea you want to jump all over and tackle right away. Those are your quick gains. The problem is there are only a few of them, and you’re going to deplete that list very quickly.

So that’s low complexity and high gain, a no-brainer.

The other no-brainer potentially is bottom right.

If you think about it, high complexity, low impact: why would you? It’s going to cost you a lot, it’s super complex, it’s not going to deliver anything; you want to avoid that. And this is one of the problems with the RATS approach, have you heard about that? Random Acts of Testing, where you pick ideas from a list randomly. Chances are you’ll be picking ideas that fit in this quadrant, and that’s a waste of time, money and resources.

Now, this is where it becomes interesting, because what’s left are two quadrants which are the source of a lot of debate. So let’s start with the bottom left, where you have low complexity and also low impact. A lot of people will say, well, do it! This is the next area to pick from because it’s low complexity.

I’m not so sure about that because of the low impact, that’s where the opportunity cost comes in. You could be doing something, spending that time and money and effort working on something that’s going to deliver a better impact.

So for me, unless there’s a learning upside, meaning you’re going to draw insight from this that you can build on and find momentum in the project, I don’t think it’s worth doing. The other way to approach it is to say: let’s draw from this area for fillers. And what I mean by that is to build momentum, to increase velocity or to keep velocity in your testing program.

You can pick from this occasionally to make sure that you have a momentum, as you’re busy preparing for bigger tests, as your developers are working on more complex tests, then maybe this is where you can draw to keep that momentum going.

And that leaves us with the last quadrant, which is also a very interesting one, because for me that’s the next go-to after the obvious low complexity and high impact in the top left. That’s where I harvest ideas for the next set, because it’s high impact, which is what you want. But there are a few strategies, a few things you can do to mitigate, to deal with, the complexity that sits in that quadrant.

And there are two that I want to mention to you briefly.

One is to run pilot studies. What this means is that before you actually run the A/B test, you do some form of pilot study. It could be a survey where you test the potential response from your audience before you actually put it out to an A/B test.

Somebody who did this recently was Spotify. Before they ran a big test, they ran a survey to get some data on how that might look, and that steered them in the right direction.

You could also do low-fi prototyping with something like InVision and then do usability testing on it. At the very least, what this lets you do is rank those complex ideas, so you know which one to bet on first, because they’re all complex. And you might even be in a position, as I’ve often been, where the feedback from that usability testing allows you to refine your hypothesis, to adjust, to make tweaks, and you give yourself a better chance of success when you finally launch it.

And then the other area that I’m a big proponent of is Minimum Viable Experiments. So you launch with just enough to test the hypothesis, and nothing more.

I often see optimizers obsessing over minutiae and the question really is: how can you dial down complexity? If it doesn’t get in the way of validating that hypothesis, then drop it.

So that’s one way of doing it.

The other way, still in the vein of Minimum Viable Experiments, is taking a big concept, a big test, and breaking it down into smaller tests. Essentially what you’re doing then is taking that one massive test that sits in the top right quadrant and moving it down into the bottom left quadrant.

So you’re making it a far lower complexity test, but at the same time lower impact. But then you build on it, so it’s a series of minimum viable experiments that build on each other, and gradually you build your way into that top right quadrant where you want to be, and you learn your way through. Maybe you abandon it halfway through, but then at least you haven’t spent all those resources; you’ve done it in a much better way.

On to another system which is lesser known in CRO, it’s from the world of product management and it’s a different way of looking at this ROI equation.



So you’ve got cost and value weighed up against each other, but what’s different here, you’ll see, is the scale. You’ve got a defined scale. What they do in this case is use agile story points, and it’s beyond the scope of this discussion to explain exactly how those work. Many of you who have worked in Scrum and agile systems will know what they are; you can google it and you’ll find lots of information. But that’s the way you assign a score to each of the ideas in each of these areas, cost and value.

And you’ll see the similarities with PIE and even with TIR, but that’s where they end, because instead of looking at it in a 2D way, just a flat table, what you do next is plot this on a scatter plot, with your two axes, cost and value.

This by itself is of limited use. Then you draw a line from the zero point to each of the ideas so it looks like this.

What’s interesting here: note that idea 2 and idea 4 are on the same line, so they’re in a tie for position number two. If you compare that to the previous list, looking at the table alone you would have ranked idea 2 far higher than idea 4, but plotted on the graph you can see that actually there’s a tie. So it’s a different, more robust way of laying it out and seeing how it stacks up.
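That line from the origin is really just the value-to-cost ratio: ideas with the same ratio sit on the same line and tie, whatever their raw cost and value. A sketch, with hypothetical point estimates:

```python
# Ranking cost/value ideas by the slope of the line from the origin.
# slope = value / cost; equal slopes tie, whatever the raw numbers are.
# The point estimates below are hypothetical.
ideas = {
    "idea 1": {"cost": 2, "value": 8},  # slope 4.0
    "idea 2": {"cost": 3, "value": 6},  # slope 2.0
    "idea 4": {"cost": 4, "value": 8},  # slope 2.0 -- ties with idea 2
}

def slope(point):
    return point["value"] / point["cost"]

# Steepest line from the origin first.
ranked = sorted(ideas, key=lambda name: slope(ideas[name]), reverse=True)
```

Sorted as a flat table on value or cost alone, idea 2 and idea 4 look quite different; the slope reveals that they are tied.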

So with the few minutes we have left, I want to talk a little bit about creating your own triage framework.

Create Your Own Triage Framework

And you might think why would you want to create your own triage framework? There are so many frameworks, we’ve spoken about a few of them already, you could use PIE, TIR, whatever. Why would you want to create your own?

And if you ponder this question a little: would Greg Linden’s idea, the Amazon product recommendation engine, have survived the PIE framework? If you think about it: weak evidentiary base, about as politically complex as it can be, not technically easy, algorithms and that sort of thing. So it might not have ranked very high in a system like PIE.

So what you want to do is you want to determine, you want to consider what’s relevant in your context, what’s right for your business.

The things on the left we’ve spoken about already. Some of them might not even be relevant to you, some will weigh more heavily than others, or it might be that you’re in a lean environment and you really want to focus on cost and less on potential, and that’s your prerogative.

The items on the right, and that’s by no means an exhaustive list: you’ve got to consider again your business objectives, what’s important in your context. It may be that you want to emphasize something like stakeholder buy-in, so that when the senior VP of Marketing in your organization says to kill an idea, maybe that’s something that has to be reflected in your program.

I’ve worked with clients who’ve said they valued learning more than the actual revenue uplift because they want to build a customer knowledge base. You might want, in terms of your culture, to have a little bit more fun, so you inject that into the system; we’ll look at an example of doing that.

You could sit with your team and your stakeholders and think about what’s important for you and then come up with a system that accounts for that, that reflects that.

That’s what Hotwire did, this is now quite a famous system and you’ll see how it works in a moment.

So you’ve got that list of predetermined criteria on the left, and this is fluid, this is what’s important in your organization. In this example, the top line, what this company wants to do is focus on revenue as far as possible. So the question they ask is “can we use RPV, revenue per visitor, as a primary metric?” and you add a point if that’s possible for the given test idea, and subtract a point if it’s not. And so you go through the list. Your criteria will differ, but the system is that you add and subtract points depending on whether the idea satisfies each criterion or not.

A similar system, maybe slightly easier, is where you can have any number of criteria on your spreadsheet. In this case, I’ve got five criteria, again a list of predefined criteria. Then you’ve got your ideas on the left, each row being a different idea, and depending on whether that idea satisfies a particular criterion or not, you give it a plus or a minus, or a zero if it doesn’t make a difference. You add it all up in the score column, and that determines your rank.
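In spreadsheet terms, that plus/minus system reduces to a sum per row. A minimal sketch, where the criteria and the per-idea judgments are entirely made up:

```python
# Plus/minus scoring: for each predefined criterion, +1 if the idea
# satisfies it, -1 if it conflicts, 0 if neutral. Sum per idea = score.
# Criteria and judgments below are hypothetical.
criteria = ["RPV measurable", "strong evidence", "low dev effort",
            "stakeholder buy-in", "reusable learning"]

# One +1/-1/0 vote per criterion, in the same order as `criteria`.
votes = {
    "streamline signup flow": [+1, +1, +1, 0, +1],
    "homepage carousel":      [-1, -1, 0, +1, 0],
}

scores = {idea: sum(v) for idea, v in votes.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

The appeal of this version is that every judgment is a coarse yes/no/neutral, so the team can score a long list of ideas quickly and argue only about the criteria.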

Similar to that, but maybe slightly more complex, is the weighted scorecard, where again you’ve got predetermined criteria in the top row, in the same way we’ve just spoken about. But in this case each one has a weight assigned to it, influenced and agreed on by your stakeholders, management perhaps. Then for each idea you assign a score out of a hundred against each of those criteria, and that determines the total score, which then makes up the rank.
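The weighted scorecard is a weighted sum: each criterion’s agreed weight multiplied by the idea’s score out of a hundred on that criterion. A sketch with hypothetical weights, criteria and scores:

```python
# Weighted scorecard: stakeholder-agreed weights per criterion,
# a 0-100 score per idea per criterion, then a weighted sum.
# Weights, criteria and scores below are hypothetical.
weights = {"revenue impact": 0.5, "learning value": 0.3, "ease": 0.2}

ideas = {
    "new product page layout": {"revenue impact": 80, "learning value": 60, "ease": 30},
    "button copy tweak":       {"revenue impact": 30, "learning value": 20, "ease": 90},
}

def weighted_score(scores):
    # Sum of weight * score across all criteria.
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(ideas.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
```

Because the weights are fixed up front and agreed with stakeholders, shifting priorities (say, valuing learning over revenue) is a one-line change rather than a re-scoring exercise.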

And there are many different systems, I’m just trying to give you a flavor of what’s possible and how you can build this out.

And so here’s another approach: the Bucket System. It’s usually used in combination with some of the other ranking methods we’ve spoken about. In this case, it lets you find a good spread across various criteria. Let’s say, for example, you’ve got different KPIs: conversion rate, AOV, and various others.

By using the bucket system, you can make sure that you have tests running at all times that speak to each of these KPIs.

In this case, on this screen, there’s a test in each of these areas of the site. So you have multiple tests running at the same time; you’ve got to be comfortable with that, and you’ve obviously got to take the necessary precautions to make sure there’s no cross-contamination and that sort of thing, but that’s the fundamental principle. Here, for this business, it’s important to make sure that there’s something running in each stream. And then again you have your specific ranking criteria within each of these buckets.

And another way of slicing it is user-driven, persona-driven, so especially when you’re doing personalization this is maybe a good way of making sure that you’ve accounted for each of those segments.

Again, many ways in which you can do it.

How to make triage fun

So here’s one way of injecting fun into it: this notion of letting your team buy an idea. How it works is as follows. It can be the team, the project team, stakeholders, the client, it doesn’t matter, anyone with an interest in the testing program. You have your list of ideas and each idea has a price tag.

Now, this price tag depends on certain criteria: it could be that the lower the PIE score, the higher the price tag, or simply that the more complex the test, the higher the price tag. Each member of the team gets a certain amount of money, not too much, only enough to afford about a third of all the ideas, and of course as you pay for a more expensive idea, you have less money left for other ideas.

An interesting dynamic starts to happen. Firstly, as you buy an idea, you’ve got to justify why you’re willing to put money on it; even though it’s Monopoly money or some other form of play money, you’ve got to justify that expense.

But where it really gets interesting is when you run out of money, and the really expensive ideas towards the bottom of the list force people to pool money. Now you’ve got to convince somebody else why this is such an important idea that they should contribute money so that you can afford it. So it stirs discussion, debate and a lot of thinking that doesn’t happen under normal conditions.

So that’s a really interesting way of doing it, and there are so many ways in which you can do this. And now it’s your turn.

I think my core message here is that you’ve got to have a system, any system, there are many. It doesn’t have to be one of these that I showed but you should have some way, some consistent set of criteria that you’ve thought about, that you’ve agreed on and that you can apply consistently to objectively evaluate all ideas.



It’s got to be focused on ROI, so return on investment, cost versus return, those are the two elements you probably want to bring into it, but you also want to align it with your business context. No two businesses are the same, we know this, and so there may be things that we haven’t even spoken about that are important in your organization, you’ve got to reflect that in your system.

And probably the most important thing is that the list of ideas is not what you should stress about, that’s not what you should obsess over. It’s about the process.

Both the process of generating those ideas and also the process of triaging them, of prioritizing them, of deciding which idea you’re going to tackle next, how are they going to rank, and which ideas are you going to allow to bubble up to the top.

And then I want to leave you with this recommendation: there’s a lot more of what I spoke about today in this book that I co-authored, and I’m excited to announce that the publisher, Kogan Page, were kind enough to offer a 20% discount for this audience. If you go to that URL and enter that code, that will unlock it.


Valentin: That’s great, thanks a lot, Johann! We have a few questions for you. The first question, and I think you’ve actually answered this already, is how do you prioritize not the ideas, but the criteria. So mainly, what I’m understanding is that he’s asking whether you give different weights to certain criteria before prioritization.

Johann: I understand the question: it’s not about the ideas but about the criteria by which the ideas are evaluated. That is the product of lots of discussion and debate, and usually it involves stakeholders. Those stakeholders could be your management team, the HiPPOs, or clients if you’re agency side. The starting point is an understanding of what the business objectives are and, flowing from that, what the KPIs are. That will determine what the potential criteria would even be, and then ranking them is a matter of discussion, debate and agreement.


Valentin: We have another question from Samantha. She’s asking what’s most important in creating a great user experience: the motivation of the user, the value proposition, or the friction that page elements introduce. In other words, how do you bring all of these factors together to hit that sweet spot?

Johann: That’s a big question, so let’s try to relate it back to prioritization first, and maybe then I’ll answer a little more broadly as well. In terms of prioritization: each of the things you mentioned, UX, motivation, value proposition, friction and a couple of others, you can take a view in your organization about their relative importance, and those become your criteria. Does this test, this idea, satisfy what you want to achieve in terms of value proposition? Then that would be a plus one; if it doesn’t, minus one. And so you go through the rest of them. That’s how it relates to prioritization, and it will be different depending on your context; it’ll be different for every business.

As for the broader question, if there is a broader question: these are all important things. Without motivation, nothing will happen, but I urge you to look, if you haven’t yet, at the work of Dr. BJ Fogg, a scientist at the Stanford Persuasive Technology Lab. He came up with the Fogg Behavior Model, which explains how important motivation is. Look at that: without motivation, nothing will happen.

So you need to understand that, and the way you understand it is by doing a number of things. There’s a whole list of research techniques you can use to explore motivation: interviews, and also quantitative methods. That’s the starting point; without it, nothing will happen. But it’s not the only thing: a few other things have to fall into place as well, like value proposition. I can’t stress the importance of value proposition enough; I could do an entire talk on it.

Here’s the thing with value proposition: it’s probably the most misunderstood, misinterpreted and misrepresented concept in CRO.

In fact, it’s not me saying that. The man who coined the term “value proposition”, Michael Lanning, a consultant at McKinsey in the ’80s, recently said that he reads a lot of what people write about value proposition these days and doesn’t even recognise it; it’s not value proposition.

So value proposition, defined correctly and understood properly, is fundamental. It’s one of the key pieces to get right and one of the areas with the biggest potential impact, not only on conversion rate but on the business overall. But it needs to be understood correctly, and as I say, there’s unfortunately a lot of rehashed hogwash about it on the blogs out there.

Valentin: Thanks a lot, Johann, thanks a lot for joining us at International E-commerce Day and all the best to you!


If you want to watch the recorded presentation and the conversation between Johann and Valentin here it is: