
In the ever-evolving world of eCommerce, the ability to adapt and innovate is crucial for success. A/B testing has become an invaluable tool for businesses aiming to enhance user interaction and drive growth. One of the leading figures in this field is Jakub Linowski, the editor-in-chief of GoodUI and founder of Linowski Interaction Design Inc. 

With a keen eye for patterns and a passion for design, Jakub has transformed the landscape of UI design by leveraging data-driven insights to create effective and user-friendly interfaces. In this interview, we delve into Jakub’s journey, exploring his unique approach to A/B testing, his philosophy on pattern identification, and the practical applications of his findings for eCommerce growth. Join us as we uncover the secrets behind Jakub’s success and learn how to apply these strategies to boost your own business.

The power of patterns.

Key Takeaways

  • Patterns Enhance Growth: Leveraging patterns in A/B testing improves UI and eCommerce success.
  • Systematic Analysis: Record and analyze all test results for reliable optimization.
  • Balanced Strategies: Use best practices with new ideas for effective results.
  • Machine Learning: Integrate machine learning for cost-effective, faster, and accurate testing.
  • Multiple Metrics: Track various metrics for a complete view of testing impact.

Welcome to Growth Interviews!

Welcome to Growth Interviews, the fun, stimulating, and engaging series of conversations driven by digital business growth.

Our mission is to provide insights and ideas from world-class professionals on the topic of growth and to cut through the noise of so-called marketing tips and tricks, revealing the money-making strategies behind e-commerce.

Each episode is an intriguing challenge involving an insightful expert who reveals some of their best-kept secrets, which you can use right away to boost your business. 

In this week’s episode of Growth Interviews, we invite you to join our conversation with Jakub Linowski, the editor-in-chief of GoodUI, the founder and lead designer of Linowski Interaction Design Inc., and one of the most talented and inspired pattern hunters in the world, completely dedicated to finding and refining a design through to its ultimate version.

Jakub’s experience in engineering and graphic design, along with his passion for finding and identifying patterns, has been the driving force and inspiration behind his impressive history of discovery through UI Design. He has transformed science and art into a solution that is now successfully used by many companies, creating a new perspective in the industry. 

Jakub is the founder and leader of GoodUI, a company fully focused on helping design & growth teams make better decisions by searching for and sharing UI (user interface) patterns that can win or lose in A/B tests, based on the findings and results published at GoodUI.org.

During our interview, we talked to Jakub about how he first discovered those patterns and how he went further into developing a system that provides the best solution. His unique approach of taking into consideration all the data that already exists, and creating classes of experiments for further testing, is a unique one that has provided unexpected help for those small companies that do not yet have any experience in testing but are afraid of losing out.

Have a listen to Jakub’s ideas on A/B testing, experiments, conversion rate optimization, the way anyone should look at UI (user interface) design, and much more, in the podcast below. There is so much we can learn!

‘Verba volant, scripta manent’ is an old Roman saying: spoken words fly away, but what is written endures. So, let’s jump straight into Jakub Linowski’s interview and hear his great ideas.

Finding, Identifying, and Following the Patterns

Valentin Radu: Jakub, tell us how you got into the conversion rate optimization space in the very beginning.

Jakub Linowski: I lead GoodUI, which is focused on finding and identifying patterns: things that tend to work, things that don’t, where they work and where they don’t, with different degrees of effect, as a way of driving optimization work.

There are thousands and thousands of experiments being run, and if we just pay attention to all that information and all those experiments, there are most likely some things that repeat over and over, that reproduce, and that are largely generalizable. Those patterns, I think, are very interesting for reuse and exploitation across projects. So that’s the purpose of GoodUI: to find these patterns.

Valentin Radu: I’ve also seen that at GoodUI you have classes, or categories, of experiments, so it’s not just “these experiments work.” You can also select what kind of experiments have been run and on which type of website, right?

Jakub Linowski: That’s true, yes. I think one general use case is that experimentation itself is very valuable: the act of trying to improve something, a product or a business. There is a case to be made for someone who is just starting off or hasn’t optimized their site, because just by starting off there’s a lot of opportunity.

There is also a case to be made for utilizing as much as possible the big wins other companies may have already discovered, first and foremost to get a bigger leap, a bigger shift. I think that’s the core use case there.

GoodUI has had experience of running thousands of experiments in different industries and organizations, from major companies to small businesses. These experiments are uniquely focused on identifying and using a pattern across many industries with the end goal of finding and defining winning experiments that can be used successfully in a wide range of situations.

The power of patterns in A/B testing is of undeniable value for any eCommerce company that needs to see, compare, measure, and improve the quality of its user interface and, consequently, its eCommerce growth.

The Opposing Evils of Best Practices, Patterns and Tactics

A circular flowchart illustrating the process of data-driven analysis: analyze data, conduct research, generate insights, prioritize hypotheses, and experiment with new web experiences.

Valentin Radu: Tell us what’s frustrating you right now in the CRO landscape. What are the things that you think should completely vanish in order to make this industry much more efficient, better, and more transparent?

Jakub Linowski: Those last words: transparency and authenticity. I think when people look at so-called best practices, patterns, heuristics, and tactics, most often there are two opposite evils assigned to them. There’s this idea of under-confidence, where some might say, ‘No! Best practices never work. There’s no such thing. What works for one client never works for another. You just have to keep a silo and discover the truth on your own.’

And of course, there’s the other extreme where, like in those blog posts with 100 ideas, there’s this overconfidence of ‘Yes, these are guaranteed to work.’ Both positions, I think, are extreme. A healthier approach is to look at these patterns with probability in mind: how often they work, their reproducibility, their repeatability.

Many of them will work, or will not, or will work to different degrees or in different situations. And being transparent and authentic in publishing all these winners and losers gives a bigger, more accurate picture.

Valentin Radu: You’ve looked at a lot of results from a lot of A/B tests; you have a database of experiments that worked and of experiments that didn’t, even though they seemed really legit and should have worked based on the initial assumptions and hypotheses.

So, based on your knowledge, what do you think CRO experts are missing? What should they be doing in order to be more effective, besides, of course, learning from other people’s mistakes and wins? What keeps the losing teams, and not just teams but also CRO freelancers, from becoming winning ones?

Jakub Linowski: I would say a systematic way. By the way, tools are starting to do this: keeping track of, and remembering, the results. I think one key is just beginning to systematically remember and build up ways of grouping similar experiments together. And when I say remember, it’s with minimized publication bias.

There is this human tendency to only look at the big wins, the significant big positives. But you have to be really honest with yourself in publishing the significance, the results, the low impact, the negative impact, and storing them all together.

If a similar experiment has been run three times, having a way of putting the runs together lets you see whether it in fact won three times, or maybe won twice with one really strong negative. There may be a very interesting sub-insight in why that one didn’t perform. So, that’s the systematic approach.

Experimentation is one of the most important and powerful tools for improving UI, but there are no defined best practices, patterns, or tactics that fit all situations. On the contrary, there are two attitudes that undermine performance:

  • Under-confidence, which means working only with older best practices and never mixing in new ideas
  • Overconfidence, which means going for the overly-praised solutions that others have published, without bringing in the specifics of your own experience

The winning way is the middle way. The healthiest approach is to look at these patterns with probability, reproducibility, and repeatability in mind, while being systematic in measuring and recording every experiment and applying the rules derived from the patterns.
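Jakub’s systematic approach, recording every outcome with minimized publication bias and grouping repeat runs of the same pattern, can be sketched in a few lines of Python. The data structure and pattern names below are illustrative assumptions, not GoodUI’s actual system:

```python
from dataclasses import dataclass

@dataclass
class ExperimentResult:
    pattern: str       # e.g. "hidden coupon field" (illustrative name)
    effect: float      # measured lift, e.g. +0.03 for +3%
    significant: bool  # did the test reach statistical significance?

def summarize(results):
    """Group repeated experiments by pattern and tally ALL outcomes,
    wins and losses alike, to avoid publication bias."""
    summary = {}
    for r in results:
        s = summary.setdefault(r.pattern,
                               {"wins": 0, "losses": 0, "inconclusive": 0})
        if not r.significant:
            s["inconclusive"] += 1
        elif r.effect > 0:
            s["wins"] += 1
        else:
            s["losses"] += 1
    return summary

# The "won twice, one strong negative" case from the interview:
log = [
    ExperimentResult("hidden coupon field", +0.03, True),
    ExperimentResult("hidden coupon field", +0.02, True),
    ExperimentResult("hidden coupon field", -0.06, True),  # worth investigating
]
```

Calling `summarize(log)` surfaces exactly the situation Jakub describes: two wins and one strong negative stored side by side, instead of only the wins being remembered.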

The Methodology of Predictivity

A diagram highlighting the advantages of predictive analysis, including cost reduction, credit scoring and rating, forecasting customer demands, operation efficiency, overall decision-making insight, risk mitigation, and fraud detection.

Valentin Radu: Looking at e-commerce websites, because our audience is mainly interested in e-commerce, conversion rates, and customer experience, what are three ideas or insights you’d like to share with eCommerce marketers?

Jakub Linowski: There are a couple of things that are beginning to reproduce or repeat, and I think that’s the key thing I would pay attention to: if something worked for a dog food company and in the fashion industry, and it worked similarly, then those are the patterns that surface to the top.

One particular thing is next-day delivery, something we call urgent next-day delivery, where basically a message is shown that says, ‘Hey, if you act within this time frame…’ And, by the way, this is not fake; it’s made authentic.

Because if there is a given time frame to act within, then maybe the item can still be shipped today, or, pre-calculated, you can receive it tomorrow or within two days. So it’s about clarifying that kind of message and making it prominent on the product page: ‘Here’s the cutoff time by which you should act, and if you do, you will be rewarded by getting the item sooner.’

Valentin Radu: Okay. So, are there other ideas for driving eCommerce growth in a data-driven manner?

Jakub Linowski: Another common one is quite popular, so people might notice it. But nevertheless, these things, again, are repetitive; there’s a positive track record from watching them numerous times. Coupon fields are a very popular one: showing a blank coupon field to everyone is a way of losing customers.

So, finding ways to minimize that, to make it less obvious, or to only show the coupon field to people who have already earned a coupon: these are other ways of increasing revenues.

Additionally, obviously, customer star ratings. We have numerous experiments where actually showing customer reviews on products also leads to better sales and revenue increases.

Valentin Radu: Tell us your favorite framework. There are a lot of methodologies for running optimization. Do you have a favorite one, do you have a personal one, or are you using some of the classic ones?

Jakub Linowski: There are some criticisms of frameworks, and we came up with our own for this reason. One potential criticism is that many of them are not tied to past results. If we want to predict with prior information, with prior data, then there should be some numbers that raise or lower ideas based on whether they worked or not.

And that’s where at GoodUI we count, or calculate, how often something has worked or not, and that becomes directly translatable into what gets surfaced to the top or gets deprioritized. So, if we have one pattern based on one highly significant positive experiment, it gets one point. If it worked twice, it gets two points, and then we take these points and use them directly to prioritize.

Valentin Radu: So that means you are always looking for experiments that have been run previously. What happens if you have a good data-driven insight but no experiment has been written up for it? Because this methodology is going to favor pre-existing experiments.

Jakub Linowski: It will, and that’s exactly what we wanted by design. A quick answer to your question: if we have a new idea, we default it to a neutral probability, such as zero. And by the way, zero we don’t mean as a bad thing; ideas that have lost will actually go negative. Right?

And we want to surface things that have worked before. Because if we look at explore-versus-exploit mindsets, then it’s in the business’s interest to use what has worked before first, before going into the unknown. So, that’s how we approach optimization work: use what has worked before first.
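The scoring Jakub describes, one point per significantly positive test, negative points for losses, and a neutral zero for untested ideas, can be sketched like this. It is a simplified illustration with invented pattern names, not GoodUI’s real implementation:

```python
def score_pattern(outcomes):
    """outcomes: list of +1 (significant win) / -1 (significant loss).
    An idea with no prior data sums to a neutral 0."""
    return sum(outcomes)

# Illustrative track records, one entry per significant test:
patterns = {
    "urgent next-day delivery": [+1, +1, +1],  # won three times -> 3 points
    "hidden coupon field":      [+1, +1],      # won twice       -> 2 points
    "brand-new idea":           [],            # no prior data   -> 0 points
    "autoplaying carousel":     [-1, -1],      # lost twice      -> -2 points
}

# Exploit first: rank ideas by score so proven winners get tested first.
ranked = sorted(patterns, key=lambda p: score_pattern(patterns[p]),
                reverse=True)
```

The explore-versus-exploit tradeoff then becomes mechanical: work down `ranked` from the top, so proven patterns are reused before untested (zero-point) or previously losing ideas.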

Valentin Radu: So, that’s the moment to feature a solution: why should people be using the patterns that you have on GoodUI? Let’s hear your elevator pitch.

Jakub Linowski: Well, there are two approaches, right? By the way, coming up with an idea is super fun, so it’s not like ‘Let go of whatever you’re doing, and here are the 100 things you should test right now.’

No, I think people should continue testing and accept the multiple sources of ideas that come to them. You talk to customers, of course, and you do the research, and I think that is completely valid.

That’s great! But there comes a moment when people have a bit more space to slow down and ask themselves what opportunities they have missed and which wins were achieved by other companies. That’s where GoodUI comes in: we’re trying to surface those wins so they’re a bit more visible. But that requires making a little bit of space for those external ideas, and when that moment happens, when people kind of run out of ideas, that’s where GoodUI sets in.

The best way towards e-commerce growth, conversion rate optimization, and great customer experience lies in the methodology of predictivity.

The predictive model usually used in statistics, when applied in business, gives a competitive advantage, because it is critical to have insight for future outcomes that challenge key assumptions.

At GoodUI, all predictions are made based on prior information and prior measured data from similar experiments that define that specific predictive model and raise the chances of having a large number of winning experiments.

Machine Learning Experimentation vs Human Driven Experimentation

A comparison chart between human learning and machine learning, showing that human intelligence corresponds to machine learning models, learning materials to data, and learning skills to "skillearn" methods like creating tests and interleaving learning.

Valentin Radu: What’s your take on machine learning optimization? What do you think about it? I mean, let’s say we have data collection, then the data-driven insight. Then you have segmentation, hypothesis building, prioritization, experiment implementation, QA, reporting, and learning. So, there are a lot of steps.

No matter what kind of methodology you’re using, experimentation requires you to go through these steps. Do you think there are any steps at which machine learning will be better in the future? What’s your take on machine learning for website optimization?

Jakub Linowski: First of all, I think it’s inevitable, because the cost-benefit is there. Human-driven experimentation is super fun, and thinking about what to optimize is great, but it’s a bit of a slow process, and if it can be turned into an algorithm, it can be made more efficient. I think it’s just inevitable; it’s just a matter of time across the different areas where it could apply. Maybe some areas are easier to automate than others.

I might be completely wrong, but one area which I think still might need some human intervention is identifying the patterns, identifying the reasons and the cases where something breaks down, and creating those hypotheses. I think you yourself have created something like this: looked through a bunch of experiments and identified maybe 12 rules or something like that.

That act of identifying could use some human help, or maybe it’s a hybrid between code and human. So, it’s a long answer: I think it’s just a matter of time, and I see no reason why it shouldn’t improve. I’m just going to give one more example here. A simple pattern is having a call to action that is visible above the fold. That can easily be calculated with a few lines of code: is there a clickable button that’s visible? And then, run those checks.
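The above-the-fold check Jakub mentions really does fit in a few lines. A hedged sketch, assuming element coordinates have already been extracted (for example with a headless browser) and assuming an illustrative fold height:

```python
FOLD_HEIGHT = 900  # assumed viewport height in px (varies by device)

def cta_above_fold(elements, fold=FOLD_HEIGHT):
    """True if at least one clickable call-to-action element is fully
    visible above the fold. `elements` is a list of dicts with 'role',
    'top', and 'height' in pixels (a simplified page model)."""
    return any(
        e["role"] == "cta" and e["top"] + e["height"] <= fold
        for e in elements
    )

# Illustrative page snapshot: the CTA ends at 688px, above a 900px fold.
page = [
    {"role": "logo", "top": 0,   "height": 80},
    {"role": "cta",  "top": 640, "height": 48},
]
```

In practice the coordinates would come from an automated crawl, and the check would run across thousands of pages; the point is that this class of pattern is mechanically verifiable, unlike hypothesis generation.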

The implementation of machine learning in A/B testing is, without question, an inevitable phenomenon. The most important benefits of testing with machine learning are expected to be the lowering of costs and a considerable increase in the speed of carrying out experiments, with greater accuracy, and an increased number of measurable and classified results.

Human-driven experimentation, intervention in pattern recognition, identification of the reasons for their appearance, and the creation of hypotheses, will all combine with machine learning in the near future to create a unique uber-performant hybrid.

A Persuasive Set of Tactics for Conversion Rate Optimization

A flowchart outlining key strategies for effective conversion rate optimization (CRO): analyze the current state, develop effective CRO strategies, highlight the role of content, build trust and credibility, emphasize the importance of CTAs, and ensure continuous monitoring and improvement.

Valentin Radu: Tell us your set of conversion rate optimization tactics for convincing and persuading new visitors to buy from an eCommerce website, looking at eCommerce alone, or for persuading existing customers to place a second order.

What’s your take on customer retention rate? Do you think it is going to be big at some point? Do you think conversion rate optimization, the anonymous persuasion of visitors, is going to split into new-customer acquisition optimization and customer retention optimization, on the website and on other channels such as advertising and email?

Because we’ve been looking at customer retention rate, and I think it’s one of the dark horses of eCommerce growth in the long run, but people are still not taking it too seriously. Do you see it the same way? Have you seen this pattern? Because you’re a pattern hunter.

Jakub Linowski: I think it’s a very healthy sign of the industry going beyond the micro conversions, the shallow conversions: clicks, adds to cart. And I’d say the same when I think about retention or lifetime value. Maybe retention could be looked at as an extension of engagement, so usage over time. Not necessarily purchasing, but using it over time, not leaving.

And things like lifetime value, of course, operate at the eCommerce level: not just buying something once, but being a more profitable customer. These are both longer-term metrics, and I think it’s very healthy to pay attention to things like that. So when there is a spectrum of metrics for a given experiment, ideally there’s a way to track things over time, from, again, the shallow level of Add to Cart up to being a super-high-lifetime-value customer. It is sometimes difficult to do, because maybe a test only runs for a couple of weeks.

Valentin Radu: Yeah. That depends on the traffic and the products which are being sold as well.

Jakub Linowski: But it would be healthy to transparently collect that and make it a factor in the equation too. Because, as you’ve probably discovered yourself, there are things where you can make short-term gains at the cost of long-term gains, and vice versa. It would be good to see that in order to make a tighter prediction.

The best conversion rate optimization tactics are proven to be directly influenced by the metrics we are looking at.

  • Retention for lifetime value gives a direct insight into how much your customers enjoy, admire, and are interested in your products, and how well they will remember you as a brand.
  • Profitability increase over the customer lifetime is the metric that shows not only the value of a client throughout their relationship with your business but also the evolution of the value of their purchases.

The Anatomy of the Best Metric in the Industry

A graphic listing priority metrics for conversion rate optimization (CRO): visits to transaction, unique and returning visitors, and points of entry and exit.

Valentin Radu: Last question is regarding the KPIs. Do you think revenue per visitor is the best metric or conversion rate if you think at e-commerce conversion rate optimization?

Jakub Linowski: The best metric? That’s a philosophical question. There’s typically a tradeoff, right? A quick answer to what’s better than revenue is profit.

Valentin Radu: So, margin per visitor, let’s say.

Jakub Linowski: Right. But can you treat that as a silver bullet and say, ‘Okay, now drop all metrics and measure only that’? Well, profit probably gets fuzzier and more variable. Revenue is even more variable, because maybe you’re selling multiple products, right? I’m basically moving from the deep metric back to earlier on, closer to that first click.

What you gain when you move closer to the start of the funnel is potentially even more data. You might get 10,000 adds to cart versus 100 sales. And that is also valuable, because when you’re selling 100 cans of dog food, calculating profits might be a little trickier than working with something that leads to eventual profit, like adds to cart. It’s about making tradeoffs.

Valentin Radu: So, what you’re saying, if I’m getting it correctly, is that micro conversions should take into consideration all kinds of events, so you can’t simply say, ‘Track revenue per visitor; there’s no need to track other kinds of events in the experiments.’

That gives me the opportunity to ask you: how many events or micro-conversions should you track to gain a proper overview of an experiment?

Jakub Linowski: I think what I’m saying is that it’s probably healthy to keep multiple metrics in mind, and when they all work together, when the adds to cart are congruent with the sales, the sales with the revenue, and the revenue with the profit, that’s a healthy thing.

KPIs (Key Performance Indicators) are key metrics that help any business understand where its success is coming from or what changes are needed in order to build a bigger customer base and thus generate more revenue.

Choosing the best metric for measuring conversion rates in eCommerce can be tricky, but revenue-per-visitor and margin-per-visitor remain the most relevant in the industry.

Nevertheless, limiting yourself to a single metric narrows your perspective. To avoid this, a set of micro conversions should be tracked for each experiment in order to reach a more solid conclusion and a clearer overview of the entire A/B testing series.
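One way to operationalize the “congruent metrics” idea from the interview is to compute the lift of each funnel metric for a variant and check that they all move in the same direction. The numbers below are invented purely for illustration:

```python
# Funnel from the shallow metric (adds to cart) to the deep one (profit).
FUNNEL = ["add_to_cart", "sales", "revenue", "profit"]

variants = {
    "control":   {"add_to_cart": 10_000, "sales": 100,
                  "revenue": 5_000.0, "profit": 900.0},
    "treatment": {"add_to_cart": 11_200, "sales": 112,
                  "revenue": 5_600.0, "profit": 1_010.0},
}

def lifts(control, treatment, metrics):
    """Relative lift of each metric, treatment vs control."""
    return {m: treatment[m] / control[m] - 1 for m in metrics}

def congruent(lift_by_metric):
    """True when every metric moved in the same direction: a win you
    can trust more than a shallow-metric win with a profit drop."""
    signs = {lift > 0 for lift in lift_by_metric.values()}
    return len(signs) == 1

result = lifts(variants["control"], variants["treatment"], FUNNEL)
```

Here every metric lifts by roughly 12%, so `congruent(result)` holds; a variant that raised adds-to-cart while lowering profit would fail the check and flag the short-term-versus-long-term tradeoff Jakub warns about.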


Experimentation is about constant change, transformation, and improvement. Using the power of patterns in A/B testing for reuse and exploitation across projects and industries, with the purpose of improving the UI (user interface), provides even greater certainty in terms of winning more customers and retaining them for longer.

Infusing the process of experimentation with machine-learning capabilities will only increase the quality and quantity of much-needed customer data and make it ready to use in new growth tactics, in the process making the eCommerce industry more vibrant and challenging each day.


Frequently Asked Questions

How can small eCommerce businesses start implementing A/B testing effectively?

Small eCommerce businesses can start implementing A/B testing by focusing on simple, high-impact areas such as landing pages, product pages, and checkout flows. Tools like Optimizely and VWO can help conduct tests without extensive technical knowledge (Google Optimize, once a popular free option, was discontinued in 2023). Start with one variable at a time so you can clearly see which changes drive improvements, and use the insights to gradually optimize the entire site.

What are the common mistakes to avoid in A/B testing for UI improvements?

Common mistakes in A/B testing include testing too many variables at once, not running tests long enough to gather significant data, ignoring negative results, and failing to segment tests by audience type. Another frequent error is not having a clear hypothesis or objective for each test, which can lead to inconclusive results and wasted resources.
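On “not running tests long enough to gather significant data”: a standard two-proportion z-test (textbook statistics, not tied to any particular testing tool) is one way to judge significance before declaring a winner. The conversion counts below are invented for illustration:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value:
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 500/10,000 convert (5.0%); variant B: 570/10,000 (5.7%).
z, p = z_test(500, 10_000, 570, 10_000)
significant = p < 0.05
```

With these illustrative numbers the difference clears the conventional 0.05 threshold; with smaller samples the same 0.7-point lift would not, which is exactly why stopping a test early on a promising-looking delta is a mistake.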

How does A/B testing contribute to long-term eCommerce growth?

A/B testing contributes to long-term eCommerce growth by providing data-driven insights that help improve user experience, increase conversion rates, and enhance customer satisfaction. By continuously testing and optimizing different elements of a website, businesses can make informed decisions that lead to sustained improvements in key performance metrics such as average order value, customer lifetime value, and overall sales.