This is part of an ongoing series, focusing on how to use numbers to analyze and run your company more efficiently and effectively. You can find more at the contents page.
A/B testing (also known as split testing or bucket testing) has been around for some time now. However, it remains a constant topic at conferences and meetups. Since I have been hearing a lot about it, I thought it would be good to share some of what I learned about what works and which misconceptions I came across. But before we start, let me describe it briefly:
‘A/B testing’ is a way to show different options for part of the UI to two (or more) groups of users, and then monitor what they do as a result.
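To make this concrete, here is a minimal sketch (my own illustration, not tied to any particular tool) of how users might be split into those groups. Hashing a user ID together with an experiment name gives a deterministic assignment, so a returning visitor always sees the same variant:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user id together with the experiment name keeps the
    assignment stable across visits and independent between experiments.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-42", "signup-button"))
```

Deterministic hashing (rather than storing the assignment) is one common design choice; a cookie or server-side record works just as well.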
There are a lot of myths circulating around A/B testing, and I think it’s worth refuting those first to point you in the right direction.
Myth 1: It’s all about the color of a sign-up button
A very common misconception comes from the typical example: that it’s all about the color of the sign-up button. While this might be the most obvious and intuitive example, it is far from the only viable application. You can basically run every page through this sort of mechanism: pricing, message inbox, about, etc. The underlying goal is to find out what works for your users.
Myth 2: It doesn’t matter as long as I have only a placeholder page
Even on a single page, A/B testing makes a lot of sense. You can track basically all outgoing links and offer two different pages for your visitors to see. Following your visitors is essential to understanding what they were looking for and how they expected to get there.
Myth 3: It’s a SEM thing
Nothing could be further from the truth. SEM and A/B testing might work well together (running a certain campaign and backing it with A/B tests on the landing side), but they don’t necessarily have anything to do with each other. It might be wise, though, to run a campaign either entirely with or entirely without a simultaneous A/B test, in order to prevent the numbers from being skewed.
Myth 4: It’s only necessary in the beginning
Sure, in the beginning you are really fishing for every visitor to your site and need to make sure it gets traction quickly while reducing the cost of visitors leaving without converting. But even later on, it remains a great tool to figure out how your users think about the things you present to them.
So how do you do it right? Unfortunately, there is no one right way, but let me describe how some use it and how it made most sense to me.
Using it for a short period of time
As with most tests, the cases should be limited to a certain time period. This is not only to avoid running a lot of different versions of your website at all times, but more so to give you time to think about what you did and what you learned from it. It isn’t so much about doing this continuously; it’s a matter of running it for certain parts and then keeping those parts monitored at all times. Once the numbers start to fall behind expectations, you need to start thinking about new mechanisms.
So, the rule of thumb is: once you reach a statistically significant number of clicks, analyze the results, switch the winning page/feature on, but keep monitoring it.
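How do you know when the numbers are statistically significant? One common approach (a sketch, not the only valid method) is a two-proportion z-test on the conversion rates of the two variants, which needs nothing beyond the Python standard library. The counts below are invented for illustration:

```python
from math import sqrt, erf

def conversion_z_test(conv_a: int, total_a: int, conv_b: int, total_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 120/2400 conversions on A vs. 156/2400 on B.
z, p = conversion_z_test(conv_a=120, total_a=2400, conv_b=156, total_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```

A dedicated stats library would handle edge cases better; the point is that "statistically significant" is a concrete calculation, not a gut feeling.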
Combining it with Analytics
Well, this is what most tools already provide you. To gain a deeper sense of what your people are doing, digging into those stats more deeply might help as well. Let’s take the sign-up example: wouldn’t it be great if we could find out whether the red sign-up button eventually made more women sign up than men? For this, we need to tie the data from the split test to the data from the sign-up process (if gender is part of what you require during sign-up). So, if our site is targeted at young women, we would get the deeper insight that “red buttons make more women convert”, possibly refuting our previous conclusion that green buttons create more sign-ups, which was driven by the overwhelmingly male crowd we were attracting.
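As a toy illustration of tying the two data sets together, here is how segmented sign-up counts per variant might be computed. The records and field names are entirely hypothetical:

```python
from collections import defaultdict

# Hypothetical sign-up records, already joined with the variant each user saw.
signups = [
    {"variant": "red", "gender": "female"},
    {"variant": "red", "gender": "female"},
    {"variant": "red", "gender": "male"},
    {"variant": "green", "gender": "male"},
    {"variant": "green", "gender": "female"},
]

# Count sign-ups per (variant, gender) segment.
counts = defaultdict(int)
for record in signups:
    counts[(record["variant"], record["gender"])] += 1

for (variant, gender), n in sorted(counts.items()):
    print(f"{variant:>5} button, {gender:>6}: {n} sign-ups")
```

In practice you would also need per-segment impression counts to turn these into conversion rates, but the join between test data and sign-up data is the key step.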
Conversions can come from anywhere: from your typical AdWords campaign to your product being mentioned on TechCrunch or Mashable. Let’s face it: when you are starting off with a consumer product, finding your target audience seems like a daunting task. Sticking with our previous example, tracking which source led to the most sign-ups, and then looking at what those sites usually cover in their news, is vital to gaining a deeper understanding of your users.
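A minimal sketch of tracking which source led to the most sign-ups might look like this (the sources and visit events are invented for illustration):

```python
from collections import Counter

# Hypothetical visit log: (referral source, did the visit convert?)
visits = [
    ("techcrunch.com", True), ("techcrunch.com", False),
    ("adwords", True), ("adwords", False), ("adwords", False),
    ("mashable.com", True),
]

# Tally total visits and converted visits per referral source.
total = Counter(src for src, _ in visits)
converted = Counter(src for src, ok in visits if ok)

for src in total:
    rate = converted[src] / total[src]
    print(f"{src}: {converted[src]}/{total[src]} converted ({rate:.0%})")
```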
Combining it with AdWords
Campaigns are a great opportunity to test new things. Combining certain search keywords with certain pages might give you more insight into how you can attract new users than fumbling around with your existing ones. Just remember: start the campaign together with the test and end them together, to prevent skewing the numbers.