
This is part of an ongoing series, focusing on how to use numbers to analyze and run your company more efficiently and effectively. You can find more at the contents page.

When you are running a site, app, or service that tracks individual users, whether through sign-ups, downloads, or some other mechanism, you are almost certainly aware of all the regular metrics: active users, retention, viral coefficient, etc. A really good way to measure and gain insight into your business is cohort analysis.

The traditional cohort definition is pretty self-explanatory:

A type of multiple cross-sectional design where the population of interest is a cohort whose members have all experienced the same event in the same time period (eg birth).


Replace birth with sign-ups, for example, and the concept of a cohort starts to make sense for your business.

Here’s what you want to get out of such an analysis:

  • Analyze and present user behavior by showing similar behavior over different time periods.
  • Explain how your current user base (for the given criteria) is made up of user groups from previous time periods.
  • Explain what percentage of your revenue you expect to generate from the sign-ups you generated a couple of periods ago.
  • Become more predictable in both user growth and revenue, easing the growth process and thus opening the door to planning.


What you want to prove is that you can predict future behavior by showing that users act similarly across different periods; in other words, what you want to show is a pattern.

Presentation (Example)
This sort of analysis is best presented in a graph as shown below:

What you can see immediately is that the area on the right (Period 5) stacks up our current status with users from Period 1 to Period 4. The really interesting piece of the puzzle comes into play when you are considering what exactly your users represent: active, subscribers, etc. So here is what we can infer from the chart:

  • The height of the chart at Period 5 (at 280) is the number of users currently using (or paying for) our system/app.
  • The individual stacks have a drop-off. As we can see, the drop-off is high in the beginning and then starts to level out but does not go down to zero. Since this is homogeneous across all periods, we can infer that there is something we are doing right: user behavior becomes predictable.
  • In each of Periods 1 to 4, new users were signing up; the users remaining from Period 1 make up 17.8% (50 out of 280) of the users in Period 5.
  • The fall-off of users from one period to the next is highest early on, leveling out at about 25% of the original sign-ups after three periods.
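The numbers behind such a stacked chart can be computed from per-cohort activity counts. Here is a minimal Python sketch; the cohort figures are made up for illustration and are not the exact values from the chart described above:

```python
# Hypothetical retention data: cohorts[p] holds, for the cohort that signed
# up in period p, how many of those users were still active in each later
# period. The numbers are illustrative, not from a real data set.
cohorts = {
    1: [200, 120, 70, 55, 50],   # Period 1 cohort, tracked through Period 5
    2: [180, 110, 68, 52],       # Period 2 cohort, tracked through Period 5
    3: [190, 115, 72],
    4: [175, 105],
    5: [160],
}

# Height of the stacked chart at the current period = sum of the last
# observation of every cohort.
current_total = sum(track[-1] for track in cohorts.values())
print(f"Active users in the current period: {current_total}")

# Share of the current base contributed by each cohort.
for period, track in sorted(cohorts.items()):
    share = track[-1] / current_total * 100
    print(f"Cohort {period}: {track[-1]} users remain ({share:.1f}% of current base)")

# Retention relative to each cohort's original size.
for period, track in sorted(cohorts.items()):
    retained = track[-1] / track[0] * 100
    print(f"Cohort {period}: {retained:.1f}% of original sign-ups still active")
```

With these made-up numbers, the Period 1 cohort levels out at 25% of its original sign-ups, matching the kind of pattern described above.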

Cohort analysis offers great insights into what is happening in your user base. Collecting and digesting the data certainly takes time, especially once your site becomes more popular, but it is well worth it for the insight you gain into your users' behavior and the development of your site/app. After all, cohort analysis is not overly complex, and it is easy to grasp graphically.

UPDATE: LSVP posted an interesting article on this as well.



A/B testing (also called split- or bucket-testing) has been around for some time now. However, it remains a constant topic at conferences and meetups. Since I have been hearing a lot about it, I thought it would be good to share some of what I have learned about what works and which misconceptions I came across. But before we start, let me describe it in short:

"A/B testing" is a way to show different options for part of the UI to two (or more) groups of users, and then monitor what they do as a result.

Mat Hampson
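Mechanically, splitting visitors into groups is often done with a deterministic hash of the user id, so the same user always sees the same variant. A minimal sketch, assuming a string user id and an experiment name (both names here are hypothetical):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant bucket.

    Hashing the user id together with the experiment name means each user
    always sees the same variant, and different experiments split the
    audience independently of each other.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: the same user always lands in the same bucket for one experiment.
print(assign_variant("user-42", "signup-button-color"))
```

Because assignment is a pure function of the inputs, no per-user state needs to be stored to keep the experience consistent across visits.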


There are a lot of myths circulating around A/B testing, and I think it is right to refute those first to point you in the right direction.

Myth 1: It’s all about the color of a sign-up button

A very common misconception stems from the typical example: that it is all about the color of the sign-up button. While this might be the most obvious and intuitive example, it is by no means the only viable application. You can basically run every page through this sort of mechanism: pricing, message inbox, about, etc. The underlying reasoning is to find out what works for your users.

Myth 2: It doesn’t matter as long as I have only a placeholder page

Even on a single page, A/B testing makes a lot of sense. You can track basically all outgoing links and offer two different pages for your visitors to see. Following your visitors is essential to understanding what they were looking for and how they expected to get there.

Myth 3: It’s an SEM thing

Nothing could be further from the truth. SEM and A/B testing may work well together (running a certain campaign and pairing it with A/B tests on the landing side), but they don’t necessarily have anything to do with each other. It may be wise, though, to run a campaign either entirely with or entirely without a concurrent A/B test, in order to prevent the numbers from being skewed.

Myth 4: It’s only necessary in the beginning

Sure, in the beginning you are fishing for every visitor to your site and need to make sure it gains traction quickly while reducing the cost of visitors leaving without converting. But even later on, it remains a great tool to figure out how your users think about the things you present to them.


So how do you do it right? Unfortunately, there is no single right way, but let me describe how some people use it and what made the most sense to me.

Using it for a short period of time

As with most tests, the cases should be limited to a certain time period. This is not only to avoid running a lot of different versions of your website at all times, but more so to give you time to think about what you did and what you learned from it. It isn’t so much about doing this continuously; rather, it’s a matter of running it for certain parts and then monitoring those parts at all times. Once the numbers start to fall behind expectations, you need to start thinking about new mechanisms.

So, the rule of thumb is: after you reach a statistically relevant number of clicks, analyze the results, switch one or the other page/feature on, but keep monitoring it.
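One common way to check whether a difference in conversion rates is statistically relevant is a two-proportion z-test. A minimal sketch with made-up conversion counts (2,400 visitors per variant is an assumption for illustration):

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart are the
    conversion rates of variants A and B?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: variant A converted 120 of 2400 visitors,
# variant B converted 156 of 2400.
z = z_test_two_proportions(120, 2400, 156, 2400)
print(f"z = {z:.2f}")  # |z| > 1.96 roughly corresponds to 95% confidence
```

If |z| stays below the threshold, the honest conclusion is to keep the test running rather than declare a winner.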

Combining it with Analytics

Well, this is what most tools already provide. To gain a deeper sense of what your users are doing, digging further into those stats can help as well. Let’s take the sign-up example: wouldn’t it be great if we could find out whether the red sign-up button eventually made more women sign up than men? For this, we need to tie the data from the split test to the data from the sign-up process (if gender is part of what you require during sign-up). So, if our site is targeted at young women, we would gain the deeper insight that “red buttons make more women convert,” perhaps refuting our previous conclusion that green buttons create more sign-ups, which was due to the overwhelmingly male crowd we were attracting.
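Joining split-test data with sign-up data is essentially a group-and-count over the merged records. A minimal Python sketch; the record layout and field names are made up for illustration:

```python
from collections import Counter

# Hypothetical merged records: the split-test variant each user saw,
# joined with the gender field collected during sign-up.
signups = [
    {"variant": "red",   "gender": "f"},
    {"variant": "red",   "gender": "f"},
    {"variant": "red",   "gender": "m"},
    {"variant": "green", "gender": "m"},
    {"variant": "green", "gender": "m"},
    {"variant": "green", "gender": "f"},
]

# Count sign-ups per (variant, gender) segment.
by_segment = Counter((s["variant"], s["gender"]) for s in signups)
for (variant, gender), count in sorted(by_segment.items()):
    print(f"{variant} button, {gender}: {count} sign-ups")
```

In a real setup you would also divide by the number of visitors per segment to compare conversion rates rather than raw counts.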

Tracking conversion

Conversion can come from anywhere: a typical AdWords campaign, or your product being mentioned on TechCrunch or Mashable. Let’s face it: when you are starting off with a consumer product, finding your target audience seems like a daunting task. Sticking with our previous example, tracking which source led to the most sign-ups, and then looking at what those sites usually cover in their news, is vital to gaining a deeper understanding of your users.
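Tracking sign-ups per source boils down to tagging each conversion with its referrer (for example, from a UTM parameter) and comparing rates. A minimal sketch with hypothetical sources and visit counts:

```python
from collections import Counter

# Hypothetical conversion log: each sign-up tagged with the source that
# brought the visitor in. Sources and counts are made up for illustration.
conversions = [
    {"user": "u1", "source": "adwords"},
    {"user": "u2", "source": "techcrunch"},
    {"user": "u3", "source": "techcrunch"},
    {"user": "u4", "source": "mashable"},
    {"user": "u5", "source": "adwords"},
    {"user": "u6", "source": "techcrunch"},
]

# Assumed total visits per source over the same period.
visits = {"adwords": 400, "techcrunch": 1500, "mashable": 300}

by_source = Counter(c["source"] for c in conversions)
for source, count in by_source.most_common():
    rate = count / visits[source] * 100
    print(f"{source}: {count} sign-ups from {visits[source]} visits ({rate:.2f}%)")
```

Note how the source with the most raw sign-ups is not necessarily the one with the best conversion rate; both views matter when deciding where to spend.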

Combining it with AdWords

Campaigns are a great opportunity to test new things. Combining certain search keywords with certain pages might give you more insight into how you can attract new users than fumbling around with your existing ones. Just remember: run the test for the campaign’s entire duration, from beginning to end, to prevent skewing the numbers.