I recently had dinner with an entrepreneur in the very early stages of his venture. Although his site was only a couple of months old, it was showing considerable traction. During our meeting, I asked some more detailed questions about his growth and user base to get a feel for where he was heading. Unfortunately, he didn’t have any key metrics or statistical overview of what was happening, which motivated me to write a bit about what we as investors look for in numbers and what might help you in your decisions going forward.

Please remember: you might want to figure out and apply your very own metrics going forward as these are just examples!

1. Fundamentals: number of monthly/daily active users
DAU/MAU (daily/monthly active users) is usually used to measure the success of social games. However, it is just as well suited to measuring how often your users return and whether they are engaged. What is always interesting to see are changes over prior periods: compared to last month, the last 60 days, the last 90 days.

What to record: date/timestamp of sign-up, last sign-in
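As a sketch of what this looks like in practice, here is one way DAU, MAU, and their ratio could be computed from recorded sign-in timestamps; the log format and the numbers are made up for illustration:

```python
from datetime import date, timedelta

# Hypothetical sign-in log: one (user_id, date) pair per sign-in.
signins = [
    (1, date(2010, 9, 1)), (2, date(2010, 9, 1)),
    (1, date(2010, 9, 2)), (3, date(2010, 9, 15)),
]

def active_users(signins, start, end):
    """Distinct users with at least one sign-in between start and end (inclusive)."""
    return {uid for uid, d in signins if start <= d <= end}

today = date(2010, 9, 15)
dau = len(active_users(signins, today, today))                      # active today
mau = len(active_users(signins, today - timedelta(days=29), today))  # active in last 30 days
stickiness = dau / mau  # the classic DAU/MAU engagement ratio
```

Recording the raw timestamps (rather than pre-aggregated counts) is what makes the period comparisons mentioned above possible later on.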

2. Demographics: sex, age, location
While the first metric measured how often users use your service (at least once per day or month), you now need to learn more about who your users are. However you incentivize people to reveal their sex, age, and location, the data ultimately helps you understand which users you are dealing with.

What to record: sex, age, location (this could be sign-up location as well as user-entered)

3. Engagement: Page views, clicks
Engagement is what investors generally care about. If people come back, you want to make sure that you know what tasks you expect them to perform and that they actually perform them. For many social networks this was simply the number of page views. Only later, as they switched to more AJAX-heavy user interfaces, did other metrics replace it in favor of providing a better user experience.

What to record: length of activity/session, number of page views
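To illustrate, here is one way to derive session lengths from raw page-view timestamps. The 30-minute session timeout is a common analytics convention, not something prescribed here, and the timestamps are invented:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # common convention; an assumption here

# Hypothetical page-view log for one user, sorted by time.
views = [
    datetime(2010, 9, 1, 12, 0),
    datetime(2010, 9, 1, 12, 10),
    datetime(2010, 9, 1, 12, 25),
    datetime(2010, 9, 1, 15, 0),   # gap > 30 min starts a new session
]

def sessions(views):
    """Split a sorted list of view timestamps into sessions on timeout gaps."""
    out, current = [], [views[0]]
    for t in views[1:]:
        if t - current[-1] > SESSION_TIMEOUT:
            out.append(current)
            current = []
        current.append(t)
    out.append(current)
    return out

sess = sessions(views)
lengths = [(s[-1] - s[0]).total_seconds() / 60 for s in sess]  # minutes per session
# Here: 2 sessions of 25 and 0 minutes, with 3 and 1 page views respectively.
```

Page views per session then simply falls out as `len(s)` for each session `s`.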

4. Virality: viral coefficient, invites
Virality is another buzzword, one that has already motivated some people to write whole books about it. No matter how easy it is to understand, it is unusually hard to measure if you don’t start recording it early enough. Virality usually means that one user invites (at least) one more person, who then becomes a user of the site as well. The interesting part is how many users invite how many more, and how many of those convert. As this is clearly a step process, there is a lot to learn from the conversion from each step to the next. It also serves you well in showing potential investors that you have learned about your users and know how to motivate them and tweak your product.

What to record: number of invitations sent by people, number of successful sign-ups through referrals
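The step process above boils down to a single number, the viral coefficient K, the product of invites sent per user and the invite-to-sign-up conversion rate. A minimal sketch with made-up numbers:

```python
# Viral coefficient: K = i * c, where i is the average number of
# invitations sent per user and c the conversion rate of an
# invitation into a sign-up. All figures below are illustrative.
users = 1000
invitations_sent = 2500
signups_from_invites = 400

i = invitations_sent / users                  # 2.5 invites per user
c = signups_from_invites / invitations_sent   # 16% invite conversion
k = i * c                                     # each user brings in 0.4 more users

# K > 1 means self-sustaining (viral) growth; K < 1 means growth
# must also come from other channels.
```

Because K is a product of per-step conversions, recording each step separately tells you which step to tweak.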

One last comment on metrics in general: there is always a danger of recording data that is completely irrelevant and serves only as statistics porn. That said, consider recording more up front than you think you will need later.


UPDATE: An article on how fab.com raised $40M and saved itself a lot of trouble by sharing its dashboards with potential investors is a great example of how dashboards can spare you a lot of pain and time. It’s also a great example of which metrics to focus on in your business and pitch.


A question that came up during my session with Andree last weekend at BarCamp Munich was how effective accelerators like SeedCamp or Y Combinator are in helping startups receive funding.

Before digging into the numbers, I would first like to point out that receiving funding is not necessarily a good indicator of a startup’s success. And since the above-mentioned programs are very different (in both nature and geographical focus), a blunt comparison might not give each the credit it deserves.

I am still convinced that both do a terrific job creating great startups, and I highly recommend applying to get real mentorship and exposure to great, passionate people, and to accelerate your startup.

For the following analysis, I used publicly available data from TechCrunch’s CrunchBase.

It should be noted that there is a minor margin of error, as some Y Combinator companies weren’t showing up on CrunchBase and others went through funding rounds without disclosing the amounts raised.

SeedCamp
Established in London about 3 years ago, SeedCamp focuses primarily on the UK and Europe. Its primary event, SeedWeek, happens once a year in London. Given its roots and geographical focus, it is not surprising that the companies coming out of the program are primarily from the UK. But the program’s outreach into other areas has created a fairly diverse roster, with startups from 11 different countries.

Y Combinator
Y Combinator has been around for 5 years. Unlike SeedCamp, Y Combinator doesn’t have fixed application deadlines. However, it hosts its 3-month mentoring program twice a year. The first data points available for my analysis are from the June 2005 intake. Y Combinator is known as an American program, and hence the startups applying are primarily from the US (with some minor exceptions, like the UK).

Due to the aforementioned differences, it should come as no surprise that the two programs produce very different output in terms of the startups going through them.

Hence the total amount raised by SeedCamp companies is around €9.6M, whereas the corresponding number for Y Combinator is about $135.4M. To determine the spread of these rounds, I looked at the averages and standard deviations:

What might matter more is how these two programs differ in their performance. Since Y Combinator has been around longer, one could hypothesize that these large amounts are due to sheer numbers. So let’s look at averages by plotting how many startups of each class were funded, grouped by class.

As is pretty evident from this chart, SeedCamp started out far more successfully by this metric. But Y Combinator, too, shows some swings. The most recent classes should be disregarded in both cases, as they are still quite young (SeedCamp’s class usually starts in September and Y Combinator’s in June).

As we will soon see, these numbers are also quite inconclusive, as the spread is too big to jump to any conclusions. Going through the individual rounds and plotting them in a bubble chart provides a more visual approach and more helpful insight into the question above.

The X-axis is the program these startups went through, whereas the Y-axis denotes the days between the start of the program and the first funding. I did not count the amount these startups received from the respective programs to get through the mentoring months. However, if a rather large amount was provided by, e.g., Y Combinator, I counted it as a first funding round.

Again, as these graphs show, both programs do a great job bringing out some fine companies. And Y Combinator’s quick flips (Reddit, Zenter, GraffitiGeo, and Omnisio were all acquired within 6 months of going through the program) show that Y Combinator excels at accelerating companies toward acquisition.

Although I have owned an iPad for only four weeks now, I have already experienced a couple of changes in my reading behavior that I find really interesting and worth sharing.

More means more
I usually carry all of my ebooks on my iPad, and I find it very comforting to know that carrying an additional book in my bag costs me exactly nothing in terms of size and weight. I no longer have to choose before going on a trip, or realize that I will probably finish my book before getting home and should have picked another from my shelf to bring along.

More resolute
If I start a book now that I don’t really enjoy, I have become more resolute about closing it and probably never looking at it again. In the past, whenever I found myself reading a chapter I did not enjoy, I either worked through it or just skipped it. Either way, because I usually carried only one book in my bag on business trips, I felt more inclined to work through that book. Not anymore. The iPad makes it simple to carry all sorts of books with me at the same time, making it easy to match my mood.

On-screen reading
I remember vividly how people despised having to read on a screen with 1024 × 768 pixels. I never quite got that argument, since I have probably been reading more on screen than in books ever since going through a computer science program at university. And that volume hasn’t decreased with emails and PDFs at work. I have simply gotten so used to it that I don’t notice any disturbing side effects. And, by the way, the resolution is perfectly sufficient for my taste.

And PDFs…
Being able to browse through presentations, legal documents or company filings on a larger screen than my iPhone is a relief. No more zooming (or pinching). And, again, I can carry as much as I want, receiving even more on the way due to the 3G capabilities of my iPad.

iBooks is currently my killer app, and I am eagerly awaiting an app that lets me annotate PDFs. The iPad has been such a smooth and welcoming experience that I don’t want to miss it anymore. And having tested a few of the new devices back at IFA, I must admit that I don’t see anyone matching this user experience yet.

As you might have heard, fund investors put money into a VC fund because they expect above-average returns. The very best VCs return well above 20x on their funds. However, the larger the fund, the more its managers need to focus on really high-growth startups. Hence, it becomes more difficult for fund managers to make the right decisions, since only a very small number of startups actually qualify.

Josh Kopelman has a nice post on his blog that pretty much summarizes my hypothesis. But what does that mean for you as a startup? Basically, it means that you should be more selective about whom you pitch to.

The VC Selection Criteria
Let’s run the numbers for the purpose of illustration. Assume a fund has a size of $100 million and runs for 10 years. Should the fund managers expect to return 10% annually, they need to return 100 × (1.10)^10 = $259 million, or 2.59x. Hence, the managers would need to return a total of at least $259 million by the end of the fund’s lifetime.
During its lifetime, the managers plan to make 25 investments, which puts the average investment size at $4 million. Since the failure rate for early-stage startups is pretty high, let’s assume that only 3 startups actually become stars, meaning they greatly outperform. These three must then generate all of the fund’s returns, or $86.3 million each on average.
In an early-stage fund, VCs tend to take between 20% and 40%; let’s assume the managers take 30% in each of the three investments above. In order to return the $86.3 million, each of these startups needs to reach an exit value of $288 million.

However, should only 2 investments materialize, each one needs to return $129.5 million; at a 30% stake, this means each needs to reach an exit value of roughly $431 million.
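The arithmetic above can be sanity-checked in a few lines; small differences to the quoted figures come from rounding the $259 million total first:

```python
# Re-running the fund math: what exit value must each "star" reach?
fund_size = 100.0        # $M
annual_return = 0.10     # 10% expected annual return
years = 10
stake = 0.30             # assumed VC ownership in each winner

required_total = fund_size * (1 + annual_return) ** years  # ~$259M

def required_exit_value(winners):
    """Exit value per winner if the winners alone must carry the whole fund."""
    per_winner = required_total / winners
    return per_winner / stake

with_three_stars = required_exit_value(3)  # ~$288M each
with_two_stars = required_exit_value(2)    # ~$432M each
```

The sensitivity is the point: dropping from three winners to two pushes the required exit value up by half.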

History Shows
So, we have two categories: exit values of $288 million and $431 million. Let’s look at the real world again: during the past 6 years, the number of exits in both categories has shrunk, as the following table shows:

What this means for VCs
All of this fuels the sentiment that the VC landscape will change: larger funds will have more trouble making their investments, as they need to hunt for bigger and bigger deals, compete over the few big buyers (the only ones able to spend that much on a startup) for the largest returns, and ultimately deliver on the promises made to their investors. Smaller funds, however, are much more likely to generate their returns, as it will be far easier for them to make a sound investment and sell it for a reasonable price.

The Startup Perspective
So, instead of just sending your business plan to email-overloaded VCs and running the risk of becoming the latest rumor, it is probably wise to gather some basic information on the firm you are sending it to. And make sure you are raising the right amount of money. I am pretty convinced that the times of high burn rates and ever-larger rounds are long gone. So run your numbers and analyze how much you really need.
But, in general, it is going to be far more important to determine the true value of a VC: integrity, network, knowledge. Ultimately, the question will always be: do I want this person on my board? And if so, what value will they bring to the table? Nevertheless, some metrics will prevail (e.g. when the fund will need to exit).

After all, there are many more reasons why smaller funds are likely to be the future of venture capital. Larry Cheng has a (slightly provocative) post summarizing those aspects. From my perspective, this is neither good nor bad for the industry, but it might indicate where we are heading: a few really huge funds, while the vast majority remain small and focused.

This is part of an ongoing series, focusing on how to use numbers to analyze and run your company more efficiently and effectively. You can find more at the contents page.

When you are running a site/app/service that is tracking individual users either through sign-ups, downloads or some other mechanism, you are most certainly aware of all the regular metrics: active users, retention, viral coefficient, etc. A really good way to measure and gain insight into your business is cohort analysis.

The traditional cohort definition is pretty self-explanatory:

A type of multiple cross-sectional design where the population of interest is a cohort whose members have all experienced the same event in the same time period (eg birth).


When you replace birth with sign-up, for example, you get to the point where a cohort makes sense for you.

Here’s what you want to get out of such an analysis:

  • Analyze and present user behavior by showing similar behavior over different time periods.
  • Explain how your current user base (for the given criteria) is made up of user groups from previous time periods.
  • Explain what percentage of your revenue you expect to generate from the sign-ups you acquired a couple of periods ago.
  • Become more predictable in both user growth and revenue, easing the growth process and thus opening the door to planning.


What you want to prove is that you can predict future behavior by showing that users act the same way over different periods. In other words, what you want to show is a pattern.

Presentation (Example)
This sort of analysis is best presented in a graph as shown below:

What you can see immediately is that the area on the right (Period 5) stacks up our current status with users from Periods 1 to 4. The really interesting piece of the puzzle comes into play when you consider what exactly your users represent: active users, subscribers, etc. So here is what we can infer from the chart:

  • The height of the chart at Period 5 (at 280) is the number of users currently using (or paying for) our system/app.
  • The individual stacks each show a drop-off. As we can see, the drop-off is high in the beginning and then starts to level out, but does not go down to zero. Since this is homogeneous across all periods, we can infer that we are doing something right: user behavior becomes predictable.
  • In each of Periods 1 to 4, new users signed up, and the users from Period 1 now make up 17.8% (50 out of 280) of the users in Period 5.
  • The fall-off of users from one period to the next is highest in the earliest periods, leveling out at about 25% of the original sign-ups after 3 periods.
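As a sketch, the chart’s logic can be reproduced from raw counts. The numbers below are invented, but chosen to roughly match the figures quoted above (50 of 280 users from Period 1, retention leveling out near 25%):

```python
from collections import defaultdict

# Hypothetical observations: (signup_period, observed_period, active_users).
# In practice these counts come from joining sign-up dates with activity logs.
data = [
    (1, 1, 280), (1, 2, 150), (1, 3, 90), (1, 4, 70), (1, 5, 50),
    (2, 2, 300), (2, 3, 160), (2, 4, 100), (2, 5, 75),
]

# Build the classic cohort table: cohorts[c][age] = users still active
# "age" periods after signing up in period c.
cohorts = defaultdict(dict)
for signup, observed, active in data:
    cohorts[signup][observed - signup] = active

for signup, by_age in sorted(cohorts.items()):
    start = by_age[0]  # cohort size at sign-up
    retention = [round(active / start, 2) for age, active in sorted(by_age.items())]
    print(f"cohort {signup}: {retention}")
```

Laying cohorts side by side like this is exactly what makes the homogeneous drop-off (and hence predictability) visible.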

Cohort analysis offers great insight into what’s happening in your user base. It certainly takes a lot of time to collect and digest the data, especially once your site becomes more popular. But it is well worth the effort, as you gain insight into your users’ behavior and the development of your site/app. After all, cohort analysis is not overly complex, and it is easy to grasp graphically.

UPDATE: LSVP posted an interesting article on this as well.

By the end of 2008, I was bothered by a question: do people play more games in economic downturns? As I used this material for my own purposes at work, it is only now that I feel comfortable releasing parts of it to the public.

Let’s look at two major players to answer this question: Nintendo (with its Wii, DS, GameBoy, GameBoy Advance, and GameBoy Color) and Sony (with its PS, PS2, PS3, and PSP).


When looking at the changes in sales QoQ (quarter over quarter), you can easily see that while there is still a lot of growth happening, the initial spikes indicate that people catch up fast in the beginning, rushing to get their hands on the new hardware.

What’s somewhat expected is the high correlation between hardware units and software units sold. Only in the first quarter of 2008 do you see an indication of this correlation breaking up.

What’s somewhat surprising though are the spikes at the beginning of the current recession: shortly after the official date of the current economic downturn, hardware and software units sales went up for both the Wii and the DS.


Don’t be fooled by the graphs at first: the scale is different from the previous slide to capture the tremendous falloff in the first couple of quarters after launching a new platform.

What’s interesting is that Sony’s sales seem pretty stable even during economic uncertainty. And what’s even more surprising is that Sony launched both the PS2 and PS3 in bear markets. Whether that was Sony’s strategy or just coincidence remains a mystery. But looking at how the different consoles performed in terms of sales, I think it’s reasonable to call this a success.

Games Per Console (GPC)

This is the ratio that deserves most of our attention in answering the question at hand. The hypothesis was: do people play more when the economy turns bad? To play more, you basically have two options: you either play existing games longer or more often, or you play more new games. The latter implies buying more games, which should be captured by both Nintendo’s and Sony’s numbers, since these are closed platforms tightly monitored by their makers. Until very recently, when Sony opened up its PS platform for direct downloads, you could only buy software for these pieces of hardware. Hence, the data here is not much influenced by this latest feature.

GPC: Sony Vs. Nintendo

Interestingly enough, during bear markets GPC stays constant for most of the consoles. This suggests that hardcore gamers and people playing a GameBoy/Advance/Color do not buy more games.

The Wii, however, shows some pretty interesting moves. It first goes down, then starts to rise until it hits 8 games per console. So how do we account for the initial move downwards? It comes down to the console’s appeal resting on one very specific software title. My best guess would be Wii Fit, or the sports games that shipped with each console.

But the fact that the Wii was designed for casual gamers gives us a clue about how the behavior of different demographic groups relates in downturns. As people get more used to the console, or tired of the existing games, they start buying more games, which shows up in the graph going up. I guess this can be regarded as an indication of a certain demographic group playing more games.


I guess it would be a bit far-fetched to claim from this data that people in general play more games when faced with economic uncertainty. The reason more recent data on these two players would not help is the rise of mobile gaming platforms, driven by more capable and powerful handsets like the iPhone and Android devices.

This is part of an ongoing series, focusing on how to use numbers to analyze and run your company more efficiently and effectively. You can find more at the contents page.

A/B testing (also called split or bucket testing) has been around for some time now. However, it remains a constant topic at conferences and meetups. Since I have been hearing a lot about it, I thought it would be good to share some of what I have learned about what works and which misconceptions I have come across. But before we start, let me describe it briefly:

‘A-B testing’ is a way to show different options for part of the UI to two (or more) groups of users, and then monitor what they do as a result.

Mat Hampson


There are a lot of myths circulating around A/B testing, and I think it’s right to refute those first to point you in the right direction.

Myth 1: It’s all about the color of a sign-up button

A very common misconception comes from the typical example: that it’s all about the color of the sign-up button. While this might be the most obvious and intuitive example, it is by far not the only viable application. You can basically run every page through this sort of mechanism: pricing, message inbox, about, etc. The underlying reasoning is to find out what works for your users.

Myth 2: It doesn’t matter as long as I have only a placeholder page

Even on a single page, A/B testing makes a lot of sense. You can track basically all outgoing links and offer two different pages to your visitors. Following your visitors is essential to understanding what they were looking for and how they thought they would get there.

Myth 3: It’s a SEM thing

Nothing could be further from the truth. SEM and A/B testing might work well together (running a certain campaign and pairing it with A/B tests on the landing side), but they don’t necessarily have anything to do with each other. It might be wise, though, to run a campaign either entirely with or entirely without a simultaneous A/B test, in order to prevent the numbers from being skewed.

Myth 4: It’s only necessary in the beginning

Sure, in the beginning you are fishing for every visitor to your site and need to make sure it gains traction quickly while reducing the cost of visitors leaving without converting. But even later on, it remains a great tool for figuring out how your users think about what you present to them.


So how do you do it right? Unfortunately, there is no one right way, but let me describe how some people use it and how it made the most sense to me.

Using it for a short period of time

As with most tests, the cases should be limited to a certain time period. This is not so much to ensure you are running a lot of different versions of your website at all times, but rather to give you time to think about what you did and what you learned from it. It isn’t about doing this continuously; it’s a matter of running it for certain parts and then monitoring those parts at all times. Once the numbers start to fall behind expectations, you need to start thinking about new mechanisms.

So, the rule of thumb is: after you reach a statistically relevant number of clicks, analyze the results, switch one page/feature or the other on, but keep monitoring it.
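What “statistically relevant” means can be checked with a standard two-proportion z-test. This sketch uses only the standard library; the traffic numbers are invented for illustration:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 120 sign-ups out of 2000 views; variant B: 160 out of 2000.
z, p = z_test(120, 2000, 160, 2000)
significant = p < 0.05  # the usual (and somewhat arbitrary) threshold
```

If `significant` is still false, keep the test running rather than declaring a winner early; peeking at interim results inflates false positives.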

Combining it with Analytics

Well, this is what most tools already provide. To gain a deeper sense of what your users are doing, slicing those stats further might help as well. Let’s take the sign-up example: wouldn’t it be great to find out whether the red sign-up button eventually made more women sign up than men? For this, we need to tie the data from the split test to the data from the sign-up process (if gender is part of what you require during sign-up). So, if our site is targeted at young women, we would gain the deeper insight that “red buttons make more women convert,” perhaps defying our previous conclusion that green buttons create more sign-ups, which was merely due to the overwhelmingly male crowd we were attracting.

Tracking conversion

Conversion comes from everywhere: from your typical AdWords campaign to your product being mentioned on TechCrunch or Mashable. Let’s face it: when you are starting off with a consumer product, finding your target audience seems like a daunting task. Sticking with our previous example, tracking which source led to the most sign-ups, and then looking at what those sites usually cover, is vital to gaining a deeper understanding of your users.

Combining it with AdWords

Campaigns are a great opportunity to test new things. Combining certain search keywords with certain pages might give you more insight into how to attract new users than fumbling around with your existing ones. Just remember: run the campaign from beginning to end with the test, and end the campaign together with the test, to prevent skewing the numbers.