tl;dr: Investors and product managers can lose time and money on bad decisions when their DAU/MAU data seems to tell them they have a killer product. I’ll show you how not to be fooled by stickiness so that you make better decisions.

What you’re seeing is not what’s happening

After launching a product, few things are more prized than proving its stickiness. Product stickiness is the gateway to growth. Makes sense — if the product can keep pulling back users, the business can bring in more investors and confidently ramp up acquisition.

That’s why it concerns me how many people are misled by data into making bad business decisions.

This article is a pretty detailed look at a specific metric and how not to be fooled by it, so a warning to the reader: anyone not interested in product engagement and user growth may find it drier than the standard “one KPI everyone should know” article. This isn’t one of those articles.

“Our DAU/MAU is better than Twitter”

Big decisions in and around startups often pivot around the value of the DAU/MAU ratio — the most popular metric for product stickiness or engagement.

In plain English, DAU/MAU is the number of unique users active on a day, divided by the number of unique users active over the trailing month:

How DAU/MAU is calculated

DAU/MAU is typically interpreted as the average frequency with which users use (and hopefully get value from) the product, i.e. more is better. If you invest time or money into a product, one way or another you should know whether it’s doing well in this regard.
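As a sketch of the arithmetic — with a made-up event log, where each entry is a (user, day) pair recording that the user performed the core action that day — DAU counts unique users on one day and MAU counts unique users in the trailing 28 days:

```python
from datetime import date

# Toy event log of (user_id, day the core action happened) — illustrative only.
events = [
    ("alice", date(2024, 1, 1)), ("alice", date(2024, 1, 2)),
    ("bob",   date(2024, 1, 1)),
    ("carol", date(2024, 1, 15)),
]

def dau(events, day):
    """Unique users active on the given day."""
    return len({user for user, d in events if d == day})

def mau(events, end, window=28):
    """Unique users active in the trailing `window` days ending on `end`."""
    return len({user for user, d in events if 0 <= (end - d).days < window})

# Jan 1: alice and bob are active (DAU = 2); all three users appear
# in the trailing month (MAU = 3), so the ratio is 2/3.
stickiness = dau(events, date(2024, 1, 1)) / mau(events, date(2024, 1, 28))
```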

  • As a VC facing investment decisions, you might see DAU/MAU in pitch decks or board reports next to reassuring claims that users just can’t get enough of the product
  • As a VP Product facing roadmap prioritisation decisions, you might use DAU/MAU to help choose between creating something new vs improving what you already have
  • As a CEO or CMO facing marketing budget decisions, you might use DAU/MAU to help decide whether it’s time to ramp up acquisition or whether the product still needs improvement

It’s no surprise then that many businesses rely heavily on DAU/MAU, and enthusiastically seize opportunities to make claims about their product’s game-changing stickiness. Likewise, investors challenge founders to prove they have a product worth shouting about.

A high DAU/MAU is worth shouting about

You sometimes see it presented as a single figure or just implied with average DAU and MAU figures:

Stickiness in numbers

A more visual way uses the daily trending figure: the number of active users each day divided by the number of unique users active in the trailing period, typically 28 days or a calendar month.
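A minimal sketch of that trailing calculation, assuming an event log of (user_id, day) pairs (names are illustrative):

```python
from datetime import date, timedelta

def daily_stickiness(events, start, end, window=28):
    """For each day in [start, end], DAU divided by the MAU of the
    trailing `window` days. `events` is a list of (user_id, day) pairs."""
    out = {}
    day = start
    while day <= end:
        dau = len({u for u, d in events if d == day})
        mau = len({u for u, d in events if 0 <= (day - d).days < window})
        out[day] = dau / mau if mau else 0.0
        day += timedelta(days=1)
    return out
```

Plotting the returned series day by day gives exactly the kind of trend chart described here.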

Here’s a chart showing DAU/MAU in a format I’ve seen a few times:

“We’re stickier than Twitter!”

Looks great, right? Consistently good DAU/MAU of around 34% — higher than Twitter and seemingly a genuine indicator of user behaviour. Maybe you’d be persuaded to invest or to spend more on acquisition. You might even see something like this and be grateful for such clear, compelling evidence to support your decision. Except, you’re probably wrong and you should not be grateful.

Looking at DAU/MAU in this aggregated way is a mistake I’ve seen in practice, presented very much like the chart above. As a result I’ve seen teams operate for too long under the illusion that their users were using the product with high frequency, with a couple of consequences for the business:

  • Prematurely shifting product development onto incremental tweaks to nudge up DAU/MAU.
  • Prematurely ramping up marketing budget, with confidence that the new cohorts would show the same level of engagement.

Getting these decisions wrong was painful. Product tweaks didn’t change behaviour enough, and DAU/MAU didn’t hold up as marketing scaled. This is the lesson I learned:

Aggregated DAU/MAU might be good for slideshows, but it’s a bad way to measure user behaviour and product performance.

The chart above is illustrative (see the linked spreadsheet of dummy data), but — having seen enough examples — it’s very much representative of how the data can fool you. Here’s the real picture behind the data you can see in the chart above:

  • Most of the users use the product on less than 20% of the days
  • No users use the product on 36% of the days
  • 36% frequency is too low for the users to get long-term value from this product

Here’s how performance looks when you segment the data:

You can see segment definitions below and more detailed day-by-day product usage data in the spreadsheet.
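To see how the aggregate can mislead, here is a small simulation with invented users (not the article’s spreadsheet data): two users active every day plus eight users active on only 2 of 28 days still produce a respectable-looking blended ratio:

```python
from datetime import date, timedelta

days = [date(2024, 1, 1) + timedelta(days=i) for i in range(28)]

# Invented usage pattern: 2 daily "power" users, and 8 "fleeting" users
# who each show up on just 2 of the 28 days.
events = [(f"power{i}", d) for i in range(2) for d in days]
events += [(f"fleet{i}", d) for i in range(8) for d in days[i:i + 2]]

mau = len({u for u, _ in events})                       # 10 users
avg_dau = sum(len({u for u, d in events if d == day})
              for day in days) / len(days)              # 72 user-days / 28
aggregate = avg_dau / mau                               # ~0.26
```

A blended ratio of roughly 26% looks healthy, yet 8 of the 10 users were active on only about 7% of the days.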

How not to be fooled

Meaningful data is always the answer to a good question — even if you got lucky and hadn’t made the question explicit beforehand (and if that is the case, it’s still worthwhile making the question explicit). The corollary is that good questions are a defence against being fooled by data. Here are the questions to ask so that you get meaningful data about user engagement from DAU/MAU:

  1. What action(s) deliver value to the user? 
    Nothing else counts when counting active users. From Amplitude’s great article: “Take the vanity out of activity…and figure out how often users are getting to the core value of your app.”
  2. What’s the minimum frequency at which a user must perform the action(s) to really get value from the product, and for the product to become indispensable to them?
    This defines the power user segment. Some people use “power users” to mean people with insanely high usage stats, but for this analysis I find this definition more meaningful.
  3. What positive experience (“Aha” moment) indicates that a user has understood the core value of the product?
    These are the activated users, until they become power users.
  4. To start with, everyone else can be grouped into one segment
    Call these the fleeting users. You can refine and expand the segmentation later
  5. What is the mix of segments (% of total users), and what is the DAU/MAU of each one?
    This is a meaningful starting point for understanding user behaviour and making decisions

Imagine you saw the (misleading) aggregated chart above and had to make a decision based on the trending value of 34%. Then afterwards you saw this way of looking at the exact same data:

Segmented user data wins

What would you instinctively think? Perhaps…

  • I didn’t realise so many users use the product with such low frequency
  • I didn’t realise so few users actually use the product with the frequency we need to see
  • I wouldn’t have made the same decision if I’d seen it this way

A couple of questions also come to mind:

  • What’s actually a good DAU/MAU for this product?…
  • …and how many of our users are engaged at that level?

These questions cut through the aggregated stats, help you understand what’s actually happening and help you make better investment and product management decisions. They’re also a complement to the L28 Power User Curve, developed at Facebook and brought to life recently by Andrew Chen.

What to do now

Here’s the outline of an action plan you can adapt and use to get your own answers to these questions. You’ll need someone with data analytics skills and tools to follow this plan.

Step 1: Identify the core action

Identify, define in a measurable way and track the user action(s) that deliver value — the core action(s).

  • You need to understand, in the language of your user, what it is that brings them back to your product — it’s the payoff for the hassle of whatever actions they need to perform. If you don’t know, you need to do better (and/or more frequent) user interviews. Suggested reading: Teresa Torres.
  • Use this to identify the core action(s) a user performs to get the value. If you don’t know, you need to watch users using your product — either live or with a tool like Hotjar or Appsee.
  • Then define how to track that core action in software, whether using event data or transactional data. Suggested reading: ConversionXL.
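As a sketch of what tracking the core action can look like, the event just needs a user, a named action and a timestamp (the action name and event shape here are hypothetical; a real setup would send this to your analytics pipeline rather than print it):

```python
import json
from datetime import datetime, timezone

def track_core_action(user_id, action, properties=None):
    """Record one occurrence of a core action as a structured event.
    Printing stands in for sending to an analytics pipeline."""
    event = {
        "user_id": user_id,
        "action": action,          # e.g. "playlist_played" — hypothetical name
        "ts": datetime.now(timezone.utc).isoformat(),
        "properties": properties or {},
    }
    print(json.dumps(event))
    return event
```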

Step 2: Define the power user

Make a reasonable hypothesis about the power user’s minimum level of engagement, expressed in measurable terms of frequency, duration or some other relevant attribute (Suggested reading: Andrew Chen). It doesn’t need to be perfect — you’ll learn over time how to refine this definition by speaking to users and looking at retention data.

Who is a power user? It’s someone who is using your product with at least the necessary level of engagement to get long term value from it, and whose engagement gives you confidence that you’ll retain that user. It doesn’t need to be someone who has insanely high usage stats — that’s an edge case that can lead you down unhelpful routes of product development.
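That hypothesis can be written down as a simple, tunable predicate. The 14-of-28-days threshold below is an invented placeholder, not a recommendation:

```python
def is_power_user(active_days, min_days=14):
    """True if the user was active on at least `min_days` distinct days
    in the trailing window. `min_days` is a hypothesis to refine against
    retention data and user interviews."""
    return len(set(active_days)) >= min_days
```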

Step 3: Define a simple set of segments

Create a simple set of behavioural segments based on engagement.

  • Write down the name of each segment, the qualifying criteria and the data you’ll use to identify what segment a user falls into on any given day.
  • If you’ve done prior analysis of behavioural correlations (for example, what to count as the “Aha” moment when the user first really experiences your product’s value) or have insights from user interviews, obviously incorporate what you have; but don’t wait.
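Written down as code, a simple three-segment classification might look like this (the frequency threshold and the “Aha” flag are illustrative assumptions, not prescriptions):

```python
def classify(active_days, had_aha_moment, power_min_days=14):
    """Assign a user to one of three behavioural segments: a power user
    meets the frequency bar; an activated user has had the "Aha" moment
    but not yet the frequency; everyone else is fleeting."""
    if len(set(active_days)) >= power_min_days:
        return "power"
    if had_aha_moment:
        return "activated"
    return "fleeting"
```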

Step 4: Create the data

Create a table in your database that categorises each user into a segment on any day that they perform the core action(s).

  • You need someone with the skills and tools to create new tables in your database — typically a data engineer or data analyst, together with a data modelling tool like dbt.

Step 5: Report meaningfully

Report the DAU/MAU of each segment rather than an aggregated figure — or at least present the aggregate once to show where it comes from, so that people don’t get fooled by the overall number. The linked spreadsheet shows what segmented DAU/MAU data can look like, and with a more real-world number of users (i.e. many more than 10), it works when plotted on a chart.

The last word

If you find this interesting and you want to talk more or have a problem you’re struggling with around product analytics and user growth, you can find me on Twitter @ukcharlietaylor



