Hello, and welcome to the first edition of Expander. I’m Abhishek, and each week I’ll share practical ideas on doing product, measuring what matters, working with people, and growing a business. Send me your questions and in return, I’ll humbly offer BS-free actionable advice. 🤜🤛
To receive this newsletter in your inbox weekly, consider subscribing. 👇
Today, let’s talk about NPS. I have a love-hate relationship with it. I believe it is a reductive indicator of customer loyalty and satisfaction. It doesn’t give a detailed picture, but on the other hand it’s the clearest picture any company can get with the least effort.
While NPS may not be of much value at scale, it has some merit among young startups. But more often than not, companies take NPS at face value. The score only paints half a picture, and often not an accurate one. Startups need to dig deeper to make sense of the number. Here are seven ways to get the most out of NPS. 🙌
For fast-moving startups, a rolling NPS gives a more accurate picture.
A company's product and service may change every 4–8 months or every 1–2 years, depending on its size and speed of execution. Including NPS data collected a year or two ago may not give an accurate score.
If the service has gotten better with time, the older data would pull the score down. If the service has degraded, older data would paint a picture rosier than reality.
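As a rough illustration, here is a minimal sketch of a rolling NPS in Python. It assumes responses are stored as (date, score) pairs and uses the conventional NPS buckets (9–10 promoters, 0–6 detractors); the 365-day window is an arbitrary choice, to be tuned to your release cadence.

```python
from datetime import date, timedelta

def rolling_nps(responses, today, window_days=365):
    """Compute NPS using only responses inside the rolling window.

    `responses` is a list of (response_date, score) pairs, score 0-10.
    The 365-day default window is an assumption, not a standard.
    """
    recent = [s for d, s in responses if (today - d) <= timedelta(days=window_days)]
    if not recent:
        return None
    promoters = sum(1 for s in recent if s >= 9)
    detractors = sum(1 for s in recent if s <= 6)
    return round(100 * (promoters - detractors) / len(recent))

responses = [
    (date(2021, 1, 10), 9),   # old response, falls outside the window
    (date(2022, 5, 1), 10),
    (date(2022, 6, 15), 6),
    (date(2022, 7, 2), 9),
]
print(rolling_nps(responses, today=date(2022, 9, 1)))  # prints 33
```

Dropping the year-old response changes the picture: with it included, the old promoter would have inflated the score beyond what current users actually feel.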
Customer reviews from NPS surveys are a good indicator of what isn’t working — rather than what is working.
Leo Tolstoy opens Anna Karenina by observing: “All happy families are alike; each unhappy family is unhappy in its own way.” This is the same for users.
All happy users and their reviews are alike. They are generic and short — great service, very happy, cannot recommend enough — and not very helpful. But all unhappy users have unique problems and comprehensive reviews that can give a company detailed insights.
A company shouldn’t prioritise features based on frequency or intensity of negative reviews from NPS surveys.
Customer-Driven Development is an ugly process where the loudest customers dictate the product roadmap of a company. What a few loud customers demand may not be what most customers want. Companies should avoid it at all costs.
Feature prioritisation should depend on the company’s mission, values, and the strategic pillars that are needed to grow the North Star metric — not on the loudest or most critical customers.
Feedback from NPS surveys can be weighed against the existing product roadmap and categorised into three buckets, then prioritised accordingly: Burning, Important (but not urgent), and In Radar (may become a problem later, but nothing to worry about now).
Only Burning issues should be prioritised. Important and In Radar issues should be prioritised only when they become Burning.
One small caveat: if something is breaking trust or making users feel duped (basically anything around ethics or principles), it should automatically become a Burning issue, even if it isn't on the roadmap.
The purpose of an NPS survey isn't to collect a rating or feedback on a particular experience (such as speed of delivery or call quality) from the user.
NPS is an indicator of how users feel about the company and its services in general. Think of it this way: when a friend enquires about booking a homestay, how many users think of Airbnb immediately? Or, if somebody asks whether Booking.com is any good, how many users say, “Hell yeah!” without thinking twice?
Also, NPS isn’t related to the last time a user availed the services of a company. Even if I haven’t booked an Airbnb in the last two years, I’m still a strong promoter of the company. On the other hand, even though I haven’t used Facebook for about 5–6 years, I’m a strong detractor — mostly due to their lax attitude towards privacy.
Passives should be less than 30%.
In theory, any person who rates either 7 or 8 is a passive user, but in reality any user who doesn’t take the NPS survey is very likely a passive user. Among the users who do take the survey, the share of passives shouldn’t cross 30%. Passives above 50% are a bright red flag, especially for a young startup.
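For completeness, here is a quick sketch of how the segments and the passive share could be computed. The 9–10 / 7–8 / 0–6 buckets are the conventional NPS definitions; the 30% threshold is the rule of thumb from above, not an industry standard.

```python
def survey_breakdown(scores):
    """Bucket 0-10 scores into the conventional NPS segments and
    return the score alongside the share of passives."""
    if not scores:
        return None
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    passives = sum(1 for s in scores if 7 <= s <= 8)
    detractors = total - promoters - passives
    return {
        "nps": round(100 * (promoters - detractors) / total),
        "passive_share": round(100 * passives / total),
    }

scores = [10, 9, 8, 8, 7, 7, 6, 3, 9, 10]
stats = survey_breakdown(scores)
if stats["passive_share"] > 30:   # the 30% rule of thumb from the text
    print("warning: passives above 30%")
print(stats)  # prints {'nps': 20, 'passive_share': 40}
```

Note how a respectable-looking score of 20 can coexist with a 40% passive share; looking at the score alone would hide the indifference.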
Users should ideally feel strongly about a product or a service, so that the company either has happy customers (who promote the product) or angry customers (who give feedback). Either way it helps. What doesn’t help is when users ignore the product.
Keeping this in mind also aligns the team to work on features that have a big impact, and hence strong reactions. For example, at one point Myntra went “app-only” and shut down its mobile and desktop sites. Customers hated it so much that the company had to bring them back within a year.
The frequency and trigger of NPS surveys may create bias in the data. They should be shuffled/randomised to remove as much bias as possible.
This is another reason why companies shouldn’t tie NPS to feature/experience reviews, and also why repeated surveys to the same user should be spaced well apart.
Companies can use the following as a ballpark schedule:
Initial survey: 1–2 weeks after the first major milestone. For example, a user uploading their first video on YouTube.
Second survey: 12–24 weeks after the initial survey, unless the user was inactive.
Recurring survey: once every 20–32 weeks after the second survey, unless the user was inactive.
The definition of an inactive user may vary between products. Generally, any user who doesn’t perform the “minimum activity” is an inactive user. For a social media platform, it would be a user who didn’t watch any video or read any posts. For an eCommerce product, it would be a user who didn’t add anything to the wishlist.
The above schedule is applicable to products and services meant to be used for a long time. For shorter lifetime products/services, such as live courses or counselling sessions, the second survey may be skipped, and the recurring survey can be made more frequent depending on the total lifetime of the user. A final survey may also be sent 2–3 weeks after the relationship has officially concluded, even if temporarily.
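The ballpark schedule above could be sketched as a simple scheduling function. The specific gaps (midpoints of the ranges given) and the 30-day inactivity threshold are assumptions for illustration only.

```python
from datetime import date, timedelta

def next_survey_date(first_milestone, surveys_sent, last_active, today):
    """Sketch of the ballpark NPS survey schedule.

    `surveys_sent` is a list of dates the user was already surveyed.
    Returns None for inactive users (no activity in 30 days, an
    assumed threshold; "minimum activity" varies by product).
    """
    if (today - last_active) > timedelta(days=30):
        return None  # skip inactive users
    if not surveys_sent:
        return first_milestone + timedelta(weeks=2)    # initial: 1-2 weeks after milestone
    if len(surveys_sent) == 1:
        return surveys_sent[-1] + timedelta(weeks=18)  # second: 12-24 weeks later
    return surveys_sent[-1] + timedelta(weeks=26)      # recurring: every 20-32 weeks
```

For a short-lifetime product such as a live course, the second branch would be dropped and the recurring gap shortened, with one final survey a few weeks after the relationship concludes.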
The purpose of the NPS survey should be made obvious to the user.
Often the purpose of the survey isn’t obvious. When asked, “How likely are you to recommend this product?” users often have doubts. “Recommend to whom, and how? Do they want to know if I would post it on social media? Or are they interested in knowing if I would call up my friends and tell them how much I like this product?”
In this dilemma, users often skip the survey. But rephrasing the question can make it easier for the user to understand the context. For starters, the question should make it obvious to whom the user is expected to recommend the product. For example, “How likely are you to recommend &lt;product name&gt; to a friend?” would make sense for a social networking platform. “Friend” can be replaced by “colleague” for a product intended for business use, such as Slack or LinkedIn. Similarly, the target audience would be a “fellow parent” for childcare products.
There is one other dilemma in the survey question. A 0–10 scale may help a business calculate the NPS, but for the user, it’s a pretty wide range. “If they want my rating, why don’t they ask me to rate between 1 and 5 stars like the App Store?”
A simple tweak to address this problem is to add labels such as “Least Likely” and “Most Likely” below 0 and 10 respectively, to subtly guide the user. Words work better than numbers.
You may have noticed that I haven’t said anything about the score itself. To be honest, I don’t think much of it. The yardstick of a good score varies between domains, so it’s very hard to standardise.
Regardless of the score, NPS is a half-indicator. While a good score doesn’t guarantee customer loyalty, a bad score can definitely be a red flag. The only way to know the truth and make the most out of the data gathered is to dig deeper.
Talk to Me
Do you agree with what I said, or do you think otherwise? Send me counters, comments, questions, and other ways to put NPS data to good use.
Until next week,