
How to Measure User Satisfaction (Without the NPS)

See why the NPS can lead to unclear results and what you should use instead to better understand how pleased (or unhappy) your users are.

Words by Nikki Anderson, Visuals by Allison Corr

There aren't too many things that cause me to go off on a rant, but the NPS (and how we measure satisfaction) is one of the few.

I have worked at several companies that have used the NPS as a significant way to understand and measure customer satisfaction. It was typically the only way they measured customer satisfaction.

Two examples of misleading NPS results

One of those companies I worked for was in B2B software. The team nestled the NPS into the software, and it popped up every so often asking the telltale question: "Would you recommend this software to your colleagues?"

From this question, we had a lot of detractors—people who responded that they would not recommend the software to colleagues. So the teams scrambled to understand the low NPS score. After suggesting we go out and talk to these so-called detractors, I set up quick Zoom calls with the people who had most recently filled out the survey.

I had a hypothesis. I believed people were saying they wouldn't recommend our software not because they were dissatisfied, but because the question didn't make sense in the context of their work ecosystem. Now, I love feedback and being proven wrong, but all the calls went the same way:

"I'm the only one who works on the software within this company, so why would I recommend it to colleagues? Plus, I don't talk about software stuff with my friends, so it's not like it would come up in a conversation over brunch or something."

The NPS next cropped up at a B2C company, where the question took the form of, "Would you recommend this website to friends or family?"

Our NPS was reasonably good, higher than I would have expected. The one problem? Our product simply was not that good. But the team focused on the high NPS rather than on other issues and metrics, and that high "satisfaction" score overshadowed my research. The pain points, complaints, and issues our users faced daily could not compete with our NPS and, unfortunately, that led to a lot of bad decision-making. The company eventually closed.

With those two experiences in mind, I am wary when people bring up the NPS to measure satisfaction.

First, how does the NPS work?

Before I tell you the NPS won't help you, let's first look into what the NPS is and how it works.

If you have ever tried out a product or service, you have probably seen this question before:

On a scale of 0–10, how likely are you to recommend <product or service> to a friend or colleague?

Your NPS comes from people who use your product or service taking the time to rate whether or not they would recommend it to others. Based on the score they give, each respondent falls into one of three buckets:

  1. Detractors: People who rate you from 0–6. They are supposedly unsatisfied with your product and may cause brand damage.
  2. Passives: People who rate you 7–8 and don't seem to have any strong feelings for your product, meaning competitive products could sway them.
  3. Promoters: People who rate you 9–10 and are your biggest fans. They are likely to be loyal to the brand, repurchase, and help spread the (positive) word.

With this information, you can calculate a score that you can measure across time. The formula to calculate the score is:

Net Promoter Score = % of Promoter respondents minus % of Detractor respondents

Passives are left out of the calculation, so any scores of 7–8 essentially do not count toward the final number.

As an example, let's say you received 100 responses to your survey:

  • 40 responses were in the 0–6 range (Detractors)
  • 30 responses were in the 7–8 range (Passives)
  • 30 responses were in the 9–10 range (Promoters)

When you calculate the percentages for each group, you get 40%, 30%, and 30%.

To finish up, subtract 40% (Detractors) from 30% (Promoters), which equals -10%. Since the Net Promoter Score is always shown as a number (not a percentage), your NPS is -10. Scores can range from -100 to +100.
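
To make that arithmetic concrete, here is a minimal sketch in Python (the `nps` helper and the sample responses are just for illustration, not any official NPS tooling):

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 ratings."""
    if not scores:
        raise ValueError("No responses to score")
    promoters = sum(1 for s in scores if s >= 9)   # ratings of 9-10
    detractors = sum(1 for s in scores if s <= 6)  # ratings of 0-6
    # Passives (7-8) count toward the total but toward neither group.
    return round(100 * (promoters - detractors) / len(scores))

# The example above: 40 detractors, 30 passives, 30 promoters
responses = [3] * 40 + [7] * 30 + [10] * 30
print(nps(responses))  # -10
```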

Bain & Company, the creators of the NPS, suggest that a score:

  • Above 0 is good
  • Above 20 is favorable
  • Above 50 is excellent
  • Above 80 is world-class

So, why is the NPS suboptimal?

There are a few reasons why I think the NPS is not ideal as a single, standalone metric. I am in no way saying, "never use the NPS again!" If you want to continue measuring NPS, just understand what you are measuring and consider additional metrics to track alongside it.

But when was the last time you used a product or service and thought, "Huh, I'm satisfied, I'd better recommend this to a friend, family member, or colleague"?

Yes, there might be a few times this happens. For example, I have a few research tools I recommend to others because I think they are helpful. But I don't necessarily link my satisfaction to recommendations.

With the NPS, we are trying to tie together two variables that aren't necessarily correlated. So let's look at some other reasons I believe the NPS isn't the beacon of light when it comes to satisfaction:

Recommendations are contextual and subjective

Whether or not someone gives a recommendation can be entirely subjective and based on context. Would I recommend a movie I just saw to others? Maybe. It depends on the person.

If it were a comedy, I would potentially recommend it to my friends who like comedy, but not my friends who like action or thrillers. I could have also had a terrible night with bad popcorn and lumpy theater seats, causing me to dislike the movie because of circumstance rather than quality.

Or, I could have been having a bad day, which impacted my experience with a product/service, causing me to give a bad score. There are a multitude of reasons why recommendations are highly contextual and subjective.

NPS calculation overshadows success

The odd calculation of the NPS can hide improvement and success. The above example gave us an NPS of -10. Now, let's say we work super hard to make significant changes to the product/service. We know we are on the right track because we are using user research insights to help give direction to these changes (biased, I know).

So, it turns out we have improved those scores, and we receive many more 6's and 7's. But the NPS still counts the 6's as detractors, and the 7's don't matter (as they are passives). So, ultimately, our NPS is still -10, despite our improvements.

You do not know what type of user is responding

The NPS does not segment or categorize the respondents to the survey. So, we don't know whether the people responding are power users, new users, or even our target personas for the product.

If we have spent time building for certain personas but are getting scores from people outside that scope, the results can be completely skewed. It's also easy to skew the results in the other direction—you could be getting good scores from people you are not trying to target at all.

Cultural differences could impact your score

If you're a global company collecting responses from a multitude of countries, studying the NPS data per country is essential. There are vast differences in how cultures respond to these surveys, which can mess up your data if you are lumping them together. It is best to keep an eye on the country or region your responses are coming from and separate them.

An 11-point scale is enormous

The NPS uses one of the largest scales around. The distinction between the numbers is unclear and most likely varies from person to person. You and I could have the same experience, but I might give the product a 7 while you give it a 6. What is the difference between a 6 and a 7? It is challenging for respondents to understand this difference and choose a meaningful response.

There is no data on 'why' someone gave a particular rating

Aside from these, my biggest gripe with NPS is that there is no understanding of why someone gave a specific score. Receiving a rating can be useless if you don't understand the motivation behind the score; without the 'why,' there is no way for a company to determine how to improve. Without understanding the 'why,' you can spend much time and energy trying to guess what went wrong and how to fix it.

It is entirely future-based

I do my best to avoid asking future-based questions. We want to focus on what people have done in the past, as most of us cannot predict our future behavior (think of all the gym memberships that go unused).

What to use instead (or in addition to)

Now that I have bashed the NPS as a single-source metric, I would love to offer some alternatives you can use instead of, or alongside, the NPS.

We all know people love numbers and measurements, and there are some other questions we could be asking (hint: they might not be as easy, but could be more effective). You can even continue to include the positive/negative aspect of the NPS.

Validated questionnaires:

The best place to start when measuring satisfaction is with reliable, validated surveys. Here are several questionnaires I've used to measure satisfaction:

  1. ASQ: After-Scenario Questionnaire
  2. SUS: System Usability Scale
  3. NASA-TLX: NASA Task Load Index
  4. SEQ: Single Ease Question

If you are looking to create one, look below at some of the variations I have tried.

Frustration:

  • How delighted or frustrated were you today? — Extremely delighted (+2), Delighted (+1), Meh (0), Frustrated (-1), Extremely frustrated (-2)
  • Have you felt frustrated with our product/service in the past X weeks? — Yes (-1), Unsure (0), No (+1)

Loyalty:

  • Did you recommend us to a friend or family member in the last X weeks? — Yes (+1), No (-1)
  • Have you ever recommended us to a friend or family member? — Yes (+1), No (-1)
  • Were you recommended to us by a friend or family member? — Yes (+1), No (-1)
  • Have you considered [canceling your subscription, switching to another provider, etc.] in the last X weeks? — Yes (-1), No (+1)

Satisfaction:

  • How satisfied or unsatisfied are you with [company X]? — Very dissatisfied (-2), Dissatisfied (-1), Neither (0), Satisfied (+1), Very satisfied (+2)
  • How easy or difficult was it to complete your order online? — Very easy (+2), Easy (+1), Neither (0), Difficult (-1), Very difficult (-2)
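
If you want a single number you can track over time from one of these questions, one simple approach (sketched below in Python; the names and mapping are hypothetical, not from any specific survey library) is to translate each answer into its weight and average the results:

```python
# Hypothetical answer-to-weight mapping for the satisfaction question above.
SATISFACTION_WEIGHTS = {
    "Very dissatisfied": -2,
    "Dissatisfied": -1,
    "Neither": 0,
    "Satisfied": 1,
    "Very satisfied": 2,
}

def average_score(answers, weights):
    """Average the weighted answers; here the result ranges from -2 to +2."""
    return sum(weights[a] for a in answers) / len(answers)

answers = ["Satisfied", "Very satisfied", "Neither", "Dissatisfied"]
print(average_score(answers, SATISFACTION_WEIGHTS))  # 0.5
```

The same idea works for the frustration and loyalty questions; only the answer-to-weight mapping changes.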

Always leave an open-ended text field asking something along the lines of, "How could we improve?" This additional question adds qualitative input that gives you a better indication of what went wrong and the context of the situation, and it helps provide clearer direction for product/service improvements.

Ultimately, we cannot boil user experience and satisfaction down to one singular number. So, let's keep an open mind about what we measure, how we measure it, and why we are using specific metrics, so we can make well-rounded, data-driven decisions. And don't forget to add in some qualitative research as well!

Nikki Anderson-Stanier is the founder of User Research Academy and a qualitative researcher with 9 years in the field. She loves solving human problems and petting all the dogs. 


To get even more UXR nuggets, check out her user research membership, follow her on LinkedIn, or subscribe to her Substack.
