Net Promoter Score (NPS)

By Janna Bastow

Updated: May 6th, 2026

Reviewed by: Simon Cast

Fact checked by: Julie Hammers

Every Product Manager eventually runs into an NPS score and the immediate question of what to do about it. The number arrives without context. Stakeholders compare it to last quarter, line it up against a benchmark someone Googled, hold it up next to a competitor’s published figure that may use entirely different methodology. And then it sits there, waiting for a reaction it hasn’t earned. Companies use Net Promoter Score more than almost any other customer loyalty metric, and they consistently misuse it. The gap between how often teams collect NPS and how rarely it changes a roadmap is one of the most telling indicators of how product organizations actually work.


What is Net Promoter Score (NPS)?

Net Promoter Score (NPS) measures customer loyalty through a single survey question that asks how likely customers are to recommend your product. The score runs from -100 to +100, derived by subtracting the percentage of detractors from the percentage of promoters in your responses.

That sounds simple, and the simplicity is part of the appeal. The metric was introduced by Fred Reichheld in a 2003 Harvard Business Review article titled “The One Number You Need to Grow,” developed alongside Bain & Company and Satmetrix. Reichheld’s argument was that willingness to recommend was the single best predictor of customer loyalty and future business growth, outperforming more complex satisfaction surveys. The metric spread quickly because executives like a number they can put on a slide, and customer-facing teams like a question that takes thirty seconds to answer.

The core question is consistent across NPS implementations: “On a scale of 0 to 10, how likely are you to recommend [product/company] to a friend or colleague?” Most well-designed Net Promoter Score programs pair this with a free-text follow-up asking respondents to explain their score. The free-text response is, in practice, where the actual product insight lives, even though the headline number gets all the attention. That distinction is the difference between an NPS program that drives product decisions and one that produces wallpaper.

[Image: Net Promoter Score (NPS) calculation showing promoter, passive, and detractor groupings in ProdPad]

How is Net Promoter Score calculated?

The Net Promoter Score calculation is deliberately straightforward. Respondents are sorted into three groups based on their numerical answer, and the score is calculated as the percentage of promoters minus the percentage of detractors. Passives get excluded from the math entirely.

Promoters (9 to 10): Customers who are enthusiastic about the product. They are likely to continue using it, refer others, and act as informal advocates.

Passives (7 to 8): Customers who feel satisfied but not enthusiastic. They will probably stay if nothing better comes along, and they will rarely recommend the product without prompting.

Detractors (0 to 6): Customers who are unhappy or indifferent. They are at risk of churning and may share negative experiences with others.

The formula is:

NPS = % Promoters − % Detractors

If a survey returns 100 responses with 50 promoters, 30 passives, and 20 detractors, the resulting NPS is 30. The exclusion of passives from the calculation is one of the more contentious aspects of the metric. The argument for excluding them is that neutral sentiment doesn’t actively help or hurt the business. The counterargument is that this throws away information and creates statistical noise, since the formula treats a respondent who scored a 7 and a respondent who scored a 0 identically.
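For teams that want to compute this from a raw survey export rather than a vendor dashboard, here is a minimal sketch in Python (the function name and sample data are illustrative, not from any particular tool):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from raw 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Passives (7 and 8) count toward the total but not the numerator.
    return 100 * (promoters - detractors) / len(scores)

# The worked example above: 50 promoters, 30 passives, 20 detractors.
responses = [10] * 50 + [7] * 30 + [3] * 20
print(nps(responses))  # 30.0
```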

Both arguments have merit. The simplicity is what made NPS adoptable. The simplicity is also what limits how much you can read into any given score in isolation.

What does a good Net Promoter Score look like?

Net Promoter Score values theoretically range from -100 (every respondent is a detractor) to +100 (every respondent is a promoter). In practice, what counts as a good score depends almost entirely on the industry, the customer segment, and how teams administer the survey.

A score above zero indicates that you have more promoters than detractors, which most teams treat as a baseline for acceptable performance. Anything above 20 sits in favorable territory, and past 50 reaches the excellent tier. A score above 80 is rare and reserved for category-leading products; most companies will never see one, regardless of how good their product is.

NPS benchmarks vary enormously by industry. B2B SaaS products typically see scores in the 30 to 45 range depending on the source, with most aggregated benchmarks landing around the mid-thirties. Consumer airlines, telecoms, and cable providers often sit in the single digits or in negative territory. Customer expectations and category dynamics in those industries depress the ceiling, regardless of product quality. Comparing your Net Promoter Score to a competitor in the same category and at a similar stage produces useful information. Comparing it to Apple’s published score does not.

The other complication is that NPS measurement methodology varies wildly between organizations. Some companies survey only paying customers. Others include free trial users. Some send surveys at carefully chosen moments. Others survey continuously. These methodological differences can produce score variations of 20 points or more for the same underlying customer experience, which makes cross-company benchmarking deeply unreliable.

Why the Net Promoter Score trend matters more than the score

The more useful comparison is your own score over time. A consistent Net Promoter Score methodology applied repeatedly to the same customer base produces a trend line, and trend lines are far more informative than absolute numbers.

Recent industry research reinforces this point. Forrester’s 2025 data shows NPS scores fell in 20 of 39 industry-country pairs tracked, suggesting the metric is drifting downward across most categories. Against that backdrop, a flat year-over-year score may represent meaningful relative outperformance, and a declining score may reflect a category-wide pattern rather than something specific to your product. The internal trend, calibrated against the broader category direction, carries more diagnostic weight than any single number on a dashboard.

[Image: Net Promoter Score (NPS) trend versus absolute number comparison showing how the same score tells different stories in ProdPad]

How does Net Promoter Score relate to other customer satisfaction metrics?

Product Management teams commonly track Net Promoter Score alongside several other customer experience metrics, and the score makes the most sense when teams interpret it in relation to its peers rather than in isolation. Each metric captures a different dimension of the customer relationship, and a mature feedback program usually combines several.

Net Promoter Score vs CSAT (Customer Satisfaction Score)

CSAT measures satisfaction with a specific interaction or experience, usually on a scale of 1 to 5 or 1 to 7. Where Net Promoter Score asks about overall loyalty and likelihood to recommend, CSAT asks about a particular touchpoint: a support interaction, a feature, an onboarding flow. CSAT is more granular and often more actionable for diagnosing specific problems. A product team that only tracks NPS misses the diagnostic power of CSAT. A team that only tracks CSAT misses the strategic signal of overall loyalty. Most mature programs run both.

Net Promoter Score vs CES (Customer Effort Score)

CES measures how much effort a customer had to expend to accomplish a specific task. Research from CEB (now part of Gartner) suggested that effort is a stronger predictor of loyalty than satisfaction in many contexts, particularly in customer service interactions. CES is most useful for identifying friction in workflows or processes that should be effortless. Where Net Promoter Score gives you a high-level sentiment reading, CES gives you a targeted readout of where customers are working too hard.

Net Promoter Score vs Product/Market Fit Score

The Sean Ellis test, a widely used measure of product/market fit, asks: “How would you feel if you could no longer use this product?” Respondents typically choose from “very disappointed,” “somewhat disappointed,” or “not disappointed.” The threshold for product/market fit usually sits at 40 percent or more of users selecting “very disappointed.”
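As a quick illustration, the PMF calculation is a simple proportion against that threshold (hypothetical response data; the 40 percent bar is a convention, not a hard rule):

```python
def pmf_score(answers: list[str]) -> float:
    """Sean Ellis test: percentage of 'very disappointed' responses."""
    return 100 * answers.count("very disappointed") / len(answers)

answers = (["very disappointed"] * 45
           + ["somewhat disappointed"] * 35
           + ["not disappointed"] * 20)
print(f"{pmf_score(answers):.0f}%")  # 45% -- clears the usual 40% threshold
```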

For early-stage products, the PMF score is often more informative than Net Promoter Score. NPS asks about willingness to recommend, which presumes the product has reached enough maturity for someone to recommend it in the first place. The PMF score asks about emotional dependence, which is a more direct measure of whether the product has found its market. For mature products with established customer bases, NPS becomes more useful as a longitudinal sentiment measure.

What are the limitations of Net Promoter Score?

Net Promoter Score attracts a substantial amount of criticism, much of it well-founded. Product Managers using the metric should understand its limitations, because pretending they don’t exist leads to overconfident decisions based on shaky data. ProdPad has written about these limitations directly, arguing that no single customer success metric tells the whole story and that NPS in particular is easy to misread.

Statistical fragility in the Net Promoter Score formula

The mathematical structure of NPS is unusual. Treating a 7 and a 0 as equivalent (both are categorized as “not promoters” in the calculation) discards a lot of information. A respondent moving from 0 to 6 produces no change in the Net Promoter Score, while a respondent moving a single point from 6 to 7 changes classification entirely, and in a small sample that one reclassification can move the score by 10 or 20 points. This makes the metric noisier than it appears, particularly for small sample sizes. A score that moves from 32 to 38 might represent a genuine improvement, or it might represent random sampling variation. Without proper statistical analysis, it is difficult to tell. Many product teams treat Net Promoter Score movements as meaningful when those movements fall within the noise floor of the measurement.
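One way to make that noise floor concrete is to bootstrap your own survey data: resample the responses with replacement many times and watch how far the score wanders through sampling variation alone. A minimal sketch, using a hypothetical 200-response survey (all numbers illustrative):

```python
import random

random.seed(42)

def nps(scores):
    return 100 * (sum(s >= 9 for s in scores)
                  - sum(s <= 6 for s in scores)) / len(scores)

# Hypothetical survey: 100 promoters, 60 passives, 40 detractors -> NPS 30.
population = [10] * 100 + [7] * 60 + [4] * 40

# Each resample is a survey you could plausibly have collected from the
# same underlying customer base.
boot = sorted(nps(random.choices(population, k=len(population)))
              for _ in range(10_000))
low, high = boot[250], boot[-251]  # approximate 95% interval
print(f"NPS {nps(population):.0f}, but 95% of resamples land between "
      f"{low:.0f} and {high:.0f}")  # roughly 19 to 41 at this sample size
```

A move from 32 to 38 sits comfortably inside an interval like that, which is why single-survey comparisons deserve skepticism at typical sample sizes.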

Cultural and demographic skew

Different cultures rate things differently as a matter of convention. Customers in some regions are far less likely to give 9 or 10 ratings, even when they are highly satisfied, because cultural norms discourage extreme ratings. A global product can show wildly different NPS scores by region without any underlying difference in customer experience. Aggregating across regions flattens this signal and can produce a number that doesn’t accurately represent any of the constituent groups.

Single-question limitations

“Likelihood to recommend” is an approximation of loyalty rather than a direct measurement of it. Some customers say they would recommend a product but never actually do. Others recommend products they wouldn’t rate highly because they recognize the product fits a friend’s specific need. The correlation between stated likelihood to recommend and actual recommending behavior varies by product category and is rarely as strong as NPS proponents suggest.

In B2B contexts, this limitation is more acute. The person filling out an NPS survey is often a power user or champion who isn’t necessarily the person who would actually recommend the product to others. The score captures an individual’s sentiment alone, missing the organizational decision-making dynamics that drive B2B purchases.

Susceptibility to gaming

Net Promoter Score is unusually easy to manipulate without technically lying. A team that wants to improve its score can survey only its happiest users, time surveys to coincide with product wins, exclude customers who recently had a bad experience, or run surveys at moments of peak satisfaction. The score climbs. The product doesn’t change. Everyone celebrates a metric that has been engineered rather than earned. Tying compensation or team performance to the number amplifies this risk significantly. Once people are paid based on the score, the incentive to optimize the survey rather than the product becomes hard to resist.

How do you turn Net Promoter Score into actionable product insight?

The difference between a Net Promoter Score program that drives product decisions and one that produces dashboards is mostly about how teams treat the qualitative side of the survey. The score is the headline. The follow-up response is where the work happens.

Always include the open-ended follow-up

The single most valuable practice in any NPS program is collecting a “why?” response alongside the score. A 4 from a customer becomes useful when they explain that the new export workflow forced them to download three files instead of one. A 9 becomes useful when the customer explains exactly which feature has changed how their team operates. Without the follow-up, Net Promoter Score is a number with no narrative attached.

Many teams collect this response and then never read it. The free-text answers sit in a survey tool, exported occasionally, summarized into a paragraph for a board meeting. The actual content goes unread. This is the most common failure mode of Net Promoter Score programs and also the easiest to fix.

Segment your Net Promoter Score results

Aggregate NPS hides everything interesting. A score of 35 across all customers might mask a score of 60 among enterprise customers and 10 among self-serve customers. It might mask strong scores from long-tenured users and disastrous ones from new signups. Segmentation by plan tier, customer size, tenure, role, and feature usage turns a single number into a diagnostic. The questions to answer through segmentation include: which segments love us, which segments are at risk, and what do the differences tell us about where the product is and isn’t working. A product team that only looks at the aggregate Net Promoter Score loses access to the strategic signal that segmentation provides.
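As a sketch of what that cut looks like in practice, assuming responses exported into a pandas DataFrame (the segment labels and scores are hypothetical):

```python
import pandas as pd

def nps(scores: pd.Series) -> float:
    return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

responses = pd.DataFrame({
    "segment": ["enterprise"] * 4 + ["self-serve"] * 6,
    "score":   [9, 10, 9, 7,        8, 5, 6, 9, 3, 6],
})

print(f"Aggregate NPS: {nps(responses['score']):.0f}")   # 0
print(responses.groupby("segment")["score"].apply(nps))  # enterprise 75, self-serve -50
```

An aggregate of zero hides a thriving enterprise segment and a struggling self-serve one, which is the whole argument for segmenting before reacting.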

Track the Net Promoter Score trend over time

Individual NPS scores carry a lot of noise. Trends carry far more signal. A score that has moved from 25 to 35 over four quarters tells you something meaningful about direction. A score of 35 in isolation tells you very little. Read the trend alongside what the team has been working on. A roadmap focused on reducing onboarding friction should produce trend changes in the segments where onboarding matters most. If it doesn’t, that is diagnostic information about whether the work is hitting the mark.
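The same grouping idea gives you the trend: bucket responses by survey wave instead of by segment. A toy sketch (real waves need far larger samples than this to clear the noise floor discussed earlier):

```python
import pandas as pd

def nps(scores: pd.Series) -> float:
    return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

df = pd.DataFrame({
    "quarter": ["2025-Q1"] * 4 + ["2025-Q2"] * 4 + ["2025-Q3"] * 4,
    "score":   [6, 7, 9, 5,      7, 9, 9, 6,      9, 10, 9, 7],
})

# One NPS reading per survey wave; the direction matters more than any point.
print(df.groupby("quarter")["score"].apply(nps))  # -25, 25, 75: a rising line
```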

Close the loop with detractors and promoters

Net Promoter Score surveys produce two extremely valuable lists: the customers who are unhappy enough to score 0 to 6, and the customers who are happy enough to score 9 or 10. Both are rich sources of qualitative research.

Detractors will tell you exactly what is wrong, often in detail, if you reach out and ask. Many will be surprised that anyone is following up. The interview should focus on understanding their specific experience rather than defending the product. The goal is to extract specific, fixable problems from a general expression of dissatisfaction. ProdPad has covered the practical mechanics of running customer feedback sessions for product teams looking to build this into their process.

Promoters are equally useful, in different ways. They can articulate exactly what your product does well, often more clearly than your marketing team can. These customers make excellent case study candidates, reference customers, and beta testers for new features. Their feedback also tells you which customers you should be trying to acquire more of.

Pair Net Promoter Score with behavioral data

What customers say and what customers do are different things. NPS captures the said. User retention, feature adoption, and engagement metrics capture the done. The combination is far more reliable than either alone. A high Net Promoter Score paired with declining retention is a warning sign. Customers say they are happy, but they are leaving anyway. That gap is a story worth investigating. A low score paired with strong retention might suggest customers are sticking around for reasons unrelated to satisfaction (lock-in, switching costs, lack of alternatives), which is also worth understanding.

How do tools shape Net Promoter Score behavior?

The tools a team uses to collect, store, and analyze Net Promoter Score data shape how that data gets used (or ignored) more than most teams realize. The default tooling stack in most organizations actively works against turning NPS into product insight.

Survey tools focus on collection

Most NPS data lives in survey platforms like Delighted, Wootric, SurveyMonkey, or HubSpot. These tools are built to collect responses, calculate scores, and produce dashboards. Connecting customer feedback to product decisions sits outside their core purpose. Once the survey is complete, the data tends to stay in the survey tool, accessed primarily by customer experience teams. The Product Managers who could be acting on the qualitative responses often don’t see them, because the responses live in a tool they don’t open regularly.

Disconnected feedback systems fragment the signal

In most organizations, customer feedback arrives through dozens of channels: NPS surveys, support tickets, sales call notes, user interview transcripts, in-app feedback widgets, social media mentions, churn exit surveys. Each lives in a different tool. Each is owned by a different function. The Product Manager trying to understand the full picture has to manually piece together signals from systems that don’t talk to each other. This fragmentation is why teams end up making product decisions based on whichever feedback channel happens to be loudest. Net Promoter Score becomes one isolated input within a fragmented picture.

Connected feedback hubs change what’s possible

When customer feedback (including Net Promoter Score responses) lives in a system connected to the product roadmap, the path from signal to decision shortens dramatically. ProdPad’s Customer Feedback feature follows this principle: NPS responses, support tickets, and other customer signals link directly to ideas in the product backlog, so the qualitative reasoning behind every roadmap decision traces back to specific customer voices. ProdPad’s Signals feature surfaces themes across customer feedback automatically, so the patterns hiding inside hundreds of NPS comments come out without anyone needing a manual read-through. This shifts Net Promoter Score from a number on a dashboard into an active input that drives prioritization.

[Image: Customer feedback hub showing Net Promoter Score (NPS) responses connected to roadmap items in ProdPad]

What does a good Net Promoter Score program look like in practice?

Concrete examples make the abstract principles tangible. The right NPS implementation looks different depending on the product and business model, but the through-line is consistent: collect the score, mine the qualitative responses, segment the results, and close the loop with both detractors and promoters.

B2B SaaS example

A B2B SaaS company surveying its customers might run a relationship Net Promoter Score survey quarterly to its primary contacts at each customer account. The survey is sent to a consistent population, typically the named admin or product owner at each account rather than random users. Segmentation splits scores by plan tier, account size, tenure, and product usage intensity. The follow-up question prompts for specific reasons behind the score. Detractors receive personal outreach from customer success within 48 hours. The team flags promoters for case study interviews and reference requests.

The team reviews the score monthly at the product level, paying attention to trend movements within segments rather than the aggregate number. They link specific qualitative themes from the responses to roadmap items, so product decisions trace back to customer voices.

Ecommerce example

An ecommerce platform might run a transactional NPS survey triggered after order delivery, with the question framed around the shopping experience rather than the company overall. The survey captures not only the score but also the product purchased, the shipping experience, and any service interactions. Segmentation includes first-time vs returning buyers, product category, fulfillment region, and customer lifetime value tier. The team uses the Net Promoter Score data alongside cart abandonment rates and repeat purchase rates to identify where the experience is breaking down.

Enterprise platform example

An enterprise platform serving large customers might run a multi-stakeholder Net Promoter Score survey at each account, capturing scores from executive sponsors, day-to-day administrators, and end users separately. The differences between these stakeholder perspectives often reveal organizational dynamics that a single-respondent survey would miss. The team tailors follow-up qualitative questions to each stakeholder type: executives field questions about strategic value, admins about operational fit, end users about workflow experience. The aggregated insight informs both product roadmap decisions and customer success engagement strategy.

How should you review and act on Net Promoter Score?

Tracking Net Promoter Score without a structured review and action process produces dashboards rather than improvements. The cadence and depth of the review matters as much as the data collection itself.

Establish a regular review cadence

Most product organizations benefit from reviewing NPS at two levels. A team-level review monthly or quarterly focuses on movements within segments, qualitative themes from recent responses, and specific issues raised by detractors. A leadership-level review quarterly focuses on trends, segment-level patterns, and whether the current Net Promoter Score approach is producing useful signal. The team-level review should produce specific actions: customer outreach, hypotheses to investigate, ideas to add to the product backlog. The leadership review should produce strategic context: confidence in the product direction, validation of segment focus, and adjustments to how the team measures NPS.

Pair scores with context

A score in isolation invites overreaction. NPS dropped 5 points this quarter. The reason might be a recent release that changed a workflow, a major customer who had a bad experience, a survey sent at a different time, or a new segment included in the response set. Each of these explanations leads to a different action, and the score alone doesn’t tell you which is correct. Every Net Promoter Score review should include the context: what shipped, what changed in the customer base, what external events might be at play.

Use Net Promoter Score to surface questions for investigation

A dropping score is a signal that something is happening. The team’s job is to investigate what. That investigation typically involves reaching out to detractors directly, looking at qualitative themes in the responses, cross-referencing with behavioral data, and forming hypotheses about what is driving the change. Continuous discovery practices emphasize this connection between quantitative signals and qualitative investigation. Net Promoter Score can be the entry point. The customer interviews that follow are where the insight gets generated.

Connect Net Promoter Score insights to roadmap decisions

The final step that most NPS programs miss is the explicit connection back to the product roadmap. When a qualitative theme emerges from Net Promoter Score responses, that theme should feed into the backlog as an opportunity worth investigating. When a segment shows declining loyalty, that signal should inform prioritization decisions. Without this connection, NPS becomes a measurement system that runs in parallel to the product process. The metric becomes reporting overhead, sitting alongside the work rather than informing it.

Why Net Promoter Score programs break down in real product organizations

The theory of Net Promoter Score is straightforward. Survey customers. Calculate the score. Act on the qualitative responses. Improve the product. Repeat. The reality, in most product organizations, looks very different.

NPS programs break down when ownership is unclear, because no single function is fully responsible for converting the signal into product decisions. Customer success owns the survey. Marketing reports the headline number. Product is expected to act on it but doesn’t see the qualitative data. The result is a metric that everyone tracks and no one acts on.

They break down when the score becomes a target rather than a signal, because as soon as compensation or team performance depends on the number, the incentive shifts toward managing the survey. The score climbs through methodological adjustments while the underlying customer experience stays the same.

They break down when feedback systems sit disconnected from product systems, because the path from a customer’s NPS comment to a roadmap item involves manual translation between tools that don’t talk to each other. The friction is high enough that, in practice, teams rarely make the connection.

What makes a Net Promoter Score program actually work

The fix is structural. Net Promoter Score works when one clear owner takes accountability for converting the signal into product action, when the qualitative data sits alongside other customer feedback in a single connected system, and when the team reviews the program for its impact on product decisions rather than just for the score it produces. ProdPad supports exactly this kind of feedback-to-roadmap loop, where every customer signal links to ideas, ideas link to roadmap initiatives, and initiatives link to the strategic objectives the team commits to.

The metric itself is a useful starting point for understanding customer loyalty. Treated as a destination, it is a number that goes up and down without changing anything. Treated as an entry point into customer conversations, segmentation analysis, and roadmap input, Net Promoter Score earns its place in a Product Manager’s toolkit.

Enjoy a single source of truth for every product idea

Start a free trial and see how easy your Product Management life could be with ProdPad