Interviews, insight & analysis on digital media & marketing

NPS: Loved by decision-makers, hated by researchers. Isn’t there anything better?

These articles have been written by the latest cohort of the Practice Makes UnPerfect programme – a course that helps women find and finesse their public voices.

By Abi Wotton, Senior Insight Manager, News UK

NPS, invented 17 years ago, has gained popularity across many sectors and is used by at least two-thirds of Fortune 1000 companies, spanning financial services, airlines, telecoms and retail.

Organisations as diverse as the NHS and Deutsche Post obsess over it. CEOs track it daily on their performance dashboards, and CMOs demand it in their brand trackers. Its appeal is understandable: it requires respondents to answer just one question:

‘On a scale of zero to 10, how likely is it that you would recommend [Company Name] to a friend or colleague?’
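Responses to that question are conventionally bucketed into promoters (9–10), passives (7–8) and detractors (0–6), and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch of that calculation (the function name and sample ratings are illustrative):

```python
from typing import Iterable

def nps(ratings: Iterable[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    ratings = list(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Five illustrative responses: two promoters, one passive, two detractors
print(nps([10, 9, 8, 4, 2]))  # → 0.0
```

Note that the score can range from −100 (all detractors) to +100 (all promoters), which is why a "negative NPS" is possible even for a widely used brand.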

It’s cheap to put in place, sent to customers through newsletters, brand trackers and customer panels, often at no extra cost to the company. Feedback can be continuous and, depending on the scale of the programme, updated in real time.

It’s easy to interpret. Advocates of NPS celebrate it as a single question that can link directly to business KPIs: research from the London School of Economics suggests that an average seven-point increase in NPS correlates with a 1% growth in revenue.

In my experience, when people ask for a brand’s NPS score, they are not trying to understand loyalty or predict future growth; rather, they are trying to unpick multi-layered, complex issues around brand health and consumer perceptions. Whether your brand has a good score or not, NPS is not a real reflection of how people feel about the brand. By trying to reduce this feeling to a number, we are in danger of overlooking real insights.

One of the biggest flaws of NPS is that a consumer can’t be both a promoter and a detractor. But haven’t we all been both in our own lives? You might recommend a product to one person but discourage another from using it, especially if you didn’t feel it was the right fit for them. For example, much as I love Spotify, I wouldn’t recommend it to my dad.

As humans we are often contradictory, and there are many factors at play in whether we would recommend something. In fact, a study by C-Space found that 52% of people who discouraged others from using a brand had also recommended it.

This contradiction can be troublesome when a brand has a negative NPS score but millions of people use it. What would the right interpretation of this number be? Do people use it in secret? Do they use it to see if it’s still as bad as they thought? Does it have bad press? Do the twitterati hate it? Are respondents simply unfamiliar with it?

Imagine instead that consumers responded to a different question, one that seeks to understand their emotional connection in a more instinctive way. Many brand trackers now do exactly that: they ask what emotion the brand evokes. These questions use the seven core emotional states identifiable in people’s faces, based on the seminal work of Paul Ekman. The brands that evoke stronger positive emotions perform better within their competitive sets.
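As an illustration of how responses to such a tracker question might be tallied (the emotion labels and helper below are assumptions for the sketch, not any specific vendor’s methodology), one could compute the share of respondents who pick each of Ekman’s states:

```python
from collections import Counter

# Ekman's seven basic emotional states (assumed label set for illustration)
EMOTIONS = ["happiness", "surprise", "sadness", "anger",
            "fear", "disgust", "contempt"]

def emotion_profile(responses):
    """Return the share of respondents who picked each emotion."""
    counts = Counter(responses)
    return {e: counts[e] / len(responses) for e in EMOTIONS}

# Hypothetical tracker responses for one brand
profile = emotion_profile(["happiness", "anger", "happiness", "sadness"])
```

Comparing these profiles across a competitive set gives a richer picture than a single score: you see not just that sentiment is negative, but which emotion is driving it.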

So a brand with a negative NPS score might instead be shown to evoke anger or sadness. With this information, we are in a far better position to understand how it needs to respond.

And when brands connect well with customers’ emotions, the payoff can be huge. Connected customers are more than twice as valuable as satisfied customers: they buy more, visit more and are less price sensitive. So why wouldn’t you try to measure emotion?

In unpredictable periods, decision-makers fall back on tried-and-tested measures; they know where they stand. Yet investing in this type of research is well worth your time. Like NPS, emotion can be measured in easy ways: you don’t need sprawling, lengthy surveys. Social media listening tools, Google Alerts and well-timed, succinct surveys can all help.

As marketers, it is our responsibility to help our non-marketing colleagues with a new set of metrics. Of course we need to prove the benefits, but we also need to highlight the risks of relying on an oversimplified metric. You could be expensively preaching to the converted (those who are both promoting and criticising you), and the score gives you no direction or understanding of how you got there, reducing human experience to a number instead of the rich tapestry we know it to be.