Analysing the wording of the NPS question

NPS (Net Promoter Score) is a popular way to measure customer satisfaction. The score is supposed to correlate with growth, which of course appeals to management teams.

The idea is simple: you ask the customer how likely he or she is to recommend your product or service to others on a scale from 0 to 10. You then calculate the score by subtracting the percentage of detractors (those answering 0 to 6) from the percentage of promoters (those answering 9 or 10); sevens and eights count as passives and are left out of the subtraction. A positive score is supposed to indicate growth, a negative one decline.
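In code, the calculation is a one-liner or two. Here is a minimal Python sketch (the function name and the example scores are my own, purely for illustration):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Sevens and eights are passives; they only affect the denominator.
    Returns a value between -100 and +100.
    """
    if not scores:
        raise ValueError("need at least one answer")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 answers
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 3, 5]))  # → 30.0
```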

My employer is a news company publishing newspapers and sites mainly in Swedish (some Finnish too). Therefore we mainly use the key question in Swedish, i.e. "Hur sannolikt skulle du rekommendera X till dina vänner?" This wording, although an exact match to the original ("How likely is it that you would recommend X to a friend?"), seems a little clumsy in Swedish. We would prefer a more direct wording, i.e. "Skulle du rekommendera X till dina vänner?", which translates into "Would you recommend X to a friend?" However, we were hesitant to change the wording without solid proof that it would not affect the answers.

So we decided to test it. We randomly asked our readers either the original key question or the modified one. The total number of answers was 1,521. Then, using R and the wilcox.test() function, I analysed the answers and found no statistically significant difference in the results, whichever way we asked the question.
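For readers without R at hand, here is a rough Python re-implementation of what wilcox.test() computes for two independent samples of this size: the Wilcoxon rank-sum (Mann-Whitney U) test via the normal approximation with a tie correction. (R's default continuity correction is omitted here, so p-values will differ slightly; the function name is my own.)

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney U) test using the
    normal approximation with a tie correction -- a sketch of what R's
    wilcox.test() does for large, tied samples such as 0-10 answers."""
    n1, n2 = len(x), len(y)
    combined = list(x) + list(y)
    n = n1 + n2
    order = sorted(range(n), key=lambda i: combined[i])
    ranks = [0.0] * n
    tie_sizes = []
    i = 0
    while i < n:  # assign average ranks to tied values
        j = i
        while j < n and combined[order[j]] == combined[order[i]]:
            j += 1
        avg_rank = (i + j + 1) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        tie_sizes.append(j - i)
        i = j
    r1 = sum(ranks[:n1])        # rank sum of the first sample
    u = r1 - n1 * (n1 + 1) / 2  # Mann-Whitney U statistic
    mu = n1 * n2 / 2            # mean of U under the null hypothesis
    tie_term = sum(t ** 3 - t for t in tie_sizes)
    sigma = math.sqrt(n1 * n2 / 12 * (n + 1 - tie_term / (n * (n - 1))))
    z = (u - mu) / sigma
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return u, p

# Identical answer distributions -> no detectable difference
print(rank_sum_test([7, 8, 9, 10, 6] * 10, [7, 8, 9, 10, 6] * 10))
# → (1250.0, 1.0)
```

A large p-value, as in our experiment, means the data give no reason to believe the two wordings produce different answer distributions.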

There is some criticism out there about using NPS, and I catch myself wondering every now and again whether people are getting too used to the scale for it to be accurate any more. Also, here in Finland there is a small risk that people confuse the 0–10 scale with the 4–10 scale commonly used in schools, and therefore anchor their answers to years-old impressions of what is considered good and what is considered bad. I'd very much like to see some research about it.

Nevertheless, we are nowadays happily using the shorter version of the NPS key question, and have not found any reason not to. Perhaps it could be shortened in other languages too?


Poor research is a real burden for media

With long experience of research, my heart always cries when I come across poor research. Be it poorly designed or poorly presented, it's such a waste of money! Sometimes I also get angry. Angry with the research institutes that sell fancy "truths" to gullible companies. Most of the time, however, there's not much you can do about that, other than hope the public isn't stupid enough to believe everything it hears. For instance, when some poll tells you that a certain political party has gained supporters at another party's expense when in fact the margins of error make any such conclusion null and void.

But sometimes, when this poor research lands close to my own turf, I feel the need to act.

Last Friday I spent all day tearing a research concept to pieces, comparing the results to the questionnaire and trying to make sense of it all. It's a study that has been done four times already, and on the second and third rounds I was in the audience when the research institute presented the results. Both times I politely asked the researchers how they calculated certain key figures, but the answers never satisfied me. As the study was commissioned by our newspaper association and not by our company, I decided to let it be; it was not my fight.

Then came the fourth round, using exactly the same concept, again with exactly the same dubious figures. So I sat down, once again, with the report and the questionnaire, pinpointed the problems with the study in a lengthy email, and sent it to the people responsible for commissioning the study. I just hope it is well received and at least leads to a thorough discussion.

Poor research should be banned. Even though we have the Esomar professional standards, we are presented with way too much cr*p, even from research institutes complying with the standards. The research institutes really should go the extra mile in assuring the quality of their concepts and services, because commissioning extensive surveys isn't easy. (Esomar also has a guideline for commissioning research. Read it. And there are independent researchers out there who can help you with the commissioning. Use them.) There are so many factors to weigh, ranging from the aim and the sample to the analysis and the conclusions. If you aren't a research professional yourself, you should be able to rely on the research institutes.

My personal favorites in the Esomar code are the following basic principle articles:

1a and 1b) "Market research shall be legal, honest, truthful and objective and be carried out in accordance with appropriate scientific principles." "Researchers shall not act in any way that could bring discredit on the market research profession or lead to a loss of public confidence in it." – This is something all researchers should take to heart. Sadly enough, many don't. Just think about how often you stumble across crazy research and crazy conclusions: research that damages the reputation of market research as people either laugh at it or simply don't believe it.

4c) "Researchers shall on request allow the client to arrange for checks on the quality of data collection and data preparation." – This article implies that the quality of your work should be impeccable: you should be ready, at any time, to let the customer audit your work. Customers ask for it way too seldom, though. When I worked at a research institute some years ago, I offered this option to sceptical customers; nobody has ever offered it to me.

Research on media in Finland is seldom good. Too much is lost in the margins of error, and too many conclusions are derived from studying means. The ambition to cover too much has resulted in monstrous surveys that serve nobody well. Thankfully, the print media audience measurements have been criticised publicly by more and more people, and some improvement is under way.

If we make decisions based on mediocre studies and information that cannot stand up to scrutiny, we won't end up with winning products. As long as we measure a total audience and try to describe that mass of heterogeneous people as one entity, we fool ourselves and we fool the advertisers. We need more detailed information; we need to open our eyes and see the multidimensional audience we have. Gone are the days when one product suited all and the audience could be treated as one. Thus we should also realise that the surveys we use to measure our audiences should be redesigned to fit the needs of today. Although we might lose some trend data, and many grand old men and ladies will grunt in discontent, we need the change. The poor research of today is only hampering us, so let's throw it out and bring in research that really benefits us!