If you have read my articles in the past, you will know that I am an ardent fan of investigating the definitions of certain words and phrases. Yesterday, I had the pleasure of participating as a judge at the International Business Excellence Awards in Dubai. One of the awards entrants used the word ‘benchmarking’ multiple times in his presentation. In fact, ‘benchmarking’ is a word that I frequently hear as I go about my business around the world. So what exactly is the definition of ‘benchmarking’?
A measurement of the quality of an organisation’s policies, products, programmes, strategies, etc., and their comparison with standard measurements, or similar measurements of its peers
According to the business dictionary, the objectives of benchmarking are to determine what and where improvements are called for, to analyse how other organisations achieve their high performance levels, and to use this information to improve performance. This all makes sense – well, to me anyway!
The principle of benchmarking is undeniably sound. Understanding how your organisation is performing in relation to others – both in your own sector and in other sectors – is a very effective way of determining how well your business is evolving. When it comes to Customer Experience, benchmarking is regularly seen by many leaders as an important ‘yardstick’ by which to determine the success of their business in achieving its customer focused objectives – if indeed it has customer focused objectives in the first place.
Only last week, I was asked by a company if I had access to NPS (Net Promoter Score) benchmarking scores. My response was as follows:
Urrgghhhh!! No – but you might find some stuff on here – https://www.npsbenchmarks.com/ – Can I ask why you want them? I am NOT a fan of NPS benchmarking!!
So before I go any further, I want to reiterate what I said in my response to the question I was asked – I am NOT a fan of NPS benchmarking. In fact, I am NOT a fan of benchmarking Customer Experience measurement in general. Funnily enough, the business leader who asked the question (and a very competent business leader at that) knew what my likely response was going to be! The reason the question was asked is that their boss wanted the information – we are unsure as to why (although we can guess) – a very common scenario.
Over the years, I have been very conscious of the desire of business leaders to know if their organisations are ‘better than the competition’. More often than not, Customer Experience measurement (predominantly NPS) has been used as the justification for these leaders concluding that their companies are performing ‘well’ with regards to Customer Experience. At times it has almost felt as though there has been a secret, members-only directors’ club. Without wanting to sound disrespectful (and possibly failing), directors, or members of the ‘C-Suite’, have frequented the ‘club’, schmoozing and mingling with each other, whilst proudly (or smugly) saying things like, ‘My Net Promoter Score is 45, what’s yours?’
To coin a phrase, ‘size isn’t everything’! As I have already stated, comparing your performance with others is not a bad idea in principle. However (there is always one of those), unless you have absolute clarity and certainty about exactly what you are comparing against, it is impossible to draw a robust conclusion from a benchmarking exercise. Herein lies the problem – whilst many organisations and their leaders can state a number as a ‘fact-based’ reflection of their perceived customer focus, few of them can be certain what the number they are using actually represents.
I have been quoted many times as saying that whilst many businesses measure the Customer Experience in some way, most do so rather badly. As a result, whether it be a Customer Satisfaction, Customer Effort or Net Promoter Score, there is no guarantee that the number being produced by one organisation is calculated in the same way as another’s – or indeed is actually a reflection of the same thing. Some companies capture and measure customer perception at specific ‘touch points’ in their customer journey – telephone interactions, for example – whilst others capture and measure perception across the entire end-to-end customer journey – and some do both. When a business reports its measure of customer perception, there is no way of determining what the number is representative of.
As a result, if you compare one Net Promoter Score (published by a business) with another, you are very likely NOT comparing ‘apples with apples’. This is why I urge anyone who intends to use benchmarking as a way of evaluating performance and progress to do so with caution. Just because your published score is 45 and your competitor’s is 35, it does not guarantee that customers perceive you to be better than them. It is only when/if the way the score has been calculated, and what it is representative of, is IDENTICAL between the two organisations that you can benchmark with confidence.
I know of businesses that are using different scales from the one defined in the original Net Promoter Score methodology. I know of companies that have manipulated the calculation so their published scores can never be negative! You can NOT assume that the scores you see are genuinely representative of the truth.
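To make the point concrete, the standard Net Promoter Score is the percentage of promoters (ratings of 9–10 on a 0–10 scale) minus the percentage of detractors (ratings of 0–6), so it can range from −100 to +100. The sketch below computes the standard score alongside a hypothetical ‘manipulated’ variant of the kind described above – one that reports only the promoter percentage and therefore can never go negative. The function names and sample ratings are illustrative, not taken from any real company.

```python
def nps(ratings):
    """Standard Net Promoter Score on the original 0-10 scale:
    % promoters (9-10) minus % detractors (0-6). Range: -100 to +100."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def promoters_only_score(ratings):
    """Hypothetical 'manipulated' metric: promoter percentage only.
    Ignores detractors entirely, so it can never be negative."""
    if not ratings:
        raise ValueError("no ratings supplied")
    return 100 * sum(1 for r in ratings if r >= 9) / len(ratings)

# Illustrative survey responses: 2 promoters, 2 passives, 2 detractors.
ratings = [10, 9, 8, 7, 6, 3]
print(nps(ratings))                   # 0.0 -> promoters and detractors cancel out
print(promoters_only_score(ratings))  # ~33.3 -> looks far healthier
```

Two metrics, the same customers, very different ‘scores’ – which is exactly why a published number tells you little unless you know how it was calculated.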
Benchmarking does serve a purpose. I am not a fan of it because, most of the time, it is not an accurate, like-for-like comparison. If your senior leaders demand that benchmarking be used, then you must be aware of exactly what it is you are comparing. Failure to do so will likely result in you drawing wrong or inaccurate conclusions from your benchmarking exercise.
Couldn’t agree more Ian! When challenged by a CEO – ‘why is our NPS 48 and (brand X)’s is at 75?’ – we just so happened to have a ‘source’ who confirmed that brand X were only surveying buyers, whereas we were asking a random selection of all customers who started the buying journey. And as with any metric, it’s about what you do with the data and understanding WHY you have achieved your score.
Improving your own customer experience – regardless of where you are with your CSAT, NPS, or customer effort – is what you can control, not the performance of others.
Good post Ian. While I am a big fan of benchmarking, I read your post carefully and think we may actually agree with each other. It is all a question of definition. Benchmarking is not just a random comparison of numbers between companies. When done well, it is ‘double blind’, or at least ‘single blind’. The people answering the question about a company and its competitors must have no way of knowing who is funding the study, at least where one of the companies on the list is providing the funding. The research on a set of companies must be done simultaneously, so that seasonality in the industry does not affect the comparison.
On the “mine is bigger” front, what matters is not so much the number as the trend. If a company has a low score that is improving, it will take share from the leader with a higher, but stable or declining, score.
All of the above actually reinforces one of your points: if you are going to use benchmarks, do so with caution, making sure you understand how the research was done, and whether you are able to communicate it all in a simple and concise way.
Finally, I will of course mention that in any customer research, the scores matter far less than your ability to gather and implement improvement suggestions.
Well said Ian. NPS is an easy-to-understand score that can say a lot without saying much. Most people accept it as a convenient way to gauge CX performance, and the score in itself is easy to measure. The question remains whether we are looking for the best, most solid answer or just the most convenient one. Benchmarks are important to businesses regardless. So what might be a better way to benchmark?